Technical details

FFER is an acronym for "Fundamental Fitted Estimate Ratio". It is the ratio between a stock's actual price and an ML-derived expected price. The ML-derived expected price is, appropriately, the FFE, which stands for Fundamental Fitted Estimate (i.e., FFER minus the R). The FFE algorithm estimates a single company's price with an ML model (XGBoost) that uses 16 fundamental financial dimensions as inputs. Each individual FFE model is trained on data from every stock in the training set EXCEPT the stock it is estimating. I chose XGBoost for its high quality and internal observability. For simplicity, the FFE models internally use market cap instead of price; in practice, the difference is trivial.
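The leave-one-out structure above can be sketched as follows. This is an illustrative sketch, not the actual implementation: the XGBoost regressor is swapped for a simple least-squares fit so the example is self-contained, and the function names (`fit_model`, `ffer`) are hypothetical.

```python
import numpy as np

def fit_model(X, y):
    # Stand-in for the XGBoost regressor: ordinary least squares
    # with an intercept column appended to the features.
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, x):
    # Predicted market cap (the FFE) for one stock's feature vector.
    return float(np.append(x, 1.0) @ coef)

def ffer(X, y, i):
    """FFER for stock i: its actual market cap divided by the FFE,
    where the FFE model is trained on every stock EXCEPT stock i."""
    mask = np.arange(len(y)) != i
    coef = fit_model(X[mask], y[mask])
    return y[i] / predict(coef, X[i])
```

A stock priced exactly where its fundamentals predict would score an FFER of 1.0 under this sketch.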

The 16 dimensions are:

Iterative dimension filtering

I found these dimensions through an iterative process. First, I trained the XGBoost model with 60+ dimensions. This high-dimensional model was overfit. To address this, I used an iterative script to eliminate redundant dimensions. One by one, the script removed the dimension XGBoost identified as "least important" and re-trained the model. Each iteration generally reduced the test error up to a point, after which the test error started to increase again. The model that used 16 dimensions had the lowest test error.

Using this 16-dimensional model as a template, I then swapped out some of these dimensions for clarity. For example, while "Sales on Equity" was empirically slightly more important than the similar "Sales on Assets," the script had also placed "Return on Assets" in the top 16. Given that "Return on Assets" is already a standard valuation metric, pairing "Return on Assets" with "Sales on Assets" made the model cleaner.

Bagged ensemble method

A standard 90/10 train/test split results in a somewhat low R^2 (~0.50). There are many plausible reasons for this, including a limited training set (the S&P 1500) and the inherent unpredictability of the stock market. To reduce this variance, the FFER uses a bagged ensemble method. Instead of a single training run, the same model is trained 100 times with different train/test splits, and the FFER algorithm averages the output of all 100 trained models. This ensemble method provides significantly lower variance and a higher R^2 (>0.98) while still producing an intuitive and reasonable FFER distribution (e.g., most stocks fall between -2.0 and 2.0, and "growth" stocks tend to have a high FFER).
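The bagging step can be sketched as below. As before, this is a simplified stand-in (least squares instead of XGBoost), and `bagged_predict` with its `n_models` and `train_frac` parameters is a hypothetical interface, not the actual code.

```python
import numpy as np

def fit_ols(X, y):
    # Stand-in for the XGBoost regressor.
    A = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def bagged_predict(X, y, x_new, n_models=100, train_frac=0.9, seed=0):
    """Train the same model n_models times on different random
    90% train splits and average the predictions."""
    rng = np.random.default_rng(seed)
    n = len(y)
    preds = []
    for _ in range(n_models):
        idx = rng.permutation(n)[: int(train_frac * n)]
        coef = fit_ols(X[idx], y[idx])
        preds.append(float(np.append(x_new, 1.0) @ coef))
    # Averaging across the ensemble is what drives the variance down.
    return float(np.mean(preds))
```

Each individual model sees a different slice of the data, so its errors are partly independent; averaging cancels much of that noise, which is why the ensemble's R^2 is so much higher than any single split's.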