QuantDesk® Machine Learning Forecast

for the Week of May 21st, 2018

Using Timeseries Data To Forecast Stock Prices With RNNs (Recurrent Neural Networks)


by Erez Katz, CEO and Co-founder of Lucena Research.


Recap

In the past few weeks we’ve discussed why it’s important to configure deep net infrastructure to accommodate timeseries data as a trend formation rather than a single point in time. Indeed, there is a vast difference in how we as humans make decisions in different situations. Some decisions rely on a static state (image classification, for example). When we feed an image to a network, the network relies mainly on the image’s final state. To put it simply, how the image was drawn or how the picture was formed is not relevant to identifying whether the image contains a cat or a dog.

In contrast, forecasting an outcome relies on some form of historical context. For example, the autocomplete function on our smartphones works by memorizing the previous sequence of letters in a word or the previous sequence of words in a sentence. For a sentence that starts with “The sky is ____,” the network would predict with high confidence that the next word is “blue,” but if we only gave the deep network the last word, “is,” it would have no relevant information from which to discern what comes next. The very same concept applies to stock data. A traditional artificial neural network may learn to forecast the price of a stock based on several factors:

  • Daily Volume
  • Price to Earnings Ratio (PE)
  • Analyst Recommendations Consensus

While the future price of the stock may heavily depend on these factors, their static values at a point in time tell only part of the story. A much richer approach to forecasting a stock price would be to determine how the trends of the above factors formed over time.

We’ve discussed passing trend information to a fully connected deep neural network by adding historical values as additional input neurons.

Image 1: A simplified version of a fully connected deep network. The PE ratio factor is passed as a sequence of values over some time in the past: PE today, PE-1 yesterday, PE-2 the day before yesterday, etc. This network classifies 4 potential outcomes -- whether the stock will move higher, lower, above the market, or below the market.
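To make this concrete, here is a minimal sketch (not Lucena’s production model) of such a network in Keras. The 21-day window, layer sizes, and random data are illustrative assumptions; the point is simply that each lagged PE value occupies its own input neuron and the output layer classifies the four outcomes above:

```python
# Minimal sketch: lagged factor values as individual input neurons.
# All sizes and data here are illustrative, not Lucena's actual model.
import numpy as np
import tensorflow as tf

LOOKBACK = 21  # hypothetical window: PE today, PE-1, ..., PE-20

model = tf.keras.Sequential([
    tf.keras.Input(shape=(LOOKBACK,)),               # one neuron per lagged PE value
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # higher / lower / above / below market
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy data: 256 samples of 21 lagged values, each labeled with a class in {0,1,2,3}.
x = np.random.randn(256, LOOKBACK).astype("float32")
y = np.random.randint(0, 4, size=256)
model.fit(x, y, epochs=2, verbose=0)
```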

We’ve also discussed converting historical trends into two-dimensional images and passing the images as input channels into a CNN (Convolutional Neural Network). CNNs are an excellent vehicle for classifying images. We’ve covered CNNs in a previous article.

Image 2: A typical convolutional layered diagram. Credit: Adit Deshpande, CS undergraduate, UCLA (’19)

Here is an example of one-dimensional trend data converted into a richer two-dimensional image. In this case we’ve used a transformation method called a Recurrence Plot.

Image 3: Recurrence plots of two ranked features: shares outstanding change rank and volatility rank.

The idea behind this approach is to represent trends as rich images and use a CNN architecture to classify which elements in the image are most reliable in forecasting the price trajectory of a stock. (Please note that we have used two ranked features in this example. Ranked features are more descriptive in the context of evaluating stocks in relation to their peers.)
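For readers who want to experiment, here is a minimal numpy sketch of a recurrence plot under a common thresholded-distance definition. The window length and threshold are illustrative choices, not Lucena’s parameters:

```python
# Minimal sketch of a recurrence plot: a 1-D series becomes a 2-D binary
# image whose (i, j) pixel marks whether states i and j are "close".
# The threshold below is an illustrative choice.
import numpy as np

def recurrence_plot(series: np.ndarray, eps: float) -> np.ndarray:
    distances = np.abs(series[:, None] - series[None, :])  # pairwise |x_i - x_j|
    return (distances < eps).astype(np.uint8)              # 1 where states recur

prices = np.cumsum(np.random.randn(21))          # toy 21-day trend
image = recurrence_plot(prices, eps=prices.std() * 0.2)
print(image.shape)  # (21, 21) -- one 2-D "channel" per feature for the CNN
```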

The methods above seem compelling, but they both suffer from one major drawback -- it’s very hard to determine the length of time that most effectively describes a trend. The CNN example above depicts a 21-day timeseries but, come to think of it, why 21 days? Why not 63 days, or 252 days? One suggestion may be to let the network determine the timeframe of a trend empirically through a hyper-parameter grid search. In other words, let the network try a bunch of timeframes and determine which one works best during a cross-validation period. Well, it turns out that the more parameters you add to the grid search, the more susceptible you are to overfitting. Not to mention that a trend’s timeframe is not necessarily constant. In some cases, a trend of 21 days is more predictive, while in other cases 63 days may be more suitable.

An RNN (Recurrent Neural Network) is a deep neural network designed specifically to tackle this kind of shortcoming. It can determine on the fly which historical information should be considered or discarded to produce a high-probability classification.

What Is a Recurrent Neural Network (RNN)?

A Recurrent Neural Network (RNN) is a deep learning algorithm that operates on sequences (like sequences of characters). At every step, it takes a snapshot of its state before it tries to determine what comes next. In other words, it operates on trend representations via matrices of historical states. RNNs have some form of internal memory, so they remember what they saw previously. In contrast to fully connected neural nets and convolutional neural nets, which are feed-forward classifiers in which each layer of neurons feeds only the adjacent, subsequent layer in the hierarchy, an RNN can use the output of a neuron as an input to the very same neuron.

Image 4: A diagram representing a recurrent neuron. X(t) is an input neuron, A is the repeating module that determines what information to preserve and what to discard, and h(t) is the output neuron, which feeds back into the network. Credit: http://colah.github.io/posts/2015-08-Understanding-LSTMs/

The above diagram can be greatly simplified by unfolding (unrolling) the recurrent instances as follows:

Image 5: An unrolled recurrent neuron A.
Credit: http://colah.github.io/posts/2015-08-Understanding-LSTMs/

This is not much different than a normal feed-forward network, with one exception: the RNN is able to determine dynamically how deep the network ought to be. In the context of stocks’ historical feature values, consider each vertical formation of x(0) to h(0) a historical snapshot of a state (PE ratio today, PE ratio yesterday, etc.).
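As a toy illustration of this recurrence, the following numpy sketch (illustrative dimensions, untrained weights) applies the same weights at every step of a 21-day PE sequence, with the hidden state h carrying memory forward exactly as in the unrolled diagram:

```python
# Minimal sketch of the unrolled recurrence: the same weights are applied
# at every step, and h carries memory forward. Weights are random and
# untrained here; sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 8

W_x = rng.standard_normal((HIDDEN, 1)) * 0.1       # input -> hidden
W_h = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1  # hidden -> hidden (the feedback loop)
b = np.zeros(HIDDEN)

pe_ratios = rng.standard_normal(21)  # toy sequence: PE-20, ..., PE today
h = np.zeros(HIDDEN)                 # initial state
for x_t in pe_ratios:                # one "vertical formation" per step
    h = np.tanh(W_x @ np.array([x_t]) + W_h @ h + b)
print(h)                             # final state summarizes the whole trend
```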

Taking The Concept Of RNN One Step Further: LSTM (Long Short-Term Memory)

Long Short-Term Memory networks – usually just called “LSTMs” – are a special kind of RNN, capable of learning long-term dependencies. All recurrent neural networks have some form of repeating network structure. In a standard RNN, the repeating structure is rather straightforward, with a single activation function feeding into an output layer of neurons. In contrast, an LSTM contains a more robust structure designed specifically to determine which information ought to be preserved or discarded. Common to LSTM infrastructures is a cell state layer, also called a “conveyor belt”.

Image 6: The cell state/conveyor belt of the LSTM infrastructure, tasked with determining which information should be preserved or discarded.

Instead of having a single neural network layer as in a typical RNN, an LSTM holds multiple components, tasked with selectively adding or removing information to be passed along the “conveyor belt”.

Image 7: A typical LSTM Cell holding four components tasked with determining which information is discarded, added and outputted to the cell state layer (conveyor belt).

I will not get too deep into the inner structure of an LSTM cell, but it’s important to note that the flow of information through the cell is managed by three gates, activated by either sigmoid or tanh functions:

  1. Forget Gate
  2. Input Gate
  3. Output Gate

Under the hood, RNNs and LSTMs are not much different from a typical multi-layered neural net. The activation functions force each cell’s output to conform to a nonlinear representation: the sigmoid squashes values to between 0 and 1, and the tanh to between -1 and 1. This is done mainly to enable the typical deep net’s error-minimization process through backpropagation and gradient descent.
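To ground the terminology, here is a minimal numpy sketch of a single LSTM step using the standard gate equations. The weight shapes and toy sequence are assumptions for illustration, not a trained model:

```python
# Minimal sketch of one LSTM step, showing the three gates described above.
# Weights are random and untrained; shapes are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # W, U, b hold parameters for the forget (f), input (i), output (o)
    # gates and the candidate cell update (g).
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])  # forget gate: 0..1
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])  # input gate: 0..1
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])  # output gate: 0..1
    g = np.tanh(W["g"] @ x_t + U["g"] @ h_prev + b["g"])  # candidate: -1..1
    c = f * c_prev + i * g   # the "conveyor belt": keep some, add some
    h = o * np.tanh(c)       # exposed output
    return h, c

rng = np.random.default_rng(0)
H, D = 4, 1  # hidden size, input size (toy values)
W = {k: rng.standard_normal((H, D)) * 0.1 for k in "fiog"}
U = {k: rng.standard_normal((H, H)) * 0.1 for k in "fiog"}
b = {k: np.zeros(H) for k in "fiog"}

h, c = np.zeros(H), np.zeros(H)
for x in rng.standard_normal(21):  # toy 21-step sequence
    h, c = lstm_step(np.array([x]), h, c, W, U, b)
print(h)
```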

Conclusion

LSTM cells effectively learn to memorize long-term dependencies and perform well. To the untrained eye, the results may seem somewhat incredible or even magical. One drawback of RNNs, and in particular LSTMs, is how taxing they are on computational resources. At Lucena we have invested significantly in a robust GPU infrastructure, and in the coming weeks I intend to show you, directly from our research labs, how we put all this theory into practice. We are extending our AI libraries with new offerings powered by the very same technology.

Stay tuned! More to come on this topic next week.

Strategies Update

As in the past, we will provide weekly updates on how the model portfolios and the theme-based strategies we cover in this newsletter are performing.

Tiebreaker – Lucena’s Long/Short Equity Strategy - Since Inception 51.24% vs. benchmark of 6.08%
Image 1: Tiebreaker YTD – benchmark is VMNIX (Vanguard Market Neutral Fund Institutional Shares)
Past performance is no guarantee of future returns.

Tiebreaker has been forward traded since 2014. To date, it boasts an impressive return of 51.24%, remarkably low volatility as expressed by a max drawdown of only 6.16%, and a Sharpe ratio of 1.68! (You can see a more detailed view of Tiebreaker’s performance below in this newsletter.)

BlackDog – Lucena’s Risk Parity - Since Inception 47.13% vs. benchmark of 20.32%

We have recently developed a sophisticated multi-sleeve optimization engine set to provide the most suitable asset allocation for a given risk profile, while respecting multi-level allocation restriction rules.

Essentially, we strive to reach an optimal decision while weighing the trade-offs between two or more conflicting objectives. For example, given a wide universe of constituents, we can find a subset and its respective allocations that satisfy the following:

  • Maximizing Sharpe
  • Wide diversification, with allocation restrictions across asset classes, market sectors and growth/value classifications
  • Restricting volatility
  • Minimizing turnover

We can also determine the proper rebalance frequency and validate the recommended methodology with a comprehensive backtest.
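As a rough, simplified illustration of one slice of this kind of problem (not our actual multi-sleeve engine), the sketch below maximizes Sharpe under a full-investment constraint and a hypothetical 30% per-asset cap using scipy; the return and covariance inputs are made-up toy data:

```python
# Illustrative sketch: maximize Sharpe subject to full investment,
# long-only bounds, and a hypothetical 30% per-asset cap.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 6
mu = rng.uniform(0.02, 0.10, n)       # toy expected returns
A = rng.standard_normal((n, n))
cov = A @ A.T / n + np.eye(n) * 0.01  # toy covariance matrix

def neg_sharpe(w):
    return -(w @ mu) / np.sqrt(w @ cov @ w)  # negative Sharpe (risk-free rate 0)

result = minimize(
    neg_sharpe,
    x0=np.full(n, 1.0 / n),
    bounds=[(0.0, 0.30)] * n,  # cap any one asset at 30%
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # fully invested
)
print(result.x.round(3), -result.fun)
```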

Image 2: BlackDog YTD – benchmark is AQR’s Risk Parity Fund Class B
Past performance is no guarantee of future returns.

Forecasting the Top 10 Positions in the S&P

Lucena’s Forecaster uses a predetermined set of 10 factors, selected from a large set of over 500. To self-adjust to the most recent data, we apply a genetic algorithm (GA) process that runs over the weekend to identify the most predictive set of factors on which our price forecasts are based. These factors (together called a “model”) are used to forecast the price of every stock in the S&P, along with a corresponding confidence score. Our machine-learning algorithm travels back in time over a look-back period (or training period) and searches for historical states in which the underlying equities were similar to their current state. By assessing how prices moved forward in the past, we anticipate their projected price change and forecast their volatility.
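As a heavily simplified, purely illustrative sketch of the general idea of matching today’s factor state to similar historical states (this is not Lucena’s Forecaster; the data and the confidence measure below are made up for illustration), consider:

```python
# Toy sketch: find historical days whose factor state resembles today's,
# then average what prices did next. All data and the confidence measure
# are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
history = rng.standard_normal((500, 10))        # 500 past days x 10 factor values
next_returns = rng.standard_normal(500) * 0.01  # toy realized forward returns
today = rng.standard_normal(10)                 # today's factor state

dists = np.linalg.norm(history - today, axis=1)
nearest = np.argsort(dists)[:25]                # 25 most similar historical states
forecast = next_returns[nearest].mean()         # projected price change
spread = next_returns[nearest].std()            # tighter spread -> more confidence
print(forecast, spread)
```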

The charts below represent the new model and the top 10 positions assessed by Lucena’s Price Forecaster.

Image 3: Default model for the coming week.

The top 10 forecast chart below delineates the ten positions in the S&P with the highest projected market-relative return combined with the highest confidence scores.

Image 4: Forecasting the top 10 positions in the S&P 500 for the coming week. The yellow stars (0 stars meaning poorest and 5 stars meaning strongest) represent the confidence score based on the forecasted volatility, while the blue stars represent a backtest score of how successful the machine was in forecasting the underlying asset over the lookback period -- in our case, the last 3 months.

To view brief videos of the major functions of QuantDesk, please click on the following links:
Forecaster
QuantDesk Overview

Analysis

The table below presents the trailing 12-month performance and a YTD comparison between the two model strategies we cover in this newsletter (BlackDog and Tiebreaker), as well as the two ETFs representing the major US indexes (the DOW and the S&P).

12 Month Performance BlackDog and Tiebreaker
Image 5: Last week’s changes, trailing 12 months, and year-to-date gains/losses.
Past performance is no guarantee of future returns.

Appendix

For those of you unfamiliar with BlackDog and Tiebreaker, here is a brief overview: BlackDog and Tiebreaker are two out of an assortment of model strategies that we offer our clients. Our team of quants is constantly on the hunt for innovative investment ideas. Lucena’s model portfolios are a byproduct of some of our best research, packaged into consumable model portfolios. The performance stats and charts presented here are a reflection of paper-traded portfolios on our platform, QuantDesk®. Actual performance of our clients’ portfolios may vary as it is subject to slippage and the manager’s discretionary implementation. We will be happy to facilitate an introduction with one of our clients for those of you interested in reviewing live brokerage accounts that track our model portfolios.

Tiebreaker: Tiebreaker is an actively managed long/short equity strategy. It invests in equities from the S&P 500 and Russell 1000 and is rebalanced bi-weekly using Lucena’s Forecaster, Optimizer and Hedger. Tiebreaker splits its cash evenly between its core and hedge holdings, and its hedge positions consist of long and short equities. Tiebreaker has been able to avoid major market drawdowns while still taking full advantage of subsequent run-ups. Tiebreaker is able to adjust its long/short exposure based on idiosyncratic volatility and risk. Lucena’s Hedge Finder is primarily responsible for driving this long/short exposure tilt.

Tiebreaker Model Portfolio Performance Calculation Methodology: Tiebreaker’s model portfolio performance is a paper trading simulation and assumes an opening account balance of $1,000,000 cash. Tiebreaker started to paper trade on April 28, 2014 as a cash-neutral and beta-neutral strategy; however, it was substantially modified to its current dynamic mode on 9/1/2014. Trade execution and return figures assume positions are opened at the 11:00AM EST price quoted by the primary exchange on which the security is traded and, unless a stop is triggered, closed at the 4:00PM EST price quoted by the primary exchange on which the security is traded. A trailing 5% stop loss is imposed, measured from the intra-week high (in the case of longs) or low (in the case of shorts). If the stop loss is triggered, the position is exited 5% below the high in the case of longs, or 5% above the low in the case of shorts. Tiebreaker assesses the price at which the position is exited with the following modification: prior to March 1st, 2016, at times but not at all times, if, in consultation with a client executing the strategy, it was found that the client received a less favorable price in closing out a position when a stop loss was triggered, the less favorable price was used in determining the exit price. On September 28, 2016 we applied new allocation algorithms to Tiebreaker and modified its rebalancing sequence to every two weeks (10 trading days). Since March 1st, 2016, all trades have been conducted automatically, with no modifications, based on the guidelines outlined herein. No manual modifications have been made to the gain stop prices. In instances where a position gaps through the trigger price, the initial open gapped trading price is utilized. Transaction costs are calculated as the larger of $6.95 per trade or $0.0035 per share traded.

BlackDog: BlackDog is a paper trading simulation of a tactical asset allocation strategy that utilizes highly liquid ETFs of large cap and fixed income instruments. The portfolio is adjusted approximately once per month based on Lucena’s Optimizer in conjunction with Lucena’s macroeconomic ensemble voting model. Due to BlackDog’s low volatility (half that of the market in backtesting), we leverage it 2X. By deploying twice its original cash assets, we take full advantage of its potential returns while maintaining market-relative low volatility and risk. As evidenced by the chart below, BlackDog 2X is substantially ahead of its benchmark (S&P 500).

In the past year, we covered QuantDesk's Forecaster, Back-tester, Optimizer, Hedger and our Event Study. In future briefings, we will keep you up-to-date on how our live portfolios are executing. We will also showcase new technologies and capabilities that we intend to deploy and make available through our premium strategies and QuantDesk®, our flagship cloud-based software.
My hope is that those of you who follow us closely will gain a good understanding of machine learning techniques in statistical forecasting and expertise in our suite of offerings and services.

Specifically:

  • Forecaster - Pattern recognition price prediction
  • Optimizer - Portfolio allocation based on risk profile
  • Hedger - Hedge positions to reduce volatility and maximize risk-adjusted return
  • Event Analyzer - Identify predictable behavior following a meaningful event
  • Back Tester - Assess an investment strategy through a historical test drive before risking capital

Your comments and questions are important to us and help to drive the content of this weekly briefing. I encourage you to continue to send us your feedback, your portfolios for analysis, or any questions you wish for us to showcase in future briefings.
Send your emails to: info@lucenaresearch.com and we will do our best to address each email received.

Please remember: This sample portfolio and the content delivered in this newsletter are for educational purposes only and are NOT intended as the basis for one's investment strategy. Beyond discounting market impact and not accounting for transaction costs, there are additional factors that can impact success. Hence, additional professional due diligence and investor insight should be applied before risking capital.

If you have any questions or comments on the above, feel free to contact me: erez@lucenaresearch.com

Have a great week!

Erez Katz Signature

erez@lucenaresearch.com


Disclaimer Pertaining to Content Delivered & Investment Advice

This information has been prepared by Lucena Research Inc. and is intended for informational purposes only. This information should not be construed as investment, legal and/or tax advice. Additionally, this content is not intended as an offer to sell or a solicitation of any investment product or service.

Please note: Lucena is a technology company and neither manages funds nor functions as an investment advisor. Do not take the opinions expressed explicitly or implicitly in this communication as investment advice. The opinions expressed are those of the author and are based on statistical forecasting over historical data.
Past performance does not guarantee future success. In addition, the assumptions and the historical data on which opinions are based could be faulty. All results and analyses expressed are hypothetical and are NOT guaranteed. All trading involves substantial risk. Leveraged trading has large potential reward but also large potential risk. Never trade with money you cannot afford to lose. If you are neither a registered nor a certified investment professional, this information is not intended for you. Please consult a registered or certified investment advisor before risking any capital.
The performance results for active portfolios following the screen presented here will differ from the performance contained in this report for a variety of reasons, including differences related to incurring transaction costs and/or investment advisory fees, as well as differences in the time and price that securities were acquired and disposed of, and differences in the weighting of such securities. The performance results for individuals following the strategy could also differ based on differences in treatment of dividends received, including the amount received and whether and when such dividends were reinvested. Historical performance can be revisited to correct errors or anomalies and ensure it most accurately reflects the performance of the strategy.