
Week Ending 3-12-18

Deep Reinforcement Learning (DRL)

Erez Katz writes about Deep Reinforcement Learning

by Erez Katz, CEO and Co-founder of Lucena Research.

Last week, we covered machine learning in the context of regression models. We discussed how the machine tries to solve the following problem with a mathematical equation: “Given a certain state of a publicly traded instrument (stock, ETF, mutual fund, etc.), identify where it’s heading.”

Image 1: Regression analysis: Given the state of a stock based on factors X1, X2, X3, compute where it’s heading (Y).

X(i) could be a technical, fundamental, macro or alternative data feature together with its point-in-time value (e.g., a P/E ratio of 12).

We described how the machine travels back in time and gathers many sample data points of states and outcomes. When it faces a new state, it consults its databank and looks for closely matched historical states (nearest neighbors, from which the term kNN, k-nearest neighbors, is derived). By inspecting how consistent the outcomes from similar states were, it can project a new outcome. Furthermore, based on how consistent those historical outcomes were, it can assign a confidence score to the projection.

Image 2: kNN output for Progressive Insurance (PGR) on QuantDesk®

The vertical blue line just to the left of the cone represents today, while the cone represents the distribution of future prices across all of the neighbors’ outcomes. The bold mean line represents our forecast, and the confidence score is based on how wide or narrow the cone is. Since the cone is simply the distribution of projections from the nearest neighbors gathered over the lookback period, a narrower cone represents higher confidence.
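
To make the mechanics concrete, here is a minimal Python sketch of the kNN forecast described above, using scikit-learn’s NearestNeighbors on synthetic data. The factor set, forecast horizon, neighbor count and confidence heuristic are illustrative assumptions, not QuantDesk’s actual implementation.

# Minimal kNN forecasting sketch (illustrative only, not QuantDesk's implementation).
# Assumes we already have a history of factor "states" and the forward returns
# ("outcomes") that followed each state.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical training data: 1,000 historical states, 3 factors each
# (e.g., P/E ratio, momentum, volatility), plus the forward return that
# followed each state.
historical_states = rng.normal(size=(1000, 3))
forward_returns = rng.normal(loc=0.01, scale=0.05, size=1000)

# Index the historical states so a new state can be matched to its near neighbors.
knn = NearestNeighbors(n_neighbors=25).fit(historical_states)

def forecast(current_state):
    """Project the forward return for a new state from its nearest neighbors."""
    _, idx = knn.kneighbors(current_state.reshape(1, -1))
    neighbor_outcomes = forward_returns[idx[0]]
    mean_forecast = neighbor_outcomes.mean()   # plays the role of the bold mean line
    dispersion = neighbor_outcomes.std()       # plays the role of the cone's width
    confidence = 1.0 / (1.0 + dispersion)      # narrower cone -> higher confidence
    return mean_forecast, confidence

print(forecast(np.array([0.2, -1.0, 0.5])))

The mean of the neighbors’ outcomes plays the role of the bold forecast line, and the spread of those outcomes plays the role of the cone.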

The shortfall of regression-based machine learning is that, used alone, it can’t tell us how to act on the model’s output. For example, imagine we have two scenarios:

  1. A low return with high confidence.
  2. A high return with low confidence.

Which Scenario is More Favorable?

Regression models alone are not suited to answering these types of questions. Policies of this kind, rooted in behavioral science, are squarely in the wheelhouse of Deep Reinforcement Learning (DRL).

Deep Reinforcement Learning (DRL)

To understand DRL, we have to make a clear distinction between Deep Learning and Reinforcement Learning.

What is Deep Learning?

Deep learning (also called deep nets) is a subset of machine learning that uses hierarchical layers of artificial neural networks to carry out many connected machine learning tasks. Artificial neural networks are loosely modeled on the human brain, with neuron nodes connected together like a web. Each neuron is responsible for solving a narrowly defined problem; together, the web of neurons is able to solve complex problems.

For the purpose of today’s discussion, I’d like to conceptualize the attributes of deep learning at a high level. Specifically, it’s important to recognize that like other forms of artificial intelligence, deep learning holds three important attributes:

  • Generalization – Inferring an outcome for a new, yet-to-be-seen state (or situational input) by generalizing the solution. By generalizing the way an outcome is derived from an input (for example, through a formula), we don’t need a reference for every possible situation. Rather, we simply feed a state into the formula and, voila, it returns our outcome.
  • Randomness – Assessing and qualifying outcomes that start from random inputs. In other words, given a random state, the machine quantifies how far an output is from the expected outcome: the desired outcome if we had perfect information.
  • Self-Adjustment – The ability to adjust a model mathematically, in the right direction, so that it gets closer and closer to the desired outcome. This is done through a process called error minimization, a learning process that converges on the target by adjusting the model over many random points of reference (see the sketch after this list).
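
To ground these three attributes, here is a toy error-minimization sketch (plain gradient descent on a linear model). It is an assumed illustration, not Lucena’s training code: the model generalizes via a parameterized formula, starts from random weights, and self-adjusts by repeatedly reducing its error.

# Toy illustration of the three attributes above (assumed example): a parameterized
# formula generalizes from inputs to outputs, starts from random weights, and
# self-adjusts by minimizing error over many samples.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: outcome y depends on three input factors X.
X = rng.normal(size=(500, 3))
true_w = np.array([0.5, -0.2, 0.8])
y = X @ true_w + rng.normal(scale=0.1, size=500)

w = rng.normal(size=3)           # random initial model (the "randomness" attribute)
learning_rate = 0.1

for _ in range(200):             # self-adjustment via error minimization
    error = X @ w - y            # how far off the model is from the desired outcome
    gradient = X.T @ error / len(y)
    w -= learning_rate * gradient  # nudge the model in the direction that reduces error

print(w)  # approaches true_w: the fitted "formula" now generalizes to unseen states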

Deep learning has traditionally been used for image and speech recognition. However, with the meteoric growth in alternative data, machine learning technology, and accessible computing power, it is now often used in finance as well.

Reinforcement Learning

Reinforcement learning is derived from century-old research in psychology, where learning is the process of mapping situations to actions in order to maximize a reward or minimize a punishment. Much like a rat learning to find the right path in a maze, the learner is not told which action to take, but must instead discover which actions yield the highest reward through trial and error.

Image 3: Rather than learning a model to make a prediction, Reinforcement Learning (RL) learns a policy.

A policy takes an input state and recommends the best action (the output). For example, consider the state of a stock in the context of its technical and fundamental factors: reinforcement learning will determine a policy of buy, hold or sell. One additional important characteristic of reinforcement learning is the concept of a reward. The learner is trained by rolling back time, making decisions in various situational states, and then assessing each outcome in terms of its reward (a positive or negative daily return, for example).

Image 4: Reinforcement Learning (RL) learns a policy by quantifying the outcome as reward.

As you can imagine, there are endless situations a single stock can be in. Deep reinforcement learning dynamically creates and modifies a policy table (Q-table) that can be consulted for the various situations (states) it encounters, with the goal of choosing the action that maximizes the reward (investment return, in our case).

Image 5: The Q-table continuously updates as it learns how a policy yields consistent rewards.

The reinforcement learning process is the formation and adjustment of policies (the Q-table) by taking an action and assessing its reward. For example, imagine we have two identical states based on certain technical and fundamental factors for Apple (AAPL). In one case, the decision to buy AAPL generated a reward of $100; in the other, buying AAPL lost $200. Such an inconsistent outcome would typically lead to splitting a single policy into two more granular policies. The learner can identify an additional factor (a social media sentiment score, for example) that differed sharply between the two states. In essence, the machine has updated its policy to be more granular, so that the two seemingly identical states can be distinguished the next time they are encountered.
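
A minimal tabular Q-learning sketch makes the mechanics concrete. The state buckets, reward and learning parameters below are assumptions for illustration only, not Lucena’s DRL engine.

# Minimal tabular Q-learning sketch for a buy/hold/sell policy
# (illustrative assumptions throughout).
import random
from collections import defaultdict

ACTIONS = ["buy", "hold", "sell"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

# Q-table: maps (state, action) -> expected reward. States here are coarse,
# hypothetical factor buckets, e.g. ("momentum_up", "pe_low").
Q = defaultdict(float)

def choose_action(state):
    """Epsilon-greedy policy: mostly exploit the table, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update: nudge the policy toward observed rewards."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One simulated step: in a given state, the learner buys, observes a daily
# return of +1% as the reward, and lands in a new state.
state, next_state = ("momentum_up", "pe_low"), ("momentum_flat", "pe_low")
action = choose_action(state)
update(state, action, reward=0.01, next_state=next_state)
print(action, Q[(state, action)])

In deep reinforcement learning, this lookup table is replaced by a deep neural network that approximates the Q-values, which is what makes the approach practical when the number of possible states is far too large to enumerate.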


The concepts above may seem abstract and hard to grasp, but the evolution of deep learning and reinforcement learning is very exciting for us -- particularly since reinforcement learning aligns so naturally with investment objectives. Here are just a few reference points:

  • The learner can be trained to optimize the same objective as investors (risk-adjusted return, for example; see the sketch after this list).
  • The learner tries to distinguish between different states and consider an appropriate policy for each:
    • Risk of loss
    • A possible, but unlikely, large return
    • A probable small return
    • An inconclusive state in which doing nothing (staying in cash) is the smart move
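
As an assumed illustration of the first point, a reward can be defined as the return earned minus a penalty for the volatility taken on to earn it:

# Assumed example of a risk-adjusted reward the learner could be trained on
# (illustrative only): the period return penalized by the volatility incurred.
import numpy as np

def risk_adjusted_reward(daily_returns, risk_penalty=0.5):
    """Reward = mean return minus a penalty proportional to volatility."""
    daily_returns = np.asarray(daily_returns)
    return daily_returns.mean() - risk_penalty * daily_returns.std()

print(risk_adjusted_reward([0.01, -0.005, 0.02, 0.0]))  # small, volatility-penalized reward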

For those of you who wish to learn more about the application of DRL, feel free to listen to Dr. Tucker Balch, co-founder and chief scientist, in last week’s webinar with our friends at the Nasdaq Analytics Hub.

Strategies Update

As in the past, we will provide weekly updates on how the model portfolios and the theme-based strategies we cover in this newsletter are performing.

Tiebreaker – Lucena’s Long/Short Equity Strategy - YTD return of -0.55% vs. benchmark of 0.34%
Image 1: Tiebreaker YTD– benchmark is VMNIX (Vanguard Market Neutral Fund Institutional Shares)
Past performance is no guarantee of future returns.

Tiebreaker has been forward traded since 2014. To date it boasts an impressive cumulative return of 50.65%, remarkably low volatility as expressed by a max drawdown of only 6.16%, and a Sharpe ratio of 1.74. (You can see a more detailed view of Tiebreaker’s performance below in this newsletter.)

BlackDog – Lucena’s Risk Parity - YTD return of -0.54% vs. benchmark of -1.63%

We have recently developed a sophisticated multi-sleeve optimization engine set to provide the most suitable asset allocation for a given risk profile, while respecting multi-level allocation restriction rules.

Essentially, we strive to reach an optimal decision while weighing the trade-offs between two or more conflicting objectives. For example, given a wide universe of constituents, we can find a subset and its respective allocations that satisfy the following:

  • Maximizing the Sharpe ratio
  • Maintaining a widely diversified portfolio, with allocation restrictions across certain asset classes, market sectors and growth/value classifications
  • Restricting volatility
  • Minimizing turnover

We can also determine the proper rebalance frequency and validate the recommended methodology with a comprehensive backtest.
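
As a simplified sketch of this kind of trade-off (not our multi-sleeve engine), the snippet below maximizes a Sharpe-like objective with a turnover penalty, subject to a fully-invested constraint and a per-asset allocation cap, using SciPy’s SLSQP solver on hypothetical inputs:

# Simplified sketch of a constrained allocation trade-off (illustrative assumptions):
# maximize a Sharpe-like objective with a turnover penalty, subject to full
# investment and per-asset allocation caps.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_assets = 5
expected_returns = rng.uniform(0.02, 0.10, n_assets)   # hypothetical expected returns
cov = np.diag(rng.uniform(0.01, 0.04, n_assets))       # hypothetical covariance matrix
current_weights = np.full(n_assets, 1.0 / n_assets)    # existing allocation
turnover_penalty = 0.1

def objective(w):
    ret = w @ expected_returns
    vol = np.sqrt(w @ cov @ w)
    turnover = np.abs(w - current_weights).sum()
    # Negative because scipy minimizes; we want high Sharpe and low turnover.
    return -(ret / vol) + turnover_penalty * turnover

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully invested
bounds = [(0.0, 0.4)] * n_assets                                # cap any single asset at 40%

result = minimize(objective, current_weights, bounds=bounds, constraints=constraints)
print(np.round(result.x, 3))

In practice the objective, constraints and penalty weights would be driven by the client’s risk profile and allocation restriction rules.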

Image 2: BlackDog YTD– benchmark is AQR’s Risk Parity Fund Class B
Past performance is no guarantee of future returns.

Forecasting the Top 10 Positions in the S&P

Lucena’s Forecaster uses a predetermined set of 10 factors selected from a universe of over 500. To self-adjust to the most recent data, we run a genetic algorithm (GA) process over the weekend to identify the most predictive set of factors on which our price forecasts are based. These factors (together called a “model”) are used to forecast the price, and a corresponding confidence score, for every stock in the S&P. Our machine learning algorithm travels back in time over a lookback period (or training period) and searches for historical states in which the underlying equities were similar to their current state. By assessing how prices moved forward from those states in the past, we project the expected price change and forecast its volatility.
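
As a highly simplified sketch of the idea (the actual GA and its fitness function are Lucena’s own), the snippet below evolves subsets of 10 factors out of 500 synthetic candidates, scoring each subset by how well a least-squares fit on those factors explains forward returns:

# Highly simplified genetic-algorithm factor selection sketch (assumptions throughout).
# Goal: pick 10 of 500 candidate factors whose linear fit best explains forward returns.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_factors, subset_size = 400, 500, 10

factors = rng.normal(size=(n_samples, n_factors))          # hypothetical factor history
forward_returns = factors[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_samples)

def fitness(subset):
    """Score a factor subset by how well a least-squares fit explains returns."""
    X = factors[:, subset]
    coef, *_ = np.linalg.lstsq(X, forward_returns, rcond=None)
    predictions = X @ coef
    return np.corrcoef(predictions, forward_returns)[0, 1]

def mutate(subset):
    """Swap one factor in the subset for a random factor; keep subsets duplicate-free."""
    child = subset.copy()
    child[rng.integers(subset_size)] = rng.integers(n_factors)
    return child if len(np.unique(child)) == subset_size else subset

population = [rng.choice(n_factors, subset_size, replace=False) for _ in range(30)]
for _ in range(50):                                         # evolve for 50 generations
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                             # keep the fittest subsets
    population = survivors + [mutate(s) for s in survivors for _ in range(2)]

population.sort(key=fitness, reverse=True)
print(sorted(population[0]))                                # best factor subset found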

The charts below represent the new model and the top 10 positions assessed by Lucena’s Price Forecaster.

Image 3: Default model for the coming week.

The top 10 forecast chart below delineates the ten positions in the S&P 500 with the highest projected market-relative return combined with the highest confidence scores.

Image 4: Forecasting the top 10 positions in the S&P 500 for the coming week. The yellow stars (0 stars meaning poorest, 5 stars meaning strongest) represent the confidence score based on the forecasted volatility, while the blue stars represent a backtest score of how successfully the machine forecasted the underlying asset over the lookback period -- in our case, the last 3 months.

To view a brief video of all the major functions of QuantDesk, please click on the following link:
QuantDesk Overview


The table below presents the trailing 12-month performance and a YTD comparison between the two model strategies we cover in this newsletter (BlackDog and Tiebreaker), as well as the two ETFs representing the major US indexes (the DOW and the S&P).

12 Month Performance BlackDog and Tiebreaker
Image 5: Last week’s changes, trailing 12 months, and year-to-date gains/losses.
Past performance is no guarantee of future returns.

Model Tiebreaker: Lucena’s Active Long/Short US Equities Strategy:

12 Month Performance BlackDog and Tiebreaker
Tiebreaker: Paper trading model portfolio performance compared to Vanguard Market Neutral Fund since 9/1/2014. Past performance is no guarantee of future returns.

Model BlackDog 2X: Lucena’s Tactical Asset Allocation Strategy:

12 Month Performance BlackDog and Tiebreaker
BlackDog: Paper trading model portfolio performance compared to the SPY and Vanguard Balanced Index Fund since 4/1/2014. Past performance is no guarantee of future returns.


For those of you unfamiliar with BlackDog and Tiebreaker, here is a brief overview: BlackDog and Tiebreaker are two out of an assortment of model strategies that we offer our clients. Our team of quants is constantly on the hunt for innovative investment ideas. Lucena’s model portfolios are a byproduct of some of our best research, packaged into consumable model-portfolios. The performance stats and charts presented here are a reflection of paper traded portfolios on our platform, QuantDesk®. Actual performance of our clients’ portfolios may vary as it is subject to slippage and the manager’s discretionary implementation. We will be happy to facilitate an introduction with one of our clients for those of you interested in reviewing live brokerage accounts that track our model portfolios.

Tiebreaker: Tiebreaker is an actively managed long/short equity strategy. It invests in equities from the S&P 500 and Russell 1000 and is rebalanced bi-weekly using Lucena’s Forecaster, Optimizer and Hedger. Tiebreaker splits its cash evenly between its core and hedge holdings, and its hedge positions consist of long and short equities. Tiebreaker has been able to avoid major market drawdowns while still taking full advantage of subsequent run-ups. Tiebreaker is able to adjust its long/short exposure based on idiosyncratic volatility and risk. Lucena’s Hedge Finder is primarily responsible for driving this long/short exposure tilt.

Tiebreaker Model Portfolio Performance Calculation Methodology: Tiebreaker’s model portfolio performance is a paper trading simulation and assumes an opening account balance of $1,000,000 in cash. Tiebreaker began paper trading on April 28, 2014 as a cash-neutral and beta-neutral strategy; it was substantially modified to its current dynamic mode on 9/1/2014. Trade execution and return figures assume positions are opened at the 11:00 AM EST price quoted by the primary exchange on which the security is traded and, unless a stop is triggered, closed at the 4:00 PM EST price quoted by that exchange. A trailing 5% stop loss is imposed, measured from the intra-week high (for longs) or low (for shorts). If the stop loss is triggered, the position is exited 5% below that high for longs, or 5% above that low for shorts, with the following modification: prior to March 1st, 2016, at times but not at all times, if, in consultation with a client executing the strategy, it was found that the client received a less favorable price when closing out a stopped position, the less favorable price was used as the exit price. On September 28, 2016 we applied new allocation algorithms to Tiebreaker and modified its rebalancing sequence to every two weeks (10 trading days). Since March 1st, 2016, all trades have been conducted automatically, with no modifications, under the guidelines outlined herein. No manual modifications have been made to the gain stop prices. In instances where a position gaps through the trigger price, the initial gapped opening trade price is used. Transaction costs are calculated as the larger of $6.95 per trade or $0.0035 per share traded.
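
For readers who want the arithmetic spelled out, the small helpers below restate the transaction-cost and trailing-stop rules above; the inputs are hypothetical:

# Helpers restating the cost and stop-loss arithmetic described above
# (a sketch of the stated rules, not Lucena's execution code).

def transaction_cost(shares_traded: int) -> float:
    """Larger of a $6.95 flat fee or $0.0035 per share traded."""
    return max(6.95, 0.0035 * shares_traded)

def trailing_stop_exit(intra_week_extreme: float, is_long: bool) -> float:
    """Exit 5% below the intra-week high for longs, 5% above the intra-week low for shorts."""
    return intra_week_extreme * (0.95 if is_long else 1.05)

print(transaction_cost(1000))            # max(6.95, 3.50) -> 6.95
print(trailing_stop_exit(100.0, True))   # long stopped out at 95.00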

BlackDog: BlackDog is a paper trading simulation of a tactical asset allocation strategy that utilizes highly liquid large-cap and fixed income ETFs. The portfolio is adjusted approximately once per month based on Lucena’s Optimizer in conjunction with Lucena’s macroeconomic ensemble voting model. Because of BlackDog’s low volatility (roughly half that of the market in backtesting), we leverage it 2X. By exposing twice its original cash assets, we take fuller advantage of its potential returns while maintaining low volatility and risk relative to the market. As evidenced by the chart below, BlackDog 2X is substantially ahead of its benchmark (the S&P 500).

In the past year, we have covered QuantDesk’s Forecaster, Back-tester, Optimizer, Hedger and Event Study. In future briefings, we will keep you up to date on how our live portfolios are performing. We will also showcase new technologies and capabilities that we intend to deploy and make available through our premium strategies and QuantDesk®, our flagship cloud-based software.
My hope is that those of you who follow us closely will gain a good understanding of machine learning techniques in statistical forecasting and will gain expertise in our suite of offerings and services.


  • Forecaster - Pattern recognition price prediction
  • Optimizer - Portfolio allocation based on risk profile
  • Hedger - Hedge positions to reduce volatility and maximize risk adjusted return
  • Event Analyzer - Identify predictable behavior following a meaningful event
  • Back Tester - Assess an investment strategy through a historical test drive before risking capital

Your comments and questions are important to us and help to drive the content of this weekly briefing. I encourage you to continue to send us your feedback, your portfolios for analysis, or any questions you wish for us to showcase in future briefings.
Send your emails to: and we will do our best to address each email received.

Please remember: This sample portfolio and the content delivered in this newsletter are for educational purposes only and are NOT intended as the basis for one's investment strategy. Beyond discounting market impact and excluding transaction costs, there are additional factors that can impact success. Hence, additional professional due diligence and investors' insights should be applied before risking capital.

If you have any questions or comments on the above, feel free to contact me:

Have a great week!

Erez Katz Signature

Disclaimer Pertaining to Content Delivered & Investment Advice

This information has been prepared by Lucena Research Inc. and is intended for informational purposes only. This information should not be construed as investment, legal and/or tax advice. Additionally, this content is not intended as an offer to sell or a solicitation of any investment product or service.

Please note: Lucena is a technology company and neither manages funds nor functions as an investment advisor. Do not take the opinions expressed explicitly or implicitly in this communication as investment advice. The opinions expressed are of the author and are based on statistical forecasting on historical data analysis.
Past performance does not guarantee future success. In addition, the assumptions and the historical data on which opinions are based could be faulty. All results and analyses expressed are hypothetical and are NOT guaranteed. All trading involves substantial risk. Leveraged trading has large potential rewards but also large potential risk. Never trade with money you cannot afford to lose. If you are neither a registered nor a certified investment professional, this information is not intended for you. Please consult a registered or certified investment advisor before risking any capital.
The performance results for active portfolios following the screen presented here will differ from the performance contained in this report for a variety of reasons, including differences related to incurring transaction costs and/or investment advisory fees, as well as differences in the time and price that securities were acquired and disposed of, and differences in the weighting of such securities. The performance results for individuals following the strategy could also differ based on differences in treatment of dividends received, including the amount received and whether and when such dividends were reinvested. Historical performance can be revisited to correct errors or anomalies and ensure it most accurately reflects the performance of the strategy.

Lucena Research
10 10th Street NW, Suite #410
Atlanta, GA 30309
p. 404-907-1702 Ext: 101 | f. 404-751-0132
To ensure you receive our future e-mails, make sure you add to your address book.

Unsubscribe from this list

Copyright © 2018 Lucena Research Inc, All rights reserved.