## Recession Model Forecast: 01-01-2020

Before I update the recession forecast, I want to share some new developments. First, I have decided not to return to teach in the MBA program at Carolina next year. I had hoped that teaching part-time would still allow me to continue all of my research and proprietary trading efforts, but that has not proven to be the case. While it is personally rewarding to work with the students, the opportunity cost of teaching is currently too high.

After completing my MBA derivatives class in December, I devoted the last six weeks to implementing improvements to the recession models and to setting up a new high-powered Windows 10 laptop to replace my two Windows 7 desktop computers. The move to Windows 10 introduced compatibility issues with my existing software, so I also had to replace the neural network software I use to make recession forecasts.

I began designing neural networks over 20 years ago, but I do not create new neural network models on a regular basis. As a result, I took this opportunity to get up to speed on the latest developments in AI, particularly deep learning and the corresponding new types of network layers, activation functions, and optimization algorithms. I also took two online classes and experimented with several different software packages. After researching AI platforms, I purchased one AI software suite and am also using a separate deep learning package that is currently available for free (and integrates with Python).

I have made several important improvements to the recession forecasting models. Over the past several years, I added a number of new explanatory variables and dropped a few others, so I did not need to add any new variables at this time. However, I did implement a new approach to quantifying the trend in every explanatory variable. This approach smooths the trend calculation, which further reduces the impact of data outliers – an issue I discussed after the most recent government shutdown. It also makes all of the trend calculations more robust by reducing the potential for over-fitting.
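The exact trend formula is not disclosed, but the underlying idea (smooth the series first, then measure its slope, so that single outliers carry less weight) can be sketched as follows; the window lengths here are hypothetical:

```python
import numpy as np

def smoothed_trend(series, smooth_window=3, trend_window=12):
    """Estimate the trend of an explanatory variable by smoothing it
    with a trailing moving average and then fitting a least-squares
    slope to the last `trend_window` smoothed values.  Smoothing damps
    one-off outliers (e.g. shutdown-distorted data points) before the
    slope is measured."""
    series = np.asarray(series, dtype=float)
    kernel = np.ones(smooth_window) / smooth_window
    smoothed = np.convolve(series, kernel, mode="valid")
    window = smoothed[-trend_window:]
    x = np.arange(len(window))
    slope, _intercept = np.polyfit(x, window, 1)
    return slope
```

For a cleanly trending series the slope is recovered directly, while a single outlier only influences the few smoothed points whose averaging window contains it.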

The original peak-trough neural network models were derived from the initial diffusion index. All of the new models are based on four explanatory variables: the original diffusion index, the 0.5 sigma diffusion index, the median recession slack index, and a moving average of the percentage of explanatory variables with increasing slack. I have discussed each of these metrics in past recession model reports. In addition to the latest values of these four variables, their trends (calculated with the same approach used for the individual explanatory variables) are also input into the models. This results in a maximum of eight variables used in the new neural network peak-trough models.
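Concretely, the model input can be pictured as a fixed-order vector: the four index levels followed by their four trends. A minimal sketch, with hypothetical names and a pluggable trend estimator:

```python
import numpy as np

def build_inputs(histories, trend_fn):
    """Assemble the eight-element network input: the latest level of
    each of the four summary indices, followed by the trend of each.
    `histories` maps index name -> list of values (most recent last);
    `trend_fn` computes a scalar trend from a series."""
    names = ["diffusion", "diffusion_0.5_sigma",
             "median_slack", "pct_increasing_slack"]
    levels = [histories[n][-1] for n in names]
    trends = [trend_fn(histories[n]) for n in names]
    return np.array(levels + trends)
```

Here `trend_fn` could be any trend estimator, such as the smoothed slope described above or a simple last-minus-first difference.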

Due to the complexity of the problem, I used neural network models exclusively to build the peak-trough models. As is always advisable with neural networks, I aggregate the results from a number of different neural network models to arrive at the peak-trough forecast. Each neural network differs in architecture, training set, activation function, or optimization algorithm. In addition, I went to great pains to prevent the neural network models from over-fitting the data, including withholding validation and testing data sets and limiting the size of the networks.
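The two safeguards described here, withholding data and aggregating across models, can be sketched as follows (purely illustrative; the actual architectures and data are not disclosed):

```python
import numpy as np

def chronological_split(data, train_frac=0.6, val_frac=0.2):
    """Withhold validation and test sets.  For time series a
    chronological split is a common choice: the networks never see
    the later observations during training."""
    n = len(data)
    i = int(n * train_frac)
    j = int(n * (train_frac + val_frac))
    return data[:i], data[i:j], data[j:]

def ensemble_forecast(model_probs):
    """Aggregate peak-trough probabilities from several independently
    trained networks by averaging; the idiosyncratic errors of models
    with different architectures, training sets, activation functions,
    and optimizers tend to wash out in the mean."""
    return np.mean(np.asarray(model_probs, dtype=float), axis=0)
```

The split fractions are hypothetical; the key point is that model selection uses the validation set and final evaluation uses the untouched test set.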

I also re-estimated the probit and logit functions (from a subset of the eight variables used as inputs for the neural networks) for the standard recession model (which forecasts the probability the U.S. economy is *currently* in a recession). This is a much easier problem than the peak-trough estimation. As a result, neural networks are not required for the standard recession model. The probit and logit functions were sufficiently powerful.

I am excited about the recession model improvements, which combine all of the new metrics I have implemented over the past few years, plus a number of new cutting-edge tools and techniques. All of the current and historical forecasts presented going forward will be based on the new models.
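For readers unfamiliar with these estimators, a logit of this general form can be fit with nothing more than gradient ascent on the log-likelihood. The sketch below uses synthetic data, since the actual explanatory variables are proprietary:

```python
import numpy as np

def fit_logit(X, y, lr=0.1, steps=5000):
    """Estimate logistic-regression coefficients by gradient ascent on
    the log-likelihood.  X: (n, k) feature matrix; y: (n,) 0/1
    indicator of whether the economy is currently in recession."""
    X = np.column_stack([np.ones(len(X)), X])   # intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # predicted probabilities
        beta += lr * X.T @ (y - p) / len(y)     # likelihood gradient
    return beta

def recession_prob(beta, x):
    """Probability the economy is currently in recession, given x."""
    return 1.0 / (1.0 + np.exp(-(beta[0] + np.dot(beta[1:], x))))
```

A probit would simply replace the logistic link with the standard normal CDF; for a well-behaved classification problem like this one, either link is adequate.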

## December Update

This article updates the diffusion indices, recession slack index, aggregate recession model, and aggregate peak-trough model through December 2019. The current *26-variable* model has a diverse set of explanatory variables and is quite robust. Each of the explanatory variables has predictive power individually; when combined, the group of indicators is able to identify early recession warnings from a wide range of market-based, fundamental, technical, and economic sources.

Several of the explanatory variables are market-based. These variables are available in real time (no lag), which means they respond very quickly to changing market conditions. In addition, they are never revised. This makes the Trader Edge recession model more responsive than many recession models. The current *and* historical data in this report reflect the current model configuration with all *26 variables*.
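The diffusion-index idea referenced throughout measures the breadth of deterioration across the indicator set, and reduces to a one-line calculation (illustrative only; the post does not disclose how each variable's signal is defined):

```python
def diffusion_index(deteriorating):
    """Fraction of explanatory variables currently signaling
    deterioration.  `deteriorating` holds one boolean per variable
    (True = the variable is worsening)."""
    return sum(deteriorating) / len(deteriorating)
```

With the 26-variable configuration, 13 deteriorating indicators would put the index at 0.5; one natural reading of the 0.5 sigma variant is that it applies a stricter threshold before a variable counts as deteriorating, though that is an assumption here.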