Rekenthaler Report

Does Academic Research Kill Returns?

Two professors study whether publishing investment-strategy research dilutes the alpha of those strategies.

Spreading the Word
Academic research should hurt active fund managers. Professional managers use their training and resources to discover what others cannot. If an academic paper lifts the veil to expose their findings, the untrained can easily bum a free ride, thereby squeezing--or perhaps even eliminating--those trades' profits.

Two professors, David McLean and Jeffrey Pontiff, have put that notion to the test. In "Does Academic Research Destroy Stock Return Predictability?" they study the performance of 82 investment strategies (which they unromantically call "cross-sectional relations," or "characteristics") that were found to bring positive risk-adjusted returns in previous academic articles.

Examples of such strategies would be buying value, high-momentum, or smaller-company stocks; making trades based on changes in credit ratings or dividend announcements; devising an approach based on insider transactions; and expecting mean reversion in securities prices over the long term. The papers were published over a 40-year period, from 1971 to 2011.
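(For readers who want to see the mechanics, here is a minimal sketch, in Python, of how one such characteristic becomes a tradable long-short portfolio. The momentum scores and returns below are simulated, not the paper's data; the decile sort is the standard academic construction.)

    # Illustrative only: form a long-short portfolio on one "characteristic."
    import numpy as np

    rng = np.random.default_rng(0)
    n_stocks = 1000
    momentum = rng.normal(size=n_stocks)   # hypothetical characteristic scores
    # Fake next-month returns with a small built-in reward for the characteristic.
    next_ret = 0.01 * momentum + rng.normal(scale=0.05, size=n_stocks)

    # Sort stocks into deciles; go long the top decile, short the bottom.
    cuts = np.quantile(momentum, np.linspace(0.1, 0.9, 9))
    decile = np.digitize(momentum, cuts)   # 0 (lowest) through 9 (highest)
    long_short = next_ret[decile == 9].mean() - next_ret[decile == 0].mean()
    print(f"long-short monthly return: {long_short:.2%}")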

The authors hypothesized that the out-of-sample returns for those 82 strategies would fall short of the original, in-sample results.

My immediate thought was "of course." After all, decay is the nature of the data-mining beast. Assume that the authors did find erosion. How could they determine the cause of the problem? The investment strategies' loss of effectiveness could be because investors noticed the academic papers and changed their investment habits accordingly. Or it could be, well, because.

Such is the value of an immediate thought. It might be good for spotting a potential date (or not), but it is likely to be of little use when evaluating an academic paper that has previously been vetted at several conferences and by dozens of preliminary readers. Yes, the authors were well aware of that danger. Consequently, they designed their performance test to cover three time periods: 1) the original in-sample period; 2) the time immediately following the in-sample data, but before the paper was published; and 3) the time after the paper's publication. If academic research helps to cause a strategy's demise, then the third period's performance will lag that of the second.
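(Again for the mechanically minded, a rough sketch of that three-period split, with made-up dates and a simulated alpha series; the point is only the shape of the comparison, not the numbers.)

    # Illustrative only: compare average alpha across the three periods.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    months = pd.date_range("1980-01-31", "2010-12-31", freq="ME")
    alpha = pd.Series(rng.normal(loc=0.006, scale=0.02, size=len(months)), index=months)

    sample_end = pd.Timestamp("1994-12-31")  # hypothetical end of the in-sample data
    pub_date = pd.Timestamp("1997-06-30")    # hypothetical publication date

    periods = {
        "in-sample": alpha[alpha.index <= sample_end],
        "pre-publication": alpha[(alpha.index > sample_end) & (alpha.index <= pub_date)],
        "post-publication": alpha[alpha.index > pub_date],
    }
    for name, series in periods.items():
        print(f"{name:>16}: {12 * series.mean():.1%} annualized alpha")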

(As the authors acknowledge, the line separating the second and third segments can be fuzzy. Because academic papers tend to be passed around before they are published, their discoveries leak ahead of the official publication date. Then again, even the strongest article does not immediately change investment habits. It takes time for the words to create actions. Thus, argue the authors, using the publication date is a reasonable compromise.)

That is indeed what the professors found. On average, long-short versions of the 82 investment strategies enjoyed risk-adjusted returns (alpha) of 9.5% per year within the data sample. In the second period, after the initial study was concluded but before the paper was published, the alpha dipped by 13%, to 8.5% annualized. After publication, alpha fell by the much larger percentage of 44%, landing at 4.8%.

(These figures appear in an unpublished version of the paper that the authors will post next week. This column links to the most recent version that is publicly available, which contains somewhat different numbers.)

In other words, the erosion caused by the publication was a larger effect than the erosion that occurred merely by moving from in-sample to out-of-sample. Not just in raw numbers, either. The authors found the publication effect to be statistically significant, while the second period's modest one-percentage-point drop was not.

I then had another idea. Surely the financial markets have become more efficient over time. (This column has previously touched on several arguments to that effect, coming from both the academic and institutional-investment communities.) Is it not natural, then, that the largest gains should occur during the in-sample periods; that results would slip modestly for the short time that the papers await publication; and that gains would decline more steeply in the long years after publication? Have the authors discovered nothing more than a time effect?

Yes, you guessed it--the professors thought of that, too. (They tend, though, to couch the time effect in terms of lower trading costs rather than greater market efficiency.) If time abrades alpha, the authors reason, one should see a trend within the in-sample and post-publication periods. Performance should be strongest at the beginning of each of those periods and then gradually tail away, being weaker at the end. However, the numbers indicate otherwise. The strategy gains actually improve over time during the in-sample period, and while the figures do slip a bit post-publication, that movement is not statistically significant.
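(Once more in sketch form: the trend test amounts to regressing each period's monthly returns on time and asking whether the slope is significantly negative. Simulated data again, not the paper's.)

    # Illustrative only: test for a within-period time trend in returns.
    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(2)
    post_pub = rng.normal(loc=0.004, scale=0.02, size=120)  # 10 years of monthly returns
    t = np.arange(len(post_pub))

    fit = linregress(t, post_pub)
    print(f"slope per month: {fit.slope:.5f}, p-value: {fit.pvalue:.2f}")
    # A significantly negative slope would support the "time erodes alpha" story;
    # the authors report no such trend within either period.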

In short, at least on an initial reading--and recognizing all the dangers associated with that word "initial"--the paper's findings seem quite credible.

Those numbers quite surprised me. I would have expected about half the alpha to disappear in the move from in-sample to out-of-sample, and then the rest to fade in the years after publication. After all, few active mutual fund managers post consistently positive alphas even when using secret tactics. Surely, then, the pickings must be slim indeed for those following publicly known strategies.

Upon further reflection, though, the figures start to make more sense. They are, after all, theoretical returns, generated on long-short portfolios, in the absence of taxes, trading costs, or investment-management fees. In that context, alphas of just under 5% even after publication do seem possible.

Also, although the totals are risk-adjusted, many of the strategies contain perils (illiquidity, economic sensitivity, possible violations of institutional mandates) that are not necessarily captured by standard-deviation measures. The strategies' higher returns may well reflect that their lunch is not entirely free.

In short, there is certainly damage done by academic publications, but it does not appear to be fatal. That is good news not only for active managers, but for the many exchange-traded funds that now mimic academically derived strategies. 

John Rekenthaler has been researching the fund industry since 1988. He is now a columnist for Morningstar.com and a member of Morningstar's investment research department. John is quick to point out that while Morningstar typically agrees with the views of the Rekenthaler Report, his views are his own.
