Two professors study whether publishing investment-strategy research dilutes the alpha of those strategies.
Spreading the Word
Academic research should hurt active fund managers. Professional managers use their training and resources to discover what others cannot. If an academic paper lifts the veil to expose their findings, the untrained can easily bum a free ride, thereby squeezing, or perhaps even eliminating, those trades' profits.
Two professors, David McLean and Jeffrey Pontiff, have put that notion to the test. In "Does Academic Research Destroy Stock Return Predictability?" they study the performance of 82 investment strategies (which they unromantically call “cross-sectional relations,” or “characteristics”) that were found to bring positive risk-adjusted returns in previous academic articles.
Examples of such strategies would be buying value, high-momentum, or smaller-company stocks; making trades based on changes in credit ratings or dividend announcements; devising an approach based on insider transactions; and expecting mean reversion in securities prices over the long term. The papers were published over a 40-year period, from 1971 to 2011.
The authors hypothesized that the out-of-sample results for those 82 strategies would be weaker than the original, in-sample findings.
My immediate thought was “of course.” After all, decay is the nature of the data-mining beast. Assume that the authors did find erosion. How could they determine the cause of the problem? The investment strategies' loss of effectiveness could be because investors noticed the academic papers and changed their investment habits accordingly. Or it could be, well, because.
Such is the value of an immediate thought. It might be good for spotting a potential date (or not), but it is likely to be of little use when evaluating an academic paper that has previously been vetted at several conferences and by dozens of preliminary readers. Yes, the authors were well aware of that danger. Consequently, they designed their performance test to cover three time periods: 1) the original in-sample period; 2) the time immediately following the in-sample data, but before the paper was published; and 3) the time after the paper's publication. If academic research helps to cause a strategy's demise, then the third period's performance will lag that of the second.
(As the authors acknowledge, the line separating the second and third segments can be fuzzy. Because academic papers tend to be passed around before they are published, their discoveries leak ahead of the official publication date. Then again, even the strongest article does not immediately change investment habits. It takes time for the words to create actions. Thus, argue the authors, using the publication date is a reasonable compromise.)
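The three-period comparison can be sketched as a simple bucketing of a strategy's return series around two cutoff dates. The dates, returns, and cutoffs below are hypothetical illustrations, not figures from the paper; the authors' actual tests use risk-adjusted returns for each of the 82 strategies.

```python
# Toy sketch of the three-period test design: split a strategy's
# return history at the end of the in-sample data and at the
# publication date, then compare average returns across buckets.
# All dates and return figures here are made up for illustration.
import datetime as dt

def split_by_periods(returns, sample_end, publication_date):
    """Bucket (date, return) pairs into the three test periods."""
    in_sample, pre_pub, post_pub = [], [], []
    for date, r in returns:
        if date <= sample_end:
            in_sample.append(r)
        elif date <= publication_date:
            pre_pub.append(r)
        else:
            post_pub.append(r)
    return in_sample, pre_pub, post_pub

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical monthly strategy returns straddling the two cutoffs.
series = [
    (dt.date(1990, 1, 31), 0.010), (dt.date(1990, 6, 30), 0.009),
    (dt.date(1993, 3, 31), 0.008), (dt.date(1994, 9, 30), 0.007),
    (dt.date(1997, 2, 28), 0.004), (dt.date(1999, 8, 31), 0.003),
]
ins, pre, post = split_by_periods(
    series,
    sample_end=dt.date(1992, 12, 31),       # last month of in-sample data
    publication_date=dt.date(1995, 6, 30),  # journal publication date
)
# If publication itself erodes the edge, the post-publication average
# should lag the post-sample, pre-publication average.
print(mean(ins), mean(pre), mean(post))
```

The interesting comparison is the last one: a drop from the first bucket to the second reflects ordinary out-of-sample decay, while a further drop from the second to the third is the part attributable to publication.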
That is indeed what the professors found. On average, long-short versions of the 82 investment strategies enjoyed risk-adjusted returns (alpha) of 9.5% per year within the data sample. In the second period, after the initial study was concluded but before the paper was published, the alpha dipped by 13%, to 8.5% annualized. After publication, alpha fell by the much larger percentage of 44%, landing at 4.8%.