Predicting Mutual Fund Returns With the Ownership Lens
A measure invented in 2005 gets retested--and passes with flying colors.
An Unlikely Victory
Tuesday’s column discussed a 2005 academic paper, “Judging Fund Managers by the Company They Keep.” (Journal of Finance 60, by Randolph Cohen, Joshua Coval, and Lubos Pastor.) The study found that sorting U.S. equity funds according to the quality of their portfolio holdings gave better future results than selecting funds based on their past performances.
The authors evaluated that quality by determining what other funds possessed similar stocks. If those rivals had strong track records, the fund being appraised received a high score. Conversely, if the rivals had not performed well, the fund scored poorly. Thus, the authors ultimately did rank funds according to their returns. Only those returns came from peer funds, not the fund itself.
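The peer-scoring idea can be sketched in a few lines of Python. This is an illustrative simplification, not the paper’s actual measure: the fund names, holdings, weights, and past returns below are invented, and the overlap formula (summed minimum weights in shared stocks) is one plausible choice among several.

```python
# Sketch of peer-based scoring: rate each fund by the overlap-weighted
# average past return of the OTHER funds that hold the same stocks.
# All data here is made up for illustration.

def ownership_score(fund, holdings, past_returns):
    """holdings: {fund: {stock: weight}}; past_returns: {fund: return}."""
    weighted = 0.0
    total_overlap = 0.0
    for peer, peer_holdings in holdings.items():
        if peer == fund:
            continue
        # Overlap = summed minimum portfolio weights in the shared stocks.
        overlap = sum(min(w, peer_holdings.get(stock, 0.0))
                      for stock, w in holdings[fund].items())
        weighted += overlap * past_returns[peer]
        total_overlap += overlap
    return weighted / total_overlap if total_overlap else 0.0

holdings = {
    "A": {"AAPL": 0.5, "MSFT": 0.5},
    "B": {"AAPL": 0.6, "MSFT": 0.4},
    "C": {"XOM": 1.0},
}
past_returns = {"A": 0.02, "B": 0.08, "C": -0.03}

# Fund A overlaps heavily with strong performer B, so it scores well;
# fund C shares no holdings with anyone and gets a neutral score.
print(ownership_score("A", holdings, past_returns))
```

Note that fund A’s own past return never enters its score; only the track records of its look-alike peers do, which is the "sideways" logic of the original paper.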
The process struck me as convoluted. (Which goes a long way toward explaining why the three authors are at Harvard, Harvard, and UChicago, respectively, and I’m blogging on the Internet.) To be sure, calculating the performance for a group of funds rather than a single fund reduces noise. That is a good thing. But placing funds into quality bins because of their (usually) slight resemblances to other funds is ... quite the stretch.
Never mind my intuition; the sorting worked, as the “best” funds according to the authors' measure subsequently thrashed the worst funds, more predictably than such traditional sorting methods as past total returns, past risk-adjusted returns, or expense ratios. Better, it turned out, to operate sideways than directly.
"Judging Fund Managers" has languished, presumably because of its circuitous approach and computational challenges. (The calculations require complete portfolio holdings--far more data than standard performance figures.) Morningstar would like to change that! Its quantitative team recently updated the study, and it found that the conclusion remains intact. Selecting mutual funds by their portfolios, instead of their performances, improves predictability.
The paper is Morningstar Ownership Lens, by Taylor Hess and Polaris Jhandi. It duplicated the research from "Judging Fund Managers," but over the entirely different time frame of 2004-19 (the first paper ended in 2002) and with a far, far larger fund universe. Rather than assessing only diversified U.S. equity funds sold within the U.S. marketplace, Morningstar’s study evaluated all equity funds from across the globe. The professors’ data set ended in 2002 with 1,526 funds. In contrast, when Morningstar’s data set ended, it contained almost 100,000 funds.
In summary: One couldn’t ask for a stronger out-of-sample experiment, with the time period in Morningstar’s paper completely new and the investment universe greatly expanded. Nor could one ask for stronger results. Despite the radical difference in test subjects, the outcome in Morningstar’s paper closely matches that of the original article.
First, let’s see what happens when one chooses funds by returns alone. As with "Judging Fund Managers," Morningstar’s paper measured past performance by using the Carhart four-factor model, which not only risk-adjusts the numbers but also considers three style effects (size, value/growth, and momentum). The paper sorted funds into quintiles based on those alphas, then calculated the returns for each quintile going forward.
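The mechanics of that quintile exercise can be sketched as follows. The alphas and forward returns below are invented for illustration (the Carhart risk adjustment itself is omitted); the point is only the sort-bin-average procedure.

```python
# Sketch of the quintile test: sort funds by past alpha, split them into
# five equal bins, then average each bin's subsequent return.
# Numbers are hypothetical, not the paper's data.

def quintile_averages(past_alpha, future_ret):
    order = sorted(range(len(past_alpha)), key=lambda i: past_alpha[i])
    n = len(order) // 5
    averages = []
    for q in range(5):
        bucket = order[q * n:(q + 1) * n]
        averages.append(sum(future_ret[i] for i in bucket) / len(bucket))
    return averages  # index 0 = lowest-alpha quintile, 4 = highest

past_alpha = [-3, -2, -1, 0, 1, 2, 3, 4, 5, 6]   # ten hypothetical funds
future_ret = [1.0, 1.2, 2.0, 2.1, 2.5, 2.6, 3.0, 2.9, 2.8, 2.7]
print(quintile_averages(past_alpha, future_ret))
```

In this toy data, returns rise through the first four quintiles but the top quintile is merely middling, which happens to echo the pattern the paper found for past-alpha sorting.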
The good news is that the funds with the lowest past alphas posted the lowest future returns, and that the results steadily improved over the next three alpha rungs. The bad news is that the top alpha group was merely middling, making the return spread between the top and bottom groups only 76 basis points. Most of that disparity, one suspects, owes simply to differences in expense ratios.
The Ownership Lens
None of this is remarkable; you knew that yesterday’s performance is only a very rough guide for tomorrow’s. The following table, however, comes as a surprise.
In this test, the quintiles were formed by Ownership Lens rankings. Once again, the funds with the lowest scores appear at the table’s top and those with the highest at the bottom. Only this time, rather than coming from past performance, the quintiles were based on portfolio analysis: funds that shared holdings with the least successful competitors landed in the lowest quintile, and those that shared holdings with the most successful competitors landed in the top quintile.
Look at that gap! A whopping 330 basis points separate the future total returns of the fund quintile with the highest Ownership Lens score from those of the quintile with the lowest. In addition, the returns march in lockstep; as each Ownership Lens quintile increases, so do the subsequent returns. Recall, too, that these figures come from a huge sample, as each quintile currently holds almost 20,000 funds.
When faced with such overwhelming evidence, all I can do is bow in admiration.
Two caveats must be offered. One is that, with both "Judging Fund Managers" and Morningstar’s follow-up, the methodology uses information that would be unavailable to investors were they to attempt to realize those results. The Ownership Lens scores are assigned as of a fund’s portfolio date, but those portfolios are not publicly filed until after the fact. One could not invest exactly as the papers suggest.
Acknowledging that issue, both publications checked their findings by lagging the portfolios, so that a real-life investor could fully implement the methodology. The original article tested a three-month lag, while Morningstar’s paper assessed three-, six-, and 12-month versions. No major effect. The conclusions weakened as the lag time increased, but even when using the 12-month period, the pattern remained strong and inescapable.
The second caveat cannot be dismissed quite so quickly. It is that, while instructive, the academic approach of investing in all available funds, rescoring with each new portfolio, and rebalancing the Ownership Lens buckets monthly is deeply impractical. Nobody invests that way, and nobody ever will. It’s great to have a mutual fund calculation that appears to be powerfully predictive, but the statistic must nevertheless be implemented. How to accomplish that task?
Next Tuesday’s column, the third and final in this series, will address that question.
John Rekenthaler has been researching the fund industry since 1988. He is now a columnist for Morningstar.com and a member of Morningstar's investment research department. John is quick to point out that while Morningstar typically agrees with the views of the Rekenthaler Report, his views are his own.