In Good Company?

In 2005, three academics published a remarkable paper. "Judging Fund Managers by the Company They Keep", by Randolph Cohen, Joshua Coval, and Lubos Pastor, debuted a powerful method of forecasting the future performance of U.S. equity funds. When the authors sorted funds by the paper's new measure, the bucket that contained the lowest-decile funds recorded the single lowest average future alpha, the bucket with the second-lowest decile posted the second-lowest alpha, and so forth. With one minor exception, the pattern was unbroken.
(The authors' alpha calculations were based on the Carhart Four Factor Model, which adjusts for investment style. Using the Carhart model doesn't eliminate the possibility of an accidental finding, but it addresses the most obvious issues, such as the performance of small versus large companies or growth versus value stocks.)
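For readers who want the mechanics, the standard Carhart regression estimates a fund's alpha as the intercept after controlling for four return factors. This is the textbook form of the model, not a formula taken from the paper itself:

```latex
% Carhart four-factor model: excess return of fund i in month t
R_{i,t} - R_{f,t} = \alpha_i
    + \beta_i \, (R_{m,t} - R_{f,t})   % market factor
    + s_i \, \mathrm{SMB}_t            % small minus big (size)
    + h_i \, \mathrm{HML}_t            % high minus low (value)
    + m_i \, \mathrm{MOM}_t            % momentum
    + \varepsilon_{i,t}
```

The size (SMB) and value (HML) terms are what absorb the small-versus-large and growth-versus-value effects mentioned above.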
The effect was almost too large to be believed, except that the paper had been thoroughly vetted and accepted by the leading academic publication of its field, the Journal of Finance. The strongest version of the formula found that the top-decile funds subsequently beat the bottom-decile funds (again, as measured by Carhart alphas) by an annualized 736 basis points. That is … big. For comparison, sorting funds by their expense ratios or Morningstar Analyst Ratings leads to, at most, a difference of 200 basis points.
The authors' secret: Rather than rank funds by their past performances--which are unreliable indicators at best, and outright unavailable for funds that have new managers--they instead evaluated their portfolios. Does a fund hold mostly good stocks, or duds? Answering that question doesn't require manager history; it can be determined directly from the fund's most recent SEC filing.
Cleverly Done

Wait a moment, you may (and should) object: Who is to say which stocks are best? After all, if we knew that answer in advance we wouldn't need mutual funds. We would simply buy the future winners, ignore the also-rans, and short the losers. Our professors, it seems, assumed the can opener.
But no. The authors were too sophisticated to make that mistake. When I first heard about this paper, I liked the idea of judging funds according to the quality of their portfolios. Besides being timely, it is a more direct technique for assessing funds than studying the tea leaves of past returns. But I didn't understand how the article could possibly assess portfolio quality. The writers surely would not use their own views. But what impartial source could be found?
The answer lies in the paper's title. The authors wrote that they would judge fund managers by the company that the managers kept, and that is exactly what they did. The authors tested each stock in a fund's portfolio to see what other funds held that security. If the stock was held mostly by funds that had excellent track records, then it was scored highly. If, on the other hand, the security was favored by second- and third-rate funds, it received a poor score.
The paper's calculation then averaged each fund's individual stock scores, weighting the securities by their importance to the portfolio--that is, by counting a 5% position 5 times more heavily than a 1% position. The single final number represented the quality of the actively run funds that had similar holdings. The higher the number, the classier the fund's cousins, and therefore the better its future performance.
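As a concrete illustration, here is a minimal sketch of that two-step scoring scheme, using invented fund names, track records, and weights (the paper's actual implementation is far more elaborate; this only shows the averaging logic):

```python
# Sketch of the peer-quality score described above, with invented data.
# Not the authors' code; fund names, alphas, and holdings are made up.

# Hypothetical track records: each peer fund's historical alpha.
fund_alpha = {"FundA": 0.04, "FundB": 0.01, "FundC": -0.02}

# Hypothetical portfolios: fund -> {ticker: portfolio weight}.
holdings = {
    "FundA": {"INTC": 0.6, "MSFT": 0.4},
    "FundB": {"INTC": 0.5, "ORCL": 0.5},
    "FundC": {"MSFT": 0.7, "ORCL": 0.3},
}

def stock_score(ticker):
    """Score a stock by averaging the records of the funds that hold it."""
    owners = [f for f, h in holdings.items() if ticker in h]
    return sum(fund_alpha[f] for f in owners) / len(owners)

def fund_score(fund):
    """Average the fund's stock scores, weighted by position size,
    so a 5% position counts five times as much as a 1% position."""
    return sum(w * stock_score(t) for t, w in holdings[fund].items())

for f in holdings:
    print(f, round(fund_score(f), 4))
```

A fund that keeps company with high-alpha peers (FundA above) ends up with the highest score, exactly the ordering the paper's measure is after.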
(In addition to examining portfolios, the authors also assessed the predictive power of fund trades. By comparing a fund's recent portfolio to its predecessor, they were able to back out its quarterly trades. They then calculated an average peer-fund quality score as before, using trade data rather than portfolio data. The resulting statistic was also predictive; however, the effect was not as strong as when they used portfolios.)
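The trade extraction is simple in principle: subtract the prior quarter's positions from the current ones. A toy sketch, with made-up share counts:

```python
# Backing out quarterly trades from two consecutive filings
# (illustrative share counts, not real data).

prev = {"INTC": 1000, "MSFT": 500}   # last quarter's shares
curr = {"INTC": 1200, "ORCL": 300}   # this quarter's shares

tickers = set(prev) | set(curr)
trades = {t: curr.get(t, 0) - prev.get(t, 0) for t in tickers}

buys  = {t: n for t, n in trades.items() if n > 0}
sells = {t: -n for t, n in trades.items() if n < 0}
print("bought:", buys)   # INTC +200 and ORCL +300
print("sold:  ", sells)  # MSFT -500
```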
Puzzling Over the Logic

The authors explain:
For example, consider two managers with equally impressive past returns, where one manager currently keeps a big chunk of his portfolio in the stock of Intel, while the other manager owns mostly Microsoft. Suppose that Intel is currently held especially by managers with good past performance, whereas Microsoft is held mostly by managers with undistinguished records. It seems reasonable to think that the first manager, whose decision to hold Intel is shared by a higher-caliber set of managers, has a superior ability to select stocks, while the second manager, whose decisions coincide with those of subpar managers, has been merely fortunate.
Hmmm. If I managed a mutual fund that had a mediocre 10-year record (which sounds about right), and then defended my performance by pointing out that my portfolio contained many of the same stocks that Berkshire Hathaway BRK.B owns, along with those held by some of the most successful mutual funds, would you accept that justification? I think not. You would regard that as a flimsy explanation. I would have been better served answering as managers typically do when asked such questions, by blaming the market for not recognizing the brilliance of my selections.
The paper's logic might convince academics, but not many practitioners. For that reason, along with the difficulty of performing the calculation (which requires far more data than computing total returns) and the fact that the paper hasn't been updated, "Judging Fund Managers" hasn't affected investor behavior. Despite the strength of the paper's results, no investors to my knowledge use its measure when selecting funds--not retail financial advisors, nor institutional consultants.
Morningstar's Update

That may soon change. Finally, Morningstar has done what it should have done years ago, had its quantitative department been better staffed, by duplicating the study. The period for Morningstar's effort is different, running from 2004 through 2018, and it expands upon the original study by including international-stock funds. Morningstar's paper is an out-of-sample test, designed to ascertain whether the authors' results were robust.
They were indeed. Morningstar creates quintile buckets, rather than deciles, so there isn't as much separation from top to bottom. Nevertheless, its conclusion is striking. The subsequent alphas for the top-quintile funds (again, as calculated by the Carhart model) beat those of the bottom-quintile funds by an annualized 330 basis points, a larger gap than is created by sorting funds into expense quintiles. If there is a stronger predictor for future equity-fund performance, after adjusting for style effects, I have yet to encounter it.
The next column will link to Morningstar's new paper and discuss its findings in more detail.
John Rekenthaler has been researching the fund industry since 1988. He is now a columnist for Morningstar.com and a member of Morningstar's investment research department. John is quick to point out that while Morningstar typically agrees with the views of the Rekenthaler Report, his views are his own.
The opinions expressed here are the author’s. Morningstar values diversity of thought and publishes a broad range of viewpoints.