And how to overcome them.
This article originally appeared in the October/November 2012 issue of MorningstarAdvisor magazine.
In past issues, we’ve burrowed into the machinery of manager research. For example, we’ve examined the persistence of category rankings, the stability of peer groups, and the importance of firm stewardship, among other topics. But sometimes, all a manager researcher wants to do is dish a little. Forthwith, here are five bad habits, misconceptions, and inconvenient truths that, if not exactly tawdry, can trip up manager researchers like us in our quest to deliver value to clients, along with some suggestions for overcoming them.
1 We act like peer-group rankings mean more than they really do.
Manager researchers tend to make a big deal out of relative performance rankings. That’s not too surprising—they’re easy to measure and convey, and clients can grasp them without too much trouble. They’re also often part of the price of admission: a fund typically needs a three-year record to, among other things, be a candidate in a search or receive a performance ranking. The better you stack up versus peers, the likelier you are to get a look from gatekeepers like manager researchers. So, three-year rankings matter, big time.
But evidence suggests that they’re not very predictive. Take the popular U.S. large-cap blend mutual fund category. We tallied up three-year rolling net returns for every large-blend fund that’s existed over the past two decades ending August. Then, we grouped the funds into quartiles and tracked their performance over the subsequent three-year period. What we found is that there was a 24.7% likelihood that a top-quartile large-blend fund would repeat that feat in the subsequent three-year period and a 51.1% chance it would land in the top half. That’s roughly what chance alone would produce—25% and 50%, respectively—so it isn’t much better than choosing a fund at random.
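The persistence test above can be sketched in a few lines. Note this is a minimal illustration, not the study itself: the fund count, return distribution, and windows below are invented stand-ins for the actual large-blend data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 3-year annualized net returns for 200 funds over
# two consecutive, non-overlapping 3-year windows.
n_funds = 200
period1 = rng.normal(0.07, 0.04, n_funds)  # first 3-year window
period2 = rng.normal(0.07, 0.04, n_funds)  # subsequent 3-year window

def quartile_ranks(returns):
    """Assign each fund a quartile, 1 = top 25% by return."""
    order = returns.argsort()[::-1]          # best-performing fund first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(returns))
    return ranks // (len(returns) // 4) + 1  # quartiles 1..4

q1 = quartile_ranks(period1)
q2 = quartile_ranks(period2)

# Among first-period top-quartile funds, how many repeated or stayed top half?
top_then = q1 == 1
repeat_top_quartile = np.mean(q2[top_then] == 1)
stay_top_half = np.mean(q2[top_then] <= 2)

print(f"P(top quartile again): {repeat_top_quartile:.1%}")
print(f"P(top half next period): {stay_top_half:.1%}")
```

Because the simulated returns are independent across periods, the repeat rates should hover near the chance baselines of 25% and 50%—which is exactly the comparison the article draws against the observed 24.7% and 51.1%.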
Many of us also tend to trot out relative-return rankings and “batting average” statistics to prove our manager-selection chops. But is a fund’s percentile ranking, or the consistency with which it has outperformed, really all that relevant when one can simply invest in an ETF targeting a similar area or plying the same style? For example, if a passive U.S. stock market index-tracking fund generated better risk-adjusted returns than 85% of its large-blend category peers in the decade ending August, “top-quartile” isn’t all it’s cracked up to be. We should stop pretending otherwise.
Solution: Screen more stringently on cost, use longer time periods, and recommit to beating the benchmark, not the category average.
Cost advantages tend to be sticky and, thus, are historically one of the few reliable sources of persistent outperformance, especially in return-constrained or homogeneous areas. Quite a bit of research backs this up. For example, Russel Kinnel, who heads up fund research at Morningstar, Inc., wrote a widely cited study that found expenses were the most predictive of the several variables he examined. Thus, screening on cost before ranking is a sensible first step.
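One way to put cost-first screening into practice is to toss out the pricier half of the universe before ranking on anything else. The sketch below uses invented fund names, fees, and returns purely for illustration; it is not drawn from the Kinnel study or any actual fund data.

```python
# Hypothetical fund universe: (name, expense ratio, 5-yr annualized return).
funds = [
    ("Fund A", 0.0045, 0.082),
    ("Fund B", 0.0120, 0.079),
    ("Fund C", 0.0060, 0.071),
    ("Fund D", 0.0035, 0.088),
    ("Fund E", 0.0105, 0.090),
]

# Step 1: screen out funds with above-median expense ratios.
ratios = sorted(er for _, er, _ in funds)
median_er = ratios[len(ratios) // 2]
cheap = [f for f in funds if f[1] <= median_er]

# Step 2: rank the cost-screened survivors by long-run return.
cheap.sort(key=lambda f: f[2], reverse=True)
for name, er, ret in cheap:
    print(f"{name}: {er:.2%} fee, {ret:.1%} return")
```

Note that the screen drops Fund E despite its having the best raw return—which is the point: if cost advantages persist and raw returns don’t, the cheap survivors are the better forward-looking bets.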