When researching fund managers, put relative performance measures in their proper context.
This article originally appeared in the June/July 2013 issue of Morningstar Advisor magazine. To subscribe, please call 1-800-384-4000.
You can see it, but can you hit it?
It’s a baseball adage, but for manager researchers like us, it’s the pivotal question—we know that “good” managers are out there, but can we find them before it’s too late? The historical record isn’t terribly encouraging in this regard. Examples of remarkably successful manager researchers—be they investment committees, gatekeepers, or private investors—do not exactly abound.
What’s the hitch? Well, manager researchers are human, after all. They chase returns; they misattribute outperformance; they rationalize underperformance; they get complacent; they dig in. That is, they act like the managers they’re paid to scout and monitor, whether they’re willing to admit it or not.
In our shop, we’re hardly strangers to such mistakes, though we try our best to minimize them. That’s why, when we reflect on our process and its blind spots, we consider not just the choices we make but also the context. Are we using the right yardsticks? Do they tell the full story? Do we have the proper frame of reference? Inevitably, our analysis has led us to reconsider that most ubiquitous of manager research tools—category peer rankings.
We’ve written previously about some of the quirks of peer groups (“Pitfalls of Peer Groups,” August/September 2011). Our research found considerable flux within peer groups, with funds flitting between categories, raising questions about the substance of the rankings themselves. We’ve also examined the fleeting nature of relative performance (“Performance Chasing, Evaluated,” April/May 2012), which tends to be mean-reverting (owing to stylistic biases, capacity constraints, “career risk,” or just plain luck). What we haven’t studied, however, is the convergence of the two and its implications for our ability to find successful managers in advance.
We compiled rolling five-year net returns of all U.S. open-end mutual funds (existing and obsolete; oldest share-class only) that were members of the Morningstar large-value, large-growth, mid-blend, small-value, or small-growth categories on March 31 of any year from 2003 through 2013. We then ranked the funds within their peer groups based on trailing five-year returns. (Thus, we studied 11 distinct five-year periods: April 1998 to March 2003; April 1999 to March 2004; April 2000 to March 2005; and so forth.)
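The ranking step described above can be sketched in code: for each fund, take its trailing five-year return and assign it a percentile rank within its Morningstar category (1 = best, 100 = worst). The sketch below is a minimal illustration of within-peer-group ranking on hypothetical data—the fund names and returns are invented, and the percentile formula is a common convention, not necessarily Morningstar’s exact methodology.

```python
import math
from collections import defaultdict

def percentile_ranks(records):
    """Rank funds within categories by trailing five-year return.

    records: iterable of (fund_name, category, five_year_return) tuples.
    Returns {fund_name: percentile}, where 1 is best and 100 is worst
    within each category (an illustrative convention).
    """
    # Group funds by category (the peer group).
    by_category = defaultdict(list)
    for fund, category, ret in records:
        by_category[category].append((fund, ret))

    ranks = {}
    for category, funds in by_category.items():
        # Sort best return first.
        funds.sort(key=lambda fr: fr[1], reverse=True)
        n = len(funds)
        for i, (fund, _) in enumerate(funds):
            # Position-based percentile: rank 1 of n maps near 1,
            # rank n of n maps to 100.
            ranks[fund] = math.ceil(100 * (i + 1) / n)
    return ranks

# Hypothetical example: three large-value funds, two small-growth funds.
sample = [
    ("Fund A", "large-value", 0.10),
    ("Fund B", "large-value", 0.05),
    ("Fund C", "large-value", 0.01),
    ("Fund D", "small-growth", 0.07),
    ("Fund E", "small-growth", 0.02),
]
print(percentile_ranks(sample))
```

In a full replication, this ranking would be repeated for each of the 11 rolling five-year windows, using only the funds that were members of the category on the relevant March 31 snapshot date (including since-obsolete funds, to avoid survivorship bias).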