And how to overcome them.
This article originally appeared in the October/November 2012 issue of MorningstarAdvisor magazine.
In past issues, we’ve burrowed into the machinery of manager research. For example, we’ve examined the persistence of category rankings, the stability of peer groups, and the importance of firm stewardship, among other topics. But sometimes, all a manager researcher wants to do is dish a little. Forthwith, here are five bad habits, misconceptions, and inconvenient truths that, if not tawdry, can trip manager researchers like us in our quest to deliver value to clients, as well as some suggestions for overcoming them.
1 We act like peer-group rankings mean more than they really do.
Manager researchers tend to make a big deal out of relative performance rankings. That’s not too surprising—they’re easy to measure and convey; plus, clients can grasp them without too much trouble. They’re also often part of the price of admission; i.e., you need a three-year record to, among other things, be a candidate in a search or get a performance ranking. The better you stack up versus peers, the likelier you are to get a look from gatekeepers like manager researchers. So, three-year rankings matter, big time.
But evidence suggests that they’re not very predictive. Take the popular U.S. large-cap blend mutual fund category. We tallied up three-year rolling net returns for every large-blend fund that has existed over the past two decades ending August. Then, we grouped the funds into quartiles and tracked their performance over the subsequent three-year period. We found a 24.7% likelihood that a top-quartile large-blend fund would repeat that feat in the subsequent three-year period, and a 51.1% chance it would land in the top half. This isn’t much better than you’d do choosing a fund at random.
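The study above can be sketched in a few lines of code. This is a hypothetical illustration run on simulated, skill-free fund returns rather than the actual large-blend data set; the fund counts, return assumptions, and trial counts are all invented. Under pure chance, roughly 25% of top-quartile funds should repeat, which is the baseline the 24.7% figure is being compared against.

```python
# Sketch of a rolling-quartile persistence test on simulated (random) returns.
# With no skill in the data, the top-quartile repeat rate should sit near 25%.
import numpy as np

rng = np.random.default_rng(42)
n_funds, n_trials = 400, 200  # hypothetical universe size and window count

repeats = 0
top_counts = 0
for _ in range(n_trials):
    # Annualized three-year returns for two consecutive, non-overlapping windows.
    period1 = rng.normal(0.07, 0.05, n_funds)
    period2 = rng.normal(0.07, 0.05, n_funds)

    # Top quartile = best quarter of funds in each window.
    top1 = period1 >= np.quantile(period1, 0.75)
    top2 = period2 >= np.quantile(period2, 0.75)

    top_counts += top1.sum()
    repeats += (top1 & top2).sum()  # top-quartile funds that repeated

repeat_rate = repeats / top_counts
print(f"Top-quartile repeat rate under pure chance: {repeat_rate:.1%}")
```

Comparing a real fund universe’s repeat rate against this chance baseline is what makes a figure like 24.7% interpretable.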
Many of us also tend to trot out relative-return rankings and “batting average” statistics to prove our manager-selection chops. But is a fund’s percentile ranking, or the consistency with which it has outperformed, really all that relevant when one can simply invest in an ETF targeting a similar area or plying the same style? For example, if a passive U.S. stock market index-tracking fund generated better risk-adjusted returns than 85% of its large-blend category peers in the decade ending August, “top quartile” isn’t all it’s cracked up to be. We should stop pretending otherwise.
Solution: Cost-screen more stringently, use longer time periods, recommit to beating the benchmark, not the category average.
Cost advantages tend to be sticky and, thus, are historically one of the few reliable sources of persistent outperformance, especially in return-constrained or homogeneous areas. Quite a bit of research backs this up. For example, Russel Kinnel, who heads up fund research at Morningstar, Inc., wrote a widely cited study that found expenses were the most predictive of several variables he examined. Thus, when ranking, screening by cost is a sensible approach.
If ranking based on performance, use longer time periods, such as five years. When we tweaked our study to replace three-year rolling return rankings with five-year rankings, we found that the five-year rankings were a bit more predictive. It’s no silver bullet, mind you, but it seemed to alleviate some of the problems with relying on three-year rankings alone.
Finally, we ought to reboot manager research to de-emphasize relative returns in favor of absolute performance. Nuanced as that might sound, it means rethinking the way many of us have traditionally selected managers. For example, cost-screening becomes far more important, as your opponent isn’t the other funds in the peer group but a more implacable foe—the costless benchmark. Slicing and dicing into discrete categories or styles becomes passé—the focus would shift to finding managers that have a clearly differentiated process and a durable advantage (in firm structure, culture, economics, or execution).
2 We trade too much.
For all of the talk you hear about the importance of long-term investing, many of us transact quite a bit. Why? Let’s face it—clients might only think we’re doing our job if we hire and fire managers. “Throw the bums out and get me somebody who knows what he’s doing,” some of us imagine you bellowing.
The problem is less nefarious than it sounds; the mechanism for hiring and firing managers is so ingrained in the manager-research process that it’s almost indistinguishable from manager research itself. Clients are seeking reassurance that there’s a methodical approach to conducting manager diligence, which many define as the process of hiring and weeding out. So, manager research can come to turn on furnishing that comfort, in the form of a system that spits out actionable ratings like buy, watch, and sell.
There’s nothing wrong with this—we use a similar type of system ourselves in managing mutual-fund portfolios for our clients. But the bells, whistles, and infrastructure of manager research can have a self-reinforcing effect—clients seek manager-researchers out at least partly because of the value they place on the “widget” they use in making hiring and firing decisions; many of us, deferring to our clients, obediently follow the rules.
Solution: Explain the buy/sell discipline, put the manager-research system in context, avoid hard-and-fast rules, and educate clients on the “lifecycle” of outperformers.
One simple way to avoid getting caught in the trap of managing to clients’ preconceived notions of manager research—i.e., hiring and firing—is to explain the buy and sell discipline upfront. Ideally, illustrate it, perhaps by time-lapsing the portfolio or strategy concerned to show which managers were added or removed from the mix, and the rationale for those decisions. By making the process less abstract, and offering a glimpse of what actually took place, clients can better calibrate their expectations.
It’s also important to correctly position the manager-research methodology itself. Namely, that it’s a tool for organizing and systematizing research, not a divining rod. In that way, clients are less likely to misunderstand the essential role that judgment and intuition must play in any hiring or firing decision.
Along those lines, it’s prudent to avoid hard-and-fast “buy” and “sell” rules that can sow confusion about how rigidly hiring and firing decisions are made. One potential way to mechanize this is by using “buffers” that, in effect, give manager researchers the leeway they need to apply judgment in the event, say, a manager badly lags its benchmark and peer group.
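One way to picture such a buffer is as a rule that flags a fund for analyst review only after a sustained, outsized lag, rather than selling on any single bad period. This is a hypothetical sketch; the function name, tolerance band, and period count are invented for illustration, not drawn from any actual rating system.

```python
# Hypothetical "buffer" in a sell discipline: instead of a hard rule
# ("sell after any benchmark lag"), the fund is flagged for human review
# only after lagging beyond a tolerance band for several consecutive
# periods. All thresholds here are invented.
def flag_for_review(excess_returns, tolerance=-0.02, consecutive=3):
    """Return True only when the trailing `consecutive` periods all show
    benchmark-relative returns below `tolerance`. The flag triggers a
    review, not an automatic sale, leaving room for analyst judgment."""
    recent = excess_returns[-consecutive:]
    return len(recent) == consecutive and all(r < tolerance for r in recent)

# A single bad period inside the band does not trip the flag...
print(flag_for_review([0.01, -0.03, 0.02]))     # False
# ...but a sustained, outsized lag does.
print(flag_for_review([-0.04, -0.05, -0.03]))   # True
```

The design point is that the output is a prompt for judgment, not a mechanical sell signal, which keeps the system from sowing the confusion that hard-and-fast rules invite.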
3 We overrate the value of manager access.
Some clarification is needed here: There is high-quality, useful manager access and then there’s access that serves little discernible purpose. So, what’s the difference?
We think that firm culture is important when assessing managers. What we’re trying to understand is what makes the firm tick—why are the managers in the business, what’s their edge, how do they deploy their people and resources to press their advantages, how do they think about allocating capital, and what are the incentives and other tools they use to deliver good outcomes? We need access to make that assessment.
Manager access can be useful in a different, less-expected way—it hopefully signals the kind of shareholder we aspire to be. Indeed, one of the ways we can help mitigate career risk is by clearly communicating our expectations to managers and providing a sense of our time horizon when investing. In that way, they’re likelier to feel they have good partners in us and can execute the long-term strategy in a less-compromising way, without looking over their shoulder.
When is manager access overrated? When it’s frittered away on formulaic questions about changes to the manager’s team and portfolio, or on delving into the causes of “tracking error” that have no real bearing on the manager’s perspective or skill. In addition, while it can be useful to place a particular security within the context of the manager’s overall investment approach, it can also be wasteful to burrow into the thesis at a microscopic level, as such details rarely yield any insights into whether the investment is well-founded or not. The deficit in investment management is seldom intellectual, after all; it’s discipline, which a check of a manager’s prepayment assumptions or terminal-year price/earnings multiple won’t shed much light on.
Solution: Treat each manager visit like it’s your last; ask the “10-year” questions; know what you want to know, upfront.
Manager access is unlikely to meaningfully advance the research process if analysts don’t enter the meeting with a sense of dispatch. If your mindset is, “Well, I’ll check up on that in six months when I’m back here,” then it’s no wonder questions devolve to issues of the moment. But what if this was going to be the only opportunity to ask a question in the next 10 years? What would we want to know? Chances are we’d wade deeply into the firm’s process, what differentiates it, and how it will keep its edge going forward—not, “What’s your thesis for Facebook?”
It’s also important to define, in advance, what we want to know. True, every manager meeting ought to have a spontaneous, conversational quality, if possible. But that doesn’t preclude manager researchers from game-planning to come up with a list of questions on the issues likeliest to have the strongest bearing on performance through the years.
4 We downplay the impermanence of our top picks.
It probably seems unthinkable that our highest-conviction manager recommendations will lose their luster and eventually fade from view. But, alas, impermanence rules—even if it’s not quite here today, gone tomorrow, odds are that our short list will change a lot with time.
We can find a few examples close to home. For example, Morningstar, Inc.’s list of top-flight funds, which the firm called Fund Analyst Picks, changed fairly dramatically. In 2002, there were 113 diversified stock, bond, and balanced funds on the list. By 2011 (when Morningstar, Inc. replaced the picks list with a new system), only 48 of the original picks remained. For our part, the mutual fund portfolios that we manage for clients, where we’re investing in our highest-conviction ideas, have changed significantly over time as well.
To be sure, manager researchers aren’t selling stability, let alone permanence; and there’s nothing wrong per se with making changes. But there’s also a certain reassurance that comes from knowing that a recommendation is built to last: It means clients don’t have to endure the vexing, and error-prone, process of picking a replacement in one, three, or five years.
But would clients still feel that way if they knew the list was likely to turn over dramatically in the span of less than a decade? And might many of us, as manager researchers, be more likely to qualify our top recommendations if we faced the reality that many of them are perishable?
Solution: Face reality; let failure inform the process.
Face it—our highest-conviction picks have a far shorter shelf-life than we’d probably care to admit.
We should then put that knowledge to work. That means being more introspective about why our top picks often fall by the wayside. Did performance moderate? Did the fund get too large? Did the manager hit the bricks? It’s no fun to revisit mistakes, but doing so is likely to confer useful insights into where the manager-selection process breaks down and, more affirmatively, help us zero in on the hallmarks of a durably successful manager.
5 We performance chase, artfully.
You’ve heard it before. We’re our own worst enemy, buying high and selling low, typically on impulse. Yet, many of us manager researchers tend to place ourselves above the fray. “What—we, performance chase?” you can practically hear us protest.
Trouble is, far too many of us do chase performance, though it tends to be so deeply enmeshed in our process that it can be easy to overlook. Take manager performance. Outperformance tends to be fleeting, often reflecting ephemera like stylistic tilts or luck, explaining why today’s hero is often tomorrow’s goat and vice versa. Yet, thoughtfully as we might construct it, the “performance” section of a typical manager assessment assumes the very opposite—it gives the highest marks to the best (often, recent) performers. In effect, many of us hardwire performance-chasing into the manager-selection process itself.
It’s present elsewhere. For example, asset-allocation decisions tend to be framed with the not-so-distant past in mind. Thus, many of us start adding to foreign stocks after they’ve gone on a tear (that is, just before they roll over); trot out alternatives managers for diversification (just as the liquidity premium—on which they depend—is about to collapse); and embrace flexible, go-anywhere investors as a tonic for bear-market woes (just as traditional, “stay-at-home” managers are poised to gain the upper hand).
Solution: Underweight performance; add contrarian “tripwires”; go the other way.
Performance has a tendency to overwhelm even the most thoughtful manager-research process, warping our assessment of qualitative factors like people and process to the point of unrecognizability. The formerly prudent process, or talented manager, can look pretty dumb under the glare of a bad three-year number. So, it’s important to devise ways to deemphasize or partition past performance as part of any manager-assessment process.
Short of that, however, it can also make sense to add “tripwires” that ensure we’re investing in a contrarian manner; for example, a manager assessment that, in the absence of deteriorating “people” and “process” scores, raises a fund’s grade when shorter-term performance slumps. By hard-coding contrarianism into the process, we invert the tendency to retreat from slumping managers.
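A tripwire of that sort can be made concrete as a small scoring rule. This is a hypothetical sketch, not any firm’s actual methodology; the 1–5 scales, thresholds, and function name are invented for illustration. The key inversion: when conviction in the qualitative inputs holds, a short-term slump raises the grade instead of lowering it.

```python
# Hypothetical contrarian "tripwire" in a manager-assessment rubric.
# Scores are on an invented 1-5 scale; short_term_excess is the trailing
# one-year return versus the benchmark. All thresholds are for illustration.
def overall_grade(people, process, short_term_excess, base_grade=3):
    """Return an overall grade (capped at 5). When "people" and "process"
    scores stay strong, a short-term performance slump RAISES the grade,
    hard-coding contrarianism into the process."""
    grade = base_grade
    if people >= 4 and process >= 4:
        grade += 1                      # conviction in the qualitative inputs
        if short_term_excess < -0.05:   # tripwire: the slump adds, not subtracts
            grade += 1
    return min(grade, 5)

# A strong team in a slump outgrades the same team after a hot year.
print(overall_grade(people=5, process=4, short_term_excess=-0.08))  # 5
print(overall_grade(people=5, process=4, short_term_excess=0.03))   # 4
```

Note that the slump only helps when the qualitative scores hold; a fund whose people or process has deteriorated gets no contrarian boost.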
Reflect and Adjust
Every business faces its inconvenient truths, and manager research is no exception. To be sure, those “truths” don’t indict the good-faith efforts we make on our clients’ behalf. But, to succeed, it’s critical that we further reflect on them and adjust by, among other things, de-emphasizing rankings, transacting less often, managing for alpha and not the top quartile, and better acknowledging the short shelf-life of our picks.