Investment graphics don't always mean what they seem.
Today's column follows up on Wednesday's topic of the persistence of mutual fund performance. In that article, I responded to a columnist (Dan Solin) who argued that backward-looking risk-return measures are not predictive. Solin's support material, courtesy of DFA, is shown below.
I'd write that alarm bells should be ringing in your head, but I've shown this material to several experienced and insightful people, and alarm bells were generally not ringing in their heads. So, I will reword. It would be nice if alarm bells were ringing. If not, perhaps they will in the future after you read this column.
Because these charts are not sufficient. Yes, they convey that the two performance measures being tested are very imperfect guides to the future. Anybody who expected to see a tight pattern, with most of the top-scoring funds remaining on top and most of the bottom feeders continuing to sit firmly on the bottom, will quickly see that this is not so.
But there is no context. Not only are there no numbers, no measures of statistical significance or average results, but there is no indication of what this chart might look like if plotted with other, presumably more reliable factors.
Consider the chart shown below, which on the y-axis ranks large-cap growth, large-cap blend, and large-cap value funds on the cleverly named Mystery Factor No. 1, from 1 at the bottom to 1,012 at the top. The x-axis plots the funds' five-year total-return rankings by percentile, through July 30, 2013. See any overwhelming pattern?
The Mystery Factor is fund expense ratios.
Which means that the test fails everything, even a factor as demonstrably useful as fund expenses. That is a convenient attribute when the intent is to convince the audience to downplay the importance of a factor, which is exactly what DFA intended. These materials did not come from DFA's research group; they came from its advisor communications group, and they were created to help advisors wean their clients from the habit of expecting funds with high star ratings and/or Sharpe ratios to continue to excel.
If that is the task, then creating a huge scatterplot with the factor on one axis and subsequent returns on the other is the appropriate response:
1) Putting all the dots on the plot, rather than summarizing the data, gives the impression of chaos and unpredictability.
2) Evaluating a single period, rather than the cumulative effects over multiple periods, minimizes the power of the factor.
3) Don't rebalance. When the factor scores change, the measurements do not. The calculations are based solely on how the funds were ranked at the start of the time period.
None of those choices is wrong, or irresponsible. I have created scatterplot charts like that in the past, and I will do so again. However, that output tells only one aspect of the story, and should not by itself be used to arrive at a conclusion.
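To make the mechanics concrete, here is a minimal sketch of the single-period ranking exercise behind such a scatterplot. All of the data is synthetic and invented for illustration; only the fund count of 1,012 is taken from the chart above. The point of the sketch is that a factor can be genuinely present, yet nearly invisible when plotted one dot per fund over a single period.

```python
import random

random.seed(42)
N = 1012  # fund count, matching the chart's y-axis

# Synthetic data: each fund gets a factor score (think: an expense ratio)
# and a subsequent return that is mostly noise, plus a small penalty for a
# high factor score. The factor is real, but faint at the single-fund level.
factor = [random.random() for _ in range(N)]
subsequent = [-0.5 * f + random.gauss(0.0, 1.0) for f in factor]

def ranks(values):
    """Rank values from 1 (lowest) to len(values) (highest)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        out[i] = rank
    return out

def corr(a, b):
    """Plain Pearson correlation; applied to ranks, this is Spearman's rho."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

factor_rank = ranks(factor)  # y-axis: 1 at the bottom, 1,012 at the top
return_pctile = [100 * r / N for r in ranks(subsequent)]  # x-axis: percentile

# Plotting (factor_rank, return_pctile) as 1,012 dots would look like chaos,
# yet the rank correlation is modestly negative: the factor is there.
rank_corr = corr(factor_rank, ranks(subsequent))
print(f"rank correlation: {rank_corr:.2f}")
```

The dots drown the signal: a modest rank correlation is invisible to the naked eye in a thousand-dot scatter, which is exactly why the format flatters a "nothing to see here" conclusion.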
Skeptical? I took a common performance measure (one of those useless backward-looking calculations), sorted funds into groups based on that measure, then tracked the performance of the groups. This approach had the opposite aim: to show the factor in the strongest possible light. The precepts were:
1) Don't show all those dots; instead summarize the data and form a handful of portfolios. Creating portfolios based on the factor, then tracking the value of the portfolio, leads to a pleasantly simple chart.
2) Don't look at a single period; track the portfolios over time. The compounding effect will greatly increase the separation of the lines, and thereby make the factor appear to be more powerful.
3) Account for survivorship bias by including all funds in the study.
4) Rebalance monthly. This not only is the easiest way to deal with survivorship bias, but it also keeps the factor portfolios as pure as possible.
5) Show the results not on a log scale, which shrinks the distance between the lines, but rather on a normal scale.
There is nothing wrong with this approach, either. Indeed, many academic studies are built along these lines, as academics create factor portfolios that are rebalanced monthly and tracked over time. The academic community is not terribly interested in whether a factor can be realistically implemented, or in how much money it might make for investors. Rather, it wishes to know if there is any relationship between the factor and performance. So it wishes to maximize, rather than minimize, the appearance of the results.
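The five precepts above can also be sketched in miniature, again entirely on synthetic data with invented return figures, chosen only to illustrate the mechanics: sort funds into quintiles by the factor, equal-weight each quintile, rebalance monthly, and track cumulative value.

```python
import random

random.seed(7)
N_FUNDS, N_MONTHS, N_GROUPS = 500, 120, 5  # hypothetical universe, 10 years

# Each fund gets a factor score (again, think expense ratio). Its monthly
# return is a small base return, minus a drag proportional to the factor,
# plus idiosyncratic noise. All of these numbers are made up.
factor = [random.random() for _ in range(N_FUNDS)]

# Sort funds into quintiles by factor, lowest scores in group 0.
order = sorted(range(N_FUNDS), key=lambda i: factor[i])
size = N_FUNDS // N_GROUPS
groups = [order[g * size:(g + 1) * size] for g in range(N_GROUPS)]

value = [1.0] * N_GROUPS  # cumulative growth of $1 in each quintile portfolio
for month in range(N_MONTHS):
    monthly = [0.005 - 0.004 * factor[i] + random.gauss(0.0, 0.04)
               for i in range(N_FUNDS)]
    for g, members in enumerate(groups):
        # Monthly rebalancing to equal weights means each month's group
        # return is simply the average of its members' returns. (A real
        # study would keep dying funds in the sample until they disappear,
        # handling survivorship bias; this sketch has no fund deaths.)
        value[g] *= 1 + sum(monthly[i] for i in members) / len(members)

for g, v in enumerate(value):
    print(f"quintile {g + 1} (lowest factor scores first): ${v:.2f}")
```

Compounding does the persuading: a monthly edge of a few basis points between quintiles fans out into visibly separated lines over a decade, especially on a normal rather than log scale.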
This is the picture for Mystery Factor No. 2. Now you are a believer, eh?
I will return in future columns to the subject of return persistence, of both expense ratios and past-performance measures. I will also at some point reveal the identity of the second mystery factor. In the interim, be wary when reading graphs, mine included!
John Rekenthaler has been researching the fund industry since 1988. He is now a columnist for Morningstar.com and a member of Morningstar's investment research department. John is quick to point out that while Morningstar typically agrees with the views of the Rekenthaler Report, his views are his own.