
The Wall Street Journal’s Statistical Fog

This blog post was adapted from an article that originally appeared on Oct. 26, 2017, at Morningstar.com. Read the full article.

Throughout my career at Morningstar, I’ve realized that most investors lack a basic numerical grounding and are therefore vulnerable to misleading statistical analysis. I’ve even seen industry professionals fall prey to the flawed use of stats. Unfortunately, these flaws are on clear display in The Wall Street Journal’s recent high-profile piece on Morningstar and our star ratings.

The great irony is that the Journal’s own numbers show the efficacy of the Morningstar Ratings, but its writers fail to grasp this insight. As seen in the chart reproduced below, 5-star funds, using the Journal’s chosen methodology, produce better future star ratings than do 4-star funds, which in turn are better future performers than 3-star funds, which in turn beat 2-star funds, which in turn beat 1-star funds. A rational take on these numbers would say that the stars add value: Picking higher-rated funds leads to better future results.

Graphic source: The Wall Street Journal; data source: Morningstar.

That’s no small accomplishment. Take any group of 10,000 or so things, and put them into five ranked buckets. Do you think that 10 years later the subsequent performance of those buckets will be in the anticipated order and that there will be a meaningful difference between the top and bottom group? You might think so, but you’re probably kidding yourself. Yet, rather than praising this achievement, the Journal faulted the fact that the average subsequent rating for 5-star funds declined while the average future rating of 1-star funds improved, as shown below.

Graphic source: The Wall Street Journal; data source: Morningstar.

Well, that’s the nature of numbers. Five-star funds can’t go to six; they can only decline. One-star funds can’t go to zero; they can only be buried or improve. The numbers statistically must move toward the mean.

The Fault in Our Stars?

Beyond the laws of numbers, this move to the middle is the very nature of capitalism. Good ideas get replicated, and the competitive advantages of 5-star funds tend to diminish. Similarly, 1-star managers either change their stripes or get fired. The very numbers the Journal generates suggest that the star rating performs exactly as Morningstar suggests: It’s not a full-fledged conclusion, but as a first-stage screen, it meaningfully tilts the odds in investors’ favor. That’s a benefit that should not be taken lightly.

A second problem with the Journal’s statistical analysis of Morningstar’s work comes in its casual dismissal of the efficacy of the newer Morningstar Analyst Ratings. It notes that Gold funds delivered subsequent performance of 3.4 stars, while Silver funds generated 3.3-star performance and Bronze funds 3.0 stars, and it takes the position that the differences weren’t meaningful enough. How so? Consider fund expenses: The difference in future performance between a bucket of funds with an average expense ratio of 0.75% and one of 0.25% isn’t large, either. Sometimes, over some periods, the higher-expense bucket will prevail. Does anybody wish to argue that those extra 50 basis points aren’t worth bothering about?

I didn’t think so.
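To put those 50 basis points in perspective, here is a minimal back-of-the-envelope sketch. The $10,000 starting balance, 6% gross annual return, and 30-year horizon are illustrative assumptions of mine, not figures from the Journal’s analysis or Morningstar’s data.

```python
# Illustrative only: assumed starting balance, gross return, and holding period.
def ending_value(initial, gross_return, expense_ratio, years):
    """Grow a balance at the gross return, net of the annual expense ratio."""
    return initial * (1 + gross_return - expense_ratio) ** years

low_cost = ending_value(10_000, 0.06, 0.0025, 30)   # 0.25% expense ratio
high_cost = ending_value(10_000, 0.06, 0.0075, 30)  # 0.75% expense ratio

print(f"0.25% fund after 30 years: ${low_cost:,.0f}")   # roughly $53,500
print(f"0.75% fund after 30 years: ${high_cost:,.0f}")  # roughly $46,400
print(f"Cost of the extra 50 basis points: ${low_cost - high_cost:,.0f}")
```

Under those assumptions, the "unimportant" 50 basis points consume roughly $7,000 of a $10,000 investment's growth. That is the point of the analogy.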

The Importance of Tilting the Odds

The Journal’s dismissal of small advantages reflects precisely the human tendency that casinos and many financial-services companies exploit in their customers. Casinos know we’ll overlook the small tilt in the odds they hold, not recognizing that those small edges can add up to huge profits. Financial-services companies take big profits from small fees and floats from investors who are blind to the transfer. The Journal’s article casually dismisses the advantages that accrue to investors.
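The arithmetic of a small edge is easy to sketch. The 1% house edge, $100 average wager, and bet volume below are hypothetical numbers chosen purely to illustrate the point, not figures for any particular game or casino.

```python
# Hypothetical figures for illustration: a 1% house edge, a $100 average wager,
# and five million wagers handled in a year.
house_edge = 0.01          # the house keeps 1% of every dollar wagered, on average
average_bet = 100          # dollars per wager
bets_per_year = 5_000_000  # total wagers in a year

expected_house_profit = house_edge * average_bet * bets_per_year
print(f"Expected annual house take: ${expected_house_profit:,.0f}")  # $5,000,000
```

A tilt any individual bettor would shrug off still produces millions for the house, which is exactly why small advantages deserve respect.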

The Journal’s measurement shows that the star ratings pointed in the right direction for that measurement period. (Such results always vary by time.) It also shows a similar pattern for the Analyst Ratings, which, unlike the star ratings, incorporate the analysts’ viewpoints and are intended to be predictive. The Analyst Ratings to date have gone even further than the stars in improving investors’ odds. It would seem foolish to dismiss that information.

The True Story

As always, a general precept is best understood by delving into the specifics. Let’s return to the Journal’s results, as documented in this article’s first chart. The numbers show that 14% of 5-star funds, on average, went on to deliver future 5-star performance. At first blush, that looks like an 86% failure rate. But does picking from the list improve your odds over picking randomly? If it does, it adds value; if it doesn’t, it doesn’t. Morningstar awards 5-star ratings to 10% of funds. If choosing from the pool of 5-star funds gives a 14% chance of generating 5-star performance, then it has increased an investor’s chances of holding a future 5-star fund by 40%. Moreover, many of the former 5-star funds went on to deliver very desirable 4-star performance. That’s a sizable win, but oddly, the Journal writers present that performance as a disservice to investors.
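The 40% figure is nothing more than the ratio of the two rates cited above, spelled out here with the article’s own numbers.

```python
base_rate = 0.10  # share of all funds awarded 5 stars by construction
hit_rate = 0.14   # share of former 5-star funds that earned 5 stars again, per the Journal's chart

lift = hit_rate / base_rate - 1
print(f"Improvement over picking a fund at random: {lift:.0%}")  # 40%
```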

Pushing the Needle in the Right Direction

Investors have many parties trying to take a slice of their money, but few forces trying to tilt the odds in their favor. By the Journal’s own analysis, Morningstar’s ratings push the needle in the correct direction, while costing absolutely nothing and being widely available. If that is a sin, then perhaps Wall Street needs more sinners.
