Morningstar Stands Behind Its Fixed-Income Data and Fund Ratings
NBER white paper’s assertions are misguided but spark a meaningful focus on calculated metrics
November 7, 2019
Update: Morningstar published a more comprehensive analysis. Read "Credit Quality, Bond Fund Classification, and Performance: An Analysis." (Dec. 19, 2019)
A white paper from the National Bureau of Economic Research titled “Don't Take Their Word For It: The Misclassification of Bond Mutual Funds” alleges Morningstar incorrectly classifies and rates fixed-income funds based on misreported information from fund managers.
We welcome debate that spurs continued improvement in information available to investors, especially in an area as complex as fixed income. We also fully support the authors’ focus on calculations based on funds’ underlying holdings. This focus aligns with Morningstar’s efforts to expand our capacity to provide calculated fixed-income analytics, including credit-quality measures, to enable apples-to-apples comparisons across funds, fund companies, and geographies. Since 2017, Morningstar has been calculating and surfacing holdings-based credit rating breakdowns for more than 50,000 managed investment vehicles globally at a share-class level. We have made it a priority to expand these datapoints and their adoption by the industry so that investors are not so reliant on aggregated portfolio information reported by asset managers.
However, the paper makes inaccurate assertions about the role of self-reported data in assigning Morningstar Categories and the Morningstar Rating for funds (also known as the star rating). We did not receive the paper prior to its publication, though we have since reached out to the authors to discuss our data and methodologies. We have received inquiries about how to respond to investor concerns regarding the authors' assertions, so we are publishing our response here for transparency and broad availability.
First, we agree there are variations in roll-ups of self-reported and calculated data. However, we see other explanations for the differences in these averages.
The authors’ calculations of average credit quality based on Morningstar-calculated data are lower than the averages based on self-reported data. From this, the authors conclude that asset managers misreport to us the credit quality of the bonds their funds hold. We do not agree with this leap in logic.
Differences between self-reported and Morningstar-calculated data roll-ups at grade levels are primarily due to holdings classified as “not rated” in our database. While asset managers typically have complete information on the credit ratings of each of their holdings, Morningstar may not be able to match certain of those securities with our current database of bond ratings from major credit ratings agencies. For example, bond issuers often have ratings while their securities do not. This might include relatively safer sovereign issuances or futures on these bonds. Asset managers can justifiably assign a rating to a holding in this case, but the holding would nevertheless appear as “not rated” in our matching process.
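The matching gap described above can be sketched in a few lines (the identifiers and tables here are hypothetical, not Morningstar's actual data model): a security that lacks a security-level rating in the reference table falls through to "NR" even when its issuer is rated.

```python
# Hypothetical security-level ratings table. Some individual securities
# (e.g., certain sovereign issues, or futures on those bonds) carry no
# security-level rating even though their issuer is rated.
security_ratings = {"BOND-001": "AA", "BOND-002": "BBB"}

# A portfolio holding with no entry in the table falls through to "NR",
# even if the asset manager can justifiably rate it from issuer data.
portfolio = ["BOND-001", "BOND-002", "SOV-FUT-9"]
matched = {sec: security_ratings.get(sec, "NR") for sec in portfolio}

print(matched)  # the unmatched holding surfaces as "NR"
```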
As a result, Morningstar’s calculated data generally shows higher levels of not-rated bonds than those self-reported by asset managers. Because Morningstar’s proprietary methodology for calculating Average Credit Quality particularly penalizes unrated holdings by assigning them a low rating (B or BB), it is not surprising that the authors would find Morningstar’s calculated data to produce a lower average credit quality than self-reported data. When we control for not-rated holdings, we do not find a similar pattern.
As we continue to invest in our fixed-income analytical capabilities, we will expand our ability to match ratings to this sizable number of holdings that lack a security-level rating.
Second, we see no evidence that Morningstar Categories for fixed-income funds have been systematically assigned incorrectly.
Throughout the paper, the authors conflate where a fund lands in the Fixed-Income Style Box with its Morningstar Category. In reality, Morningstar’s Fixed-Income Style Box assignment and Morningstar Categories are distinct.
Morningstar does not use a fund’s Morningstar Fixed-Income Style Box assignment to determine its category classification. Rather, we assign fixed-income funds to Morningstar Categories using a separate methodology (described in our April 2019 paper) that considers a number of inputs. For example, Morningstar examines a fund's historical allocation to non-investment-grade and non-rated bonds—gleaned from self-reported data, Morningstar-calculated metrics, and individual holdings—as a starting point for determining the fund's category.
Finally, we see no evidence that fixed-income funds have been assigned incorrect star ratings.
The authors say a “misclassification” of funds allows them to receive higher star ratings than they should have. This is not true.
A fund’s star rating is calculated based on past performance relative to peers in its Morningstar Category – rather than relative to funds that share its Fixed-Income Style Box placement, as described above. We see no evidence funds have been incorrectly assigned to Morningstar Categories in a widespread way. Given this, we also see no evidence that funds have been assigned incorrect star ratings.
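The peer-relative mechanics can be sketched as follows, using the published star distribution of 10/22.5/35/22.5/10 percent within a category. This is a simplified illustration with hypothetical funds and percentiles; the actual rating risk-adjusts returns and blends multiple time periods.

```python
def star_rating(percentile):
    """Map a fund's within-category performance percentile (0 = worst,
    100 = best) to stars using the 10/22.5/35/22.5/10 distribution."""
    if percentile >= 90:
        return 5          # top 10% of category peers
    if percentile >= 67.5:
        return 4          # next 22.5%
    if percentile >= 32.5:
        return 3          # middle 35%
    if percentile >= 10:
        return 2          # next 22.5%
    return 1              # bottom 10%

# Ratings are computed within a fund's Morningstar Category, so a fund's
# style-box placement has no direct bearing on its stars.
category_peers = {"Fund A": 95.0, "Fund B": 50.0, "Fund C": 5.0}
for fund, pct in category_peers.items():
    print(fund, star_rating(pct))
```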
The reliability of our data and analytics is critical to our success, and we value transparency and collaboration to support investors. We welcome further discussion from the industry on the topic.