Battle of the 1990s
Remember the two tournaments, Garry Kasparov versus Deep Blue? Surely you do. Those 1996 and 1997 events were the most publicized chess games of our time, save for the Cold War clash of Fischer versus Spassky. Since then, the sport has all but disappeared from mainstream American press coverage. (The New York Times hasn't mentioned the name of the current world champion, Magnus Carlsen, since November 2016.)
The game’s descent into obscurity has permitted an achievement greater than Deep Blue’s to pass largely unnoticed.
Deep Blue was old technology, turbocharged by a supercomputer. Smaller processing chips made Deep Blue stronger than what had preceded it--powerful enough to give Kasparov a stern challenge in the first contest, and (following an upgrade) to beat him in the rematch. Since then, chips have become ever-smaller, algorithms more refined, and Deep Blue a relic. Today's free downloads, running on a $500 laptop, would thrash it.
The New King
Enter new technology. Last year, a team at Google developed a program with the potential to learn on its own. AlphaZero, as Google's program was named, was provided with the rules of chess, and then instructed to teach itself.
Teach it did. After playing games against itself for several hours, AlphaZero was paired against the top existing chess-playing program in a series of 100 games. It was undefeated, winning 28 games and drawing the other 72. Said Magnus Carlsen's trainer, "I always wondered how it would be if a superior species landed on earth and showed us how they play chess. I feel now I know."
There are some caveats. Being a for-profit enterprise rather than a scholarly group, Google has not been fully transparent. It has been chary with the project's details and has released only 10 of the 100 games. This has led some to speculate that when the full story is told, AlphaZero's achievement will prove less impressive than it first appeared. In addition, Google stacked the competition in its favor, by imposing time constraints that affected its rival's routine but not AlphaZero's.
On to Investments
Still, the point remains: The machine that learned things appears to be superior to the machines that do not. (In 2016, Google's AlphaGo, similarly self-taught, became the first computer program to beat a top--actually, the top--human Go player.) As quantitative mutual funds (and exchange-traded funds) are almost exclusively the latter, in that they obey human-created routines rather than devise their own approaches, it is natural to wonder if the same revolution will occur in investing.
To which hedge fund followers will exclaim, “What planet does this guy occupy?” Yes, hedge funds have long used artificial intelligence for their short-term trades. However, neither they nor mutual funds have used machine learning to any great extent for their intermediate- to long-term trades. The vast majority of active decisions continue to be made either by humans, or by programs that obey human instructions.
There was a notable mutual fund counterexample. In 1988, now-obscure Fidelity Disciplined Equity was launched, with manager Bradford Lewis using neural networks to select the fund's stocks.
That approach worked beautifully for the first few years, when the fund reliably pasted the S&P 500. Then it faded. Eventually, Lewis was demoted and the fund’s mandate changed, so that it invested conventionally (and unsuccessfully, trailing just over three fourths of its large blend Morningstar Category competitors during the past 15 years, and more than 90% of them over the decade).
Thus, Disciplined Equity’s reach did not match its vision. However, as demonstrated by AlphaZero’s success, the fund was not necessarily on the wrong track. It may just have been too early.
A Stretched Analogy
To be sure, mastering long-term investing is a very different task from winning at chess. All the factors that may affect security prices cannot be placed on a single sheet, as can all the ways in which chess pieces can be moved. In addition, the game of chess does not adapt. Its new, superior cyber-players do not change what humans do. With investing, in contrast, the strongest participants (if they possess sufficient wealth) alter the game for everybody, because their trades affect both security prices and how those prices are set.
Finally, and most critically, the artificial-intelligence program cannot learn by playing itself--at least, not to the extent that it did when teaching itself chess. With a constrained, rules-based contest such as chess, a computer can glean as much insight from a hypothetical match, played against itself, as it can from evaluating an actual contest. Not so for the financial markets. It is difficult, to say the least, for a computer program to postulate how all other market participants might behave, so that it can devise a successful counter-strategy.
Nonetheless, although the analogy is loose (to put the matter mildly), I do think that chess’s experience suggests where mutual funds are heading. Just as institutional databases once gave number-crunchers an edge on their competition (in the early 1980s, says former Vanguard CIO Gus Sauter, beating conventional funds by using a quantitative approach was “shooting fish in a barrel”), so at some point will the successful application of artificial-intelligence routines give active mutual fund management a much-needed boost. Those who run such funds will then have a legitimate claim on index-fund assets.
When that revolution will occur, and how long it will last--the financial markets being self-adjusting, such that eventually the imitators neutralize the innovators--I cannot say. But I will be surprised if such an event does not occur.
John Rekenthaler has been researching the fund industry since 1988. He is now a columnist for Morningstar.com and a member of Morningstar's investment research department. John is quick to point out that while Morningstar typically agrees with the views of the Rekenthaler Report, his views are his own.
The opinions expressed here are the author’s. Morningstar values diversity of thought and publishes a broad range of viewpoints.