A model called the Truncated Lévy Flight builds on Mandelbrot's work to accurately capture the market's fat tails.

The financial crisis rekindled great interest in "fat-tailed" distributions. (See "Déjà Vu All Over Again," by Paul Kaplan, in the February/March 2009 issue.) Investors discovered once again that the odds of experiencing significant losses are much greater than common models of asset returns suggest. Most models assume that returns are normally, or Gaussian, distributed (Bachelier, 1900); when graphed, they look like a bell curve. The ends, or "tails," of the bell curve are thin, meaning that outlier events--the market's extreme gains and losses--should rarely occur.

The historical record presents a different picture: a curve with tails that are fatter than standard models predict. For example, a normal distribution model assumes that an asset return three standard deviations below its mean (commonly called a three-sigma event) has only a 0.13% probability of happening, or about once in every 770 return periods. From January 1926 to April 2009, however, the S&P 500 had a monthly mean return of 0.91% and a monthly standard deviation of 5.55%. A negative three-sigma event, therefore, means that the index would suffer a 15.74% monthly loss. Over those 83 years--1,000 monthly returns--the S&P 500 suffered 10 monthly returns worse than that amount. The record implies that the probability of a three-sigma event is 1% rather than 0.13%, roughly eight times greater than an investor would expect from a normal distribution model. A normal distribution fails to describe the fat tails of possible stock market returns.
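The arithmetic above can be checked with a short calculation. This is a sketch using SciPy; the mean, standard deviation, and count of tail months are the figures quoted above:

```python
# Check the three-sigma arithmetic for the S&P 500's monthly returns
# (mean 0.91%, standard deviation 5.55%, Jan 1926 - Apr 2009).
from scipy.stats import norm

mean, sigma = 0.0091, 0.0555
threshold = mean - 3 * sigma     # the negative three-sigma monthly return
p_normal = norm.cdf(-3)          # tail probability under a normal model
p_empirical = 10 / 1000          # 10 such months out of 1,000 in the record

print(f"{threshold:.2%}")        # -15.74%
print(f"{p_normal:.2%}")         # 0.13%
print(p_empirical / p_normal)    # the empirical tail is ~7-8x fatter
```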

**Enter Mandelbrot and Fama**

That these outlier events occur frequently isn't exactly breaking news. Many academics have created statistical models to account for fat tails. Well-known examples are Benoit Mandelbrot's Lévy stable hypothesis (Mandelbrot, 1963), the Student's t-distribution (Blattberg and Gonedes, 1974), and the mixture-of-Gaussian distributions hypothesis (Clark, 1973). Each, however, has its drawbacks.

The latter two models possess fat tails and finite variance, but they lack scaling properties. In other words, the models do a good job of capturing the outliers and putting bookends around possible results, but the shapes of their distributions change at different time intervals.
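To illustrate what a scaling property means in practice, here is a hedged numerical sketch using the Lévy stable law discussed next: the sum of n independent draws, rescaled by n^(1/alpha), recovers the one-period distribution, so the curve keeps the same shape at every time interval. The alpha value and sample sizes are illustrative choices, not fitted parameters:

```python
# Sketch of the scaling (stability) property: for a symmetric Levy stable
# law, the sum of n draws, divided by n**(1/alpha), has the same
# distribution as a single draw.
import numpy as np
from scipy.stats import levy_stable, ks_2samp

alpha, n = 1.7, 16  # illustrative tail index and aggregation horizon
single = levy_stable.rvs(alpha, 0.0, size=20_000, random_state=1)
sums = levy_stable.rvs(alpha, 0.0, size=(20_000, n), random_state=2).sum(axis=1)
rescaled = sums / n ** (1 / alpha)

# A two-sample KS statistic near zero means the rescaled sums are
# statistically indistinguishable from single-period draws.
stat, p_value = ks_2samp(single, rescaled)
print(stat)
```

A Student's t or mixture-of-Gaussians sample fails this check: its aggregated distribution drifts toward a normal shape as n grows.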

A promising alternative is the Lévy stable distribution model (Lévy, 1925). In 1963, Mandelbrot modeled cotton prices with a Lévy stable process, and his finding was later supported by Eugene Fama in 1965. A Lévy stable distribution model has fat tails and obeys scaling properties, but it has an infinite variance--which greatly complicates things. How can an investor model a portfolio's risk if it has infinite downside or upside returns? In a nutshell, the Lévy model's tails are too fat.
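The infinite-variance problem can be seen directly by simulation. This is a sketch assuming SciPy's `levy_stable`, with alpha = 1.7 as an illustrative value rather than one fitted to market data: as the sample grows, ever-larger outliers keep arriving, so the sample variance never settles down.

```python
# Sketch of the infinite variance of a Levy stable law with alpha < 2.
# A normal sample's variance converges as the sample grows; a stable
# sample's keeps jumping as rarer, larger outliers enter the sample.
import numpy as np
from scipy.stats import levy_stable

alpha = 1.7  # illustrative tail index, not fitted to market data
samples = levy_stable.rvs(alpha, 0.0, size=200_000, random_state=42)
for n in (1_000, 10_000, 100_000, 200_000):
    print(n, samples[:n].var())  # no convergence as n grows
```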

Exhibit 1 illustrates this problem. In his article, Kaplan uses logarithms to graph stable and normal models over the returns distribution of the S&P 500, showing that a log-stable distribution model fits the tails of the S&P 500 much better than a lognormal model does. In Exhibit 1, we change Kaplan's vertical axis to a base-10 log scale, which makes the lognormal distribution's problem easy to see. Beyond negative 15% returns (the S&P 500's three-sigma level), the lognormal curve dips toward zero, underneath the S&P 500's historical returns distribution. The log-stable distribution, on the other hand, fits the tail well, but it extends beyond the S&P 500's maximum historical loss (negative 30%) with significant probabilities--eventually producing an infinite variance and a tail too fat to have any practical application for investors.

(**View the related graphic here.**)

**A Better Model**

Do we have a better distribution model? Yes. A simple solution is to truncate the tails of the Lévy stable distribution. The resulting model is known as the Truncated Lévy Flight (TLF). The TLF distribution has finite variance, fat tails, and scaling properties.
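A minimal sketch of the truncation idea, assuming SciPy is available: draws beyond a hard cutoff are simply discarded, so every moment of the resulting distribution is finite while the fat tails inside the cutoff are preserved. The alpha and cutoff values are illustrative, and plain rejection sampling stands in here for the smoother truncation schemes used in practice:

```python
# Sketch of a Truncated Levy Flight via rejection sampling: draw from a
# symmetric Levy stable law and discard anything beyond +/- cutoff.
# Alpha and cutoff are illustrative choices, not fitted parameters.
import numpy as np
from scipy.stats import levy_stable

def tlf_samples(alpha, cutoff, size, seed=0):
    """Sample a stable(alpha) law hard-truncated at +/- cutoff."""
    rng = np.random.default_rng(seed)
    out = np.empty(0)
    while out.size < size:
        draw = levy_stable.rvs(alpha, 0.0, size=size, random_state=rng)
        out = np.concatenate([out, draw[np.abs(draw) <= cutoff]])
    return out[:size]

x = tlf_samples(alpha=1.7, cutoff=10.0, size=50_000)
print(x.var())  # finite: bounded above by cutoff**2
```

Because no draw can exceed the cutoff, the variance is guaranteed finite, yet inside the cutoff the sample keeps the stable law's heavy tails.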