NFL: Regression to the Mean, Sample Size, and In-Season Projections


What if I told you that Adrian Peterson isn’t as good as his stats say?

My reasoning is the Curse of the Leading Rusher. You’ve never heard of it before, but it’s an obvious trend. Since 1980, the NFL’s leading rusher has seen his rushing total fall by an average of 489 yards, and his yards per carry by almost half a yard, just one season later. Only six of the 31 leading rushers even increased their rushing yards the following season, and nine finished with fewer than 1,000 yards.

Convinced? You shouldn’t be. Their decline is nothing more than regression to the mean and a lack of sample size. Let me explain.

Regression to the Mean

Regression to the mean, sometimes loosely called the law of averages, is the phenomenon by which extreme seasons far from the average (such as a passer rating over 100 or below 70) tend to be followed by seasons closer to the mean.

It’s why we see the leading rusher put up worse numbers the next year, why quarterbacks don’t put up 40 touchdowns in back-to-back years, and, yes, why the Curse of 370 is a myth. Of course, regression to the mean also affects those on the bottom end of the spectrum—Brett Favre won’t throw 22 interceptions again, partly from regression and partly because his past numbers have always been better than last year’s.

Which brings us to the next principle: True score theory. True score theory states that a player’s observed performance is a combination of his true talent level and random error (or “luck,” in layman’s terms). Any time we see a player’s production in a certain time frame (one year, three years, or 10 years, for instance), we expect that his actual true talent level is somewhere in between the observed performance and the league average.
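To make the idea concrete, here is a minimal sketch in Python; the numbers and the "reliability" weight are hypothetical illustrations, not figures from the article's data.

```python
# A minimal sketch of true score theory: observed performance is true talent
# plus random error, so the best guess at true talent sits somewhere between
# the observation and the league average.
def shrink_toward_mean(observed, league_avg, reliability):
    """'reliability' is the share of the observed gap we believe is real (0 to 1)."""
    return league_avg + reliability * (observed - league_avg)

# Hypothetical numbers: a QB scores 9.5 fantasy points per game in a league
# that averages 8.0; if we trust 40 percent of that gap, his estimated true
# level is 8.6 points per game, not 9.5.
print(shrink_toward_mean(9.5, 8.0, 0.4))  # -> 8.6
```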

Let’s look at an example. I split all quarterbacks since 1980 into four quartiles based on their fantasy points per attempt, such that the top passers went into one group and the worst went into another. I found the average fantasy points per attempt for each quartile, and then compared that number to the group’s collective next-season value.

The graph below shows this with fantasy points per attempt divided by the average to create an “index value” (where 1.00 is average).

[Graph: fantasy points per attempt index value (1.00 = average), by quartile, in the base season and the following season]

As the blue lines indicate, the quartiles regressed 63 percent to the mean as a whole. For comparison, if Drew Brees regressed that much this season, he would see a 50-point drop in fantasy points, a total that would have put him at No. 8 among passers last season. (That assumes, of course, that no one else regresses.)
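If you want to replicate the quartile exercise yourself, a rough sketch follows; the file name and column names are hypothetical stand-ins for whatever season-level data you have on hand.

```python
import pandas as pd

# Hypothetical table: one row per QB-season, with fantasy points per attempt
# that year ('fppa') and the following year ('fppa_next').
df = pd.read_csv("qb_seasons_since_1980.csv")

league_avg = df["fppa"].mean()
df["quartile"] = pd.qcut(df["fppa"], 4, labels=[1, 2, 3, 4])

# Index values: 1.00 = league average.
summary = df.groupby("quartile")[["fppa", "fppa_next"]].mean() / league_avg

# Share of each quartile's gap from average that disappears the next season.
summary["pct_regressed"] = 1 - (summary["fppa_next"] - 1) / (summary["fppa"] - 1)
print(summary)
```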

I know what you’re thinking: That’s one year of data; of course they’ll decline. Yet regression to the mean occurs even for running backs with three straight 1,000-yard seasons. Those 99 backs averaged over 1,340 yards in each of their first three years, with a yards-per-carry average between 4.33 and 4.36 each season, but they managed just 1,161 yards and 4.21 YPC the next year; almost one-third of them (32) finished with fewer than 1,000 yards to boot.

In other words, those 99 rushers regressed about 45 percent to the mean. That may seem large, but my calculations would have predicted a 42 percent regression, and it is still 18 percentage points less than the regression in the quarterback example above.

The larger the sample size, the less regression to the mean there is, and the more certain we are that observed performance is close to a player’s true talent level.
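One common way to formalize that relationship, not necessarily the author's exact method, is to express the expected regression as a ratio of a stabilization constant to the sample size.

```python
def regression_fraction(n, stabilization):
    """Expected share of the gap to the mean that disappears, given n observations.

    'stabilization' is the sample size at which performance regresses 50 percent;
    the 1,000-attempt constant below is a placeholder, not a figure from the article.
    """
    return stabilization / (n + stabilization)

def next_season_estimate(observed, league_avg, n, stabilization=1000):
    r = regression_fraction(n, stabilization)
    return observed - r * (observed - league_avg)

# One ~500-attempt season should regress more than three seasons combined.
print(regression_fraction(500, 1000))   # ~0.67
print(regression_fraction(1500, 1000))  # ~0.40
```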

And that brings us to the next topic.

 

Sample Size, Past Performance, and Mid-season Projections

Most fantasy football players create their own set of rankings, usually based on hours of research and trend analysis. So why should those prognostications change drastically halfway through the season?

It’s not just Average Joes who do this, either. ESPN’s Christopher Harris, for example, ranked Peyton Manning as his No. 2 quarterback in the preseason, but after Week Seven he had dropped Manning all the way down to No. 7.

Manning was 13th in fantasy points (with a quarterback rating just over 80) after seven weeks, but in the final 10 weeks of the season, he was the No. 4 quarterback and had a 105.2 rating, highest among qualifiers.

The moral of the story: Half a season is not enough to warrant a major change in your preseason rankings!

Curious, I went about calculating the change in true talent for players who had a considerable difference in production during the first half of the season compared to their preseason projection. I’m only looking at overachievers, but these numbers also hold for first-half disappointments.

I chose one player at each skill position whose 2009 FEIN projection was equal to that of a low-end fantasy starter (Matt Ryan, Larry Johnson, and Eddie Royal) and assumed each exceeded his preseason projection by 15 percent in the first half of the season (with no difference in projected attempts or catches).

Then, based on their projected first-half attempts, I calculated the amount of regression for each player: With 250 first-half pass attempts, for instance, Ryan would be expected to regress about two-thirds back to his preseason forecast in terms of yards per attempt. (I regress to their preseason projection as a shortcut to calculating a whole new set of projections.)
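Here is roughly what that shortcut looks like in code. The 500-attempt constant is a placeholder chosen so that 250 first-half attempts regress about two-thirds of the way back, as described above; it is not a figure taken from the FEIN model.

```python
def updated_rate(preseason_rate, observed_rate, attempts, stabilization_attempts=500):
    """Blend first-half production with the preseason projection.

    When attempts equal stabilization_attempts, each side gets half the weight.
    """
    w = attempts / (attempts + stabilization_attempts)
    return w * observed_rate + (1 - w) * preseason_rate

# Hypothetical Ryan-like numbers: a 7.0 yards-per-attempt projection, a first
# half 15 percent better than that (8.05), and 250 attempts.
print(updated_rate(7.0, 8.05, 250))  # -> 7.35, two-thirds of the way back to 7.0
```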

The results indicate that, with 15 percent higher production than expected in the first half of the season, Ryan would be projected to outperform his preseason projection by only 5.4 fantasy points in the second half, compared to 2.7 for Johnson (including receiving numbers) and 3.4 for Royal.

To put those results in perspective, the full-season difference between Ryan’s preseason and midseason projections would not even have moved him up a spot in last year’s rankings. (The 10.8-point difference, 5.4 doubled, would have increased his 2008 fantasy total from 196 to 206.8, but he would have ranked No. 13 among passers either way.) Johnson’s outburst would have moved him up just one spot, from No. 29 to No. 28, and Royal would have climbed two spots to No. 17.

At what point should you adjust your estimate of a player’s true talent? Well, if they overachieved by 15 percent for the first 16 weeks, Ryan’s updated projection would be 16.3 points higher over a full season than his preseason projection (which wouldn’t have moved him up a spot last year), Johnson’s forecast would be 8.4 points higher than expected (a two-spot jump), and Royal’s 9.3-point disparity would have moved him from No. 19 to No. 15.

As a general rule, it takes 21 games for a quarterback’s fantasy points to regress 50 percent to his previous projection, 29 to 30 games for a running back, and 13 to 14 games for a wide receiver. (The values for quarterbacks and running backs were found assuming no new projection was calculated after 16 games.)

In other words, wide receivers are the only position for which you should ever weigh current-season production more heavily than the preseason projection at any point in the season. That doesn’t mean you should disregard current performance, only that it shouldn’t carry as much weight as most owners give it when making trades or waiver-wire pickups.
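To turn those break-even points into a usable weighting rule, a sketch follows; the 21, 29.5, and 13.5 game constants come from the paragraph above, while the function itself is just one plausible way to apply them.

```python
# Games needed for current-season production to earn a 50/50 split with the
# preseason projection, per the break-even points above.
STABILIZATION_GAMES = {"QB": 21, "RB": 29.5, "WR": 13.5}

def rest_of_season_rate(position, preseason_ppg, in_season_ppg, games_played):
    """Weight in-season fantasy points per game against the preseason projection."""
    k = STABILIZATION_GAMES[position]
    w = games_played / (games_played + k)
    return w * in_season_ppg + (1 - w) * preseason_ppg

# After eight games, a hot QB's production still gets well under half the weight...
print(rest_of_season_rate("QB", 18.0, 24.0, 8))  # ~19.7
# ...while a WR's eight games already count for more than a third of it.
print(rest_of_season_rate("WR", 9.0, 13.0, 8))   # ~10.5
```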

Remember: If you have a player like Peyton Manning, for whom we have years of stats, then in no way, shape, or form should your opinion of him change after half a season, or even after 20 games. A small sample is not your friend, and neither is regression to the mean.


This article can be seen at FeinSports.com and FFWritersWithHair.com.
