Thursday, August 22, 2013

How Consistent is Fantasy Football Consistency?

The end of summer means the imminent start of the NFL season. And while players prepare with grueling workouts in 100-degree heat, fans are preparing by spending hours staring at fantasy football preview magazines, webpages, and cheat sheets.

The problem facing the fantasy football player is one of prediction: which statistics from the previous season best predict value in the upcoming season? One such measure of performance is a player's consistency -- the variation in the number of points he scores in a given week. Pro Football Reference has previously shown that good teams should prefer more consistent lineups, while weaker teams should prefer less consistent lineups, on the theory that their best chance of winning involves a few "lightning in a bottle" weeks.

But how do you determine which players are consistent?


Tristan Cockcroft at ESPN.com has developed a Consistency Rating (CR) system, tracking how many times each player was worth starting in a 10-team fantasy football league with standard ESPN scoring. For each week, players were ranked by total fantasy points scored. Individual players were then credited with a "start", a "stud", or a "stiff", depending on how they compared to others at their position that week. The actual guidelines are below:

Pos.   Start    Stud    Stiff
QB     Top 10   Top 2   21st+
RB     Top 25   Top 5   51st+
WR     Top 25   Top 5   51st+
TE     Top 10   Top 2   21st+
K      Top 10   Top 2   21st+
DST    Top 10   Top 2   21st+

Each player's CR is then calculated as the number of starts divided by the number of games his team has played.

For example, Aaron Rodgers last season had 9 weeks where he ranked in the top 10 among quarterbacks ("starts"), 4 weeks where he was in the top two ("studs"), and 3 weeks where he ranked 21st or worse ("stiffs"). This earned him a 56.25% Consistency Rating, tying him with Matt Ryan for 6th among all QBs in 2012. Note that starts and studs aren't mutually exclusive: Adrian Peterson earned 15 starts and 8 studs in his 16 games last season.
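The calculation itself is trivial. A minimal sketch (the function name is my own, not Cockcroft's):

```python
def consistency_rating(starts, games):
    """Consistency Rating: the fraction of his team's games in which
    the player ranked as a 'start' at his position that week."""
    return starts / games

# Aaron Rodgers, 2012: 9 "start" weeks over 16 team games
print(f"{consistency_rating(9, 16):.2%}")  # 56.25%
```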

Cockcroft also provides his CRs for the 2011 season. This sets up a natural experiment: given a player's 2011 CR, how well can we predict his 2012 CR?

I found 178 players across all positions who had consistency ratings in both 2011 and 2012. When I compared their CRs, the results were not promising. What follows are the R² values of a linear best-fit line for each component of the ratings (e.g., the 2011 CR vs. the 2012 CR).

Stat    R²
CR      .1462
Start   .1402
Stud    .0926
Stiff   .0867
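For a simple linear best fit, R² is just the squared Pearson correlation between the two seasons' ratings. A sketch of the computation -- the CR values here are invented placeholders, not the actual 178-player data set:

```python
import numpy as np

# Hypothetical (2011 CR, 2012 CR) pairs -- illustrative only
cr_2011 = np.array([0.5625, 0.50, 0.75, 0.3125, 0.625, 0.4375])
cr_2012 = np.array([0.50, 0.5625, 0.625, 0.375, 0.4375, 0.50])

# R^2 of a linear best fit = square of the Pearson correlation coefficient
r = np.corrcoef(cr_2011, cr_2012)[0, 1]
r_squared = r ** 2
print(f"R^2 = {r_squared:.4f}")
```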

For additional reference, here's a scatter plot comparing the 2011 and 2012 CRs.

[Scatter plot: 2011 CR vs. 2012 CR]
I then tried to break the ratings down by position. Note that, in this table, "#" represents the number of players at that position in the database of 178, and the R² refers to the relationship between the 2011 CR and the 2012 CR. Also, an asterisk (*) represents a negative correlation between the statistics.

Pos.   #    R²
QB     20   .2921
RB     37   .1177
WR     44   .0976
TE     22   .2702
K      23   .0151*
DST    32   .038

We see that QBs and TEs have somewhat stronger correlations, whereas kickers actually had a slightly negative correlation -- a higher CR in 2011 predicted (albeit very weakly) a lower CR in 2012. On the one hand, this makes intuitive sense: it seems logical that individual QBs will have very similar roles (thus producing very similar fantasy point totals) from year to year and team to team. On the other hand, the sample sizes were (obviously) much smaller than the original data set, and there is some survivor bias: the replacement-level quarterbacks and blocking tight ends bringing up the rear have more turnover and thus would be excluded from this study. For that reason, I'm hesitant to read much into their modest correlations.
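Since R² throws away the sign, the per-position breakdown has to track the raw correlation coefficient to catch cases like the kickers. A sketch of the grouping, again using made-up records rather than the real data:

```python
import numpy as np

# Hypothetical per-player records: (position, 2011 CR, 2012 CR)
players = [
    ("QB", 0.56, 0.50), ("QB", 0.44, 0.50), ("QB", 0.69, 0.62),
    ("K",  0.50, 0.38), ("K",  0.31, 0.44), ("K",  0.62, 0.50),
]

# Group the CR pairs by position
by_pos = {}
for pos, c11, c12 in players:
    by_pos.setdefault(pos, []).append((c11, c12))

for pos, pairs in sorted(by_pos.items()):
    x, y = zip(*pairs)
    r = np.corrcoef(x, y)[0, 1]
    # A negative r means a higher 2011 CR predicted a *lower* 2012 CR,
    # which R^2 alone would hide -- hence the asterisk in the table above.
    print(f"{pos}: n={len(pairs)}, r={r:+.3f}, R^2={r * r:.4f}")
```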

The moral of the story is that chasing consistency is probably a fruitless endeavor. Sure, you could argue that a standard deviation-based metric might be more predictive, but with important differences in rosters, schemes, and opponents, it makes sense that any measure of weekly consistency is going to end up being...well, inconsistent.
