| Position | Start | Stud | Dud |
| --- | --- | --- | --- |
| QB | Top 10 | Top 2 | 21st+ |
| RB | Top 25 | Top 5 | 51st+ |
| WR | Top 25 | Top 5 | 51st+ |
| TE | Top 10 | Top 2 | 21st+ |
| K | Top 10 | Top 2 | 21st+ |
| DST | Top 10 | Top 2 | 21st+ |
Each player's CR is then calculated as the number of starts divided by the number of games his team has played.
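To make that concrete, here's a minimal Python sketch of the calculation, using the start thresholds from the table above. The `weekly_ranks` input and the example numbers are hypothetical stand-ins for however you actually track weekly positional finishes.

```python
# "Start" thresholds from the table above: a start is a weekly
# finish at or better than this positional rank.
START_THRESHOLD = {
    "QB": 10, "RB": 25, "WR": 25,
    "TE": 10, "K": 10, "DST": 10,
}

def consistency_rating(position, weekly_ranks, team_games):
    """CR = number of starts / number of games the player's team played.

    weekly_ranks: the player's positional finish for each game he appeared
                  in (e.g., 7 means he was that week's QB7).
    team_games:   games his team played; can exceed len(weekly_ranks)
                  if he missed time.
    """
    threshold = START_THRESHOLD[position]
    starts = sum(1 for rank in weekly_ranks if rank <= threshold)
    return starts / team_games

# Hypothetical example: a QB with six top-10 weeks over a 16-game season.
print(consistency_rating("QB", [3, 12, 8, 25, 9, 6, 14, 10, 2], 16))  # 0.375
```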
For additional reference, here's a scatter plot comparing the 2011 and 2012 CRs.
We see that QBs and TEs have somewhat stronger correlations, whereas kickers actually had a slightly negative one: a higher CR in 2011 predicted (albeit very weakly) a lower CR in 2012. On the one hand, the QB and TE result makes intuitive sense: individual QBs tend to keep very similar roles (and thus produce very similar fantasy point totals) from year to year and team to team. On the other hand, the sample sizes were (obviously) much smaller than the original data set, and there is some survivorship bias: the replacement-level quarterbacks and blocking tight ends bringing up the rear turn over more often and thus would be excluded from this study. For those reasons, I'm hesitant to read much into their modest correlations.
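For anyone who wants to reproduce the comparison, a year-over-year correlation like these is just a Pearson coefficient over the paired CRs. A minimal NumPy sketch, with made-up placeholder values rather than the actual data:

```python
import numpy as np

# Hypothetical paired CRs for players who qualified in both seasons;
# substitute the real 2011/2012 values.
cr_2011 = np.array([0.75, 0.50, 0.88, 0.31, 0.63])
cr_2012 = np.array([0.69, 0.56, 0.81, 0.44, 0.50])

# Pearson correlation coefficient between the two seasons.
r = np.corrcoef(cr_2011, cr_2012)[0, 1]
print(f"year-over-year r = {r:.3f}")
```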
The moral of the story is that chasing consistency is probably a fruitless endeavor. Sure, you could argue that a standard deviation-based metric might be more predictive, but with important differences in rosters, schemes, and opponents, it makes sense that any measure of weekly consistency is going to end up being...well, inconsistent.