Before the 2012 NFL season, I used Minitab Statistical Software to project the top 100 fantasy football players. In that analysis, I used projections from ESPN, Yahoo, and ones derived solely from Minitab. I averaged the projections from all 3 sources and then ranked the players by taking the difference between each player's projection and the score of the "average" player at his position. As with any statistical model that projects future events, it's a good idea to go back and see how accurate the projections were. So that's exactly what we'll do here!
Were the Projections Accurate?
Our first order of business will be to see how accurate the projections were. (Want to follow along? Here's my data.)
I’ll use Minitab’s Fitted Line Plot to do this. In the graph below, I’m comparing the projections to the actual points each of the top 100 players scored, including only games through week 16. If a player played in fewer than 10 games (2/3 of the season) due to injury, I omitted him from the model. However, if he missed some games but still played in at least 10, I extrapolated his average to get a score for 15 games. Here are the results, in average points scored per game.
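The filtering and extrapolation rule above can be sketched in a few lines of Python. The player names and numbers here are hypothetical, purely to illustrate the logic:

```python
# Hypothetical records: (name, total fantasy points, games played).
players = [
    ("Player A", 240.0, 15),  # played the full slate through week 16
    ("Player B", 156.0, 12),  # missed games, but played at least 10
    ("Player C", 64.0, 6),    # under 10 games: omitted from the model
]

def per_game_average(total_points, games_played, min_games=10):
    """Return the per-game average, or None if the player is
    omitted for playing fewer than min_games games."""
    if games_played < min_games:
        return None
    return total_points / games_played

averages = {name: per_game_average(pts, g) for name, pts, g in players}

# A kept player's average can also be extrapolated to a 15-game score,
# as described above for players who missed a few games.
extrapolated_15 = {n: a * 15 for n, a in averages.items() if a is not None}
```

Player C drops out entirely, while Player B's 12-game average is carried forward as if he had played 15 games.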
Looking at the R-squared value, we can conclude that 72% of the variation in a player’s final average is explained by his preseason projection. Keep in mind that we already removed much of the variation due to injury by excluding players with fewer than 10 games. But still, considering how random the sport of football is, I would say the preseason projections were pretty accurate!
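For readers who want to reproduce the idea outside Minitab, here's a minimal sketch of how a fitted line plot's R-squared is computed. The projected and actual averages below are made up for illustration, not the real data:

```python
import numpy as np

# Made-up projected vs. actual per-game averages.
projected = np.array([18.0, 15.5, 14.0, 12.5, 11.0, 10.0, 9.5, 8.0])
actual    = np.array([17.0, 16.5, 12.0, 13.5, 10.0, 11.5, 8.0, 7.5])

# Fit actual = b0 + b1 * projected, which is what the fitted line plot does.
b1, b0 = np.polyfit(projected, actual, 1)
fitted = b0 + b1 * projected

# R-squared: the share of variation in the actuals explained by the line.
ss_res = np.sum((actual - fitted) ** 2)
ss_tot = np.sum((actual - np.mean(actual)) ** 2)
r_squared = 1 - ss_res / ss_tot
```

The closer `r_squared` is to 1, the more of the final averages the preseason projections account for.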
Let’s move on and see what players the model wasn’t accurate for.
Where Were the Projections Off?
We can use the Fits and Diagnostics for Unusual Observations table in General Regression to determine the players that the preseason projections missed the most on. The table will give us values with a large standardized residual. The standardized residual is the value of the residual divided by its estimated standard deviation. So the larger the value, the further the final average is from the projected average. A standardized residual greater than 2 or less than -2 is usually considered large. In the table, Minitab labels these observations with an “R”.
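Here's a simplified sketch of that flagging rule on made-up numbers. Note that Minitab's standardized residual also adjusts for each observation's leverage; this version just divides each residual by the overall residual standard deviation, so the values won't match Minitab's output exactly:

```python
import numpy as np

# Made-up projected vs. actual averages; the 5th player badly beats his projection.
projected = np.array([18.0, 15.0, 13.0, 12.0, 11.0, 10.0, 9.0, 8.0])
actual    = np.array([17.5, 15.5, 13.5, 11.0, 18.0, 9.5, 9.0, 8.5])

b1, b0 = np.polyfit(projected, actual, 1)
residuals = actual - (b0 + b1 * projected)

# Divide by the residual standard deviation (ddof=2: two estimated coefficients).
std_resid = residuals / residuals.std(ddof=2)

# Observations with |standardized residual| > 2 get flagged ("R" in Minitab).
flagged = np.where(np.abs(std_resid) > 2)[0]
```

Only the player at index 4, who scored far above his projection, exceeds the threshold.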
We see that 4 observations have a large standardized residual. When we look at the worksheet, we see that the 4 players are:
- C. J. Spiller
- Doug Martin
- Robert Griffin III
- Peyton Hillis
The variation in C. J. Spiller is easy to explain. Before the season, he was backing up Fred Jackson. But Jackson was injured in the first game of the season, and Spiller became the starter. Getting all the carries in Buffalo, Spiller easily outperformed his projection. Fred Jackson did return later in the season, creating a time-share with Spiller. But Jackson was injured again, leaving Spiller as the lone running back for the last few games of the season.
Doug Martin and Robert Griffin III were both rookies who hadn’t played a single down in the NFL before this season. With so little information, they were just going to be hard to predict. Both players became breakout stars in their first season, easily outperforming their projection.
Then there's Peyton Hillis. He switched teams from Cleveland to Kansas City before the season. Obviously the switch was for the worse, as Hillis vastly underperformed his projection. That just shows how hard it can be to predict how players will perform on different teams.
Overall, this model's projections did quite well. But they were an average of projections from ESPN, Yahoo, and Minitab. Did any of those 3 sources do a better job on its own?
Who Had the Best Projections?
We can use General Regression again to answer our question. But this time, instead of entering the average of the 3 as the only predictor, I’ll put in each one individually.
Then I’ll run the regression analysis to obtain the p-value for each predictor.
The R-squared value went up a little, to 77%. But looking at the p-values for the predictors, we see that ESPN and Minitab have p-values greater than 0.05. This means they are not significant and can be removed from the model. However, we have to remove the predictors one at a time, starting with the least significant. So we’ll remove the ESPN average from the model first.
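That one-at-a-time removal procedure is known as backward elimination, and it can be sketched as follows. The data here are simulated rather than the real projections, and the bare-bones OLS p-value computation is a stand-in for what Minitab's General Regression reports:

```python
import numpy as np
from scipy import stats

def predictor_pvalues(X, y):
    """OLS with an intercept; return a two-sided t-test p-value
    for each predictor's coefficient (intercept excluded)."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    dof = n - Xd.shape[1]
    resid = y - Xd @ beta
    s2 = resid @ resid / dof
    se = np.sqrt(np.diag(s2 * np.linalg.inv(Xd.T @ Xd)))
    return 2 * stats.t.sf(np.abs(beta / se), dof)[1:]

def backward_eliminate(X, y, names, alpha=0.05):
    """Drop the least significant predictor, one at a time,
    until every remaining predictor has p <= alpha."""
    X, names = X.copy(), list(names)
    while len(names) > 1:
        p = predictor_pvalues(X, y)
        worst = int(np.argmax(p))
        if p[worst] <= alpha:
            break
        X = np.delete(X, worst, axis=1)
        del names[worst]
    return names

# Simulated data: "Yahoo" drives the outcome; the others are noisy copies of it.
rng = np.random.default_rng(0)
yahoo = rng.uniform(5, 20, 100)
espn = yahoo + rng.normal(0, 3, 100)
minitab = yahoo + rng.normal(0, 3, 100)
actual = yahoo + rng.normal(0, 1, 100)

X = np.column_stack([espn, yahoo, minitab])
kept = backward_eliminate(X, actual, ["ESPN", "Yahoo", "Minitab"])
```

Because the elimination is driven by the p-values at each step, redundant predictors fall away while the one carrying the real signal survives.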
Removing ESPN from the model didn’t change the R-squared value at all! However, Minitab is still not significant, so we’ll remove that too.
Now we have a model where all of the predictors are significant. But look at the R-sq (adj): it is lower than in the previous model. When comparing models with different numbers of predictors, you should compare them using adjusted R-squared rather than R-squared, because R-squared never decreases when you add a predictor. Even though the adjusted R-squared is slightly lower here, we'll favor the simpler model in which every predictor is significant. So we're going to stick with the model that includes only the Yahoo average.
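The adjustment that R-sq (adj) applies is easy to compute by hand. The sketch below uses illustrative numbers, not the actual model output, just to show how the penalty grows with the number of predictors:

```python
def adjusted_r_squared(r_squared, n, p):
    """Adjusted R-squared penalizes extra predictors:
    n = number of observations, p = number of predictors."""
    return 1 - (1 - r_squared) * (n - 1) / (n - p - 1)

# Illustrative numbers only: the same raw R-squared is penalized
# more heavily with three predictors than with one.
three_pred = adjusted_r_squared(0.77, 90, 3)
one_pred = adjusted_r_squared(0.77, 90, 1)
```

With the same raw R-squared, the single-predictor model keeps more of it after adjustment, which is why adjusted R-squared is the fairer yardstick when the predictor counts differ.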
The question now becomes: how different would our projections have been had we just used the projections from Yahoo? And if we were to re-draft right now, based on what we know from the season, who would the top picks be? In my next post, I'll answer both of these questions and look ahead to next year's season, too!