Why Preseason Polls Are The Most Accurate Polls
Every time a top ten team is upset in September, a chorus of complaints arises about the preseason rankings. The argument goes something like this: how can anyone possibly know how good these teams are when they haven't played yet? And since many teams front-load their schedule with cupcake games, we really don't know how good anyone is until the season is at least five weeks old. By that time, just about everybody has faced some decent competition.
This argument is pretty much conventional wisdom, and it's why the BCS doesn't publish its first rankings until week six.
The accusations against the preseason and early season polls are a serious matter. A team's starting point in the BCS process is critical to its success in the final rankings. A solid team that starts out unranked has very little chance to rise up to the top of the polls unless it has at least two top five teams on its schedule. Cincinnati is a good example of a team facing that problem this year (be honest, nobody can spell Cincinnati without looking it up, right?).
If the preseason polls are indeed biased toward popular traditional programs like USC and Ohio State, then the BCS process is at least partly rigged. Frustration at this perceived lack of fairness is one of the reasons fans clamor for a playoff every year.
Personally, I've never agreed with these arguments and I don't think the preseason polls are a significant problem. I have always found them very reasonable assessments of which teams have the best potential to succeed in the upcoming season. A multitude of experts use a diverse set of quantitative evidence to handicap/seed/rank (whatever you want to call it) teams during the offseason. This evidence includes:
1) How a team finished the second half of the previous season.
2) How many starting seniors are returning, with an emphasis on key players from the previous team.
3) How a team has finished in the recruiting rankings in the last two years.
4) A position by position evaluation of individual talent.
5) The quality and track record of the coaching staff.
Of course, few—if any—coaches and AP writers perform this analysis in detail themselves. This is left to back room specialists who write for websites and magazines. Their conclusions are published over the summer and this is the data that influences the writers and coaches in their initial preseason polls.
I've also never noticed that polls later in the year are any more reliable or accurate than the early season polls. There always seem to be plenty of upsets in college football no matter what time of year.
I decided to take a look at some hard numbers to determine whether more upsets occur in the first five weeks of the season than in the last eight weeks. Fewer upsets later in the year would suggest that the polls in weeks six through thirteen are more reliable because of the knowledge gained from the results of the first five weeks.
I went back through the weekly results for the last five seasons and added up how many times a top ten team was upset by a team that was either unranked or at least ten rankings behind it in the polls. For example, I didn't count it if No. 10 was beaten by No. 11, only if No. 10 was beaten by No. 20 or worse, if No. 5 was beaten by No. 15 or worse, etc. (Source: http://rivals.yahoo.com/ncaa/football/scoreboard There is a drop down box that will take you to previous seasons.)
Here are the number of top ten team upsets by teams at least ten rankings beneath them over the last five years:
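The counting rule described above can be sketched as a small predicate. This is purely an illustration of the definition; the function name and structure are my own, not anything from the original tally:

```python
# Counting rule: a loss by a top ten team counts as an upset only if the
# winner is unranked or at least 10 rankings behind the loser in the poll.
def is_counted_upset(loser_rank, winner_rank):
    """loser_rank: poll rank of the beaten team (1-25).
    winner_rank: poll rank of the winner, or None if unranked."""
    if loser_rank > 10:
        return False  # only losses by top ten teams are tallied
    if winner_rank is None:
        return True   # an unranked winner always counts
    return winner_rank - loser_rank >= 10

# Per the article's examples:
# No. 10 beaten by No. 11 -> not counted; No. 10 beaten by No. 20 -> counted.
```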
Year...............Wks 1-5..........Wks 6-13
2007..................3...................14 (a crazy year, with eight upsets of teams ranked No. 1 or No. 2 during wks 6-13!)
We have to adjust these totals, because obviously more games are played in weeks six through thirteen, so there are naturally more upsets. Dividing each total by the number of weeks in its window gives the average number of upsets per week:
Weeks 1-5.......0.6 upsets per week average over five years (15 upsets/25 total weeks)
Weeks 6-13......1.1 upsets per week average over five years (44 upsets/40 total weeks)
Therefore, on an adjusted basis, there were still almost twice as many upsets in weeks six through 13 as in weeks one through five.
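The adjustment is just each raw total divided by the number of weeks in its window. A minimal sketch using the article's own totals (15 upsets over 25 early-season weeks, 44 over 40 late-season weeks):

```python
# Totals from the five seasons examined in the article.
early_upsets, early_weeks = 15, 25   # weeks 1-5: 5 seasons x 5 weeks
late_upsets, late_weeks = 44, 40     # weeks 6-13: 5 seasons x 8 weeks

early_rate = early_upsets / early_weeks  # 0.6 upsets per week
late_rate = late_upsets / late_weeks     # 1.1 upsets per week

print(round(early_rate, 1), round(late_rate, 1))   # 0.6 1.1
print(round(late_rate / early_rate, 2))            # 1.83: nearly twice as many
```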
To be sure, some—if not most or all—of this discrepancy can be attributed to the fact that top ten teams face weaker competition in the first five weeks of the season. Call this the cupcake factor. I don't think it's possible to prove the exact influence of this factor one way or the other. However, I personally consider the difference much too big to be accounted for by the cupcake issue alone. If you look closely at these teams' schedules, there are plenty of cupcakes late in the season too.
So, have I proven that preseason polls are the most accurate polls? Not really. The cupcake factor definitely undermines that argument. I admit that I wrote my headline to attract readers and stir up the pot a bit. However, I think I did prove that the preseason polls are no less accurate than later season polls. There doesn't seem to be any knowledge gained as the season progresses that allows us to more accurately assess who is "really" number one.
That's a good thing. If we had the "correct" rankings at all times, there would really be no reason to play the games, right?
Memo to the also-ran teams: stop whining about the preseason polls. If you want a higher preseason ranking, then win more games, keep more seniors, hire a better coaching staff, and recruit better.
Final note: it was an interesting exercise eyeballing these games over the last five years. Two things stood out to me:
1) Texas is really underrated. They almost always took care of business with room to spare.
2) USC, on the other hand, had a lot of close calls in addition to the many well-known upsets.
My conclusion: Mack Brown is an underrated coach and Pete Carroll an overrated coach. Even though it may be true that USC faced tougher competition than Texas, most of the squeakers I noticed were against inferior programs. The more I go back and do these statistical analyses over this decade, the better Texas looks.