Arguably, the most controversial responsibility of the current BCS system is to rank the top teams in the upper subdivision of Division I.
The purpose of the rankings is twofold. First, they provide a metric by which to judge the relative performance of teams during the regular season. Second, they help determine selections to the five BCS bowls, including the national championship, and to some of the other 30 bowls.
To that end, the BCS uses three sources, which it weights equally, to make its ranking: the USA Today Coaches Poll, the Harris Interactive College Football Poll and a composite taken from the middle four ratings of the six major computer polls (Sagarin, Wolfe, Billingsley, Anderson & Hester, Colley Matrix and Massey).
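The mechanics of that composite are simple enough to sketch. Below is a minimal Python illustration, under the simplifying assumption that raw ranks are combined directly (the real BCS converts each component to normalized points before averaging); the function names are mine, not the BCS's.

```python
def computer_composite(ranks):
    """Trimmed mean of the six computer ranks for one team.

    Simplified sketch: drop the single highest and lowest rank,
    then average the middle four, as the BCS composite does.
    """
    if len(ranks) != 6:
        raise ValueError("expected six computer ranks")
    trimmed = sorted(ranks)[1:-1]  # discard best and worst rank
    return sum(trimmed) / len(trimmed)


def bcs_score(coaches_rank, harris_rank, computer_ranks):
    # The three components are weighted equally (lower is better here).
    return (coaches_rank + harris_rank + computer_composite(computer_ranks)) / 3
```

A team ranked 1st and 2nd by the human polls and [1, 2, 2, 3, 3, 10] by the computers would see the outlier 10 trimmed away, illustrating why the middle-four rule blunts any single eccentric computer model.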
These sources, particularly the computer rankings, are nearly as suspect as the conclusions the BCS draws from them. The BCS disallows the computer rankings from using scoring margin as a component, even though Sagarin, for example, acknowledges that it is a much better predictor than the politically correct quality-win-loss model.
Guess what? Sagarin includes margin of victory in his own personal models, and provides a model without that vital stat to the BCS.
The other two components, the Harris and USA Today polls, are both composed of human votes. The main problem with these polls is the plain big-conference bias they show, although that bias can also weasel its way into the computer rankings.
On several occasions, undefeated teams from non-automatic-qualifying conferences were denied spots in the championship game. This bias also supports particular conferences over others. In the wake of the all-SEC title game, I don’t feel the need to say any more on that subject.
Of course, it’s also somewhat distasteful that the BCS ignores the AP Top 25, one of the two polls that had determined a national champion in the years before the BCS. Perhaps they prefer to disfavor the results from a poll whose voters don’t march in lockstep with their own aims.
Five of the 60 voters chose a national champion other than Alabama.
The rankings, dubious as they are, wouldn’t be as big an issue if the BCS had a playoff to level the field. Instead, it pairs the first- and second-ranked teams in its poll in the “national championship” game.
This allows the BCS to match against each other two teams that might not even survive the first round of a playoff, one that could include the better mid-major teams.
Often, the rankings are imprecise and leave plenty of room for question, such as the rejection of a highly qualified Oklahoma State team from the championship game last year. Instead, a team that didn’t even win its own division within its conference was awarded the spot, and went on to win the game.
There are a lot of problems with the current ranking system. Let’s throw it out.
In its place, we could pursue one of two options. The simplest would be to average the rankings of the AP Top 25 and the USA Today Coaches’ Poll, the two polls that determined the champions of the pre-BCS era.
An alternative would be to use a composite of those two polls, the Harris Poll, and an amalgamation of the six computer polls, each of which would be required to submit rankings built to the same standard it publishes under its own name—rankings that include margin of victory.
In either scenario, all of the components would be weighted equally.
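Either proposal is just an equal-weight average, which a short Python sketch makes concrete. This is my own illustration of the two options described above, not an existing formula; the function names are hypothetical.

```python
def two_poll_average(ap_rank, coaches_rank):
    # Option 1: a plain average of the two traditional polls.
    return (ap_rank + coaches_rank) / 2


def four_way_composite(ap_rank, coaches_rank, harris_rank, computer_ranks):
    # Option 2: three human polls plus an amalgamated computer rating,
    # all weighted equally. The computer amalgamation is shown as a
    # simple mean, assuming each model already incorporates margin
    # of victory as it does under its own name.
    computers = sum(computer_ranks) / len(computer_ranks)
    return (ap_rank + coaches_rank + harris_rank + computers) / 4
```

Either way, no single component can dominate: a team a human poll loves but the computers doubt (or vice versa) gets pulled back toward the consensus.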
Either of these alternatives would make the ranking process far fairer. Of course, much of the potential damage would be undone by the implementation of a playoff—but that will be the subject of the next article in this series.