Why Jeff Sagarin Doesn't Deserve to Have a Hand in the BCS
A bachelor of science degree in mathematics from MIT is all that's required to become a major driving force in the multibillion-dollar Bowl Championship Series.
Enter Jeff Sagarin.
This 1970 graduate of the Massachusetts Institute of Technology has become one of the best-known sports statisticians around. But his ranking methodology is botching his all-important slice of the BCS rankings pie. Even by his own admission, the BCS is using the wrong rating system.
The BCS “computer ratings” are perhaps the most mystifying part of the whole chase for the national championship. Try as we might, the so-called experts can't manage to pull back the curtain on how these rankings work.
But we do know they're destroying college football.
Each week, we so-called experts struggle to make sense of what the computer rankings really mean, and we're even more useless when trying to predict what the varying electronic brains will spit out next week.
One reason this is so difficult is the jealousy with which the various computer ranking methodologies are guarded.
Jeff Sagarin and his systems—yes, plural—are no different.
Sagarin actually compiles two different sets of rankings for college football. He has been part of the BCS equation since its beginning back in 1998. But along the way, the BCS decided that "quality wins" and "margin of victory" shouldn't be part of the formula, mainly in response to the botched selections for the January 2004 bowl games, which produced exactly the kind of split national championship the BCS was created to avoid.
Of course, the selections after the 2004 season weren't much better, as any Auburn or Utah fan will tell you. So why do we keep getting rating controversies?
That's the million-dollar question.
Sagarin, and others like him, won't release his methodology to anyone, and the masses are simply left to trust he knows what he's doing. But does he? Even if we ignore the unknown method he uses, what can we infer from his results?
Since the BCS has done away with caring about “quality wins” or margin of victory, you might think Sagarin would stop caring, too. And you'd be wrong.
Each and every week, Sagarin releases two rankings: one taking into account the quality and margin of a victory, and one ignoring all but win-loss records.
Sagarin's base rating lists all 246 NCAA Division I football teams—all FBS and FCS programs—with a 247th spot for "unrated" teams (presumably the Division II programs some FCS teams play in a given season). This rating takes into account not only a team's win-loss record and strength of schedule (a blend of its opponents' records and its opponents' opponents' records), but also margin of victory and the number of "quality wins."
That just won't do for the BCS. So Sagarin came up with his cool-sounding “Elo Chess” rating.
In the world of chess (and other games), the “Elo rating system” is used to determine relative skill between two opponents. Invented by Arpad Elo, the rating system was adopted by the United States Chess Federation in 1960.
Chess is complicated enough. Putting together a rankings system for chess players is a different magnitude of complex. But suffice it to say that the prevailing wisdom is that Sagarin uses some variant of the Elo ratings to come up with his second number—the one used by the BCS.
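Sagarin has never published his exact "Elo Chess" variant, so the best we can do is look at the textbook Elo formula he presumably started from. The sketch below is the standard chess version, not Sagarin's actual method; the ratings and K-factor are illustrative assumptions.

```python
# Standard Elo update, as used by chess federations. Sagarin's "Elo Chess"
# variant is unpublished; this is only the textbook formula.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (between 0 and 1) for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a: float, rating_b: float, score_a: float,
               k: float = 32.0) -> float:
    """Return A's new rating after a result (1 = win, 0.5 = draw, 0 = loss).

    Only the outcome matters -- margin of victory never enters the formula,
    which is exactly why the BCS preferred this style of rating.
    """
    return rating_a + k * (score_a - expected_score(rating_a, rating_b))
```

Note the key property: beating an opponent 70-0 and beating him by a field goal move your rating by the same amount, since the update only sees win, loss or draw.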
Here's the kicker: Sagarin himself claims his ratings can be used to accurately predict results. So what happens when the rankings don't match?
Using Sagarin's own instructions, we can predict game outcomes for ourselves using his ratings.
To make predictions for upcoming games, simply compare the RATINGS of the teams in question and allow an ADDITIONAL 3 points for the home team. Thus, for example, a HOME team with a rating of 92 would be favored by 5 points over a VISITING team having a rating of 90. Or a VISITING team with a rating of 89 would be favored by 7 points over a HOME team having a rating of 79.
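The quoted rule is simple enough to put in a few lines of code. This is just a sketch of the arithmetic Sagarin describes above; the ratings used are the examples from his own instructions, not live data.

```python
# A minimal sketch of Sagarin's quoted prediction rule: compare the two
# teams' ratings and give the home team an extra 3 points.

HOME_EDGE = 3.0  # the "ADDITIONAL 3 points" from the instructions

def predicted_margin(rating_a: float, rating_b: float,
                     site: str = "neutral") -> float:
    """Points by which team A is favored over team B.

    site: 'a' if A is at home, 'b' if B is at home, 'neutral' otherwise.
    A negative result means team B is actually the favorite.
    """
    margin = rating_a - rating_b
    if site == "a":
        margin += HOME_EDGE
    elif site == "b":
        margin -= HOME_EDGE
    return margin

# The quote's own examples:
print(predicted_margin(92, 90, site="a"))  # home team favored by 5.0
print(predicted_margin(89, 79, site="b"))  # visitor favored by 7.0
```

On a neutral field (like a BCS title game), the predicted spread is just the difference between the two ratings.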
Simple enough, right? But Sagarin's ratings contradict one another quite frequently.
Let's look at just one example: the 2013 BCS National Championship Game.
In Sagarin's initial “predictor” rating, Alabama is the No. 1 team in the nation with a rating of 96.07. Notre Dame is No. 3 with a rating of 93.07. So, Alabama should be favored by three points (since it's a neutral field).
But when you look at Sagarin's “Elo Chess” rating, things look quite a bit different. Suddenly, Notre Dame is No. 1 and favored by 3 points (96.48 to 93.65).
So what does Sagarin actually believe?
Holding true to the sound ethics of a man of science—in this case, mathematics—Sagarin isn't afraid to throw the BCS under the bus.
In ELO_CHESS, only winning and losing matters; the score margin is of no consequence, which makes it very "politically correct". However it is less accurate in its predictions for upcoming games than is the PURE POINTS, in which the score margin is the only thing that matters.
PURE POINTS is also known as PREDICTOR, BALLANTINE, RHEINGOLD, WHITE OWL and is the best single PREDICTOR of future games. The ELO_CHESS will be utilized by the Bowl Championship Series (BCS).
Pretty fiery language, at least from the standpoint of the BCS and most college presidents and likely many head coaches, too.
Even though the BCS has determined that “style points” won't be part of the calculation, Sagarin clearly believes that is a massive error in judgment. He plainly states that taking margin of victory into account will provide a more accurate predictor, and thus a more sound rating for each team.
So why does he continue to supply the BCS with his coveted, if by his own admission inaccurate, ratings?
Conflicts of Interest and Conscience
Sagarin's ratings are part of the problem, but could also be part of the solution.
By continuing to provide the BCS with his “Elo Chess” ratings, Sagarin is perpetuating the myth that the BCS is the best system we currently have absent a playoff to decide the national champion each season. But Sagarin has also bluntly stated that he doesn't really believe that.
So why preserve the lie?
First, it's USA Today's way of double-dipping—only this is much more heinous than George Costanza's party foul.
USA Today sponsors not only the Coaches' Poll, which counts as a third of the BCS ranking each week, but also Sagarin's ratings. That's right: USA Today is putting its whole mouth in our BCS dip.
We're not suggesting any impropriety, but one poll in the BCS should be enough for any organization—even one as powerful as USA Today.
Secondly, there aren't many other ways mathematicians make a name for themselves. Seriously. How many other living mathematicians can you name? Sagarin probably isn't a particularly vain guy, but “BCS guru” looks good on any résumé.
We just wonder if he also admits on that CV that he doesn't really believe in the BCS or the “politically correct” quasi-rating he provides it.
While the standard "Predictor" rating, once part of the BCS equation, has the personal backing of its creator, the "Elo Chess" rating does not. As we move from the current system to a new playoff format, which rating will the new system choose: the more accurate one, or the more "politically correct" one?
The future playoff selection committee may choose to ignore computer ratings altogether. Or, they could take a good hard look at everything these systems spit out each and every week. Either way, it seems perfectly clear to us that Sagarin's "Elo Chess" rating just doesn't belong in the BCS, today or in the future.
After all, if Sagarin himself doesn't believe in it, why should we?