There's always been something off about the Official World Golf Rankings.
While Adam Scott is a great No. 1, with his major win and matinee-idol looks, it sure took a heck of a long time for him to get there. And that is part of the problem with the world golf rankings. The math and decision-making behind the rankings are just plain flawed on many levels.
Let me count the ways, at least to the best of my ability, because the rankings are like the great and powerful Oz: a little guy behind a big curtain, making up stuff and saying it in a loud voice.
For instance, Adam Scott had a heck of a year...in 2013. In 2014? Not so much.
So, should Bubba Watson be the No. 1 player with victories at The Masters and the Northern Trust Open? Probably. Is that the way it works? Obviously not. Should Adam Scott have been No. 1 six months ago? Probably.
For instance, how can Henrik Stenson have played in 59 tournaments, but only have 50 of them count in the rankings? How can Tiger Woods have played in 36 tournaments and have 40 of them count? That doesn't pass the sniff test, even if it's rubbed in bacon.
Even better is Steve Stricker, who played in 31 events but has been credited for 40. What? Exactly.
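The likely culprit behind those mismatched numbers is the rankings' averaging rule: a player's points are divided by the number of events played, but never by fewer than a minimum divisor (40 at the time), and with a cap on how many events can count. The sketch below is a simplified illustration of how a minimum divisor produces "played 31, credited for 40"; the minimum of 40 matches the published rule, but the rest of the real formula (two-year window, decaying weights) is left out.

```python
# Illustrative sketch of a ranking average with a minimum divisor.
# The 40-event floor matches the published OWGR minimum; everything
# else here is simplified for illustration.

MIN_DIVISOR = 40  # a player with fewer events still divides by 40

def average_points(total_points, events_played, min_divisor=MIN_DIVISOR):
    """Ranking average: total points over events played,
    but never over fewer than `min_divisor` events."""
    divisor = max(events_played, min_divisor)
    return total_points / divisor

# Hypothetical totals: a Stricker-like player with 31 events is
# "credited for 40," which dilutes his average by more than 20%.
print(average_points(200.0, 31))  # divided by 40, not 31 -> 5.0
print(average_points(200.0, 50))  # 50 events, divided by 50 -> 4.0
```

In other words, the divisor floor punishes part-time schedules: the fewer events you play below 40, the more phantom events get averaged in.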
Who are the people who invented the rankings, and were they investment bankers in a former life? Were they in Congress? Those are the only places where everybody knows one does not equal one and 10 could actually be 100, depending on how it's interpreted.
Another example is using two years of results to determine the top player today. That's such a stretch that it defies logic, gravity and a bunch of other laws invented by Isaac Newton.
Two years of points is the reason Paul Azinger changed the method for the U.S. Ryder Cup team selections in 2008. He knew that what a guy did two years ago has no bearing on his level of play right now. Azinger got rid of the two-year points accumulation, except for majors, and based it on one year. He probably should have gotten rid of the majors from the first year as well.
However, that's not good enough for the world golf rankings.
They hang on to two years of results even though they say they discount the first year a little. Why use that first year at all? It bears no relevance to the quality of a golfer's performance today. None whatsoever.
The only good thing about the current two-year system is that it used to be a three-year system.
Can you imagine if the NFL, NBA or MLB used performance of three seasons ago to determine who would be in their lineups? Would you use old results to create your fantasy leagues? Never.
So after many complaints about the three-year time frame of the rankings, somehow the rankings people were convinced to reduce it to two years. In reality, the longest period of time they should use is the most current 12 months.
If one year's results were used, Scott would probably have been No. 1 six months ago, what with the Masters victory and that slew of trophies last fall. But, no, the rankings people waited another entire season.
Now, here's the most important flaw in the rankings: They determine, in part, who gets into certain tournaments, like The Masters, U.S. Open, and British Open. When badly designed world golf rankings determine who gets into tournaments, does that make sense on any level?
Sponsors sometimes reward players based on their ranking. Are they getting true value?
Here are some examples of what doesn't make sense with this week's rankings:
- Charl Schwartzel won last December at the Alfred Dunhill in South Africa, yet he's ahead of Patrick Reed, who won twice on the PGA Tour in 2014. Schwartzel's ahead of Martin Kaymer, who just won The Players Championship against the toughest field in golf.
- Steve Stricker, lovely guy, but his last victory was at the beginning of 2012. He's ranked just ahead of Jimmy Walker, who has three PGA Tour victories since last October.
- Ian Poulter, who is just great for golf, last had a victory in 2012, and yet he's ranked ahead of Reed and Kaymer.
Nothing against any of these players. It's where the rankings have them. The rankings need a smart pill.
In addition to the two-year lag and a just plain nutty ordering of golfers, there's also the mystical method of assigning value to tournaments, which nobody explains.
They say the tournament value is based on the strength of the field. However, the strength of the field is based on the rankings of the players who enter, and those rankings come out of the same flawed system. It's circular.
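That feedback loop can be sketched in a few lines. The toy code below uses hypothetical players and point values, not the OWGR's actual formula: event "strength" is summed from entrants' current ranking points, and a win pays out points scaled by that strength, which then feeds the next ranking.

```python
# Toy illustration of the circularity in field-strength valuation.
# Player names and point values are hypothetical; the real OWGR
# formula is more elaborate, but the feedback loop is the point.

ranking_points = {"A": 10.0, "B": 6.0, "C": 2.0}

def field_strength(entrants, points):
    # Strength of field = sum of entrants' current ranking points.
    return sum(points[p] for p in entrants)

def points_for_winning(entrants, points):
    # Winner's reward scales with field strength (simplified: half).
    return field_strength(entrants, points) / 2

# Player C wins an event the rankings call "weak," so the win is
# worth little -- precisely because the same rankings rated the field.
entrants = ["B", "C"]
ranking_points["C"] += points_for_winning(entrants, ranking_points)
print(ranking_points["C"])  # 2.0 + (6.0 + 2.0)/2 = 6.0
```

A lowly ranked winner stays lowly ranked because his events are valued by the very rankings that undervalue him; that is the circularity the column is complaining about.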
The rankings people's explanation is like something you tell your little brother to make him go away so you can eat the last cookie, or like something an "important person" says to someone they assume isn't smart enough to see through it.
That explanation is why those of us who are closest to the game say the rankings are no more intelligent than the great and powerful Oz.
In reality, the rankings are a 2'0" guy with a big microphone, standing behind a curtain and believing that if he says the same thing—no matter how flawed—loud and often enough, people will believe it.
Unfortunately, the rankings people are right. Most people do believe it, no matter how faulty the whole system is. It would be easy to change the current system and make it better, but they won't do it.
Kathy Bissell is a Golf Writer for Bleacher Report. Unless otherwise noted, all quotes were obtained firsthand or from official interview materials from the USGA, PGA Tour or PGA of America.