Beane Counting: How To Grade an MLB General Manager

George Fitzpatrick, Correspondent I, May 29, 2009

(Photo caption: Oakland Athletics vice president and general manager Billy Beane speaks at an April 1, 2005 news conference in Oakland, California, alongside new co-owner and managing partner Lewis Wolff and team president Michael Crowley. Photo by Justin Sullivan/Getty Images)

Although many refer to baseball's current generation as the "steroid era," medically enhanced physiques are far from the only change in the game. Pitcher wins, losses, and saves, batting average, RBI, errors, and bunting all seem to be on the way out in favor of adjusted ERA, BABIP, VORP, range factors, and pitch counts. This newer era of baseball is shaped by the work of sabermetrics: trying to measure objectively what leads to wins and losses, challenging the conventional wisdom, and often upending it.

The book Moneyball and the success of small-market clubs like the Oakland A's in the early 2000s had a lot to do with this changing of the guard, and one consequence is that general managers are now in the public eye in a way never before seen in baseball history.

For example, nearly every baseball fan would associate Billy Beane with the Athletics more readily than most of the players currently on Oakland's roster, yet few could name offhand the executives behind the great teams of the past. Does anyone associate Bob Howsam with the Big Red Machine?

Because of this, a great irony of the sabermetric era has emerged: great strides have been made in measuring players objectively, but the man who assembled the team gets a free pass when it comes to such analysis. In my experience, arguments among my peers about the best GMs in baseball tend to consist of hyperbolic statements like "X is the worst GM ever," or they turn one bad transaction into the basis of failure, as in "He traded X, Y, and Z for W! That's an awful trade."

Discussion like this gets us no closer to an answer. Some articles have attempted a more rigorous ranking, but the flaws in each seem to undermine the results.

David Gassko tried in 2005 with his Hardball Times article "Ranking the General Managers," basing his rankings on three factors: how well a team is built for its park, how much the team wins relative to its payroll, and how much a team gained or lost at the trade deadline.

This was done only for the 2004 season, meaning every factor was based on that single season's data; that was the stated scope of the article, though, so it is understandable.

However, the problems in the formula are significant. Building a team for its home park may not be a positive factor at all. I can understand why it was included, since it sounds wise, but when the measurement amounts to simply playing better at home than on the road, it can just as easily be read as a detriment as a positive.

The midseason-transactions idea is interesting and has some merit, but both of these factors are weighted as heavily as the team's "bang for your buck." That factor would seem to take precedence over the other two, especially the home-field measure, and that imbalance makes his results hard to accept.

Forbes published a study measuring the best GMs in sports in 2007, with the baseball GMs ranked here; it factored in improvement over a predecessor and payroll relative to the league.

Aside from the problems of making this a sports-wide exercise without adjusting for the winning and payroll conditions of each specific league, comparing a GM only to his predecessor is flawed. Any GM with a historically awful predecessor will rank highly even after a merely mediocre job, and any GM with a historically great predecessor will rank as mediocre even if he maintains the team's success.

Also, winning is weighted twice as heavily as payroll, an idea I don't object to in theory, but no reasoning is given for how that proportion was chosen.

Of all the factors seen above, the only variables that can truly be used are the success of the team and its relative payroll. These also allow comparisons across different years, since payrolls can be adjusted.

This idea already appears in subjective analysis; a typical statement in measuring GMs is "He won X games with a $Y million payroll in year Z." But without a baseline for how many games a given payroll should have won in a given year, no conclusions can be drawn from it. Until that question is answered, there is no logical starting point.

To get this baseline, I built a database of team payrolls and Pythagorean winning percentages for every team for which I had payroll information (I used USA Today's salary database, found here).

To adjust for the different average payrolls of each year, I found the Z-score of each payroll, i.e., how many standard deviations it sits from the mean spending of that season. Running a regression on those variables yields an equation for what should be expected of a team at a given payroll:

Y = 0.0263293618645785(X) + 0.500400325761534

This equation says that each standard deviation of payroll should raise or lower a team's winning percentage by about .026 (over a 162-game season, a little more than four wins). The regression's correlation coefficient is only about 0.17, which is statistically significant with roughly 600 team-seasons of data, but it indicates that a large payroll is far from enough to win games (sorry, critics of Brian Cashman).
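For anyone who wants to reproduce the baseline, here is a rough sketch of the two steps in Python: per-season payroll Z-scores, then a simple least-squares fit. The data arrays below are made-up placeholders rather than the real USA Today figures, so the fitted numbers will differ from mine.

```python
import numpy as np

# Made-up stand-in data, one entry per team-season. The real study used
# USA Today payroll figures and Pythagorean winning percentages.
seasons  = np.array([2003, 2003, 2003, 2003, 2004, 2004, 2004, 2004])
payrolls = np.array([30.0, 55.0, 80.0, 150.0, 35.0, 60.0, 90.0, 180.0])  # $MM
pyth_pct = np.array([0.440, 0.490, 0.520, 0.580, 0.460, 0.500, 0.530, 0.560])

# Z-score each payroll against the mean and standard deviation of its own
# season, so years with very different salary scales can be pooled together.
z = np.empty_like(payrolls)
for year in np.unique(seasons):
    mask = seasons == year
    z[mask] = (payrolls[mask] - payrolls[mask].mean()) / payrolls[mask].std()

# Ordinary least-squares line: expected winning pct as a function of payroll Z.
slope, intercept = np.polyfit(z, pyth_pct, 1)
print(f"expected_wpct = {slope:.4f} * z + {intercept:.4f}")
```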

The middle 95 percent of payrolls fall between Z-scores of -2 and 2, which, by the formula, means an average GM should win about 72.5 games at an extremely low payroll and 89.6 games at a relatively high one. (The only team to break these constraints on a regular basis is the Yankees.)

This is an interesting conclusion: 73 wins on minimal financial resources looks like a better-than-average job, just as 90 wins on a huge payroll looks like a mild disappointment. It also reinforces what the low correlation coefficient suggests, namely that money is somewhat overrated in baseball.
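To spell out the arithmetic behind those 72.5- and 89.6-win bounds, here is the formula evaluated at the Z = -2 and Z = +2 edges:

```python
# Evaluate the regression at the edges of the middle 95 percent of payrolls.
slope, intercept = 0.0263293618645785, 0.500400325761534

for z in (-2, 2):
    wpct = slope * z + intercept
    print(f"Z = {z:+d}: {wpct:.4f} winning pct -> {162 * wpct:.1f} wins")

# Output:
# Z = -2: 0.4477 winning pct -> 72.5 wins
# Z = +2: 0.5531 winning pct -> 89.6 wins
```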

Applying this to GMs is a matter of comparing how a team did against that baseline (in math terms, finding each season's residual from the equation). I took it a step further by converting those values into more readable curved test grades: 75 is an average job, a GM will rarely score over 100 or below 50, and grades above 90 or below 60 are extremely difficult to achieve.
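The conversion works roughly like this; the scale factor shown here is illustrative rather than the exact curve I used, chosen so that a residual of about a tenth of winning percentage maps to roughly 25 grade points.

```python
# Convert a season's residual into a curved test grade (75 = average job).
slope, intercept = 0.0263293618645785, 0.500400325761534
POINTS_PER_RESIDUAL = 250.0  # illustrative scale factor, not the exact curve

def season_grade(payroll_z: float, pyth_pct: float) -> float:
    """Grade one GM season against the payroll baseline."""
    expected = slope * payroll_z + intercept
    residual = pyth_pct - expected   # how far the team beat (or missed) it
    return 75.0 + POINTS_PER_RESIDUAL * residual

# Example: a team with an average payroll (Z = 0) and a .602 Pythagorean pct
print(round(season_grade(0.0, 0.602), 1))  # 100.4 under this scaling
```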

Here are the rankings for the 10 best seasons and 10 worst seasons for a GM since 1988:

Ten best seasons:

Team   Year   Grade   GM
OAK    2001   100.4   Billy Beane
SEA    2001    98.7   Pat Gillick
MON    1994    97.7   Kevin Malone
HOU    1998    97.2   Gerry Hunsicker
CLE    1995    94.5   John Hart
LAA    2002    94.1   Bill Stoneman
NYY    1998    94.0   Brian Cashman
ATL    1998    92.9   John Schuerholz
OAK    2002    92.6   Billy Beane
ATL    1993    92.5   John Schuerholz

Ten worst seasons:

Team   Year   Grade   GM
TOR    1994    57.5   Pat Gillick
NYY    1990    57.1   Harding Peterson
DET    1996    56.5   Randy Smith
DET    1995    56.2   Joe Klein
FLA    1998    55.5   Dave Dombrowski
DET    1989    55.0   Bill Lajoie
BAL    1988    53.1   Roland Hemond
DET    2002    50.8   Randy Smith
ARZ    2004    50.4   Joe Garagiola Jr.
DET    2003    49.3   Dave Dombrowski


Click here to view a table of the rankings of every GM who has held the job for at least five seasons since 1988, determined by averaging the scores of each of his seasons. A list of only active GMs is also included.

If you want to see individual seasons for a GM, comment and I'll post them. Feel free to debate my rankings and critique my formula; I would love to make the method even better.