Before I left Baseball Prospectus to go to KPMG Consulting in 1998,
I would occasionally write a piece on defense. Now, I would always get a
bunch of mail about my articles, but none, including those in which I would
personally insult huge segments of the population, drew as much mail, both
really angry and effusively gushing, as the articles I would write on
defense. Partisan fans, particularly those of traditionally praised glove
wizards and "hunky" shortstops, would dredge up the most
hair-curling invective you can imagine. Otherwise nice, friendly people
would turn into a cross between Pat Buchanan sans Metamucil and Robert
Downey Jr. in day three of detox. "No, Rob, I didn’t see
Chaplin. I meant to, though."
Evaluating defense is difficult. Hell, evaluating offense is difficult, and
we have some very good data on exactly what happens every time someone
steps into the batter’s box. With defense, we have some data, but it’s not
nearly as clean as the data on offensive performance and there’s no real
consensus on what the best measure is. For offense, most analysts now agree
on OPS as a quick-and-dirty measure of offensive performance, with some
adjustment for baserunning ability and home park. But for defense, the most
commonly used metric is still fielding percentage.
Can you f*&^ing believe that? I can’t. To calculate fielding percentage,
you add up the number of times a player either made a putout or an assist,
and divide it by that same number plus the number of errors a player made.
That measures one thing: the ability to avoid persuading the official
scorer that you should have made a particular play. What the hell does that
have to do with defense?
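If you want the arithmetic spelled out, here’s a quick sketch in Python; the stat line is invented for illustration, and the code is my shorthand, not anything official:

```python
def fielding_percentage(putouts: int, assists: int, errors: int) -> float:
    """Fielding percentage: FPct = (PO + A) / (PO + A + E)."""
    chances = putouts + assists + errors
    if chances == 0:
        return 0.0  # no chances handled, nothing to rate
    return (putouts + assists) / chances

# Made-up season line: 250 putouts, 420 assists, 15 errors
print(round(fielding_percentage(250, 420, 15), 3))  # 0.978
```

Notice what’s missing from that formula: every ball the fielder never got to. A statue who reaches nothing but catches what’s hit at him grades out beautifully.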
This is a simple game; you win by scoring more runs than your opponent.
Offense is about scoring runs, and defense is about preventing your
opponent from doing the same thing. Fielding percentage measures something
totally separate from preventing runs, yet it is the de facto standard for
defensive measurement today. Blows my mind.
So what should you use? Range Factor, in its various flavors, adds putouts
and assists, divides them by innings played, and multiplies by nine, yielding a
"number of plays made per game" metric; sort of like ERA for
fielders. It’s not great, but it’s not the end of the world. For one, it can
be skewed by the strikeout rates and groundball/flyball ratios of
a team’s pitchers. It’s also subject to the effects of the other
fielders; most infield putouts other than those at third base happen because
someone else threw you the ball.
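For comparison, here’s the same kind of sketch for the per-nine-innings version of Range Factor described above (again, invented numbers, my illustration only):

```python
def range_factor(putouts: int, assists: int, innings: float) -> float:
    """Range Factor per nine innings: RF = 9 * (PO + A) / innings."""
    if innings <= 0:
        raise ValueError("innings must be positive")
    return 9 * (putouts + assists) / innings

# Made-up season line: 300 putouts, 450 assists in 1,350 innings
print(round(range_factor(300, 450, 1350.0), 2))  # 5.0
```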
Clay Davenport’s methodology in the BP book is pretty solid, but to be
quite honest, it’s a little complicated. Super-precise metrics usually
require tons of time, and they can end up more precise than the data
will support. STATS Inc. has a measure called Zone Rating, but it’s not
perfect by any means: the zones are pretty funky. Gold Gloves? Ha! Remember
when Bill Murray was still on Saturday Night Live and did his annual
Oscar Preview on "Weekend Update"? Each year, he’d grab one of the
nominees for Best Picture, usually some relatively artsy flick,
and casually throw it away, stating "Didn’t see it." That,
combined with after-hours Glenlivet benders, is the essence of the Gold
Gloves.
What about personal observation?
Personal observation of defense, particularly on the part of fans, is worth
less than nothing. It is all but impossible for a fan watching a game to
assess the individual defensive performance of a ballplayer.
The most important part of playing defense is the first step. You don’t see
the first step of a defender if you’re watching on television, and you
usually don’t see it if you’re watching in person. I used to have a couple
of friends who worked in scouting, and I’d occasionally get to see these
really cool tapes that focused on one player, and had a little blip on them
when the ball was released. It was very cool stuff, designed specifically
to support scouting efforts. But even those things had their limitations.
Highlights you see on Baseball Tonight or Fox Sports Net tell you
nothing about the defensive performance of a player; they tell you more
about who made a visually interesting play on a given day, or which players
the producers like to show. That’s all.
Even if you could observe defense effectively, to make a reasonable
assessment you’d have to watch every player at a given position over a
large enough number of games. Could
you judge hitters that way, if no one wrote down the results of each of
their plate appearances? Trust me: the answer is no.
So, now that we’ve gotten the caveats out of the way, let’s have some fun,
selectively ignore the dangers and talk about defense. Over the next few
weeks, we’ll be running my top ten and bottom five defenders at each
position. We’ll start with second base on Thursday, and a new position will
follow every few days, finishing up with the strictly subjective (and
probably laughable) ratings for first basemen and catchers, for whom
quantitative analysis is about as useful as Ben Christensen’s
conscience.
The ratings are a combination of Zone Rating, Range Factor, and me making
the best (and admittedly grossly flawed) assessment I can of the job
they’re doing. One metric by itself doesn’t tell you a whole lot, but when
they all agree, and agree with visual perception, I’m pretty comfortable
that you’re getting close to an accurate representation of a player’s defense.
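To be concrete about what "if they all agree" means, here’s a toy sketch; it’s my own illustration of the agreement check, emphatically not the actual procedure behind these ratings, and the data are invented:

```python
# Toy agreement check (my illustration, not the actual rating method):
# standardize each metric, then flag players whose Zone Rating and
# Range Factor point the same way.
from statistics import mean, pstdev

def z_scores(values):
    """Convert raw metric values to z-scores."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

# Invented data: (player, zone_rating, range_factor)
players = [("A", 0.850, 5.10), ("B", 0.820, 4.60), ("C", 0.880, 5.40)]
zr = z_scores([zone for _, zone, _ in players])
rf = z_scores([rng for _, _, rng in players])
for (name, _, _), z_zr, z_rf in zip(players, zr, rf):
    verdict = "agree" if abs(z_zr - z_rf) < 0.5 else "disagree"
    print(f"{name}: ZR z={z_zr:+.2f}, RF z={z_rf:+.2f} -> {verdict}")
```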
Gary Huckabay can be reached at huckabay@baseballprospectus.com.