Previous Installments of Reworking WARP:
- The Series Ahead [8/21]
When I started working on a series about revising WARP, I didn’t expect to have much to say on the subject of offense. Measuring offense is probably the least controversial part of modern sabermetrics. So why start here? I have a few reasons:
- It’s a good place to start, foundationally. The topic of run estimation covers a lot of tools that are useful in more up-for-debate areas.
- The goal of this series is to be inquisitive; we shouldn’t just assume anything is right. We ought to test.
- We tend to take the relatively low amount of measurement error on offense for granted, and so neglect the measurement error we do have.
So, we’ll math. But before we math, let’s talk a bit about how sabermetricians measure offense, as opposed to what I like to call “RBI logic.” Traditional accounting of baseball offense works on two basic principles:
- If you get on base and eventually score, you are credited with a run scored.
- If you drive in a runner (including yourself), you are credited with a run batted in.
Ignoring some pretty silly edge cases, this reconciles with team runs scored. The problem is that it’s such a binary model—either a runner scores or he doesn’t. With baseball, though, there are outcomes that can increase the probability of a runner scoring without driving him in immediately:
- You can advance the runner, which makes him more likely to be driven in in a subsequent at-bat, and
- You can avoid making an out, which—even if you do not advance the runner in doing so—gives additional batters behind you chances to drive him in.
So RBI logic does a very good job of reconciling to team runs, by sheer force of will, but it’s a poor reflection of the underlying run-scoring process. You end up crediting players for coming up in spots where runners are in scoring position, and ignoring the contributions of players who advance runners over. You also ignore the value of not making outs.
The foundation of most modern sabermetric analysis of run scoring is the run expectancy table. Here’s a sample table, derived from 2012 data:
| Runners | 0 | 1 | 2 |
|---------|-------|-------|-------|
| 000 | 0.489 | 0.263 | 0.101 |
| 100 | 0.858 | 0.512 | 0.221 |
| 020 | 1.073 | 0.655 | 0.319 |
| 003 | 1.308 | 0.898 | 0.363 |
| 120 | 1.442 | 0.904 | 0.439 |
| 103 | 1.677 | 1.146 | 0.484 |
| 023 | 1.893 | 1.290 | 0.581 |
| 123 | 2.262 | 1.538 | 0.702 |
Top to bottom, the rows are base states: each digit position represents a base, where a zero means that base is empty and a 1, 2 or 3 means a runner is on first, second or third. Left to right is the number of outs in the inning. (It's not explicitly listed on most run expectancy tables, but the three-out state is a special state in which runs expected goes to zero.) The table lists the average number of runs expected to score in the rest of the inning from that state. The lowest is the bases empty with two outs, at 0.101 runs expected, all the way up to the bases loaded with no outs, where 2.262 runs score on average.
What’s interesting isn’t so much the run expectancy itself, but the change in run expectancy between events. So let’s run through an example. Say you have runners on first and third, no outs. That’s a run expectancy of 1.677. Now, suppose the next hitter walks. That moves you to a bases loaded, no outs situation. That walk would be worth 0.585 runs—a pretty important walk. What if the hitter strikes out instead? That moves you into a first and third with one out situation, for a value of -0.531.
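If it helps to see that bookkeeping spelled out, here's a minimal sketch in Python, with the 2012 table above hard-coded; the state encoding and function name are just for illustration, not anybody's production code:

```python
# 2012 run expectancy, keyed by (base state, outs); the digits mirror the table above.
RUN_EXP = {
    ("000", 0): 0.489, ("000", 1): 0.263, ("000", 2): 0.101,
    ("100", 0): 0.858, ("100", 1): 0.512, ("100", 2): 0.221,
    ("020", 0): 1.073, ("020", 1): 0.655, ("020", 2): 0.319,
    ("003", 0): 1.308, ("003", 1): 0.898, ("003", 2): 0.363,
    ("120", 0): 1.442, ("120", 1): 0.904, ("120", 2): 0.439,
    ("103", 0): 1.677, ("103", 1): 1.146, ("103", 2): 0.484,
    ("023", 0): 1.893, ("023", 1): 1.290, ("023", 2): 0.581,
    ("123", 0): 2.262, ("123", 1): 1.538, ("123", 2): 0.702,
}

def run_value(start, end, runs_scored=0):
    """Run value of a play: (RE after) - (RE before) + runs scored on the play.
    Three outs ends the inning, so the ending run expectancy is zero."""
    re_before = RUN_EXP[start]
    re_after = 0.0 if end[1] >= 3 else RUN_EXP[end]
    return re_after - re_before + runs_scored

# Runners on first and third, nobody out:
print(run_value(("103", 0), ("123", 0)))  # walk loads the bases: +0.585
print(run_value(("103", 0), ("103", 1)))  # strikeout: -0.531
```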
We come up with the value of each event by looking at the average run expectancy change for each event—that’s known as the event’s linear weights value. Here’s a set of linear weights values for official events in 2012:
| Event | LWTS |
|-------|--------|
| HR | 1.398 |
| 3B | 1.008 |
| 2B | 0.723 |
| 1B | 0.443 |
| HBP | 0.314 |
| NIBB | 0.296 |
| IBB | 0.174 |
| Out | -0.246 |
| K | -0.261 |
We’ve separated the intentional walk from other walks. You’ll note that a hit-by-pitch is worth more runs than a walk—pitchers tend to issue fewer walks with first base occupied, compared to hit batters. Shockingly, a home run is worth more than a triple, a triple is worth more than a double, and so on.
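A rough sketch of that averaging step, assuming you've already reduced a season of play-by-play to (event type, run value) pairs with something like the `run_value` function above; the names here are illustrative, not BP's actual code:

```python
from collections import defaultdict
from statistics import mean, pstdev

def linear_weights(plays):
    """plays: iterable of (event_type, run_value) pairs, one per plate appearance.
    Returns {event_type: (average run value, standard deviation of run values)}."""
    by_event = defaultdict(list)
    for event, value in plays:
        by_event[event].append(value)
    return {event: (mean(vals), pstdev(vals)) for event, vals in by_event.items()}
```

The averages are the linear weights values in the table above; the standard deviations are the new piece of information we turn to next.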
Now let’s look at the same table, but with one new piece of information—the standard deviation around that average change in run expectancy:
| Event | LWTS | STDERR |
|-------|--------|-------|
| HR | 1.398 | 0.533 |
| 3B | 1.008 | 0.520 |
| 2B | 0.723 | 0.456 |
| 1B | 0.443 | 0.327 |
| Out | -0.246 | 0.187 |
| HBP | 0.314 | 0.183 |
| NIBB | 0.296 | 0.170 |
| K | -0.261 | 0.147 |
| IBB | 0.174 | 0.071 |
There is a substantial correlation between the average run value of an event and its standard error, which shouldn't be surprising. It also tells us that the actual value of a player's offense is more uncertain the more he relies upon power; the value of a home run is more uncertain than that of a single, after all.
We need to get through a bit of gritty math here before getting to the fun stuff. What you have to remember is that the standard deviation is simply the square root of the variance around the average. In order to combine standard deviations, you have to square them first, add them together, and then take the square root of the sum. (In other words, variances add, not standard deviations.)
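In code, that combination rule is just adding in quadrature; here's a quick sketch, using Trout's and Cabrera's standard errors from the table below:

```python
import math

def combined_error(*errors):
    """Variances add, so standard errors combine as the square root of the sum of squares."""
    return math.sqrt(sum(e * e for e in errors))

print(combined_error(6.6, 7.1))  # ~9.7
```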
Now, here’s a list of the top 20 players in batting runs above average (derived from linear weights) in 2012, along with the estimated error for each:
| Player | Batting Runs | STDERR |
|--------|--------------|--------|
| Mike Trout | 61.7 | 6.6 |
| | 49.7 | 6.5 |
| Miguel Cabrera | 49.2 | 7.1 |
| | 48.7 | 6.7 |
| | 44.3 | 6.7 |
| | 44.0 | 6.5 |
| | 43.6 | 7.0 |
| | 43.5 | 6.8 |
| | 43.0 | 5.6 |
| | 42.7 | 6.8 |
So the difference between Mike Trout and Miguel Cabrera in 2012 was 12.5 runs. The combined standard error for the two of them (remember, variances add) is 9.7. How confident are we that Trout was a better hitter (relative to average) than Cabrera in 2012? Divide the difference by the standard error and you get 1.3—that’s what’s known as a z-score. Look up a z-score of 1.3 in a z-chart, and you get .9032—in other words, roughly 90 percent. So there’s a 90 percent chance, given our estimates of runs and our estimates of error, that Trout was the better hitter. Now, we should emphasize that a 90 percent chance that he was means there’s a 10 percent chance that he wasn’t. What if we compare Posey to Beltre? That’s a difference of seven runs, which works out to a confidence level of 77 percent that Posey was the better hitter. What about comparing Braun to Votto? That’s a difference of just half a run between them—our confidence is only about 52 percent, essentially a coin flip between them.
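Here's that comparison as a quick sketch, using the normal CDF to turn the z-score into a confidence level; the inputs are the figures quoted above:

```python
import math

def prob_better(runs_a, runs_b, se_a, se_b):
    """Probability that player A's true value beats player B's, treating the two
    run estimates as independent and normally distributed."""
    z = (runs_a - runs_b) / math.sqrt(se_a**2 + se_b**2)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF at z

print(prob_better(61.7, 49.2, 6.6, 7.1))  # Trout vs. Cabrera, 2012: ~0.90
```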
So what we have is a way to put error bars on our measurement of run production, and then to apply a confidence interval to our estimates. For a full-time player (one qualified for the batting title, that is) the average standard error is roughly six runs. If you want to compare bad hitters to good hitters, sure, most of the time the difference between them far outstrips the measurement error. But if you want to compare good hitters to good hitters (which is frankly a lot more interesting, and probably a lot more common), then you'll often find yourself running into cases where the difference between them is close to, if not lower than, the uncertainty of your measurements.
So if we can quantify our measurement uncertainty, the next question we can ask is, is there a way to measure offense that’s subject to less measurement uncertainty? I have a handful of ideas on the subject, which we’ll take a look at next week.
I assume you'll get there in the end, but the thing that jumps out to me is that a 6 run uncertainty level on the supposedly easy hitting side of the player value equation has pretty big ramifications for player valuations.
At ~$6M/win (and roughly 10 runs per win), a 6-run uncertainty for hitting contributions would suggest a ~$3.6M level of uncertainty, right?
Seems like if this approach takes hold we may lose a lot of the knee jerk declarations that team X or Y is dumb for every signing (which would be a good thing).
This analysis should readily extend to baserunning. Much like hitting, you have discrete end states with a "responsible party" (assuming defense averages out over large N). Pitching follows similar logic.
Defense... I think we can differentiate the exceptional from the average, and the average from the abysmal, but finer distinctions are likely not yet reliable.
My personal guesstimate is that anything less than a gap of 1.0 to 1.5 WAR over a full season isn't significant. This may be an olive branch to the traditional community when it comes to awards. WAR becomes a tool to establish who makes up the "top tier", and then discussions of more qualitative factors can weigh in. (Such as the somewhat infamous "Cabrera moved to 3B to help his team!")
Why is that? If your actual goal is the exact change in run expectancy, calculate that directly. If you don't want to actually use run expectancy, why do we need to worry about the potential variation around the average linear weights values?
Do you want base-out context or not? Seems like you're taking an odd middle-ground here (or I'm missing something.)
--- ---
Tangential question: why use run expectancy variance, not win-expectancy variance?
If the purpose is to assume that the event would have typically occurred in a typical situation, then no.
All you have to answer is this question: how much weight do you want to give a bases loaded walk, compared to a bases empty walk, if both occurred with two outs?
If you want to give the same value, then use standard linear weights, and both get around .3 runs.
If you want to give them different values, then go with RE24, where one walk gets exactly 1.0 runs and the other gets around .13 runs.
It's a personal choice. No wrong answer.
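For the curious, those two walk values fall straight out of the 2012 run expectancy table above (a sketch; the exact decimals shift a bit depending on which seasons the table is built from):

```python
# Run expectancy for the three states involved, from the 2012 table above.
RE_2012 = {("123", 2): 0.702, ("100", 2): 0.221, ("000", 2): 0.101}

# Bases-loaded walk, two outs: one run scores and the state stays bases loaded, two outs.
bases_loaded_walk = (RE_2012[("123", 2)] - RE_2012[("123", 2)]) + 1  # exactly 1.0

# Bases-empty walk, two outs: no run scores, the batter takes first.
bases_empty_walk = RE_2012[("100", 2)] - RE_2012[("000", 2)]         # ~0.12

print(bases_loaded_walk, bases_empty_walk)
```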
My instinct is that the lean for linear weights would come from a desire -- justified or not -- not to "punish" someone who comes up in a lower leverage situation. A single, after all, is a single, that thinking would go.
I think that approach oversimplifies what a single is. Circumstances do vary from AB to AB, and while some players may have more "Value Added" opportunity over the course of a season, that's a drawback to the methodology that I'm more comfortable living with. (Not really different, in principle, to a player who happens to play against tougher pitchers over a given year.)
Without the linear weights, you'd expect an identical line to come out to a higher WAR for a player whose team got runners on base more. So you're rewarding a player for playing on a good team, and, presumably, reducing the year-to-year correlation of WAR.
Of course, lineup position comes into play here, too. If you're a #4 hitter, you get more high leverage PAs, whereas #9 hitters get fewer (I'm assuming). Now, some of that is tied to how good of a hitter you are. Better hitters deserve more important lineup spots and therefore slightly higher leverage situations. How to account for that?
Baseball Reference tracks it, and you will see that there is not much deviation.
In a perfect world, we have a context neutral run value, one that includes the base-out situation, and one that includes the inning / score situation by converting WPA back to equivalent changes in RE. For both hitters and pitchers. It's the middle figure that's "real" and not any kind of estimate; the other two numbers attempt to subtract context that we think may lack predictive value, and add further context that we also believe is non-predictive. But having all three measures (including some addenda that measure the contribution of leverage and opportunity alone) handy for everyone will allow us to address many interesting questions.
(I might mention that every metric I've suggested I used to do while with the Red Sox, using my simplified (conceptually) / expanded (number of terms) version of Base Runs, so they are very doable! You do things like substitute league-average rates of runners out on base and passed balls. All very straightforward.)
Standard deviation is a description of distribution. If the SD of a double is .456, we can estimate that roughly 68% of doubles produced a change in run expectancy within .723 +/- .456 (about .27 to 1.18).
Standard error (of the mean) is a description of the uncertainty as to what the true linear weights average value is, because we have a finite sample. It would be .456 divided by the square root of the number of doubles in the sample.
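A small illustration of the distinction, with a made-up handful of doubles:

```python
import statistics

# Hypothetical run-value changes for a handful of doubles, purely for illustration.
double_values = [0.49, 0.31, 1.12, 0.68, 0.95, 0.52, 0.99]

lwts = statistics.mean(double_values)       # linear weights value: the average change
sd = statistics.pstdev(double_values)       # standard deviation: spread across individual doubles
se_mean = sd / len(double_values) ** 0.5    # standard error of the mean: uncertainty in the average itself
print(lwts, sd, se_mean)
```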
Perhaps pretty significant for 3Bs and IBBs, though.
Is that right?
The high variances in 3Bs and NIBBs make sense. For a triple with men on base, potential runs are all converted into actual runs, while if there was no one on base, particularly with two outs, the certainty of scoring is much less. In a similar way, the value of a walk is much higher if first base is occupied and much higher still if the bases are loaded. If first base is open with two outs, the value is much less.
Is that about right?
There's a case for that, obviously. (That's what WPA is for, no?) But if we stick to WAR-logic, I think the way to frame it is not that there is a 90% chance that Trout was better, but rather that Trout *was* better because he performed in a way that would lead to more runs 90% of the time.
Or possibly not quite. I would imagine the variance would still matter a bit even if you stick to WAR logic. I assume that the marginal value of runs in a single game decreases above some fairly low threshold. Scoring the seventh run does less for your chances, on average, than scoring the third. So it *might* be that, because of the lower variance, X runs above average made up mostly of walks and singles would be worth more than X runs above average made up mostly of 2Bs and HRs. (I think?)
So if you were able to take that into account, the values would be different, but the end result, by WAR logic, would still be that player X was better than player Y because his performance would add more runs most of the time, not that player X was probably better than player Y.
Setting that aside, you are correct that if say all of Trout and Cabrera's hits were singles, and we were in fact doing an error term of the single the correct way (we'd end up with a value of something like .002), then their error terms (which at this point would be extremely tiny, less than 1 run) would move in the exact same direction.
But, that's not what Colin is doing.
***
As for how "small" small is for RE24, in some states (bases empty, 0 outs), it's very tiny. In other states, it's larger than you might think. In some league-years, you'll have, say, the man-on-2B state show a HIGHER run value than the man-on-3B state.
This is easily corrected by using Markov chains.
As I understand it now, LWTS looks at the change in run expectancy of an event (e.g. 1B, IBB, etc.) for each base/out scenario, then takes the average to come up with a value that is used across the board.
RE24, on the other hand, looks at the change in run expectancy of an event for the specific base/out scenario, which it then plugs in as the value. No averaging involved.
Is this right?
Or in other words, RE24 uses a chart similar to this:
http://www.tangotiger.net/lwtsrobo.html
While Linear weights uses a chart similar to what Colin showed above.