

November 11, 2010
Ahead in the Count: Are the Adjusted Standings Underselling Your Team?
In 2007, the Angels won 94 games despite a third order win total of 86. In 2008, the Angels won 100 games despite a third order win total of just 84. In 2009, the Angels won 97 games with a third order win total of just 87. Going into 2010, we were left wondering whether the Angels were the luckiest team in the world or whether they were doing something that made them appear to be a slightly above-average team while actually being among the best in the league. A clue was thrown our way in 2010 when the Angels came back to earth, dipping to a record of 80-82. This might have resolved the issue if their third order win total had not been a mere 72: even as a mediocre team, they beat the stuffing out of their third order record! The Adjusted Standings that we publish under our Statistics tab here at Baseball Prospectus are designed to give fans a clue about how lucky teams have been, but the Angels' performances in recent years have raised major questions about the methodology behind those standings. So what is going on here? To understand this, let's first walk through what the third order standings are trying to tell us. There are three orders of luck that are gradually stripped away, and while the math is tough, the intuition is simple. In order of presentation in the Adjusted Standings:
- First order wins: the record implied by a team's runs scored and runs allowed, stripping out luck in the timing of runs (winning or losing an unusual share of close games).
- Second order wins: the record implied by a team's underlying batting-line statistics (equivalent runs scored and allowed), stripping out luck in how total bases, hits, and walks were sequenced into runs.
- Third order wins: second order wins adjusted for the quality of a team's opponents.
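To make the first order idea concrete, here is a minimal sketch using the classic Pythagorean expectation with a fixed exponent of 2. Baseball Prospectus's actual Adjusted Standings use more refined formulas and inputs, so treat the function below as an illustration of the concept, not the real methodology.

```python
def first_order_wins(runs_scored, runs_allowed, games=162):
    """Rough expected wins from run differential alone (Pythagorean
    expectation with exponent 2; a simplification of BP's method)."""
    expected_pct = runs_scored ** 2 / (runs_scored ** 2 + runs_allowed ** 2)
    return games * expected_pct

# A team that outscores its opponents 800-700 projects to roughly
# 92 wins; if it actually won 97, those extra five wins are the
# "first order" luck the Adjusted Standings try to strip out.
```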
Knowing how to calculate them would be a challenge, but understanding the intent is simple. The goal is to infer which teams were good, which teams just won a lot of close games, and which teams just got a lot of timely hits. Treating third order records as more valuable implies that teams that win a lot of close games are getting lucky, and that teams that score five runs on five hits and a walk are getting lucky, too. Is that really true? The Angels in 2009 were clearly much closer to their 2008 actual record than to their 2008 third order record, but is that true for most teams? And if so, which standings correlate best with actual record the following year? I used the standings for 2005-10, giving me a solid 150 pairs of consecutive years to consider.
Do not be disappointed in the third order standings. The difficulty of opponents is persistent, so the constant bonus added to the Orioles' third order win total for playing the Yankees, Red Sox, and Rays is not supposed to predict an Orioles resurgence the following year; those teams are just going to keep on beating the Orioles next year. However, we see solid evidence that each adjustment adds a small something to our estimate of team skill, so even though the Angels beat their first, second, and third order records every year, on average we are still better off looking at a team's second order record if we want to guess how well it will do the following year. That does not mean the Angels do not have a knack for beating their first and second order records. To prove that, we would need to look at how well teams that beat their first and second order records repeat that feat the following year. If there is no year-to-year correlation in teams' abilities to beat their adjusted records, then we are left with the conclusion that the Angels are simply very lucky. That might sound like a cop-out, but it is not at all. Some team has to hold the title of "the luckiest team ever," just as someone out there will flip a coin and call it in the air correctly 10 times in a row. About one of every 1,000 people will call a coin correctly 10 times in a row without any skill (and about one in 1,000 will guess wrong 10 times in a row). Statistically, some team needs to be the luckiest.

However, we see some pretty clear evidence that the Angels might have some skill. There is a small but real skill in beating one's first order record: the difference between first order wins and actual wins had a .079 correlation from year to year. So while almost all of the fluctuation around a team's first order record is luck, about 8 percent of that fluctuation is actual skill.
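The year-to-year test described above can be sketched in a few lines of Python. The deltas below are invented illustrative numbers, not the article's 2005-10 data; the point is only to show what is being correlated.

```python
def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# delta = actual wins minus first order wins, paired by team across
# two consecutive seasons (made-up numbers for illustration)
deltas_year1 = [8, -3, 5, 0, -6]
deltas_year2 = [6, -1, 2, 1, -4]
r = pearson_r(deltas_year1, deltas_year2)
# An r near zero would mean beating one's first order record is pure
# luck; the article's observed value across 2005-10 was .079.
```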
Not only that, there is a .193 correlation in the difference between second order wins and real wins, though only a .103 correlation in the difference between second order wins and first order wins. The .193 figure comes from aggregating two effects: some teams are good at winning close games, and some teams are good at generating a bigger run differential than their total base, hit, and walk differentials suggest. That second ability is itself made up of two parts which should be examined separately. First, do some teams have the ability to sequence their total bases, hits, walks, and outs in such a way that they score more runs than other teams? Second, do some pitching staffs have the ability to sequence those events in such a way that they allow fewer runs? The answer to the first question is more likely to be yes than the second: the year-to-year correlation of the difference between runs and EQR is .104, while the year-to-year correlation of the difference between runs allowed and EQR allowed is just .055. Both suggest some evidence of a skill, while also highlighting that the majority of this variation comes from luck. So, can we do any better than looking at second order standings if we want to predict next year's standings?
It appears that the most information comes from averaging the actual standings with the second order standings. There are obviously fancier options, such as weighted averages, but these lead to only small gains. The lesson is that although you are better off looking only at the second order standings if asked to pick just one column, there is something added by looking at the real standings, too. Adding in which teams are likely to win close games might do something, too, but that information is probably already captured by the second order adjustment. For readers' information, and to fuel discussion of the natural follow-up question of "which teams win more games than their run differentials or batting lines suggest?", I leave you with a few lists: one that answers which teams have won more games than their run differentials suggest over the last six years, one that shows how much each team has outscored its expected runs over the same span, and another that shows how much each team has beaten its expected runs allowed.
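The forecasting suggestion above amounts to a simple equal-weight blend, which can be sketched as follows (the function and parameter names are my own, not BP's):

```python
def blended_forecast(actual_wins, second_order_wins, weight=0.5):
    """Weighted average of actual and second order wins; weight=0.5
    is the simple equal-weight blend the article found hard to beat."""
    return weight * actual_wins + (1 - weight) * second_order_wins

# e.g., a team that won 97 games with a second order total of 87
# gets a blended baseline of 92 wins heading into next season.
```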
Matt Swartz is an author of Baseball Prospectus.

As there seems to be some degree of skill in 'overachieving' the predicted wins, I'm curious what the results would look like if we applied this sort of analysis to managers' seasons instead of team seasons. This methodology seems like it could be a jumping-off point for a quantitative analysis of the impact of a good or bad manager. Considering that over that six-year sample every one of those teams had massive personnel turnover (looking at the Angels alone, Ervin Santana and Scot Shields seem to be the only relevant contributors across the whole span), it seems like the cause might be linked more to the manager or organizational philosophy than to the actual players.