
A couple of weeks ago, we had the annual rite of summer in baseball, complaining about how the All-Star game rosters are selected. Sometimes, the rosters pick themselves. The guy who’s having the best season is also the guy who’s been the best at his position for the past few years and he’s also the most beloved player in the league.

And then there’s Justin Smoak. Smoak was elected by the fans to start at first base for the American League, despite having a somewhat flawed case. There was no question, at the time the voting was going on (or now), that Smoak was (and is) having one of the best 2017 seasons of any American League first baseman. But is that enough to be an All-Star? Smoak has long been a poster boy for a player who came into the league with high expectations he could never quite fulfill. In 2016, he performed below replacement level. Suddenly, two-and-a-half good months were enough to endow him with All-Star status?

Baseball is unlike the NFL or NBA or NHL in that it begins its season and crowns its champion for that season in the same calendar year, and as baseball fans, we are trained to view the year through the four seasons. There’s spring training, the regular season (two words), the postseason (one word), and the offseason (a sometimes-hyphenated word which doesn’t get capitalized, because it’s so horribly depressing).

More than that, we’re trained to think of each year as its own individual vessel. We speak of “the year that George Brett almost hit .400” in 1980 (Brett finished with a .390 average), and lament that no one has gotten close since then, despite the fact that from 1993 to 1995 (that pesky 1994 strike messed everything up), Tony Gwynn hit .402 over a stretch of 162 consecutive games played. But it doesn’t count, because it wasn’t all in the same year. The power of those arbitrary boundaries is strong.

Then again, it makes some sense. Smoak, while he didn’t have a great 2016, had five months between the time when his Blue Jays were eliminated from the playoffs and the start of actual baseball that counts in 2017. I don’t know what exactly he did during that time, but clearly something clicked. And when Smoak’s story is retrospectively told, he will be described as someone who came into the 2017 season as a “new man.” Maybe he just got into the best shape of his life.

Does the offseason make that much of a difference? Should a player be considered just by what he did within this calendar year or should we also consider his past body of work?

Warning! Gory Mathematical Details Ahead!

This one seems like a fairly simple test. I used data from 2012-2016 and found all players who had at least 250 plate appearances in year one and 300 plate appearances in year two. I calculated a rolling average of each player’s strikeout rate (K/PA) over a 250 PA period. For a player’s 500th PA of the season, the rolling average would include data from plate appearances 250-499. For his 501st, it would include data from plate appearances 251-500, and so on.

Early in the season, some of those 250 PA included in that rolling average were from the previous season. And that’s the point. I wanted to see how well that rolling average of 250 PA predicted whether the batter would strike out in his next plate appearance.
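For readers who want to see the mechanics, the rolling average described above can be sketched in a few lines of pandas. The frame and column names here are hypothetical stand-ins, not the article’s actual dataset:

```python
import pandas as pd

# Hypothetical per-plate-appearance log: one row per PA in chronological
# order within each batter, with a binary strikeout indicator (1 = K).
# The frame, column names, and values are illustrative only.
pa_log = pd.DataFrame({
    "batter": ["smoak"] * 600,
    "strikeout": [1, 0, 0, 1, 0, 0] * 100,
})

# Rolling strikeout rate over the previous 250 PA. shift(1) keeps each
# PA out of its own predictor, so the value at a player's 500th PA is
# built from PA 250-499, at his 501st from PA 251-500, and so on.
pa_log["rolling_k_rate"] = (
    pa_log.groupby("batter")["strikeout"]
          .transform(lambda s: s.shift(1).rolling(250).mean())
)
```

In a real version, the per-PA log would span consecutive seasons for each player, which is exactly what lets the early-season windows reach back into the previous year.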

There were two conditions. In one, I looked at plate appearances 101-150 of the second year. In that case, the 250 PA rolling average contained data from both the previous season and the current season (roughly half-and-half each). In the second condition, I looked at plate appearances 251-300 of the season, meaning that in all cases, the previous 250 PA making up the rolling average would all be from within the same season.

By doing this, I have two samples from the same players in the same years. The only thing that differed was whether those samples included data from the previous season or whether it was all localized into the current season. I used a logistic regression, and we can compare the two regressions by their R-squared, or in the case of logistic regression, Nagelkerke’s R-squared.

The results (values listed are R-squared; higher values mean that the rolling average was more predictive of what was about to happen):

Outcome          Offseason spanning   Current season only
Strikeout        .023                 .028
Walk             .015                 .013
Single           .004                 .003
Double/Triple    .001                 <.001
Home run         .012                 .018
Out in Play      .010                 .013

Whoops, it’s a straight-up tie! The take-home message here is that including data from last year does just about as good a job as only using data from this year.

Hmmm …

(Ctrl+H; replace “batter” with “pitcher.”)

Outcome          Offseason spanning   Current season only
Strikeout        .012                 .012
Walk             .004                 .006
Single           <.001                <.001
Double/Triple    .001                 .001
Home run         .003                 .002
Out in Play      .003                 .002

Yeah, same message. It doesn’t look like the fact that there was an offseason somewhere in the middle there means much of anything. A 250 PA sample is about as predictive of PA number 251 whether there was an offseason in the middle or not. People are what they are, even after five months of vacation.

Now that we know that, let’s ask a slightly different question. What happens when last season’s performance and the first part of this season’s performance are very different from each other? To test that, I again used PA 251 through 300 of a season, with a 250 PA rolling average (which, at that point, would be entirely in-season) as a predictor. As a comparison, I used last season’s full-season total. I isolated situations in which the two “predictors” were more than 10 percent different.
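The filtering step can be sketched as a one-liner on a player-season frame. The names and values here are invented for illustration, and the use of a relative (rather than absolute) difference is an assumption, since the article doesn’t specify which it means:

```python
import pandas as pd

# Hypothetical player-season frame: last season's full-season K rate
# next to the in-season rolling rate entering PA 251-300. All names
# and numbers are made up for illustration.
rates = pd.DataFrame({
    "player": ["a", "b", "c"],
    "last_season_k": [0.20, 0.25, 0.30],
    "current_rolling_k": [0.23, 0.26, 0.18],
})

# Keep only player-seasons where the two "predictors" disagree by
# more than 10 percent, relative to last season's rate.
diverged = rates[
    (rates["current_rolling_k"] - rates["last_season_k"]).abs()
    / rates["last_season_k"]
    > 0.10
]
```

The two regressions in the tables that follow would then be fit only on the rows that survive this filter.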

Which was the better yardstick?

For batters:

Outcome          Last season   Current season only
Strikeout        .025          .026
Walk             .014          .013
Single           .003          .002
Double/Triple    .001          <.001
Home run         .018          .019
Out in Play      .010          .011

Not a lot of news here.

For pitchers:

Outcome          Last season   Current season only
Strikeout        .006          .013
Walk             .008          .006
Single           <.001         <.001
Double/Triple    <.001         <.001
Home run         .002          .002
Out in Play      .002          .001

Same story.

What to Do with Justin Smoak?

Well, I don’t honestly know. When we’re faced with a situation in which someone who belongs on the All-Rats team suddenly looks like he belongs on the All-Star team at the beginning of the year, it’s entirely possible that we should believe his previous track record. It’s also entirely possible that we should believe his current performance. And that leaves us in a weird spot in the All-Star voting.

The problem is that we have very little understanding of how to tell whether a player is in the middle of a breakout or whether he is in the middle of a wonderful small-sample fluke. And for what it’s worth, there will be plenty of people who will wish-cast on either side of that question. I had kinda hoped that the data would be more definitive, but … such as it is.
