It’s that time of year again! No, I’m not talking about that rascally Punxsutawney Phil (though his predictions may be more accurate than the ones I’m about to discuss). Rather, it’s that time of year for the “Year After Effect” (or the “Verducci Effect”) to start making the rounds. The theory’s namesake, Sports Illustrated senior writer Tom Verducci, has published his annual warning over at SI, it was discussed on MLB Network’s “Hot Stove” late last week, and the blogosphere has done its part in expressing concerns over the pitchers who made this year’s cut.
For those unfamiliar, Verducci describes his process as follows:
I used a rule of thumb to track pitchers at risk: Any 25-and-under pitcher who increased his innings by 30 or more I considered to be at risk. (In some cases, to account for those coming off injuries or a change in roles, I used the previous innings high regardless of when it occurred.) I also considered only those pitchers who reached the major leagues.
Essentially, young pitchers with a large spike in innings pitched are considered to be at risk and make the Verducci Effect list (a quick code sketch of the rule follows the list below). This year’s class:
2012 Verducci Effect List
- Derek Holland
- Dylan Axelrod
- Jaime Garcia
- Liam Hendriks
- Eric Surkamp
- Chris Schwinden
- Yovani Gallardo
- Nathan Eovaldi
- Daniel Hudson
- Jeremy Hellickson
- Mike Leake
- Matt Harrison
- Michael Pineda
- Zach Stewart
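Coded up, that rule of thumb might look something like the sketch below. The data layout and the sample numbers are illustrative assumptions of mine; Verducci has never published his process as code.

```python
# A minimal sketch of Verducci's rule of thumb, under my own assumed
# data layout. The sample innings totals are invented for illustration.

def flag_at_risk(pitchers, age_limit=25, innings_jump=30):
    """Return pitchers whose latest innings total exceeds their previous
    career high by 30 or more, subject to the age cutoff."""
    flagged = []
    for name, rec in pitchers.items():
        if rec["age"] > age_limit or len(rec["ip"]) < 2:
            continue
        latest, prior_high = rec["ip"][-1], max(rec["ip"][:-1])
        # The comparison is against the previous career HIGH, not just last
        # season, to account for injuries and role changes.
        if latest - prior_high >= innings_jump:
            flagged.append(name)
    return flagged

sample = {
    "Young Riser":  {"age": 24, "ip": [120, 55, 190]},   # +70 over prior high
    "Veteran Arm":  {"age": 31, "ip": [200, 210, 215]},  # over the age cutoff
    "Steady Youth": {"age": 23, "ip": [150, 160, 170]},  # +10, under threshold
}
print(flag_at_risk(sample))  # ['Young Riser']
```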
Verducci has been tracking his theory for over 10 years now and has been writing about it for six (as far as I can tell), plenty of time for analysts to examine its validity. While the “Year After Effect” might make sense in theory, the evidence is stacked strongly against it. My former colleague David Gassko at The Hardball Times, former BP writer Jeremy Greenhouse at Baseball Analysts, J.C. Bradbury at Sabernomics, Michael Weddell in the Baseball Forecaster, and Advanced NFL Stats have all run studies refuting the theory to one extent or another.
The problems with the Year After Effect are manifold. In Verducci’s article this year, he asserts the validity of the effect by saying, “In just the past six years, for instance, I flagged 55 pitchers at risk for an injury or regression based on their workload in the previous season. Forty-six of them, or 84 percent, did get hurt or post a worse ERA in the Year After.” He later says, “Two out of the nine pitchers I red flagged last year actually stayed healthy or improved… more typical, though, were the regressions last year by David Price, Phil Hughes, Mat Latos and Brett Cecil, all of whom I red-flagged.”
One of the problems with this logic is that Verducci doesn’t compare his red-flagged pitchers to any sort of control group. Yes, some of his pitchers regressed or got injured, but how do those rates compare to what non-flagged pitchers do? Pitchers regress and get injured all the time; the real question is not whether these Year After Effect pitchers exhibit this behavior, but whether their behavior differs from other pitchers.
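To see why the base rate matters, here is a back-of-the-envelope comparison. The first rate comes from Verducci’s own figures above; the second is a hypothetical placeholder, since he doesn’t report one.

```python
# Why a control group matters: Verducci's 84 percent "hurt or worse ERA"
# figure means little without the base rate among comparable, non-flagged
# pitchers.

flagged_bad_rate = 46 / 55     # from Verducci: 46 of 55 flagged pitchers
baseline_bad_rate = 0.80       # ASSUMPTION: placeholder non-flagged rate

print(f"Flagged: {flagged_bad_rate:.0%} vs. baseline: {baseline_bad_rate:.0%} "
      f"(difference: {flagged_bad_rate - baseline_bad_rate:+.0%})")
# If non-flagged pitchers also get hurt or decline ~80% of the time,
# the scary-sounding 84% carries almost no predictive signal.
```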
The other enormous flaw with the Year After Effect’s logic is its inherent selection bias and the fact that it ignores regression to the mean, a force trumped only by gravity in strength. You see, for a player to actually make the list in the first place, he must have been allowed to exceed his previous innings totals. And for a player to be given this chance, he likely performed well enough to warrant it, either on the surface or peripherally. Because of what we know about regression to the mean, this performance (or overperformance, really) should be expected to decline the following season. So when Verducci talks about guys like Price and Latos regressing, that’s exactly what we should expect them to do, Year After Effect or not! Are we really going to expect them to improve upon their sub-3.00 ERAs?
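To make the selection-bias point concrete, here’s a toy simulation (mine, with invented numbers): give every pitcher identical true talent, “flag” the ones whose observed ERA came in lowest, and watch that same group regress the next year with no workload effect whatsoever.

```python
# A toy simulation of regression to the mean: every pitcher has identical
# true talent, so any Year 1 over-performance is pure luck and should
# vanish in Year 2. All numbers here are made up for illustration.
import random

random.seed(42)
TRUE_ERA, NOISE = 4.00, 0.60   # league-ish talent, season-to-season luck
N_PITCHERS = 1000

year1 = [random.gauss(TRUE_ERA, NOISE) for _ in range(N_PITCHERS)]
year2 = [random.gauss(TRUE_ERA, NOISE) for _ in range(N_PITCHERS)]

# "Flag" the pitchers who looked best in Year 1 -- the ones a team would
# reward with a bigger workload.
flagged = sorted(range(N_PITCHERS), key=lambda i: year1[i])[:50]

avg = lambda xs: sum(xs) / len(xs)
print(f"Flagged group, Year 1 ERA: {avg([year1[i] for i in flagged]):.2f}")
print(f"Flagged group, Year 2 ERA: {avg([year2[i] for i in flagged]):.2f}")
# Year 1 prints well under 4.00; Year 2 snaps back toward 4.00 --
# "regression" with no injury effect whatsoever.
```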
Though it’s perhaps overkill at this point, given all of the work that’s already been done on the topic, I thought I’d run my own study on the Year After Effect, approaching the issue from a different angle.
The study I’ve run resembles one I did a couple of years back when examining the “Home Run Derby Hangover Effect.” I’ve taken all pitchers who made Verducci’s list over the past five years (all that have been published, as far as I can tell) and manually matched each player with a comparable player who didn’t make the list. By comparing the performance of a “Verducci List” to a “Comparable List” (a control group), we can see if the guys red-flagged by Verducci perform worse than non-flagged pitchers, as the theory suggests they should.
To avoid biasing the comparables I was selecting, I looked only at player stats, keeping the names of the players out of sight. (I excluded pitchers who threw fewer than 100 major-league innings, as these were often top prospects who received a cup of coffee and would have been difficult to find a good comp for without looking at names; this lowers our sample to 37 pitchers.) I first tried to find a close match from the year in question on innings pitched, followed by ERA, and then, if possible, on strikeout and walk rates. For example, David Price made last year’s “Year After Effect” list. His comparable wound up being Clayton Kershaw:
| Year | Player          | IP  | ERA  | K%  | BB% |
|------|-----------------|-----|------|-----|-----|
| 2010 | David Price     | 209 | 2.72 | 22% | 9%  |
| 2010 | Clayton Kershaw | 204 | 2.91 | 25% | 9%  |
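Had I automated the matching instead of doing it by eye, it might have looked something like this nearest-neighbor sketch; the distance weights are ad hoc assumptions of mine, not part of the actual study.

```python
# An illustrative nearest-neighbor matcher for building the control group.
# The weighting of IP vs. ERA vs. K%/BB% is an assumption on my part;
# the actual matches were done by hand with player names hidden.

def distance(a, b):
    """Crude similarity score: innings first, then ERA, then K% and BB%."""
    return (abs(a["ip"] - b["ip"]) / 10.0      # 10 IP ~ one 'unit' of distance
            + abs(a["era"] - b["era"])
            + abs(a["k_pct"] - b["k_pct"]) * 10
            + abs(a["bb_pct"] - b["bb_pct"]) * 10)

def best_comp(flagged, pool):
    """Pick the non-flagged pitcher most similar to a flagged one."""
    return min(pool, key=lambda p: distance(flagged, p))

price = {"ip": 209, "era": 2.72, "k_pct": 0.22, "bb_pct": 0.09}
pool = [
    {"name": "Clayton Kershaw", "ip": 204, "era": 2.91, "k_pct": 0.25, "bb_pct": 0.09},
    {"name": "Random Veteran",  "ip": 150, "era": 4.10, "k_pct": 0.15, "bb_pct": 0.08},
]
print(best_comp(price, pool)["name"])  # Clayton Kershaw
```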
All told, the two groups broke down as follows:
| Group          | P  | IP  | ERA  | FIP  | K%  | BB% |
|----------------|----|-----|------|------|-----|-----|
| Verducci Group | 37 | 171 | 3.64 | 3.85 | 19% | 8%  |
| Control Group  | 37 | 171 | 3.73 | 3.89 | 19% | 8%  |
Pretty darn close. Once all 37 pitchers were matched up and I averaged out their production, I looked at how the two groups performed in the next season (the year the Verducci Effect predicted demise). Here are the results:
| Group          | IP   | ERA | FIP | K%  | BB% |
|----------------|------|-----|-----|-----|-----|
| Verducci Group | -11% | +7% | 0%  | +3% | -2% |
| Control Group  | -23% | +6% | +1% | +2% | +4% |
We see very little difference between the two groups and, in fact, the Verducci Group actually performs slightly better in the “Year After Effect” season. They lose fewer innings, strike out more batters, and walk fewer opponents. Of course, the differences between the two groups are negligible, and we’re dealing with a small sample size, but this is just one more piece of evidence in the “the Verducci Effect is a myth” pile.
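For the record, those table entries are just simple percent changes on each group’s averaged line, along the lines of this sketch (the Year 2 values are back-solved from the published deltas, for illustration):

```python
# How the "Year After" table above is derived: percent change from the
# matched season's group averages to the following season's. The Year 2
# numbers below are approximations reconstructed from the published deltas.

def pct_change(before, after):
    return (after - before) / before * 100.0

verducci = {"year1": {"ip": 171, "era": 3.64, "fip": 3.85},
            "year2": {"ip": 152, "era": 3.89, "fip": 3.85}}

for stat in ("ip", "era", "fip"):
    delta = pct_change(verducci["year1"][stat], verducci["year2"][stat])
    print(f"{stat.upper():>4}: {delta:+.0f}%")
# IP: -11%, ERA: +7%, FIP: +0% -- matching the Verducci row above.
```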
This isn’t, of course, to say that a large spike in innings can’t harm a pitcher. Every pitcher has a different physiology, different mechanics, different conditioning habits, etc., and there are certainly limits to how hard a pitcher should be worked. There just doesn’t seem to be a hard-and-fast rule that applies to everyone, which makes perfect sense when you think about it this way. Everyone is different, and unless we know all of these different things about the players in question, it doesn’t seem that we can draw any meaningful conclusions.
While Tom Verducci is a terrific beat writer and a great personality (so much so that he may even have a One Hour Photo-esque stalker right here at BP; I actually worry about the repercussions I might suffer myself after writing this), it really is about time he put the Year After Effect to rest. The mounds of evidence he’s swimming in at this point might as well be gold coins to his Scrooge McDuck.
Are you resigned to being ignored on this topic, just like everybody else who has done a similar study? Face it. When you're cool, people listen to you - even when you say stuff that doesn't pass the smell test.
My advice: Get really, really cool (and good hair, if you don't already have it), then republish this.
Good article.
Pitchers who haven't gotten that innings bump may be different from those who do. For one thing, they may be more injury-prone and less durable!
To actually study this, you'd need to randomly choose players to give extra innings to as they progress. Of course, that's never gonna happen.
I think the big takeaway from the Verducci Effect (or the "Verducci Effect is a myth" reports) is: don't count on reliable performance from youngins. On average, they get worse.
Also, were you weighting the ERAs, FIPs, K rates, and BB rates by IP?
To the best of my knowledge, Verducci has never grappled with the numerous critiques of The Year-After Effect or the Verducci Effect. It leads me to believe that he doesn't have any better evidence of the effect than what he has already offered.
-- Michael Weddell
Because if he doesn't, isn't a young kid with, say, 20 innings in the bigs (a) guaranteed to add 30 innings almost no matter what, and (b) subject to a once-around-the-league sort of success? As in a "scouting reports really haven't caught up yet" kind of deal?
I understand the point -- his explanation as to why the pitchers he identifies regress is completely baseless. But it would seem to me that from the perspective of a lay person who might otherwise expect that group to continue to improve, there is value in making the point that higher performers, on average, tend to regress.
We should be careful not to throw out the baby with the bath water and rather seek to update the argument.
My studies have shown that 87% of the time that there is an increase in population in a given year, prices go up. An increase in population means that there is increased demand for goods. Using basic economics, we know that an increase in demand allows companies to incrementally raise the prices of their goods and services.
Don't let nobody tell you different.