A commonly discussed topic that seems to be pretty divisive around these parts (and others) is the subject of lineup protection. I don't want to get too long-winded about this, but to sum up: the prevailing wisdom among baseball traditionalists is that strong hitters improve the hitters batting in front of them by ensuring that those hitters see better pitches to hit. Isolating lineup protection effects would be extremely difficult (if not practically impossible), and to my knowledge, no study has ever demonstrated a positive effect of having a better hitter "protecting" the batter ahead of him.
However, I have run the numbers and found that there are decidedly "positive" effects for eighth-place hitters in the National League, who garner many more intentional walks than they would if they were not hitting in front of pitchers, who are, of course, generally inept at the plate. In fact, last season, eighth-place hitters in the NL were intentionally walked 174 times, roughly 50% more often than #3 hitters, who received the second-most intentional walks. Among AL teams, eighth-place hitters were intentionally walked only 41 times, and there is a good chance many of those came during Interleague play, as seventh-place hitters received only 34. Now, a matter of 140 or so walks over 9,000-plus plate appearances may not seem like much in the aggregate, but it is demonstrable (and, in my opinion, irrefutable) evidence that pitchers (and teams) change their behavior based on the other players in the lineup.
However, I decided to approach the question of lineup protection from a different angle. What about the effects a good hitter might have on the hitters behind him? Can a player improve his lineup simply by getting on base? Is the average hitter better with runners on than with the bases empty? If so, by how much? I pulled splits for all FanGraphs "qualified batters" -- runners on, bases empty, and runners in scoring position -- for a total of 136 batters. (There should be more, but FanGraphs was missing some players for some reason. That shouldn't be a concern unless the missing players differed systematically from the population as a whole; I don't believe they did, so the sample should be valid.) Here are the results:
           wRC+, Bases Empty   wRC+, Runners On
Min.             31.0                57.00
1st Qu.          87.0                99.75
Median          104.0               118.00
Mean            107.1               117.03
3rd Qu.         126.0               131.25
Max.            183.0               175.00
And here are the distributions in graphical form:
Finally, the following boxplot (also called a "box-and-whisker plot") shows the difference in distribution in graphical form:
In this case, the solid (bold) bars indicate the median (middle value), the tops and bottoms of the boxes represent the upper and lower quartile (75th and 25th percentile) values, and the horizontal lines at the top and bottom indicate the maximum and minimum values. Also notice how the "notches" do not overlap -- this suggests strong evidence that the medians of the two groups differ. Furthermore, a paired t-test indicated that these differences were statistically significant (t = 4.28, df = 135, p < 0.0001).
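For readers who want to replicate this at home: the test here is a paired t-test, since each batter appears in both splits and serves as his own control. Here is a minimal Python sketch of the computation (the article's numbers came from R; the eight paired values below are illustrative stand-ins, not the actual 136-batter FanGraphs sample):

```python
# From-scratch paired t-test on wRC+ splits. The values below are
# made-up stand-ins for illustration, NOT the real 136-batter sample.
import math
import statistics

wrc_empty      = [95, 110, 104, 87, 126, 140, 72, 118]   # wRC+ with bases empty
wrc_runners_on = [108, 117, 118, 99, 131, 138, 85, 127]  # wRC+ with runners on

# Paired design: test whether the mean within-batter difference is zero.
diffs = [on - empty for on, empty in zip(wrc_runners_on, wrc_empty)]
n = len(diffs)
mean_diff = statistics.mean(diffs)
se = statistics.stdev(diffs) / math.sqrt(n)   # standard error of the mean difference
t_stat = mean_diff / se                       # compare to a t distribution with df = n - 1

print(f"mean diff = {mean_diff:.2f}, t = {t_stat:.2f}, df = {n - 1}")
```

Swapping the real splits in for the stand-ins is what produces the t = 4.28, df = 135 result quoted above.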
So how do we know this isn't simply a latent effect of worse pitchers yielding more baserunners? Well, I performed the same data summary for "qualified" pitchers in 2012 (a total of 93 pitchers were sampled) to determine whether their xFIP changed with runners on.
           xFIP, Bases Empty   xFIP, Runners On
Min.            2.420               3.090
1st Qu.         3.420               3.830
Median          3.780               4.140
Mean            3.782               4.156
3rd Qu.         4.090               4.510
Max.            5.380               5.440
I won't subject you to another set of histograms but here are the boxplots:
Boxplot for xFIP with the bases empty and with runners on base. Same deal: the solid bar indicates the median value, the ends of the boxes indicate the upper and lower quartile (75th and 25th percentile) values, and the horizontal lines outside the boxes indicate the maximum and minimum values. The open circle above the maximum for bases empty marks an observation the program identified as an outlier. Again, note how the "notches" do not overlap, which indicates strong evidence that the difference between the median values is real. A paired t-test confirmed that the differences in xFIP were statistically significant (t = 7.12, df = 92, p < 0.0001).
So, adding a good hitter to a lineup should (theoretically) at least slightly improve the hitters batting in front of or behind him. But by how much?
Well, let's look at it from the hitter's angle first. The mean wRC+ was 107 with the bases empty and 117 with runners on, a 10-point gap: roughly the difference between 2012 Derek Jeter and 2012 B.J. Upton. The difference in medians was somewhat greater: 118 with runners on vs. 104 with the bases empty. In any case, upgrading from a .300 OBP batter to a .400 OBP batter works out to about 65 extra times on base over 650 plate appearances.
However, there will also be times when even the .300 batter comes up with men already on. In some cases, the .300 batter would end an inning with runners on (or hit into a double play, erasing a runner already on base), or simply "pass the baton" to the following batter without affecting whether that runner is on. Last season, approximately 43% of plate appearances occurred with runners on, so we should dock our .400 OBP hitter (65 * 0.43) = 28 plate appearances. However, in roughly 1/4 of those situations there were two out, so we should credit back (28 * 1/4) = 7 of those plate appearances. Additionally, the batter immediately following is not necessarily the only one to benefit. Of the 37 extra times on base occurring with no runners already on (65 * 0.57), roughly 68% should come with less than two out. If the next batter makes an out (remember, we're estimating league-wide OBP at about .320), the third batter still benefits, which accounts for an additional (37 * 0.68 * 0.32) = 8 opportunities. Finally, there is also a slight chance that he reaches base with none on and no outs; if the next two batters make outs, the fourth batter benefits, for a total of (37 * 0.35 * 0.32 * 0.32) = 1 extra time. All told, that is (65 - 28 + 7 + 8 + 1) = 53 extra opportunities for the hitters behind him to bat with men on base.
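To make the bookkeeping above easier to follow, here is the same accounting as a short Python sketch, using the article's own round-number estimates (43% of PA with runners on, 1/4 of those with two out, a ~.320 league OBP, and so on):

```python
# The extra-opportunity accounting from the paragraph above, using the
# article's round-number estimates. All factors are the article's, not
# independently derived.
extra_on_base = int((0.400 - 0.300) * 650)         # 65 extra times on base

already_runners_on = round(extra_on_base * 0.43)   # 28: runners were already on
two_out_credit = round(already_runners_on / 4)     # 7: two-out cases credited back

bases_were_empty = round(extra_on_base * 0.57)     # 37: reached with none on
third_batter = round(bases_were_empty * 0.68 * 0.32)          # 8: passes to the 3rd batter
fourth_batter = round(bases_were_empty * 0.35 * 0.32 * 0.32)  # 1: passes to the 4th batter

extra_opportunities = (extra_on_base - already_runners_on
                       + two_out_credit + third_batter + fourth_batter)
print(extra_opportunities)  # -> 53
```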
As demonstrated below, the run value (estimated as the slope of a regression line through the origin between wRC+ and wRAA) of each point of wRC+ is 0.001253 runs per plate appearance:
Given the differences in wRC+ (between 10 and 14 points), over the span of a season, the extra run value for those 53 plate appearances comes to about one run.
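In code, that conversion from extra plate appearances to runs is a one-liner (a sketch using the 0.001253 slope and the 10-to-14-point wRC+ gaps quoted above):

```python
# Converting the 53 extra runners-on plate appearances to runs, using the
# regression slope quoted above: 0.001253 runs per point of wRC+ per PA.
RUN_VALUE_PER_POINT = 0.001253
extra_pa = 53

for wrc_gap in (10, 14):  # the mean and median wRC+ gaps, respectively
    runs = extra_pa * wrc_gap * RUN_VALUE_PER_POINT
    print(f"{wrc_gap}-point gap: {runs:.2f} runs")
```

That lands between roughly 0.66 and 0.93 runs, i.e. on the order of one run per season.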
Okay, now let's look at it from the pitching angle. At roughly 4.2 batters per inning (this was the leaguewide average for 88 starters in 2012), those extra 53 plate appearances should stack up to around 12 or 13 innings. A regression line through the origin (see Figure below) quantifies the relationship between xFIP with runners on and xFIP with the bases empty as a factor of 1.09, so a pitcher whose xFIP was 4.00 with the bases empty would have an xFIP around 4.36 with runners on.
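For the curious, a regression through the origin fixes the intercept at zero, and its least-squares slope has a simple closed form. A sketch (the five paired xFIP values here are illustrative stand-ins lifted from the summary quartiles above, not the actual 93-pitcher paired sample):

```python
# Least-squares regression through the origin (no intercept), the model
# used to relate xFIP with runners on to xFIP with the bases empty.
# These five pairs are illustrative stand-ins, not the real paired data.
x = [2.42, 3.42, 3.78, 4.09, 5.38]  # xFIP, bases empty
y = [3.09, 3.83, 4.14, 4.51, 5.44]  # xFIP, runners on

# With the intercept fixed at zero, the slope is sum(x*y) / sum(x^2).
slope = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
print(f"slope = {slope:.2f}")
```

On the full 93-pitcher sample, this slope comes out to the 1.09 factor used above.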
Assuming a difference of 0.36 runs per 9 innings, over 13 innings, we are left with roughly 0.5 runs per season.
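Putting the pitching-side numbers together in one place (a sketch using the figures above: 53 extra plate appearances, 4.2 batters per inning, and the 1.09 xFIP inflation factor):

```python
# The pitching-side estimate: how many innings 53 extra plate appearances
# represent, and the run cost of pitching those innings with runners on.
extra_pa = 53
batters_per_inning = 4.2                 # 2012 league-wide average for starters
innings = extra_pa / batters_per_inning  # ~12.6 innings

xfip_empty = 4.00
xfip_runners_on = xfip_empty * 1.09      # the 1.09 factor from the regression
extra_runs = (xfip_runners_on - xfip_empty) / 9 * innings
print(f"{innings:.1f} innings, {extra_runs:.2f} extra runs")
```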
So what, exactly, is driving these patterns? Personally, I don't think hitters are driving them, because pitchers control the pace of the game; as I see it, the hitter's job is basically to react to what the pitcher does. If he could react "better" with runners on, why wouldn't he just react "better" all the time? I think it's more likely that a pitcher's behavior changes with runners on base.
The simplest answer is that he's having a bad day. If you assume that pitchers have bad days or innings in an absolute sense (that is, they are bad no matter the situation or who they're facing), then they're more likely to give up baserunners in bunches, which could explain some of the effect (and would basically defeat the premise of this article, that one hitter can make the next hitter better). However, we've all seen a pitcher who seemed to be cruising run into problems all at once. This "loss of focus" (as it's often called) has often been attributed to having men on base. In keeping with the theme of the article, it's also possible that pitchers, knowing that runners could advance on a wild pitch, are less likely to throw their nastiest breaking pitches. Perhaps pitching from the stretch has some negative impact, particularly with potential base stealers on. Some pitchers may rely more heavily on one or two pitches with men on, making it easier for batters to predict what's coming. Maybe runners are stealing signs. All of these are possible -- in fact, I'm sure each has happened at one time or another.
Overall, while one run certainly isn't much, it does demonstrate that having a better hitter in front of you can (albeit very slightly) improve a hitter's overall productivity. And if one run sounds insignificant, think of it this way: that run is worth slightly more than one-tenth of a win. At a going rate of $6M per win, that makes it worth over $600,000. Still, I won't argue with you if you think reading this was a colossal waste of your time. Sorry.
Thanks to Elliott Smith for today's post title. Weighted runs created splits come from FanGraphs; base/out state likelihoods come from Baseball-Reference. The program R was used to perform all statistical tests and to create all graphics.