

Weekend chart: contact vs. plate outcomes, 1920-2020


A brief look at how and why baseball has evolved over the last century


Recently, I’ve done a couple of posts on Teoscar Hernandez and Rowdy Tellez which centred around breaking their production down into batted ball outcomes and plate outcomes (walks, strikeouts, HBP). The short version is that hitters do their damage on balls in play (~140 wRC+), so all else roughly equal they want their outcomes skewed toward that bucket; whereas pitchers dominate the plate outcomes (~20 wRC+) and want outcomes skewed toward that bucket instead.
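As a quick illustration of that split, a hitter’s overall production can be sketched as a weighted average of the two buckets. This is just a back-of-the-envelope sketch: the ~140 and ~20 figures are the rough league-wide bucket values above, and the function name and rates are hypothetical, not from the earlier posts.

```python
def blended_wrc_plus(bip_rate: float,
                     bip_wrc: float = 140.0,    # rough league wRC+ on balls in play
                     plate_wrc: float = 20.0):  # rough league wRC+ on BB/K/HBP
    """Blend the two outcome buckets by how often each occurs.

    bip_rate is the fraction of plate appearances ending with a ball
    in play; the remainder are plate outcomes (walks, strikeouts, HBP).
    """
    return bip_rate * bip_wrc + (1.0 - bip_rate) * plate_wrc

# With these bucket values, every extra 5% of PAs ending in a ball
# in play is worth roughly 6 points of overall wRC+:
print(blended_wrc_plus(0.70))  # ≈ 104
print(blended_wrc_plus(0.65))  # ≈ 98
```

This is why, all else equal, hitters benefit from shifting plate appearances into the ball-in-play bucket and pitchers benefit from the opposite.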

This is the fundamental dynamic that dominates baseball in the early 21st century, driving the three true outcomes trend in which batters try to elevate the ball for power and pitchers try to pile up strikeouts to avoid that contact. But it was not always the case, so I thought it would be interesting to take a longer-term look at how this has evolved over time.

(Update: since there are some issues with the chart displaying, here is a direct link)

The chart could go back even further, but the end of the dead ball era is a natural demarcation point, and there are issues with how statistics were kept prior to that. One note: there’s a discontinuity in 1955, when intentional walks began being recorded as a separate stat (they are treated differently in the linear weights framework, so their suddenly being counted separately creates a break point).

What’s most interesting is that until after World War II, the dynamic was completely reversed: hitters actually produced more when they didn’t put the ball in play. For example, in 1948 batters walked 10% of the time, struck out 9.4% of the time, and posted a .280 BABIP with just a .119 ISO (for a league triple slash line of .263/.341/.382). Pitching to contact was a thing because pitchers were rewarded for it, and batters were encouraged to play small ball.
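To see roughly how those component rates combine into a slash line, here’s a back-of-the-envelope reconstruction. It treats HBP, sacrifice flies, and bunts as negligible, and the home run rate is a hypothetical plug (BABIP excludes homers, so they have to be added back separately); the function name is mine, not from any stats library.

```python
def rough_slash(bb, k, babip, iso, hr=0.017):
    """Approximate AVG/OBP/SLG from component rates.

    bb, k, hr are per plate appearance; babip is per ball in play.
    Ignores HBP, sacrifice flies, and bunts, so this is only a rough
    sanity check, not an exact reconstruction.
    """
    ab = 1.0 - bb                 # at-bats per PA (no HBP/SF modelled)
    bip = 1.0 - bb - k - hr       # balls in play per PA
    hits = babip * bip + hr       # hits per PA (BABIP excludes HR)
    avg = hits / ab
    obp = hits + bb               # (H + BB) per PA, denominator is 1 PA
    slg = avg + iso               # ISO = SLG - AVG by definition
    return round(avg, 3), round(obp, 3), round(slg, 3)

# 1948 league rates: 10% BB, 9.4% K, .280 BABIP, .119 ISO
print(rough_slash(0.100, 0.094, 0.280, 0.119))
# ≈ (0.264, 0.338, 0.383), close to the actual .263/.341/.382
```

The small gaps from the actual line come from the pieces the sketch ignores (HBP in OBP, the guessed HR rate), but it shows the components are enough to pin down the shape of the league line.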

That balance held for over 30 years, but began changing rapidly in the early 1950s and through the 1960s. Prior to 1955, batters had never done better putting the ball in play than on plate outcomes; since then, there’s never been a season where they’ve done worse on balls in play.

What’s notable about this period is what drove the dynamic: since all outcomes must by definition add up to 100%, these events are at least to some extent relative to each other. The change in this period was driven by pitching dominance. For example, in 1964 batters posted a league BABIP of .279 and an isolated power of .127, scarcely better than in the earlier period. The difference was that they walked a little less, at 8%, but strikeouts were up significantly, to 15.6%.

In 1969, the mound was lowered in response to this pitching dominance, which shows up as a big blip on the chart as balance was restored: walks increased by a point, strikeouts fell, and power output rose. The rebound peaked around 1975, and since then the two outcomes have steadily and increasingly moved apart.

That trend had continued unabated for 35 years, but the chart shows it accelerating around 2010. This is the culmination of Moneyball and modern analytics: batters are selected for the ability to elevate the ball and achieve power outcomes on balls in play; pitchers are therefore selected for the ability to avoid that damaging contact. Absent any restructuring of the underlying incentives, the two trends reinforce each other.