I Talk So Much That I Forget Who I Am Talking To: How Well Do 5/4/3 Weighting Approaches Work To Project Player Performance?

This article discusses the importance of considering player type and player valuation method (fWAR vs. rWAR) when deciding how to weight prior performance as an input.


Hi everyone. The new year is upon us and everyone knows what that means: it's that part of the offseason where we start making predictions about the coming season. Well, while discussing projection methods with some of my fellow banterers here, the topic came up of how to weight a player's previous seasons together in order to form a prediction of how he'd do in the coming season. As I ruminated on the topic, I came to some realizations that many of you here have probably already made implicitly but may not have made explicit. So here goes:

Ignoring age effects, a fairly standard method of weighting a player's previous seasons to project the next season is the "5/4/3" approach. Essentially, the system predicts that a player's performance in 2013 should be a weighted average of his performances in 2012, 2011, and 2010, with the most recent season weighted most heavily. Projecting overall performance (WAR), a player with WAR totals of 3.5, 4.0, and 3.0 in 2010, 2011, and 2012 (respectively) would be projected at (3.5 * 3 / 12) + (4.0 * 4 / 12) + (3.0 * 5 / 12) = 0.875 + 1.333 + 1.25 ≈ 3.5 WAR. Depending on the player's age and personal history (i.e., perhaps he missed playing time in 2012 due to acute injuries that should not affect him in 2013), naturally, we could adjust that total up or down as we saw fit.
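
For anyone who'd rather see that as code, here's a minimal sketch of the same calculation in Python (the function name and layout are my own, purely for illustration):

```python
# Minimal sketch of a 5/4/3 weighted projection, using the example above.
def weighted_projection(war_by_season, weights=(5, 4, 3)):
    """Project next-season WAR as a weighted average of prior seasons.

    war_by_season: prior WAR totals, most recent season first.
    weights: weight assigned to each season, most recent first.
    """
    total_weight = sum(weights)
    return sum(w * war for w, war in zip(weights, war_by_season)) / total_weight

# The example from the text: 3.0 WAR in 2012, 4.0 in 2011, 3.5 in 2010.
print(weighted_projection([3.0, 4.0, 3.5]))  # ~3.46, i.e. roughly 3.5 WAR
```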

This method is not exactly sophisticated, but it works for what it does. However, Minor Leaguer asked what the justification for the 5/4/3 method was and why we don't weight the most recent season even more heavily, which got me thinking about why we incorporate three years of data and, more specifically, why we assign each year the weight that we do.

Now, naturally, the reason we incorporate more than one year of data is that projecting from a single year carries more uncertainty than projecting from several. But why is this? I thought about it a bit more -- which inputs to WAR can't be pinned down from just one year's worth of data? Well, there are basically two main aspects to WAR -- how good a player is (production while on the field) and how much he's on the field (playing time). So the next question becomes: how well does one season of data predict the next season's performance level and playing time? That question is actually a fair bit more complicated than you might expect.
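
To make that split concrete, here's a rough sketch of the two pieces multiplied together (the WAR-per-600-PA framing is just my shorthand for illustration, not part of any WAR formula):

```python
# Rough sketch: WAR is roughly a production rate times playing time.
# The per-600-PA framing is illustrative shorthand only.
def projected_war(war_per_600_pa, projected_pa):
    """Combine a projected production rate with projected playing time."""
    return war_per_600_pa * projected_pa / 600

# Same talent level, very different playing-time outlooks:
print(projected_war(4.0, 650))  # durable regular: ~4.3 WAR
print(projected_war(4.0, 400))  # injury-limited:  ~2.7 WAR
```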

First, I decided that pitchers and position players should be considered separately, because one season of data seems to do a fair job of predicting a position player's playing time but is much worse at predicting a pitcher's. So, let's start with position players. If we assume the most recent season is a fair predictor of a player's playing time the following season, how does it compare as a predictor of his performance the following season? Well, some aspects of his performance should be fairly stable (e.g., K-rate, BB-rate), but our read on other aspects can vary quite a bit from year to year (e.g., BABIP and glovework). So, I figured that the bulk of the uncertainty for position players comes from unsustainably high or low BABIP, inaccuracy in measuring defensive performance, and (in some cases) fluctuations in playing time due to acute injuries.

Pitchers, on the other hand, are a more interesting case. As mentioned above, a large part of the uncertainty in projecting pitcher performance comes from pitcher injuries, which seem to occur frequently and (occasionally) seemingly out of nowhere (Verducci effect notwithstanding). But that's only part of the story, because the two widely-cited WAR measurements (rWAR vs. fWAR) can behave very differently for pitchers. Since rWAR (or brWAR, or whatever) is based on actual run average, which can fluctuate wildly with variable strand rates and BABIP, there is a great deal of uncertainty in projecting a pitcher's rWAR performance level from one season of data. fWAR, on the other hand, is based on peripheral stats (K-rate, BB-rate, HR-rate), which are not only more within the pitcher's control but are also much more predictive in small samples; thus, one season's worth of fWAR should much more accurately predict that pitcher's fWAR performance level in the next season.

Now, intuitively, it seems to me that pitcher projection systems based on rWAR should rely more on earlier seasons and should thus weight seasons more evenly. Systems projecting fWAR should already have a pretty good read on a pitcher's performance from his most recent season, so they should weight that season more heavily and incorporate previous seasons mainly to project playing time, based on how likely he is to be dealing with chronic and nagging injuries. Finally, I'd think year-weighting for position players should be more even for players who derive large portions of their value from batting average and defensive contributions, whereas projections for three-true-outcomes types should, perhaps, assign more weight to the most recent season.
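
If you wanted to play with that idea, here's a rough sketch of what profile-dependent weighting could look like; the profiles and the weights attached to them are made-up placeholders on my part, not anything I've tested:

```python
# Hedged sketch of profile-dependent weighting; these weights are guesses,
# chosen only to illustrate the idea, not calibrated against anything.
PROFILE_WEIGHTS = {
    "pitcher_rwar": (4, 4, 4),                # noisy run average: weight seasons evenly
    "pitcher_fwar": (6, 3, 3),                # stable peripherals: trust the latest season
    "hitter_avg_and_defense": (4, 4, 4),      # BABIP/glove-driven value: weight evenly
    "hitter_three_true_outcomes": (6, 4, 2),  # K/BB/HR skills: favor the latest season
}

def project(war_by_season, profile):
    """Weighted-average projection using a profile-specific weighting scheme."""
    weights = PROFILE_WEIGHTS[profile]
    return sum(w * war for w, war in zip(weights, war_by_season)) / sum(weights)

print(project([3.0, 4.0, 3.5], "hitter_three_true_outcomes"))  # ~3.4 WAR
```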

Anyway, I hate to burst anyone's bubble here, but I'm not going to crunch any numbers -- mostly because it's snowing out and I want to take my dogs out to play. But I am quite interested in hearing what your thoughts on this might be. Am I totally off-base here? Does this make sense to anyone else? Does anyone know of any studies that compare different projection systems across player profiles and might suggest differential weighting patterns based on player profiles?

Special thanks to Minor Leaguer and Damaso's Burnt Shirt for discussion and insight, and to Lydia Loveless for her rocking song "Can't Change Me," the inspiration for today's post title.