MjwW just posted a well-constructed but inherently flawed piece arguing that team spending per win does not vary with team payroll. While I applaud the effort he put in, I don't think he provides convincing evidence for his basic premise or his conclusions.
The first problem is the thinking behind the basic premise (what he called the substitution effect). Two 3-WAR players are not as good as one 6-WAR player. In fact, they aren't nearly as good, because the team incurs all the costs of carrying an extra player (a roster spot, plus nominal costs such as hotel rooms) without receiving any greater benefit. More importantly, needing two players means the team uses playing time that would otherwise go to a player who is presumably above replacement level.
The next problem is the method of evaluating team spending. The independent variable should be each team's total payroll, and the dependent variable should be that team's $/WAR for players with 6+ years of service time.
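The setup I'm describing can be sketched as a simple regression. To be clear, the payroll and $/WAR figures in this sketch are invented placeholders purely for illustration, not actual team data:

```python
# Sketch of the proposed regression: each team's total payroll (independent
# variable) against that team's $/WAR on 6+ year service-time players
# (dependent variable). All numbers are hypothetical placeholders.

def ols_fit(xs, ys):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var          # slope: change in $/WAR per $M of payroll
    a = mean_y - b * mean_x  # intercept
    return a, b

# Hypothetical team payrolls ($M) and veteran $/WAR ($M per win).
payroll = [60, 90, 120, 150, 180, 210]
dollars_per_war = [3.5, 4.0, 4.6, 5.1, 5.9, 6.4]

a, b = ols_fit(payroll, dollars_per_war)
print(f"intercept = {a:.2f}, slope = {b:.4f}")
# A positive slope in real data would mean higher-payroll teams
# pay more per veteran win.
```

The point of this framing is that the question being asked is precisely whether that slope is positive, not whether every team lands on the same $/WAR figure.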
Another glaring problem with the study is that it treats teams that spend almost nothing on free agents as important data points. Not only did the Pirates never actually intend to spend $30M per win, but that figure is subject to so much variance that it's completely unreliable. Because WAR is inexact (defense and pitching are not well measured; e.g., 2009-2011 Mariners aggregate fielding: DRS +163, TZ +147, UZR +84; 2009-2011 Mariners aggregate pitching: 3.91 ERA, 4.14 FIP), errors in calculating $/WAR are compounded for teams that spend little on free agents. How much money did the Pirates actually spend on players with 6+ years of service time?
Nonetheless, these methodological faults wouldn't be so problematic if the basic premise of the study weren't itself faulty. The prevailing hypothesis that $/WAR can be modeled linearly rests not on the idea that two 3-WAR players are as good as one 6-WAR player; it rests on risk-averse spending. But risk assessment should be applied up front, to adjust a player's projected WAR. It's true that better players are bigger injury risks, so differential injury risk should reduce the projected WAR of better players more than that of lesser players (e.g., given the same playing-time projections, a player with 8-WAR potential might be projected to 6 WAR, whereas a player with 2-WAR potential should be projected to 1.8 WAR), but that doesn't change the fact that one of the players is still better.
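The risk-adjustment point can be made concrete with a toy calculation. The potentials and discounts below mirror the example in the text (8 WAR discounted to 6, 2 WAR discounted to 1.8); the availability factors themselves are illustrative, not from any real projection system:

```python
# Toy illustration of injury-risk discounting: the better player carries
# a heavier discount, yet still projects higher. Availability factors are
# hypothetical, chosen to reproduce the 8 -> 6 and 2 -> 1.8 example.

def projected_war(potential_war, availability):
    """Discount a player's WAR potential by an expected-availability factor."""
    return potential_war * availability

star = projected_war(8.0, 0.75)         # heavier injury discount
role_player = projected_war(2.0, 0.90)  # lighter discount

print(star, role_player)  # 6.0 1.8
```

Even after the star absorbs a larger absolute and proportional discount, he still projects to more than three times the role player's value, which is the crux of the argument.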
The underlying guiding principle here is beautifully simple: WAR production becomes increasingly valuable, because good teams don't field replacement-level players. Teams should therefore be willing to pay more per win for players projected to produce more (after adjustments for injury risk and so on are made). Teams that can afford to spend more in total can afford the best players, and since the best players should be paid the most per win, the best teams should spend more per win. Frankly, given how simple the theory is, if the relationship between salary and WAR actually were linear, I'd take that as evidence of suboptimal spending by high-payroll teams, likely the result of spending to maximize profit rather than on-field production, and possibly indicative of collusion.
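One way to make the non-linearity concrete is a convex cost curve, where each additional projected win costs more than the last. The base price and exponent below are arbitrary illustrative choices, not estimates of the actual market:

```python
# Toy convex $/WAR model: total cost grows faster than linearly in
# projected WAR, so one 6-WAR player costs more per win than two 3-WAR
# players. BASE_PER_WIN and EXPONENT are arbitrary illustrative values.

BASE_PER_WIN = 5.0  # hypothetical $M for the first win
EXPONENT = 1.3      # > 1 encodes "wins become increasingly valuable"

def salary(projected_war):
    """Total salary ($M) under a convex pricing curve."""
    return BASE_PER_WIN * projected_war ** EXPONENT

one_star = salary(6.0)
two_average = 2 * salary(3.0)
print(one_star, two_average, one_star / 6.0, salary(3.0) / 3.0)
```

Under any exponent above 1, the consolidated 6 WAR commands a premium per win over the two 3-WAR players, which is exactly the non-linear relationship I'd expect optimal spending to produce.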
Thanks to Roky Erickson for today's post title and MjwW for the inspiration.