Running back overuse has been a hot topic in the NFL lately, partly because of Football Outsiders' promotion of their "Curse of 370" theory. In several articles in several outlets, including their annual Prospectus tome, they make the case that there is statistical proof that running backs suffer significant setbacks in the year following a season of very high carries. But a close examination reveals a different story. Is there really a curse of 370? Do running backs really suffer from overuse?

Football Outsiders says:

"A running back with 370 or more carries during the regular season will usually suffer either a major injury or a loss of effectiveness the following year, unless he is named Eric Dickerson.

Terrell Davis, Jamal Anderson, and Edgerrin James all blew out their knees. Earl Campbell, Jamal Lewis, and Eddie George went from legendary powerhouses to plodding, replacement-level players. Shaun Alexander struggled with foot injuries, and Curtis Martin had to retire. This is what happens when a running back is overworked to the point of having at least 370 carries during the regular season."

In fact, the apparent effects of the "curse" are fully explained by:

- Normal RB injury rates
- Natural regression to the mean
- A statistical trick known as multiple endpoints
- (And this should go without saying, but the "unless he is named Eric Dickerson" constraint is silliness.)

In the 25 RB seasons consisting of 370 or more carries between the years of 1980 and 2005, several of the RBs suffered injuries the following year. Only 14 of the 25 returned to start 14 or more games the following season. In their high carry year (which I'll call "year Y") the RBs averaged 15.8 game appearances, and 15.8 games started. But in the following year ("year Y+1"), they averaged only 13.0 appearances and 12.2 starts. That must be significant, right?

The question is, significant compared to what? What if that's the normal expected injury rate for all starting RBs? If you think about it, to reach 370+ carries, a RB must be healthy all season. Even without any overuse effect, we would naturally expect to see an increase in injury rates in the following year.

In retrospect, comparing starts or appearances in such a year to any other would distort any evaluation. This is what's known in statistics as a selection bias, and in this case it could be very significant.
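The selection-bias point can be illustrated with a quick simulation. In this sketch every number (the 30% injury rate, the 4-13 game injury seasons, the trial count) is invented for illustration; injury risk is identical and independent every season, yet the group selected for being fully healthy in year Y still appears to "decline" in year Y+1:

```python
import random

random.seed(1)

def season_games():
    # Assume an independent 30% chance of an injury-shortened season each year
    return 16 if random.random() > 0.30 else random.randint(4, 13)

selected_y1 = []
for _ in range(10_000):
    year_y = season_games()
    year_y1 = season_games()
    if year_y == 16:  # selection: only fully healthy backs can reach 370+ carries
        selected_y1.append(year_y1)

avg_y1 = sum(selected_y1) / len(selected_y1)
# The selected group averages exactly 16 games in year Y by construction,
# but only about 13.7 games in year Y+1 -- with no overuse effect in the model.
```

The drop from 16.0 to roughly 13.7 games mirrors the 15.8 to 13.0 drop in the real data, and it appears purely because the year-Y group was conditioned on staying healthy.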

We can still perform a valid statistical analysis, however. We just need to compare the 370+ carry RBs with a control group. The comparison group was all 31 RBs who had a season of 344-369 carries between 1980 and 2005. (The lower limit of 344 carries was chosen because it produced the same number of cases as the 370+ group as of 2004. Since then there have been several more, which were included in this analysis.)

Fortunately there is a statistical test perfectly suited to comparing the observed differences between the two groups of RBs. Based on the sample sizes, the difference between means, and the standard deviation within each sample, the t-test calculates a p-value: the probability that an apparent difference between two samples is due to random chance alone. A p-value below 0.05 is conventionally considered statistically significant, while a high p-value indicates the difference is not meaningful. The table below lists each group's average games, games started, and the resulting p-values in their high-carry year and subsequent year.

| | G, Year Y | G, Year Y+1 | GS, Year Y | GS, Year Y+1 |
| --- | --- | --- | --- | --- |
| 370+ Group | 15.8 | 13.0 | 15.8 | 12.2 |
| 344-369 Group | 15.8 | 14.0 | 15.4 | 12.6 |
| P-Value | | 0.62 | | 0.68 |
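The t-test behind the table can be sketched in a few lines. This is a minimal Welch's two-sample t statistic in plain Python; the games-started samples below are invented for illustration, and in practice a library routine such as scipy.stats.ttest_ind would also return the p-value from the t distribution:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Two-sample t statistic allowing unequal variances (Welch's t-test)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical games-started samples for the two groups (illustrative only)
group_370 = [16, 16, 6, 14, 12, 16, 10, 8, 16, 9]
group_344 = [16, 12, 14, 16, 8, 15, 16, 10, 13, 14]

t = welch_t(group_370, group_344)
# A |t| near zero means the group means are statistically indistinguishable;
# looking t up in the t distribution yields the p-value.
```

With samples this noisy and means this close, |t| stays well under the roughly 2.0 needed for p < 0.05, which is exactly the situation in the table above.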

The differences are neither statistically significant nor practically significant. Even if the sample sizes were enlarged until the differences reached significance, the gap between the two groups would still be only 0.4 starts and 1.0 appearances. RBs with 370 or more carries do not suffer any significant increase in injuries in the following year when compared to other starting RBs.

Regression to the Mean

The 370+ carry group of RBs declined in yards per carry (YPC) by an average of 0.5 YPC, compared to a decline of 0.2 YPC for the 344-369 group. That difference does appear statistically significant, but is it due to overuse?

Consider why a RB is asked to carry the ball over 370 times. It's fairly uncommon, so several factors are probably contributing simultaneously. First, the RB himself was having a career year. He was probably performing at his athletic peak, and coaches were wisely calling his number often. His offensive line was very healthy and stacked with top blockers. Next, his team as a whole, including the defense, was likely having a very good year. Being ahead at the end of games means that running is a very attractive option because there is no risk of interception and it burns time off the clock. Additionally, his team's passing game might not have been one of the best, making running that much more appealing. And lastly, opposing run defenses were likely weaker than average. Many, if not all of these factors may contribute to peak carries and peak yardage by a RB.

What are the chances that those factors would all conspire in consecutive years? Linemen come and go, or get injured. Opponents change. Defenses change. Circumstances change. Why would we expect a RB to sustain two consecutive years of outlier performance? The answer is we shouldn't. Running backs with very high YPC will get lots of carries, but the factors that helped produce their high YPC stats are not permanent, and are far more likely to decline than improve.

If I'm right, we should see a regression to the mean in YPC for all RBs with peak seasons, not just very-high-carry RBs. The higher the peak, the larger the decline the following year. And that's exactly what we see in the data.

The graph above plots RB YPC in the high-carry year against the subsequent change in YPC. The blue points are the high-carry group, and the yellow points are the very-high-carry group. Note that there is in fact a very strong tendency for high YPC RBs to decline the following year, regardless of whether a RB exceeded 370 carries.

Very-high-carry RBs tend to have very high YPC stats, and they naturally suffer bigger declines the following season. 370+ carry RBs decline so much the following year simply because they peaked so high. This phenomenon is purely expected and not caused by overuse.
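Regression to the mean falls out of even the simplest model. In this sketch (talent mean, noise level, and the 5.0 YPC "peak" threshold are all invented parameters), a back's YPC is stable true ability plus independent season-to-season luck; selecting peak seasons guarantees a decline the next year, with no overuse term anywhere in the model:

```python
import random

random.seed(2)

def two_seasons():
    talent = random.gauss(4.0, 0.3)            # a back's stable true ability
    year_y = talent + random.gauss(0.0, 0.5)   # season luck: line, opponents, score
    year_y1 = talent + random.gauss(0.0, 0.5)  # independent luck the next season
    return year_y, year_y1

pairs = [two_seasons() for _ in range(20_000)]
peaks = [(y, y1) for y, y1 in pairs if y > 5.0]  # "career year" seasons only
avg_decline = sum(y - y1 for y, y1 in peaks) / len(peaks)
# avg_decline comes out clearly positive: the luck that produced the peak
# does not repeat, so peak seasons fall back toward the mean on their own.
```

Raising the peak threshold raises the average decline, which is the same pattern the real YPC data shows: the higher the peak, the larger the fall.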

Statistical Trickery

Why did Football Outsiders pick 370 as the cutoff? I'll show you why in a moment, but for now I'm going to illustrate a common statistical trick sometimes known as multiple endpoints by proving a statistically significant relationship between two completely unrelated things. I picked an NFL stat as obscure and random as I could think of--% of punts out of bounds (%OOB).

Let's say I want to show how alphabetical order is directly related to this stat. I'll call my theory the "Curse of A through C" because punters whose first names start with an A, B, or C tend to kick the ball out of bounds far more often than other punters. In 2007 the A - C punters averaged 15% of their kicks out of bounds compared to only 10% for D - Z punters. In fact, the relationship is statistically significant (at p=0.02) despite the small sample size. So alphabetical order is clearly related to punting out of bounds!

Actually, what I did was sort the list of punters in alphabetical order, and then scanned down the column of %OOB. I picked the spot on the list that was most favorable to my argument, then divided the sample there. This trick is called multiple endpoints because there are any number of places where I could draw the dividing line (endpoints), but chose the most favorable one after looking at the data. Football Outsiders used this very same trick, and I'll show exactly how and why.
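The trick is easy to reproduce. In this sketch (the punter count, the rate range, and the seed are all invented), thirty "punters" in alphabetical order get purely random out-of-bounds rates, and we simply scan every possible dividing line and keep the most flattering one:

```python
import random
from statistics import mean

random.seed(3)

# 30 punters in alphabetical order with completely random %OOB values
rates = [random.uniform(0.05, 0.20) for _ in range(30)]

# Try every plausible dividing line and keep the biggest group difference
cutoffs = range(5, 26)
best = max(cutoffs, key=lambda k: abs(mean(rates[:k]) - mean(rates[k:])))
gap = abs(mean(rates[:best]) - mean(rates[best:]))
# The chosen split shows a striking "effect" in pure noise, because the
# endpoint was picked after looking at the data -- exactly the 370 trick.
```

Re-running with a different seed moves `best` around, but a "significant-looking" split almost always exists somewhere, which is why an endpoint chosen after the fact proves nothing.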

The graph below plots the change in yards per carry (YPC) against the number of carries in each RB's high-carry year. You can read it to say, a RB who had X carries improved or declined by Y yards per carry the following year. The vertical line is at the 370 carry mark.

Note the cluster of RBs highlighted in the top ellipse with 368 or 369 carries. They improved the following year. Now note the cluster of RBs highlighted in the bottom ellipse. They had 370-373 carries and declined the next year.

If we moved the dividing line leftward to 368, then the very-high-carry group would improve significantly. And if we moved the line rightward to 373, then the non-high-carry group would decline. Either way, the relationship between high carries and decline in YPC disappears. There is one and only one place to draw the dividing line and have the "Curse" appear to hold water.

To be fair to Football Outsiders, they have recently admitted there is nothing magical about 370. A RB isn't just fine at 369 carries, and then on his 370th his legs will fall off. But unfortunately, that's the only interpretation of the data that supports the overuse hypothesis. If you make it 371 or 369, the relationship between carries and decline crumbles. It's circular to say that 370 proves overuse is real, then claim that 370 is only shorthand for the proven effect of overuse.

As Mark Twain (reportedly) once said, "Beware of those who use statistics like a drunkard uses a light post, for support rather than illumination."

Ideas, data, quotes, and definitions from Doug Drinen, PFR, Maurile Tremblay, and Brian Jaura.

Great stuff. I can't usually get my head wrapped around your statistical stuff but that was good.

great job

Nice post. Aaron Schatz and the Football Outsiders should be credited with bringing better metrics to measure the quality of football play, but they do sometimes stray into small-sample over-interpretation. They have to walk the line between provoking commentary and unbiased analysis. As you have shown, their "370" theory is more commentary than analysis.

Your last figure ably demonstrates the peril of picking a cutpoint based on anecdote.

Bro, you are awesome. How am I just now finding this site.

Very good analysis.

I will say, however, that Football Outsiders usually promotes this theory to project that a RB will not produce as well the next year. It seems like they actually use it properly (basically implying regression to the mean), but communicate it poorly (stating it as some kind of magic number).

It's especially useful for fantasy football, where most "experts" will rate heavily-used backs very highly just because they had a lot of success the previous year, whereas FO is likely to advise you to avoid drafting such players, as their production is sure to decrease and lower their value dramatically.

You make some good points, but for someone pointing out invalid statistics, you make too many loose, uncalled for, or downright wrong assumptions and conclusions.

First, the curse of 370 was obviously a little tongue-in-cheek. Representing it otherwise is intellectually dishonest. You could have completely missed the point, but you seem to be smart, so I doubt it.

Second, FO has never said that 370 was completely magical or that someone with a high number of carries will definitely blow out a knee the next year. They just suggest that unless you're superhuman, it's more likely for a downturn the next year if your carry load is extreme. References to 370 as a jinx are, as noted above, a joke.

Third, FO was very upfront with their use of selection bias. That factored into the curse joke.

Also, upping the cutoff does not skew the results greatly. You might notice that the 4 immediate downturns are paired with 3 upturns. The "Curse of Overuse" isn't nearly as catchy as the "Curse of 370."

Fourth, your re-addition of Dickerson is not in good faith. All statistics occasionally have outliers, and those outliers can skew the data. FO was upfront that if a back is extremely well put together (like Dickerson), the curse of 370 does not apply. If Brandon Jacobs goes over 370 carries, this caveat would definitely be brought up. Dickerson accounts for 5 of the top carry seasons that did not show a significant decline. If you go by player, and not by season, then your graph changes considerably. If you are looking at the curse objectively, the only reason not to look at it that way is that it hurts your argument.

Kevin, that's exactly the point. There is no "curse of overuse." There is only a "curse of 370"--and not 371 or 369. It's a classic example of multiple endpoints, ignorance of regression to the mean, and bad methodology--Dickerson included or not.

The Dickerson exclusion does not satisfy any scientific standard. Good methodology doesn't exclude outlier 'people,' it excludes outlier 'cases.'

Besides, to exclude an outlier requires more than just saying 'if we include him our whole point goes out the window.' You need an a priori reason. Also, if you want to throw out his years of bucking the "curse," you have to throw out the same number of cases on the other end of the scale.

What FO did was cherry-pick the data. That's what I'd call bad-faith, loose, and downright wrong.

Regarding the tongue-in-cheek nature you mention, I don't buy that. Yes, they've tried to backtrack quite a bit recently, but as I mentioned in the article, you can't say that 370 proves overuse is real, then claim that 370 is only shorthand for the proven effect of overuse. They're still peddling this stuff, too. Just today, in fact, they published an article about how the "curse" might catch up with Clinton Portis.

PS What the heck does "well put-together" mean? -- RBs that sometimes defy the "curse?" So we're supposed to define well put-together by how well they defy the curse, then exclude them from the analysis that proves there is a curse? That's like saying 'all cars are blue'...excluding the cars that aren't blue.

Yeah, take that Kevin!

I did a bunch of research on this a couple years ago when it first came out because the Packers were considering adding Larry Johnson and I was curious how accurate it was. I found the "370 curse" to be full of holes and typed up my findings which were sent back to them via a friend. Don't know that my data ever saw the light of day after that but there are multiple problems with their theory.

Excellent analysis. I learned quite a bit about statistics, especially multiple endpoints.

Would the analysis differ if the running back was less effective in their high carry year? For instance, Larry Johnson averaged 5.2 ypc in 336 carries in 2005, while averaging 4.3 ypc in his 400+ carry year. He declined significantly since then. What does the data say about backs in these situations?

Also, excuse my ignorance of statistics, but explain why it is invalid to exclude Dickerson? If he was the only back to consistently escape the curse, then why is it unfair to exclude him?

Ben-All I can say about the Larry Johnson example is that backs with unusually high stats will tend to fall back to earth regardless of how many carries they had. It just happens that backs with very good seasons are going to be fed the ball a lot. This gives the appearance of high carries "causing" the decline simply because it precedes it.

Regarding Dickerson--Dickerson is a person; he is not a 'case' in the statistical sense. It's fine to throw out outlier cases in studies if they will bias the data. But you can't pick a person and throw out all his cases simply because they belong to him. And if you do throw out outlier cases, you have to balance things by excluding an equal number of cases on the other side of the coin.

For example, if you throw out 5 cases of RB-seasons that defy the "curse," then good methodology dictates you also exclude 5 cases that substantiate the curse. You can't just look at the data, and throw out a guy because he proves your case wrong.

As I said in an earlier comment--It's as misleading as saying "all cars are blue...except all the cars that aren't blue." That is, "all backs with >370 carries decline the following year...except the ones that don't."

Thanks for the explanation.

Has FO heard about this article? Commented on it?

I'll take that as a no...

Sorry...

I'm not sure. It's been posted on their comment boards many times, but I can't say for certain. I don't think they really care about the research. I think they're just trying to find a hook to hype book sales.

Yeah, or sell out to ESPN.

"The Dickerson exclusion does not satisfy any scientific standard. Good methodology doesn't exclude outlier 'people,' it excludes outlier 'cases.'"

Good methodology throws out outlier cases when each case is independent. That's not what we have here. One person with multiple results that buck an otherwise existing trend IS significant. If you don't want to see that, then you're intentionally blinding yourself. It is you who is committing an error by taking each of these seasons as an independent case. If each season were put up by a different individual, that would be valid, but they aren't. Each season is not an independent case. As I said before, for someone who is so good at finding flaws in the methodologies of others, it is highly suspect that you cannot see your own errors.

In no way am I claiming that the 'Curse of 370' is something set in stone. Much like you, I just don't like seeing intellectual dishonesty.

As for the curse, you apparently do not have a basic knowledge of sports history. The head of FO is from Boston. Boston is the irrational baseball curse capital of the world. Even in Baltimore, the Curse of the Bambino, as stupid as it is, was talked about by sportswriters as real. It was clear to anyone with a sense of irony that the 'Curse of 370' was a not so subtle skewering of Boston sports media in general, and one guy in particular.

As for the well put together comment, I chose my words poorly, and you were right to jump on that loose definition. Let me clarify. At the time he played, Dickerson was larger than most LBs in the league. How many other running backs does that apply to? Brandon Jacobs is bigger than most LBs in the league now, hence my comparison. When talking about the physical effects of running the ball, I think taking into account the size of the runner might be important. Getting hit by people smaller than you is different from getting hit by people larger than you.

There are any number of factors that go into the wear and tear of a RB. I agree that the 'Curse of 370' is not, by any stretch, complete, but it works as a warning against overuse.

Kevin, I'm not sure what you're talking about. I've written FO multiple times, and NOT ONCE have they replied that their "curse" is a "joke." They defend their analysis and say "look at the history." (I want to shout back in my best John Cleese fake accent "look at the bones!")

The fact of the matter is, they have made a killing getting paid by ESPN to do half-baked pieces on 370 for one reason: people read them. Many of their analyses--see today's NFC West segment, it's amazing--are terrific. So why, when they're capable of producing something excellent, they trot out the Curse of 370 every year, is beyond me.

They sound like a politician backed in a corner...they already made their statement (370 or more carries is bad for a running back) and rather than saying "oops, we were not clear enough, we meant X" they continue to defend a hypothesis that CLEARLY does not bear out. You can defend them all you want, but analysis like the one above refutes their work in this instance.

On a side note, I've run something similar to the above without ever having found it...not as elegant, but it also found that there was not a statistically significant link for a set cut-off. Someday I'll find a forum to post it. ~Greg

I haven't had time to read all the comments yet, so my quibble might have been mentioned.

It seems weird to evaluate RB production solely on YPC. I know that you examine their starts and appearances, but you forget those later on, especially in the multiple-endpoints part. Let's say a RB has 380 carries and 4.6 YPC in year Y, and then in Y+1 has 84 carries at 5.2 YPC. That will seem like a great improvement and a buck of the "curse," although in reality it's an injury-plagued and somewhat irrelevant season. It isn't unreasonable to think that the combination of a high YPC and a lot of injuries in Y+1 is common in the 370+ group. After all, the backs are very talented (thus the high YPC) but may be worn down from the overuse in year Y.

If I'm totally wrong, excuse me - I have very little experience with t-test analysis and only a little knowledge of the finer points of standard deviation.

I'm not trolling at all, and for all I know the FO analysis might have the same issues.

Danish-That's a pretty good point. The only example of what you suggest is Terrell Davis, the Broncos RB who destroyed his knee trying to make a tackle after an interception. He had a phenomenal >370 carry season only to have a horrible YPC average in the few games he played the following year prior to his injury.

He's actually the data point way down on the -2 YPC line in the last graph in the article--the strongest single case for the "Curse." But his injury was so traumatic it would have crippled any player, regardless of the number of carries the year before. All the other cases in the data set had a fairly sizable number of carries.

I have a master's degree in statistics and you're an idiot. Using a RB a lot clearly quickens the end of his career. Unless he's named Eric Dickerson.

Your t-test does not show a difference between 370+ and 344-369, and you claim it is regression. But technically, regression will have a smaller effect on the 344-369 group, especially if the 370 criterion was chosen in a way that would maximize the difference. In other words, you chose a comparison group that did not let you measure an effect you know is there in reality, and claim that because you could not detect the "curse" effect it clearly doesn't exist?

I should say that I think the published article on the Curse was complete rubbish and was really shoddy with the statistics to the point where it was a violent assault upon reason, but I don't find your refutation reasonable. It seems to come down to whether or not Dickerson is counted, because the Curse (I guess not in the hard sense, but in the "lots of carries=watch out" sense) would also predict a minimal difference between the two groups you chose. It is hard to expect FO to come up with a priori omission criteria in exploratory research like this, so I don't mind their interpretation with the Dickerson addendum. I actually think you made a better argument than they did!

Excellent comment. I chose that group because it was the most similar group available for comparison that still yields an adequate sample size. True, regression to the mean should be somewhat weaker for the comparison group. And that's exactly what we see in the means of the games played and started. The high-carry group played in and started a slightly lower number of games the following year, just not by enough to suggest a systematic process.

I don't claim I proved that a curse does not exist, only that the stats fail to show a curse exists. What we observe is explained by perfectly normal and expected processes unrelated to "overuse."

But...if it does exist, it must be small and insignificant enough to escape statistical detection with some fairly sturdy methodology. So, I guess, yes, I do claim for practical purposes the curse does not exist.

Regarding Dickerson...ok, if you insist on throwing out the cases that counter the thesis, then throw out an equal number of cases on the other side of the coin, and see if the effect remains. I doubt it would.

Right on, but it is tricky to decide to throw out one person to counter Dickerson, or five seasons. I don't like that it makes a difference, either.

What bothers me most about FO is that they don't publish their model. I have no way of knowing if DVOA, for example, is theory-driven or if their model consisted simply of drawing a line through some points. I'd not only like to see how DVOA is calculated, but also how it was derived. My suspicion is that they fit the event to value with so many free parameters that it would be impossible to not have a good fit. If this is the case, DVOA has very little predictive value. If DVOA were mostly theory-driven (and I guess it might be) and the model fit well with very few free parameters, I would start to pay attention to it. I just can't stand that they do not describe their methods at all. At least I looked at one point and couldn't find it, but maybe I just would have to pay for it...

Also, they say "this data" instead of the correct "these data", which is gross.

I don't see why you have a problem with the statement that almost every running back not named Eric Dickerson has had trouble producing well after a 370 carry season. That is true. It's observation.

If you want to make it a season by season case study that doesn't make sense. Aren't we interested in finding out how individuals are affected by high carry seasons?

It seems that you agree with the FO article but are intent on arguing that it is only mostly right.

Parker-I think you completely misunderstand the article. I think that normal expected processes explain the regression of RB performance after high-carry years, and overuse is not needed as an explanation. Excluding Dickerson and cherry picking 370 are the only way to substantiate the overuse hypothesis.

There is a risk in overusing running backs, and we all should know by now that risk is most effectively measured geometrically, not linearly. I think that is where your method of finding statistical significance can be called into question. You didn't calculate the risk of ruin and how that would most effectively be managed.

Parker-Good questions and I appreciate your comments. However, I don't agree. You can see the linear relationship in the first graph. That should also tell you regression to the mean is at work and not some non-linear fall-off point at 370 carries or anywhere else.

Also, even if the non-significance of the difference between the >370 and <370 groups could be attributed to a non-linear point of failure, don't forget about the selection bias. Seasons of >370 carries are seasons in which we know after the fact that the RB was not injured that year. Comparing those seasons to seasons in which a player may or may not have been injured is not a valid comparison. That point alone throws the entire theory out the window.

Besides, I'm not really out to prove there is absolutely no overuse effect. My article just shows how flawed the Curse methodology is. But along the way, if we don't see evidence of overuse with the statistical methods here, we know that any effect must be smaller than what would be detected, which would be negligible in practical 'football' terms.

You're right, the way you solved the problem was linear. However, risk is never linear, it's geometric. The fact that you might get 0 out of a running back because of overuse adds a more complex dynamic. That's all I'm saying. I haven't worked the problem out geometrically, but I think you would get a different result if you tried it that way.

I like your work. It's hard to negatively comment on someone else's work on their own blog and not be a d*&^ about it. Keep asking questions and trying to find solutions. I just found your blog today via Wages of Wins and spent a good couple of hours reading.

So if I'm reading this correctly, based on regression-to-the-mean, it IS likely that we'll see a decline in production from a player's peak year. However ... how do we know it's his peak? We can only know in retrospect, so trying to predict it is pointless. In relation to Turner, the argument should focus on what has changed fundamentally about him or his situation from the prior year. The fact that no one expected 1700yds is not a valid reason to suspect less. If he did it last year, and he is still healthy and in his prime, and his supporting cast has improved (which it has), who's to say he won't put up similar numbers? Further, why aren't we expecting a decline in AP's numbers for the same reasons? It's inconsistent.

Outstanding post, thanks for putting it out there. And trust me, FO is not tongue-in-cheek on many things, those guys take themselves very very seriously.

Why are receptions ignored when it comes to workload analysis? Backs get tackled on those plays too. If backs take less punishment on those plays, then teams should throw more to their workhorse RBs.

Also, playoff touches need to be considered. They get tackled in those games, too.

Glad I have an MBA. I hypothesize that there is a correlation between higher education and winning at fantasy football.

If you take the universe of players having 300 or more touches in year y-1 and run a regression of year y yards per carry on y-1 ypc, age, and a dummy variable for hitting the 370 carry amount in y-1, the results show that the 370 carry dummy is statistically significant.

So taking into consideration prior year performance and age, there is evidence that being "overused" does negatively impact performance.

(Same thing holds for rushing yards in lieu of YPC)

Matty-Yes, I agree you would find significance there. I also found very strong significance for % of punts out of bounds for punters with names that begin with A, B, or C.

The point is that you can find significance between two groups in just about anything if you get to pick where the cutoff (or endpoint) is.

Set your dummy variable to 375 or 365, and watch what happens to the significance. Poof.

While your analysis is good, it doesn't explain away the fact that 370+ carries have occurred 28 times, and only one time did attempts rise the following year. Only once did rushing yardage rise the next year. And never have rushing TDs increased (or even remained static). The average drop the following year is 137 carries, 687 rushing yards, and 7.3 TDs. So while you can find fault with the analysis, the fact is no one has ever had a better fantasy year rushing the ball following a 370 carry season.

Greg-Thanks for the comment. The reason is very simple: It's regression to the mean. Consider a slugger in baseball who has a career year. He hits 50+ homers (steroids guys excluded). Very, very rarely would we ever expect for him to repeat, much less improve on that.

Unlike baseball where at bats are capped at 4 or 5 per game, RBs get a lot more carries when they're doing very well. So there is a strong correlation between high carry years and career YPC years.

We're not seeing the effect of a handful of extra carries from 8 months ago. We're seeing the common and natural return to earth from phenomenal career-type years.

I don't see the real problem here. FO says : "Don't expect a better/comparable (fantasy) season from a RB that carried the ball more than 370 times the previous year".

Well, surprise, it works! What's the deal about it? They never pretended that 370 was a turning point; they even joked about it. They did cherry-pick on purpose, and put a fancy name on regression to the mean and RB overuse to sell a concept that ended up being useful to fantasy football players.

In summary:

Is the "Curse of 370" a viable statistical analysis?

No.

Did RBs not named E. Dickerson see a considerable dropoff, on average, in their yards/number of TDs the season after carrying the ball more than 370 times?

Yes: see Greg's comment.

In the end, can the target audience (fantasy football players) use this "theory" effectively?

Yes.

And that's all that really matters. You're not playing on the same field as FO on this one, and that started with you looking at YPC, not total yards/TDs, which are the basis of FF.

What FO basically did, to sell this theory better, is that instead of "creating" "The Curse of Regression to the Mean and RB Overuse in a Given Year," they found a number (370) that worked well and published a "mystical" "Curse of 370." That's it.

For a guy who is into stats, the comment "steroids guys excluded" was a tad disappointing. Here you are talking about working within known data and all of that, and then you drop the most extreme example of assumption without data in pro sports: steroid use directly giving power hitters an advantage (as opposed to the myriad other possible factors, like simple weightlifting (not common in baseball until the late 80s), smaller ballparks, expansion, radar-gun-based pitching evaluation, pitchers using 'roids, etc.).

Dude, lighten up. I just wanted to take that whole issue out of the discussion of the example I offered.

I would say another point against the so-called 'curse' is that the player was one year older the year after his 370+ season. When a player is 22 or 23, one year is not a huge deal. When a guy is 27, 28, 29+ (like Alexander and Martin, to name a few), that one year can be a very big deal.

And there are more players than Earl Campbell who disprove it. LaDainian improved from his 2nd to his 3rd year. Terrell Davis had 369 carries in '97, and only missed out on 370 because the Broncos clinched their playoff seed and he sat out week 17; he then proceeded to play in four additional playoff games. The next year he rushed for 2000+ yards. Emmitt Smith improved after seasons in which he had 365 and 368 carries, and he played in two playoff games after each of those years (surely playoff carries ought to count in these calculations - I'm sure the hits still hurt!). These are just recent players, too. I'm sure there are other cases as well.

Excellent point from the above poster. Real quick on regression to the mean: assuming a 370+ carry ballcarrier and a 344-carry ballcarrier both had the same YPC in the same season (so they share the same population mean and standard deviation), the 344-carry back will regress to the mean more so than the 370-carry back, since his sample size (344 versus 370) is smaller, giving more weight to the population mean.

Technically correct, but the difference in regression strength between a 344-carry guy and a 370-carry guy would be minuscule.
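For what it's worth, the size of that difference can be sketched with a standard shrinkage estimate. The league mean and the prior weight below are made-up numbers, purely for illustration:

```python
# Hypothetical numbers: league mean YPC of 4.0, with a prior "weight"
# equivalent to about 1000 carries of evidence about true talent.
LEAGUE_MEAN = 4.0
PRIOR_CARRIES = 1000  # assumed prior strength, not an estimated value

def shrunk_ypc(observed_ypc, carries):
    """Regress an observed YPC toward the league mean, weighting the
    observation by its number of carries."""
    w = carries / (carries + PRIOR_CARRIES)
    return w * observed_ypc + (1 - w) * LEAGUE_MEAN

# Two backs with the same 4.8 observed YPC:
a = shrunk_ypc(4.8, 344)
b = shrunk_ypc(4.8, 370)
print(f"344 carries -> expected true YPC {a:.3f}")
print(f"370 carries -> expected true YPC {b:.3f}")
print(f"difference: {b - a:.3f}")
```

Under these assumptions the two estimates differ by about a hundredth of a yard per carry, which is the "minuscule" point in numeric form.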

Right, but it does actually bring some interesting questions into consideration.

I tend to believe that the ability to avoid injury is likely a quality of an NFL player. Perhaps this attribute is a low one -- as in it only controls 5% of injury likelihood -- but I think most of us agree that running style, etc. probably make this a tangible idea.

FO says that the more carries a player gets, the more likely he is to get injured in the future. However, I say - citing regression to the mean - that with each carry a player receives that doesn't result in an injury, that player's avoid-injury attribute should actually improve (i.e., he is less likely to get hurt), since he keeps distancing himself from the NFL injury-per-carry average.

A study into this would be impossibly difficult, but perhaps a light sifting through carries-per-injury data could help shed some light on it. Do you have access to, or maintain, a database that tracks NFL injuries?
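Short of a real injury database, the updating logic described above can at least be sketched as a Beta-Binomial model. The league injury rate and prior strength below are invented placeholders, not estimates from data:

```python
# Hypothetical Beta-Binomial sketch: start each back at an assumed
# league injury-per-carry rate, then update as he stays healthy.
LEAGUE_RATE = 0.002    # assumed injuries per carry (placeholder)
PRIOR_STRENGTH = 2000  # assumed prior weight, in "carries" of evidence

alpha0 = LEAGUE_RATE * PRIOR_STRENGTH        # prior pseudo-injuries
beta0 = (1 - LEAGUE_RATE) * PRIOR_STRENGTH   # prior healthy pseudo-carries

def posterior_rate(healthy_carries, injuries=0):
    """Posterior mean injury rate after observing a stretch of carries."""
    return (alpha0 + injuries) / (alpha0 + beta0 + healthy_carries + injuries)

print(f"prior estimate:          {posterior_rate(0):.5f}")
print(f"after 370 clean carries: {posterior_rate(370):.5f}")
```

Under this toy model, a 370-carry injury-free season does nudge the back's estimated injury rate below the league average, exactly as the comment argues; how much it moves depends entirely on the assumed prior strength.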

Statistics are junk for this sort of analysis. Why? No matter how many ways we look at it, there is no simple route to the truth, because football is a complex game. The stats will be confounded by the continual addition of complex variables. In such cases the usefulness of statistics deteriorates, and what we have instead is an exercise in critical thinking in which stats are just a tool for aiding the process of decision-making.

We would all agree that the nature of football is violent and chaotic, yes? And that winning, game-planning, and team management are a random art defined by chance. If chance favors the prepared, as it does, then the teams that prepare best for chaos and randomness will prevail, because that is what the sport of football is. All teams have depth at all positions because injury is expected to happen; players are expected to wear down to the point of needing to be replaced for the purpose of winning.

A RB is trying to help his team win on his 1st carry and on his last. Some RBs are encouraged and given the opportunity to try harder (more carries) because their team thereby views the win as more likely. That is, the win TODAY, not next week, and especially not next year. If a player amasses a statistically high number of carries, it happened as an anomaly in a season where he was expected to be a vital contributor to help his team win week after week after week. We cannot attach the same expectations (whether + or -) to the same guy in a different year as a simple matter of deduction. His stats will be different. We can reasonably infer 'how different?' by knowing the mean from which the player deviated. Defining that mean is a mess all its own.

Why are we using the following season as the sole indicator of RB overuse? If there is some magic number of carries that causes a RB to perform poorly the next season, wouldn't it also cause a drop-off in performance for the remainder of the current season? Where are the charts showing a RB's decline in YPC on a game-by-game basis within a single season once he starts approaching 350-375 carries (adjusting for opposing run defenses, of course)? After all, it is not logical to assume that the effects of overuse on a player are delayed by 8 months (unless he had surgery in the offseason).

The media LOVES to use the multiple endpoint selection bias. Think about how often you hear "this team won 2 of their last 3 games"... when this also means (probably) that they went just 1-1 in their last 2 games and 2-2 in their last 4. LOL.
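The trick is easy to see by enumerating every possible endpoint for one hypothetical stretch of games:

```python
# A single (made-up) string of recent results, newest last:
# L W L W -- a perfectly mediocre stretch.
results = ["L", "W", "L", "W"]

# Enumerate every possible "last N games" endpoint and see which
# record a motivated commentator could choose to quote.
for n in range(1, len(results) + 1):
    last_n = results[-n:]
    wins = last_n.count("W")
    print(f"last {n}: {wins}-{n - wins}")
```

The same team is simultaneously "1-0 in their last game", "1-1 in their last 2", "winners of 2 of their last 3", and ".500 over their last 4". Whoever gets to pick the endpoint gets to pick the story, which is exactly how a threshold like 370 can be selected after the fact.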

While I'm not sure either way on "370", I find it amusing that in an article about statistical trickery, you show a graph that is centered not around zero but around -0.5, and haven't bolded the zero line.

I don't see how that would mislead anyone. It's relative change in performance, not absolute performance that's important.

I realize that, and maybe I am unusual in this respect. But the first time I read through it, not realizing that, those graphs gave me a different perception of the situation.

My point was that moving the baseline is one way to be tricky with graphs. Bar graphs that start at a high number do the same thing (they can make 10,500 look (relatively) a lot higher than 10,100).
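A quick bit of arithmetic shows how much a truncated baseline exaggerates that example:

```python
# Two bars of 10,500 and 10,100 on an axis that starts at 10,000.
a, b = 10_500, 10_100
baseline = 10_000  # where the truncated axis starts

true_ratio = a / b                           # actual difference: ~4%
apparent = (a - baseline) / (b - baseline)   # drawn bar heights: 5x
print(f"true ratio of values:    {true_ratio:.3f}")
print(f"apparent ratio of bars:  {apparent:.1f}")
```

The values differ by about 4%, but the drawn bars differ by a factor of five, which is the whole misleading effect of moving the baseline.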

Great article.

What's particularly noxious about "the curse of 370" is that you see a lot of commenters viewing the number as a magic curtain, as if something suddenly and dramatically happened with the 370th carry. You'll see fans start saying things like "he's on a pace for 375, I hope he just slows down a little bit", as if those final 10 carries really constitute the straw that breaks the camel's back.

"Did RBs not named E. Dickerson see a considerable dropoff, on average, in their yards/number of TDs the season after carrying the ball more than 370 times?

Yes: see Greg's comment."

I challenge you to write a scientific paper where you get to drop a data point just because it tends to hurt your theory.

Or, I should say, I challenge you to submit such a paper to a refereed journal.

Writing the paper would be easy.

Now Cold Hard Football Facts has a "Myth of 400":

http://www.coldhardfootballfacts.com/Articles/11_3157_Stop_running_back_abuse!.html

I'm not sure why it would be incorrect to exclude Eric Dickerson from this analysis. Isn't it possible that Dickerson had some confluence of events that enabled him to sustain higher carries for multiple years? (A better offensive line, an ability to avoid straight-on hits, fewer pass receptions, less usage as a blocker, improved training methods, biological differences, etc.)

Let's say we are doing some kind of analysis of computer viruses. Wouldn't it be logical to exclude Macintosh computers from this analysis since Macintosh has a substantially lower rate of infection? Couldn't we assume that there must be something different about Macintosh computers that makes including it in the analysis unhelpful?

I realize this comment is a little late but... you blew my mind with this blog post, Brian!

One related question, if you happen to see this, Brian. Have you ever checked to see if some of the other "magical" thresholds in the statistical analysis of sports -- e.g., 27 years as the age when MLB players decline on average -- suffer from the multiple-endpoints problem?

"I'm not sure why it would be incorrect to exclude Eric Dickerson from this analysis..."

It's pretty safe to assume that, yes, there's something different about both Eric Dickerson and Macintosh computers. But when you exclude someone or something from an analysis, you're also declaring that your conclusions are inapplicable to any person or thing that shares whatever properties made the original person or thing excludable. If that property is being a Mac, then the analysis remains useful to people with non-Macs, assuming they're not somehow unclear about what type of computer they have.

With the "curse of 370", however, the basis for excludability is whatever it was that made Eric Dickerson so amazing, and we can theorize all we want, but we'll never really know what that was. More importantly, we can't know ahead of time (if ever) which current running backs share the same properties.