Team Efficiency Rankings - Week 12

Let's play a game. It's called Guess Who Won. Team H was at home hosting Team V. Team H threw for 7.5 YPA with no interceptions. Team V threw for 5.0 YPA with 2 interceptions. Adjusted YPA was 7.5 for Team H and 3.2 for Team V. Each team lost one fumble and had about the same sack yards. Team V outran Team H, 167 yards to 97, but the two teams' run SRs were within 2% of each other. Also, Team H had 133 return yards (not counting kickoffs) compared to only 15 for Team V.
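For readers wondering how those adjusted YPA figures arise, here is a minimal sketch. It assumes a fixed yardage penalty per interception (45 yards here, a commonly cited value, not necessarily this site's exact one) and hypothetical attempt counts chosen purely for illustration:

```python
def adjusted_ypa(pass_yards, attempts, interceptions, int_penalty=45):
    """Adjusted yards per attempt: raw passing yards minus a fixed
    penalty per interception, divided by attempts. The 45-yard penalty
    is an assumption for this sketch."""
    return (pass_yards - int_penalty * interceptions) / attempts

# Hypothetical attempt counts; with 50 attempts, 5.0 raw YPA and 2 INTs
# works out to the 3.2 adjusted YPA quoted above.
team_h = adjusted_ypa(7.5 * 40, attempts=40, interceptions=0)  # 7.5
team_v = adjusted_ypa(5.0 * 50, attempts=50, interceptions=2)  # 3.2
```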

Which team won?

Extra hint: Before the end of 60 minutes, Team H scored 3 TDs compared to Team V's 2 TDs.

Extra, extra hint: Team V received a kickoff down 8 points with a 0.05 WP with 6 minutes to play, needing a TD and a 2-pt conversion just to tie. And by the way, Team H has a (statistically) above average defense so far this season.

Guess who won! If you guessed Team V, you're correct. TB beat CAR in OT.

My point is not to find a sniveling, weasel-like excuse for the efficiency model's ranking of CAR as a top-5 team despite its 2-8 record. (Ok, you got me, that's partially why I'm pointing this out.) The other reason is far more interesting.

What team does CAR remind you of in terms of GWP vs actual record? It reminds me a lot of the San Diego Chargers from a couple years back. SD had gaudy stats across the board. The efficiency model had them ranked exceptionally high week after week, but they defied the probabilities and found ways to lose games despite having better athletes and very often better bread-and-butter performance on the field than their opponents.

I am not saying the model is the true measure of a team and therefore any significant deviations are statistical flukes. I'm suggesting the opposite. There are significant factors not captured by the model, and one of those factors is Panthers head coach and former Chargers defensive coordinator Ron Rivera.

Admittedly, I don't follow the Panthers from play to play and game to game, so I won't jump to conclusions. But to those out there who do follow Carolina: please tell me why he should be kept on as the head coach.

Here are this week's rankings. As always click on the headers to sort. Team efficiency inputs can be found below.

RANK  TEAM  LAST WK  GWP   Opp GWP  O RANK  D RANK
1     DEN   1        0.76  0.52     3       2
2     SF    3        0.71  0.51     1       3
3     HOU   2        0.65  0.47     5       8
4     SEA   4        0.63  0.53     11      4
5     CAR   5        0.61  0.55     14      6
6     ATL   7        0.57  0.49     15      15
7     GB    6        0.56  0.51     10      12
8     NYG   9        0.55  0.52     9       22
9     NE    15       0.55  0.50     2       30
10    DET   11       0.54  0.50     6       18
11    WAS   18       0.53  0.51     4       23
12    CIN   14       0.52  0.47     12      21
13    CHI   8        0.50  0.52     31      1
14    DAL   10       0.50  0.52     13      20
15    STL   12       0.50  0.54     18      14
16    TB    13       0.49  0.49     8       26
17    PIT   16       0.49  0.49     22      7
18    NYJ   23       0.48  0.53     25      9
19    BAL   17       0.48  0.47     17      19
20    SD    21       0.47  0.49     26      13
21    MIN   22       0.46  0.50     27      11
22    BUF   29       0.45  0.49     19      24
23    NO    22       0.45  0.51     7       31
24    MIA   19       0.44  0.47     24      17
25    PHI   20       0.42  0.49     23      16
26    CLE   26       0.42  0.48     29      10
27    IND   27       0.42  0.44     16      32
28    ARI   28       0.41  0.53     32      5
29    OAK   25       0.41  0.48     21      25
30    TEN   30       0.40  0.50     20      28
31    KC    31       0.32  0.48     30      27
32    JAC   32       0.31  0.51     28      29



40 Responses to “Team Efficiency Rankings - Week 12”

  1. Chase says:

    I admit to being confused by the Panthers, but I don't buy the Chargers comparison for two reasons. One, San Diego had a history of success and seemed to be much more talented (granted, these may be lame reasons).

    Two, ANS seems to be an outlier on CAR, but it wasn't on SD. Football Outsiders hasn't published its DVOA yet this week, but they have Carolina 19th. In the SRS, Carolina ranks 22nd, thanks to the toughest SOS in the league and the 25th worst points differential.

    The Chargers were different. One of the reasons their base #s were great was because they had an easy SOS (I know your #s adjust for SOS but I'm talking traditional metrics). Still, they were 8th in the SRS and 8th in FO.

    What's crazy is Cam Newton still leads the league in yards per attempt.

  2. Anonymous says:

    I'm pretty sure the Steelers weren't ranked 30th last week.

  3. Michael Beuoy says:

    Maybe a new column in the second table labelled "RIVERA"? With a 1 for Carolina and a 0 for every other team.

    You could even train the model against prior seasons by plugging in 0.5 for the Chargers' 2008-2010 seasons. ;)

  4. Anonymous says:

    Thanks for the new sorting upgrade!

  5. Brian Burke says:

    Fixed the errors in the 'last week' column. That was from when I was running 'what-ifs' for how bad Leftwich would have to be to justify a 6 pt swing in the BAL-PIT spread.

  6. MFLoGrasso says:


    Or, since the ranking is actually an average of offensive and defensive rankings, he could have separate O-RIVERA and D-RIVERA variables: both set to 1 for Carolina this year, and D-RIVERA set to 1 for San Diego when he was their defensive coordinator.

  7. David says:


    I've seen multiple people on this site, arguing a team was ranked too low, suggest that a variable capturing some facet of coaching could improve the model. The standard response has been "coaching is included in the model to the extent that it is captured in efficiency metrics." That argument has always struck me as unsatisfactory, but fair enough. You can't have it both ways, though, and now say that the Panthers have underachieved relative to their ranking because of coaching.

  8. Chase says:

    Even more surprising than Carolina ranking 5th is seeing their defense rank 6th.

  9. Eric Peterson says:

    Carolina is 2-8

  10. Eric says:

    Redskins climbing on up the ranks. Ah, the optimism never ceases.

  11. Anonymous says:

    The biggest disconnect is in the valuation of Carolina's passing game.

    DVOA has Cam 23rd, at -9%.
    ESPN's QBR has him 31st, at 36.4.
    Pro Football Focus's video method has him 28th, and rates him one of the worst under pressure and on play action this season.

    The truth is probably somewhere in the middle.

  12. Brian Burke says:

    Yes, other systems have CAR much lower. Their record is 2-8, so backward looking systems should be expected to have them in the basement.

    The interesting question is, why does CAR have good passing and running metrics but still manage to lose? Luck is part of it, certainly. But what else?

    Regarding having it both ways on whether coaching is captured by the model: I would say things like scheme or general play calling will certainly have an effect on efficiency. But game-management tactics (clock management, proper risk/reward decisions, etc.) do not. So the answer, like most things, is 'it's complicated.'

  13. Ian Simcox says:

    I'm reminded of a reverse of the Atlanta Falcons from the 2010 season (I believe it was that one). Your rankings kept saying "this team isn't as good as its record," those who disagreed said your model didn't account for Matty Ice and his clutch abilities, and sure enough they got stomped in the playoffs. The Matty Ice narrative came from the Falcons having a 7-2 record in games decided by one TD or less.

    This year, you've got Carolina, who are 2-6 in games decided by a TD or less (or 0-6, depending on whether you count their two 8-point wins as being 'within a TD'). Add to that the fact that they have faced the toughest schedule in the league (according to your numbers), and you're probably not a million miles off.

  14. Jared Doom says:

    Ever try or consider trying, as an independent variable, AVERAGE(Off Run SR, Percentile Ranking for Yards Per Carry)?

    It just seems weird to me that run SR would be more predictive than some combination of run SR & YPC.

    In an unrelated comment, EPA & WPA are very biased towards RBs that have a higher % of their touches come from the passing game. Maybe you should separately track RB EPA & WPA on run vs pass plays.

  15. Anonymous says:

    I think special teams, which this model does not account for, is at least part of the reason. The 2010 Chargers had a historically bad ST unit according to DVOA (-8.1% regular season), while Carolina this season is not much better with a -7.1% mark (not including the game described above).

  16. Anonymous says:

    8 games is a tiny sample size. Remember, this is the same Carolina that should have beaten the Falcons (who everyone agrees are a great team).

  17. Jon Jackson says:

    I respect the forum you have developed here, the conversation and analysis it generates. However,

    I'm confused; your explanation of why Carolina lost wasn't satisfactory. TB beat Carolina twice (that's a sweep of a two-game series): TB wins and Carolina loses, yet Carolina stays the 5th-best team while TB drops 3 positions. To say TB is simply lucky... well, lucky teams win games.

    Bill Parcells said, "You are what your record says you are."

    And 2-8 sucks.

    Carolina is 2-8. Blame the coach if you want... this team dissolved. The defense gave up 18 points in the final 10 minutes of the game, and you have that defense rated 6th. Top-20% defenses probably shouldn't give up 18 points in the final 10 minutes and then allow a TD on the first OT possession. This isn't the first time something like this has happened (Bears game).

    My system said TB would win by 3, TB won by a score in OT. I posted this in the comments on Week 11 team Efficiency Ratings.

    I like what you are doing here. But throwing good ratings after bad at this Carolina team is a mistake, and when a system blows a gasket like this, it may be time to fix the system.

  18. Anonymous says:

    Brian, you call other systems "backward looking" but as someone posted last week, the ANS model is worse at predicting game outcomes than an average computer model. So what gives?

  19. Anonymous says:

    For those of us in office confidence pools who have to turn in our picks by tomorrow night: is there a way for us to calculate any of the upcoming WPs on our own?

  20. Brian Burke says:

    The ANS model is not worse at predicting game outcomes than 'an average computer model.' That is false. Over a period of a few weeks here and there, all models will have good streaks and bad streaks. Go back several years and you'll find a much different result.

    The question is: why is CAR losing despite very good on-field production?

  21. Anonymous says:

    As Chase pointed out, you seem to be alone in saying CAR has shown very good on-field production.

    But hey, you managed to misrepresent the works of Doug Drinen and Aaron Schatz in one fell swoop!

  22. Anonymous says:

    In an alternate universe with slightly different luck, Carolina is 5-3 and no one is saying anything. Ignore the trolls Brian.

    On a side note, I've noticed you've been a lot more apologetic about the model this year. I miss the old aggressive, almost militant, Brian.

  23. Kulko says:

    Well, with different luck they'd be an average team, and Brian wouldn't be asking this question if his model had the Panthers at 10-14. But at 5th, you would expect a team whose expected record is more like 6-3 or 7-2, and that's quite far from 2-8. OTOH, there are teams like this every year in every system, simply because sometimes a team hits all its completions/incompletions on third down, and then the results come out much better than the average play result would suggest. These things tend to go away in the long run, but for at least one team, the long run usually only arrives next year.
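    A quick back-of-envelope on that gap, treating each game as an independent coin weighted by CAR's 0.61 GWP from the table. This i.i.d. assumption is a simplification; the true game-by-game probabilities vary with the opponent:

```python
from math import comb

# If CAR really were a 0.61 GWP team, how likely is a start this bad?
# Binomial probability of at most 2 wins in 10 i.i.d. games.
p, n = 0.61, 10
p_at_most_2_wins = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                       for k in range(3))
# Comes out around 1%: rare, but across 32 teams and many seasons,
# a team like this shows up now and then.
```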

  24. Kulko says:

    Another thing I am wondering about when looking at the rankings is whether AYPA is the best possible metric for the passing game. I think it undervalues the 10-yard, first-down nature of the NFL rules to a certain extent, and I wonder if adjusted EPA per attempt would do a better job here. (By adjusted I mean accounting for sacks and capping interception returns at 10 yards or similar.) I have been trying to rebuild the model from 2011 data, but I have realized that the function for EPA is not publicly available.

  25. Ian Simcox says:

    @Kulko EP data is here

  26. Jeff says:

    The "coaching" metric discussion comes up frequently in the comments. And I tend to agree with the comment made by one user that the answer that it's "embedded in the metrics" is insufficient--though probably partially true. I wish I had both the time (and the skills) to try the test the following metric, but I'll offer it up and perhaps it will inspire some thought (and improvement on the base idea) on a way to include "coaching" in the model.

    My thought on a possible coaching metric would be to look at play-by-play data and calculate some kind of score (using either WPA or EPA) based on the difference between "optimal" play selection and "actual" play selection. For instance, it's been said that teams should pass more on first and second down, and run more on third down.

    Let me use real names to demonstrate. Say that on 1st and 10 from the 20, midway through a close game in the third quarter, Mike Smith's Falcons throw the football. Presuming that a pass in this situation is "more optimal" than a run, Mike Smith receives an EPA and/or WPA score equal to the difference between the EXPECTED outcomes of a pass versus a run. The ACTUAL outcome is irrelevant for Mike Smith's score. Now say that pass goes for 8 yards, and the Falcons face 2nd and 2 from the 28. We do the same thing with the "optimal" call for the next play. Say it "should" be a run (I have no idea; in fact, I imagine they should still pass, but just as an example), and say they pass and pick up 10 more yards. If the "optimal" play "should" have been a run, then the difference between the expected EPA/WPA of a run versus a pass would be negative, and Smith would get a reduction in his score.

    Essentially, it's not too different from the 4th down "go for it" calculations we see posted on this site every week (particularly when coaches fail to go for it when they "should"). We can calculate how much expected WPA or EPA differs between the "optimal" choice and the actual choice. The actual result is irrelevant.
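    The mechanism above can be sketched in a few lines. Everything here is hypothetical: the situations, the calls, and the toy expected-point values. A real version would need an EP or WP model fit from historical play-by-play data:

```python
def coach_score(plays, expected_points):
    """Sum over plays of EP(actual call) - EP(best available call).

    plays: iterable of (situation, actual_call) pairs.
    expected_points(situation, call) -> expected points for that choice.
    Only the choice is scored; the actual outcome of each play
    never enters the total.
    """
    total = 0.0
    for situation, actual_call in plays:
        best = max(expected_points(situation, c) for c in ("run", "pass"))
        total += expected_points(situation, actual_call) - best
    return total  # 0.0 = always optimal; more negative = costlier choices

# Toy example: suppose on 1st-and-10 a pass is worth 1.1 EP and a run 0.9.
toy_ep = {("1st-and-10", "pass"): 1.1, ("1st-and-10", "run"): 0.9}
score = coach_score([("1st-and-10", "run")], lambda s, c: toy_ep[(s, c)])
# score is -0.2: the run "cost" 0.2 expected points versus the pass
```

Per the comment's own suggestion, one could restrict `plays` to high-leverage situations only, where the play-caller's influence is largest.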

    In as much that coaches tend to be non-optimal in the NFL (as evidenced by this site's analysis), I would expect most coaching EPAs and WPAs to be negative. But maybe "less negative" is better than "really negative". And maybe Ron Rivera (who's made some rather dubious 4th down decisions) would have one of the REALLY negative WPAs and EPAs.

    We could use league baselines or team baselines to determine "optimal" choice (though I'm guessing team baselines would have much more limited data and reduce any potential significance--but maybe not). Perhaps we limit it to "high leverage" situations only (end of half, end of game, OT), similar to how the model limits baseline calculations to "normal" situations. Limiting to high leverage situations probably is better given it's those situations where coaching decisions probably have the greatest impact.

    I'm aware there are plenty of deficiencies. The fact Peyton Manning pretty much calls the plays would make it seem unfair to punish/reward John Fox for play call decisions. But it may serve as a starting point at least. And if we limited measurement of the coaching metric to high leverage situations only, I would argue Fox has a larger role (though not too vociferously).

    Just an idea. I'm sure I've overlooked deficiencies that may be obvious to some that would invalidate this approach as being meaningful. But perhaps it will spur some thinking on it from people far brighter than I am.

  27. Anonymous says:

    Brian, if you have any semblance of integrity left you won't even think about an idiotic "coaching" metric.

  28. James says:

    I think Jeff has a really good beginning of a plan here. Anon, I ask you why you think we shouldn't rank coaches? Certainly a team that has an excellent AYPA but runs 75% of the time will not be accurately ranked by Brian's model, and that's a reflection on coaching outside of scheme. Even more so for an aggressive coach on 4th downs versus one that kicks all the time.

  29. Jon Jackson says:

    Did anyone actually see the TB-Carolina game Sunday? I just watched the game, and all this BS talk of 4th down decisions is moot if the refs make the right call on the Martin touchdown with 10:53 remaining. An obvious missed call.

    But it didn't matter 'cause the panthers defense stinks.

    You guys spend so much energy on academic questions. The real question is who will win this week.

    If the metric system used here can't do any better than the Vegas point spread or the canned computer software... what is the point?

    Present your metric on a spreadsheet available for all to use and analyze. I'll present mine, and may the best metric win.

  30. Mike M. says:

    According to my metrics, Carolina should have won the game by about 7 points based on the efficiency stats in the game. About 85% of the teams that are supposed to win do win in my metric, and most of the losses come in games that should be decided by 3 points or less.

    Their loss is likely a combination of coaching and pure luck.

    With all that said, I think one of the reasons Carolina is so highly rated here is because of opponent adjustments.

    Let me explain, and by the way I've followed stuff like this for 30 years in the NFL.

    Teams don't play at their averages game after game. Sometimes they play at extreme high levels, and when this happens, of course that team builds up huge efficiency stats in those games and looks better than it is. Because this extreme high level of play is not maintainable, "a regression to the mean" happens to the team.

    There are things you can look for that point to when a regression to the mean is coming. Sometimes I can be off by 1 week, but rarely will I be off by more.

    Carolina has played a number of opponents this season that were ready for a regression to the mean. In other words, teams that were coming off extreme levels of play and had built up their efficiency stats to high levels, so Carolina received a lot of credit for opponent adjustments. And Carolina played those teams well, because those teams had their regression to the mean in the Carolina game.

    This happened in the Tampa game, where Tampa was due for a regression to the mean, and it did happen, but Tampa was very fortunate to pull that game out.

    Carolina also played Atlanta earlier in the year in a similar situation, where they almost won against a team coming off a number of strong games.

    An example of this happening this weekend is Seattle vs. Miami. Seattle comes in off an extreme level of play that is simply not maintainable, and Miami comes in off an extreme low level of play that is not likely to continue.

    A regression to the mean will likely happen. Miami would be the right play against the spread at +3.5, and will in all likelihood win straight up, and many will be sssooooo surprised by this.

    If you look at the probabilities, the model has Seattle at 61% because Seattle is coming off extreme levels of play and has built up its efficiency stats in the process. What the model does not know is that Seattle's extreme level of play is not maintainable.

    Like I said, I could be off by 1 week, but probably not.

    When you add it all up, TEAMS DO NOT PLAY AT THEIR AVERAGES EACH WEEK. Sometimes they play well above, and that almost certainly means they will play below at some point soon.

    And this is one reason why a model cannot achieve a very high rate of success.

  31. Anonymous says:

    Mike M,

    You just described one of the reasons why this model works so well.

  32. Brian Burke says:

    "Brian, if you have any semblance of integrity left you won't even think about an idiotic "coaching" metric."

    Umm...not sure how to interpret that from some random internet commenter. Pretty confident in my integrity, thanks.

    "As Chase pointed out, you seem to be alone in saying CAR has shown very good on-field production. But hey, you managed to misrepresent the works of Doug Drinen and Aaron Schatz in one fell swoop!"

    Misrepresentation is a strong charge, Mr. Anonymous. If Doug or Mr. Schatz claim their models are not backward-looking, you should instantly ignore them. However, I doubt either would make that claim. But hey, keep trolling, troll.

  33. Jon Jackson says:

    Mike M.,

    Are you stating you have a different metric than the one presented on this web site? (And by the way, where are the spreadsheets for your game win probability predictions and team efficiency? They are probably in an easy-to-find place, but I can't find them. I'd really like to study them.) Or do you have a different rating system? (Again, I would like to see that as well.) Your logic above sounds good, and I fully accept that no team ever plays exactly to its average. However, the goal (at least my goal) is to develop a system that minimizes team performance variance and maximizes team performance predictability.

    I understand your concept of a "regression to the mean." However, for Carolina to have such an inflated team efficiency rating, practically all their opponents would have to have been playing above their respective means, in 6 of their 8 losses, for those lower-ranked teams to beat them:

    TB, rated 16th (and falling): 2 losses
    NYG, rated 8th: a 29-point blowout (the Super Bowl champs must have really been up, to beat the 5th-rated Panthers by 29 points)
    Atlanta, rated 7th
    Dallas, rated 14th
    Chicago, rated 13th

    6 losses to teams ranked lower than the Panthers. Am I to believe that all 6 were playing above their mean? Occam's Razor is the principle that among competing hypotheses, the one that makes the fewest assumptions should be selected.

    So we either assume that 6 bad teams played really, really well against a great (2-8) Carolina team, or we assume that Carolina is just not very good. Which requires fewer assumptions?

    In chess, if you lose to a lower-rated player, your US Chess Federation rating drops much more than if you lose to a higher-rated opponent.

    Why doesn't that work here?

  34. Jason says:

    @Jon Jackson, losing to lower-ranked teams doesn't work that way here because whether a team won or lost is irrelevant to this metric. What is relevant is that the metric is based only on the most repeatable, predictive stats.

    So when you see a team ranked much higher than its win-loss record suggests, it must mean the team is losing because of non-predictive factors that are not included in the metric.

    I think special teams is probably part of the explanation. Red zone performance, fumble recovery luck, 4th quarter defensive collapses, 3rd down conversions, general high leverage situation failures and on and on... there are many things that could be at play here.

    However, I think people are really overreacting here just because one team seems to be so off. It would make more sense to wait until the season is over and then examine the other statistics not measured by the model and see if including them adds to the model's predictive value.

  35. Mike M. says:

    Jon Jackson --- you're right, my explanation is only a partial reason, not the only reason, why Carolina is so highly ranked.

    Seattle had their regression to the mean, just like I suggested they would, and lost the game.

    Looking at Seattle's passing efficiency stats, they were outplayed by Miami by 1.7 yards per pass attempt.

    That's a whopping 3 full yards below their average.

    The model, using averages, had no clue Seattle was due a regression to the mean, and therefore favored Seattle at 61%.

    None of the major web sites, CHFF and FO included, have figured this out................ yet.

    Now, because Miami beat Seattle in passing efficiency by a fairly good amount, the model will over-rate that performance, not knowing it was actually due to happen. It wasn't so much great play by Miami as it was a regression to the mean by Seattle.

    Next week we have more teams ready for a regression to the mean. I'll post those in the team efficiency rankings for next week.

  36. Anonymous says:

    Mike, what about the fact that Brian's model DOES regress team efficiency stats?

  37. J.R. says:

    Didn't you already show that the NFL is indistinguishable from a league where 40% of the games are decided by luck? What if Carolina just happens to have 5 losses on their schedule that all came up tails? That's a 1-in-32 occurrence, which might be observed in any season. If they had won all five of those games, they'd be 9-2... as good as the Ravens' record.
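    The 1-in-32 figure checks out directly, under the comment's assumption that each of those five games was an even coin flip:

```python
# Five independent 50/50 games all coming up losses.
p_five_tails = 0.5 ** 5
# 0.03125, i.e. exactly 1 in 32
```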

    And look at where the Ravens are. 9-2 with absolutely no reason to be there except guts and luck.

  38. Mike M. says:

    The model may regress stats; I'm not sure of your point. Seattle was 61% to win, I told you before the game that was not accurate, and it wasn't. The model had no clue Seattle would play so far below its averages.

    JR --- Good point. Some of that 40%, or whatever percentage it is, that gets attributed to luck isn't luck at all, but a lack of understanding of how to use stats properly, which I pointed out before the Seattle game.

  39. Jason says:

    @Mike M, just because you predicted one game correctly doesn't mean your explanation was correct. A predicted 61% chance to win means a 39% chance of losing.

    It's a truism that teams regress toward the mean, but there isn't a set number of games after which they regress. It's just like flipping a coin: after 10 heads in a row, the next flip is not more likely to be tails because the coin "must" regress to the mean. Of course, if after a few heads you predict the coin will regress to tails, you'll sometimes be right, and if you keep making that prediction, you'll eventually be right, but that doesn't make the reasoning sound.
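    The coin-flip point is easy to verify by simulation. The streak length and sample size here are arbitrary choices for the sketch:

```python
import random

# After a streak of heads, is the next flip more likely to be tails?
random.seed(42)
after_streak = []
while len(after_streak) < 5000:
    flips = [random.random() < 0.5 for _ in range(4)]
    if all(flips[:3]):                 # found a streak of three heads
        after_streak.append(flips[3])  # record the flip that follows it

rate = sum(after_streak) / len(after_streak)
# rate lands near 0.5: the streak carries no information about the next flip
```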

    Seattle's loss could just as well be explained by the very long distance they had to travel, having a larger road performance drop-off than most teams, Miami playing better than usual, and of course just plain luck.

    Obviously their defense let them down in the 4th quarter, but their offense played fine - especially the passing game, at about 8.6 YPA. So was it just their defense that had been playing too well the last couple of games and had to regress (in the 4th quarter)?

  40. The Oracle on Dayton Street says:

    Let me cut through the crap and answer your question: Carolina loses games it shouldn't because Ron Rivera sucks. Simply put, he is not a bright guy. One piece of evidence: he benches his $43 million RB DeAngelo Williams all year, and now he tells us how great he is.

    The guy is dumb.

Leave a Reply

Note: Only a member of this blog may post a comment.