Team Efficiency Rankings - Week 16

SF and DEN lived up to their rankings last week. In fact, 8 of the top 10 teams won last week, and the only two losses came at the hands of higher-ranked teams.

RANK  TEAM  LAST WK   GWP  Opp GWP  O RANK  D RANK
   1  DEN         1  0.71     0.49       3       2
   2  SF          2  0.69     0.51       4       1
   3  SEA         4  0.65     0.51       5       6
   4  CAR         3  0.63     0.52       6       7
   5  HOU         5  0.60     0.47       7       9
   6  ATL         8  0.59     0.50      11      16
   7  NE          6  0.59     0.52       1      31
   8  WAS        10  0.56     0.51       2      25
   9  GB          9  0.55     0.50      10      11
  10  NYG         7  0.55     0.53       8      23
  11  STL        13  0.53     0.52      15      12
  12  CIN        11  0.51     0.47      19      10
  13  PIT        15  0.51     0.48      22       5
  14  DET        12  0.50     0.49      12      19
  15  NO         20  0.50     0.53       9      27
  16  DAL        16  0.50     0.52      13      21
  17  CHI        14  0.50     0.51      29       3
  18  MIA        18  0.49     0.49      20      17
  19  NYJ        19  0.47     0.51      26       8
  20  TB         17  0.47     0.51      14      26
  21  BAL        21  0.47     0.50      17      18
  22  BUF        22  0.46     0.49      18      22
  23  CLE        23  0.44     0.47      25      13
  24  SD         24  0.44     0.50      28      14
  25  IND        25  0.44     0.46      16      30
  26  PHI        27  0.43     0.51      24      20
  27  MIN        26  0.42     0.50      27      15
  28  OAK        29  0.41     0.48      21      28
  29  TEN        28  0.41     0.48      23      24
  30  ARI        31  0.37     0.54      32       4
  31  KC         30  0.33     0.50      31      29
  32  JAC        32  0.29     0.49      30      32



27 Responses to “Team Efficiency Rankings - Week 16”

  1. Anonymous says:

    Good, b/c there's no excuse this week for why team ABC is ranked too high or low, since there is no need for one. Really. The rankings are a cult now in our community b/c they hit the nail on the head! Who cares if some obscure ESPN reader can't swallow the truth?

    As it turned out, the model was right in the past and will be in the future.
    Especially great calls on the ATL collapses in the past, and great calls on CAR beating ATL and SD recently.
    Great calls on the NYG last season too, and a great call on GB the year before.

    Greetings from Karl, Germany

  2. Anonymous says:

    Brian's system has had DEN / SF in the top 2 since week six. It has taken ESPN until week SIXTEEN to get those same teams in the top 2 of their power rankings. If that doesn't say it all, I don't know what does.

  3. Anonymous says:

    It says ESPN is god awful and Brian's rankings are markedly better, but still with room for improvement!

  4. David says:

    The model was wrong about Atlanta for about 8 games in 2010 before it was right about 1 playoff game. Any user of this site should understand that 1 game proves nothing.

    Let me be clear that I say that with a great deal of respect for this model in general. On the whole it is a great predictive tool and often outdoes common thought. My view on the Panthers all year was heavily affected by the model's output and the Panthers have certainly backed Brian up the last two weeks.

    Let's not pretend that the model doesn't have holes though and won't sometimes get it wrong. What's right for most teams might not always be right for all.

  5. Anonymous says:

    David, as far as I can tell, the model's only big hole is the lack of consideration of coaching with regards to game management.

    Of course the model will be wrong sometimes, because there's too much luck in the sport, but I believe that the model outdoes common thought at least 9 times out of 10. It's the fact that luck must often fall on the side of common thought that leads us to believe our intuition may have more value than it really does.

  6. Anonymous says:

    I'm curious about the Saints' ranking as it stands now. They were clearly a poor defense at the beginning of the year, but they have shown progress week after week and posted a shutout against what Brian is calling the 14th-ranked offense. Can this model (or any model) account for this improvement? Or will the first 4-5 games of the season keep their numbers low? Obviously cutting out games cuts out data, but can we really say those first few games are representative of the defense that will be playing for the Saints next week?

  7. Anonymous says:

    how does the model compare in predicting winners vs football outsiders? Or vs point differential or Sagarin ratings? Or simply vs Vegas lines?

    I haven't seen any multi year analysis of the accuracy of different systems.

    The fact that no one seems to print such things on these sites makes me think they probably don't provide much predictive value if any over very simple systems like point differential.

    My gut is pt differential is a ceiling and all the other work is just experts mucking things up.

    I would be thrilled to be proven wrong.

  8. Anonymous says:

    Brian posted his numbers vs. Vegas two years ago (AFAIR).
    He did pretty well...

    Karl, Germany

  9. bigmouth says:

    @David: Spot on. Was about to say the same thing myself.

  10. Anonymous says:

    It also completely ignores special teams. IIRC, this is because there's little year-on-year correlation in special teams. Of course, you wouldn't expect much because roster churn is so much greater there.

  11. Ian says:

    @Anon, RE: year-on-year correlation.

    I'm not sure how Brian made the choice to exclude special teams play in his model, but in his article on predictivity, Brian seemed concerned with correlation WITHIN a season, not BETWEEN seasons:

    "Consistency was measured by how well the stat correlates with itself. I broke each team's season into two alternating sets of games. There were 2 sets of 8 games, with set A comprised of a team's #1, #3, #5... games, and with set B comprised of a team's #2, #4, #6... games. A statistic's correlation coefficient between the two sets of games measures its consistency and how well we can rely on it as a predictor."
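    The split-half consistency check Brian describes in that quote is straightforward to sketch. The following is a hypothetical reconstruction, not his actual code; it assumes you have a dict of per-game stat values for each team in one season:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def split_half_consistency(stat_by_team):
    """Correlate each team's average stat over games 1, 3, 5... (set A)
    against its average over games 2, 4, 6... (set B), across teams."""
    set_a = [sum(g[0::2]) / len(g[0::2]) for g in stat_by_team.values()]
    set_b = [sum(g[1::2]) / len(g[1::2]) for g in stat_by_team.values()]
    return pearson(set_a, set_b)
```

    A stat that is a stable property of a team (like passing efficiency) will produce a high correlation between the two half-seasons; a noisy one (like special teams outcomes) will not, which is the rationale for excluding it as a predictor.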

  12. Adam H says:

    @anon: "The fact that no one seems to print such things on these sites makes me think they probably don't provide much predictive value if any over very simple systems like point differential."

    I doubt Brian's system would out-perform Vegas lines on a consistent basis, because the Vegas lines can (and do) use this model as well as other information that Brian chooses not to include (injuries, wisdom of crowds, etc.).

    But simple point differential?? Pleeeease. Brian would mop the floor with anything less sophisticated than the SRS.

  13. Paul Thomas says:

    @David: "Sometimes gets it wrong" is not the same thing as "has holes." A model that correctly identifies a team as a 3-to-1 favorite will "get it wrong," in the sense of incorrectly predicting that team to win, 25% of the time.

    It takes a great deal more work to show that a model has holes. I assume that the model has holes not because it doesn't predict with 100% accuracy (if such a model could be devised, football would be a solved game, as pointless to watch as competitive tic-tac-toe), but because anything humans do is pretty likely to not be done perfectly.

  14. James says:

    Anon, re: Saints,

    Brian has looked at this before, and essentially, weighting only recent results improves the model's performance for some teams, hurts it for others, and has no impact on a third group. Overall, overweighting recent games made minimal or no difference, so he weights all games equally.

  15. Eric Peterson says:

    anyone who thinks that the model definitely didn't work because it had a couple of outliers is a dope. anyone who now thinks that it definitely works because of one week is also a dope.

  16. Anonymous says:

    Thanks, Eric Peterson for the douchey but accurate reply.

  17. jditoro says:

    All this talk about whether the model is perfect or flawed, better or worse. I think this article from this site says about all there is to say, no?

  18. David says:

    @Paul: Brian attempted to explain Carolina's under-performance earlier in the year through game management. I don't want to get into an argument over semantics, but I think it's fair to say that game management is one example of a "hole" in the model. That doesn't make the model bad because I still agree with you that it outdoes common thought the majority of the time (although "at least 9 times out of 10" sounds strong). I certainly can't think of a good variable that would capture game management. I was essentially just pointing to the inherent limitations of any model.

  19. David says:

    I'll openly acknowledge that the only reason I'm posting here at all is because I feel like the model undervalues (perhaps just undervalued) the Falcons, which I realize gives me something in common with the three random people who post "this model sucks" most weeks. The difference is that I actually understand something about regression analysis and the Falcons have been outperforming the model consistently for three years now.

    Two years ago Brian posted about the Falcons after week 10 when they were 8-2 but 19th in the rankings. He concluded by saying that even a perfectly average team could finish 3-3 and end up 11-5. They went 5-1 AFTER they had already outperformed the model so significantly to justify a post.

    Last year the Falcons were 31st in the first rankings that were released. They finished 11th, not necessarily out of line with their tied-for-7th-best record, but particularly until week 12 or so it would be hard to argue the model was valuing them appropriately. This year, again the best record, but 6th in the rankings.

    Obviously the rankings are coming more in line with record so there's less reason for me to jump on my soapbox. Also I recognize that until the Falcons win a playoff game I'm wasting my breath on those of you who will defend this model no matter what.

    I wouldn't be posting if I didn't agree with you all that this is the best model out there. It is because I simultaneously respect the model and feel that it undervalues the Falcons that I was trying two years ago to point out reasons for the inconsistency so that Brian, who's infinitely more capable of addressing any "holes" than me, could consider ways to address them.

  20. David says:

    Towards that end, three areas worth looking at, though I acknowledge there might not be a way to address them in the model.

    1. Game management: this has been covered, but although Mike Smith isn't perfect, I would take him over just about any coach in the league on game management (and Matt Ryan as well).

    Just a few pieces of evidence: a field goal in 11 seconds to beat the Bears two years ago; becoming the first team ever to win after starting a drive from inside their own 5 with under a minute left, against the Panthers this year (two sides of the same coin); and being the first team I've seen park all its defenders along the sideline in the final seconds to prevent the other team from running plays to get out of bounds.

    2. Taking the foot off the gas: The Falcons have something like a 43-1 record under Smith when leading going into the fourth quarter (more support for game management). They do what it takes to win and no more (in stark contrast with a team like the Patriots). The Broncos game this year is a great example: the Falcons were up by over 20 points at one point. From that point on, they essentially just ran the ball, despite being completely ineffective because the Broncos knew what was coming (this is an admitted weakness of this year's team). That gave the Broncos a chance to get close but not enough time to actually win the game. It was clearly terrible for the team's efficiency ranking, though.

    Another semi-related example: on Sunday while the Seahawks were running fake punts in the fourth quarter up 30 points, the Falcons were playing their second unit and kneeling three times on the Giants 5 yard line.

    3. Penalty rate: Looking at the stats, the Falcons are essentially an outlier for penalty rate at .22. The next best is .30, then .36 after that. I know penalty rate is included in the model, but the Falcons' extreme level of discipline might be insufficiently accounted for.

    This will probably be my last post for another two years, but certainly happy to have a constructive conversation with people.

  21. Anonymous says:

    "Obviously the rankings are coming more in line with record so there's less reason for me to jump on my soapbox"

    This shows you understand absolutely nothing.

  22. MIKE M says:

    To say the model was ahead of some teams before ESPN hardly makes it special.

    I had the Redskins in the top 10 after 2 weeks; the model had them 30th. It took the model many weeks to catch on to the Redskins.

    Last season the model had the 49ers 15th when I had them in the top 5, and the model never did catch up with the 49ers.

    In situations like this he says it's the luck factor, which is bull.

    There are many, many factors that cause teams to win. There are some things I know that Brian and many others do not, and there are some things Brian knows that I do not.

    His model can only include things he knows and understands as can mine.

    My model differs from his which is why I will be ahead of his sometimes and his ahead of mine sometimes.

    I have outpicked the model the past few weeks right here in these pages and can do it again this week: the 49ers will beat Seattle.

  23. Brian Burke says:

    MIKE M-You leave the dumbest comments. Why don't you write up a full explanation of your methodology, as I have, and I'll post it on the community site? Also, how about if you state your picks *before the games*? Saying you outpicked a simple logit model in one or two weeks is 1) not verifiable, and 2) not impressive in any way.

  24. David says:

    Record was a bad choice of words, particularly with guys like you looking for any chance to discredit me. Performance might be better, but still flawed and backward-looking. Regardless, that statement wasn't meant to be statistical, just to say there's no reason for me to be on here every week arguing the Falcons are the first or third best team in the NFL, not sixth. I already made my case for the model having a history of undervaluing the Falcons, and I'll leave it at that. Thanks for the snark, though.

  25. David says:

    Clearly I have too much time on my hands around the holidays, but I can't help myself. 2009 pre-dates my visiting this site so I didn't include it before, but sure enough the Falcons went 9-7 and made the playoffs despite the league's 20th highest GWP at .46 (and 4th highest opponent GWP at .56).

    We have a four-year sample of games now, so let me do some back-of-the-envelope math (I realize there are tons of holes in this; that's why I said back of the envelope, and I encourage anyone to do something more exact). Since the start of the Mike Smith-Matt Ryan era, the Falcons have played 65 games, including Karl's favorite playoff games. They have won 44 of them. If we assume the Falcons had a 53% chance of winning each game, the likelihood of the Falcons winning 44 games or more is 1.2%.

    To clarify assumptions and anticipate some of your counters:
    -The 53% chance of winning each game was the average of the Falcons' GWP at the end of each season. Since the team's GWP has actually risen over the course of most seasons, this is probably a high estimate of the team's average GWP in any individual week. Factoring in the variance around the .53 would make a 44-win total even less likely.
    -Didn't account for opponents, but this seems reasonable since the average opponent GWP after the season is a little over .5
    -These endpoints are not at all arbitrary. I have argued that the Falcons have outperformed the model based on things like coaching, discipline, and game management, and this sample starts when the coach and QB responsible for that arrived
    -The argument doesn't hold up that "There are 32 teams in the NFL. Based on luck you would expect one to outperform its GWP by this type of margin". I didn't pick a random team after 4 years. This team was identified even by Brian as outperforming the model a third of the way into this sample. At that time I pointed out real, persistent reasons that could be the case, and those reasons and the wins have in fact persisted.
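    David's back-of-the-envelope figure can be checked with an exact binomial tail sum. This is an independent verification sketch using only the standard library; the 65 games, 44 wins, and 53% per-game probability are his numbers:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), summed exactly."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Probability of 44 or more wins in 65 games at 53% per game.
prob = binom_tail(65, 44, 0.53)
```

    The exact tail comes out close to his quoted 1.2%, so the arithmetic (whatever one thinks of the assumptions behind it) checks out.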

  26. captain lou says:

    For office pool purposes (bragging rights only), which would yield me a higher winning percentage: the efficiency rankings or the game probabilities?

  27. Brian Burke says:

    They are one and the same. The game probabilities are based on the rankings, with home-field advantage thrown in.
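    One plausible way to combine two teams' generic win probabilities (GWP) and home-field advantage into a single game probability is on the log-odds scale. This is a sketch of that general approach, not Brian's published formula; the 0.3 home-field log-odds term is an assumed illustration (roughly a 57% edge between two even teams):

```python
from math import exp, log

def game_prob(home_gwp, away_gwp, hfa_logodds=0.3):
    """Subtract the away team's log-odds from the home team's,
    shift by a home-field term, and map back to a probability."""
    lo = (log(home_gwp / (1 - home_gwp))
          - log(away_gwp / (1 - away_gwp))
          + hfa_logodds)
    return 1 / (1 + exp(-lo))
```

    With no home-field term, two .50 teams come out at exactly 50%, and the formula is symmetric: swapping home and away flips the probability.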
