Week 10 Efficiency Rankings

A good showing against the Giants was enough to keep the Eagles at number one. The Redskins are still riding high after a bye, thanks to their extremely low turnover rate. Carolina's poor outing drops them two spots, and Chicago's loss to Tennessee drops them one. San Diego's performance against Kansas City didn't make anyone's eyes water, but their strong passing game is enough to keep them ranked very highly.

The biggest surprise still has to be Atlanta. The question is, are the Falcons for real?

They have solidly above-average offensive passing and running efficiencies, low turnover rates, and a slightly better than average pass defense. In fact, their offense ranks first this week. A 7.0 net yards per pass attempt and 4.7 yards per rush, combined with an almost non-existent fumble rate, tell me they deserve their 6-3 record. But are they racking up all those numbers against easy opponents? No, they've faced an average (19th toughest) schedule so far, including games against TB, CHI, PHI, CAR, GB and NO. Yes, they're for real.

The Titans have been slowly climbing the efficiency rankings in recent weeks. They played well against Chicago this week, and teams they previously played, such as the Colts and Ravens, have themselves been improving. Vince Young's weak passing numbers are steadily being drowned out each week by Kerry Collins' more efficient ones.

You might wonder how a 9-0 team could be ranked only 10th. As I wrote previously, a very weak schedule and some lucky calls are partially responsible. But they are also winning on the strength of defensive interceptions. They are among the league leaders in this category, and that certainly helps explain their record to date. But chances are they won't be able to sustain such a high rate going forward.

The 8-1 Giants' and 6-3 Ravens' rankings suffer from the same effect. It makes me question my own research showing that defensive interceptions are highly random and do not predict themselves. But keep in mind teams have played only one game more than half a season. Let's see how the interception numbers turn out in the 2nd half of the year.

RANK  TEAM  LAST WK   GWP   Opp GWP  O RANK  D RANK
  1   PHI      1      0.77    0.54      3       7
  2   WAS      2      0.74    0.50      6       5
  3   SD       4      0.69    0.49      2      19
  4   ATL      6      0.68    0.50      1      21
  5   CAR      3      0.67    0.54     17       2
  6   CHI      5      0.67    0.53     10       6
  7   NYG      8      0.66    0.46      8      11
  8   MIA      9      0.66    0.50      7      18
  9   PIT      7      0.64    0.51     28       1
 10   TEN     12      0.64    0.45     14       3
 11   ARI     11      0.60    0.49      4      23
 12   NO      10      0.59    0.55      5      22
 13   TB      13      0.58    0.53     18       4
 14   IND     16      0.57    0.53      9      14
 15   DAL     14      0.54    0.51     19       8
 16   GB      15      0.52    0.51     13       9
 17   MIN     17      0.51    0.53     21      10
 18   BAL     19      0.50    0.45     20      12
 19   NYJ     20      0.50    0.43     27      13
 20   BUF     18      0.50    0.46     25      15
 21   DEN     21      0.48    0.47     12      26
 22   NE      24      0.42    0.47     24      24
 23   JAX     23      0.41    0.44     11      27
 24   HOU     22      0.41    0.48     15      29
 25   SF      25      0.37    0.50     22      25
 26   CLE     26      0.33    0.53     16      28
 27   SEA     27      0.32    0.51     30      17
 28   OAK     28      0.30    0.55     31      16
 29   KC      31      0.26    0.53     23      31
 30   CIN     30      0.23    0.52     32      20
 31   STL     29      0.20    0.57     26      30
 32   DET     32      0.19    0.55     29      32

The to-date season efficiency stats are listed below.


The ratings above are listed in terms of generic win probability. The GWP is the probability a team would beat a notional league-average team at a neutral site. Each team's average opponent GWP is also listed, which can be considered a to-date strength of schedule, and all ratings include adjustments for opponent strength.

GWP is based on a logistic regression model applied to current team stats. The model includes offensive and defensive passing and running efficiency, offensive turnover rates, and team penalty rates. A full explanation of the methodology can be found here. This year, however, I've made one important change based on research that strongly indicates that defensive interception rates are highly random and not consistent throughout the year. Accordingly, I've removed them from the model and updated the weights of the remaining variables.

Offensive rank (ORANK) is the ranking of offensive generic win probability, which is based on each team's offensive efficiency stats only. In other words, it's the team's GWP assuming it had a league-average defense. DRANK is a team's generic win probability rank assuming it had a league-average offense.
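Concretely, the GWP and ORANK computations described above can be sketched as follows. The post doesn't publish the actual regression coefficients, so the weights, the stat names, and the `gwp`/`offensive_gwp` helpers below are all illustrative assumptions, not the real model.

```python
import math

# Illustrative weights only -- not the model's actual coefficients.
# Signs follow the text: better offensive efficiency raises GWP;
# yards allowed, turnovers, and penalties lower it.
WEIGHTS = {
    "o_pass_eff":  0.5,   # offensive net yards per pass attempt
    "o_run_eff":   0.1,   # offensive yards per rush
    "d_pass_eff": -0.5,   # net yards per pass attempt allowed
    "d_run_eff":  -0.1,   # yards per rush allowed
    "o_int_rate": -6.0,   # offensive interceptions per attempt
    "o_fum_rate": -6.0,   # offensive fumbles per play
    "pen_rate":   -1.0,   # penalty yards per play
}

def gwp(team, league_avg):
    """Generic win probability vs. a notional league-average opponent.

    Stats enter the logistic model as differences from league average,
    so a perfectly average team lands at exactly 0.5.
    """
    z = sum(WEIGHTS[k] * (team[k] - league_avg[k]) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def offensive_gwp(team, league_avg):
    """GWP assuming a league-average defense (the basis for ORANK)."""
    hybrid = dict(team)
    hybrid["d_pass_eff"] = league_avg["d_pass_eff"]
    hybrid["d_run_eff"] = league_avg["d_run_eff"]
    return gwp(hybrid, league_avg)
```

Ranking all 32 teams by `offensive_gwp` would give ORANK; swapping in league-average offensive stats instead would give DRANK.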


12 Responses to “Week 10 Efficiency Rankings”

  1. Anonymous says:

    Hey Brian... So I'm new to the site (it's GREAT!) and obviously I've been spending probably way too many hours "catching up", but I did have a couple of questions regarding this last chart you've posted regarding the "To Date Season Efficiency Stats."

    You mentioned that you've removed the D Int Rate from the model, and yet you've got it listed in the chart. I also notice you don't have a column for D FFumble Rate on the chart.

    Is that just a mistake, and that column actually IS the D FFumble Rate mislabeled? Or have you removed that variable as well, and though it's not in the model, you just still chose to continue to show the D Int Rate?

    The mind is a little hazy with all of the reading of all of your posts, and trying to process it all in a short time, so perhaps I've missed a portion of a previous post where you chose to eliminate the D FFumble Rate.

    More than likely that's the case, but I just saw the discrepancy, and having read that you found a strong correlation with the D FFumble Rate, I just thought i'd ask you to clarify if it's truly out of the model, along with the D Int Rate, or if there's another explanation.


  2. Unknown says:

    I'm interested to know why you have the Ravens at 18. What explains the difference between your stats and Football Outsiders', who rank the Ravens 4th?

  3. Brian Burke says:

    I dropped Def FF rate a couple years ago. It was hurting the model and didn't help accuracy much at all, even retrodictively. Since then I've also determined that def int rate does not provide any predictive value. I include it in the table because it still explains a good part of why teams won in the past, even though it may not predict who will win in the future. It also helps explain why my model's rankings differ from some of the other "power ratings" out there.

    Just to clarify, neither one is in the prediction/ranking model.

  4. Anonymous says:

    What's the theory behind adjusting strength of schedule (or opponent's win probability, in your case) after the game has already been played? So if Team A played teams in the beginning of the season that were playing shitty but then gradually improved during the season, Team A's opponent strength (for those specific opponents) would increase, because the opponents played better after Team A played them. That doesn't make too much sense.

    Playing devil's advocate, I guess you could say that if the opponent were truly good, but the stats just hadn't "caught up" with them yet until the end of the season, then maybe. But I think the first scenario is at play, where some teams go through ebbs and flows.

    What do your correlations (or coefficients, whatever they're called) unadjusted for opponent strength and with zero phantom games look like? - Andy

  5. Anonymous says:

    You wrote that, "You might wonder how a 9-0 team could be ranked only 10th. As I wrote previously, a very weak schedule and some lucky calls are partially responsible."

    Weak schedule, yes. But how would even an exorbitantly high number of lucky (random) calls affect the stats on which your system is based? And you're implying by referring to the lucky calls that they should have lost. Granted, but how would even a loss to the Ravens affect your system's ratings? -Andy

  6. Anonymous says:

    Oh I see. You're saying the reverse. That they probably shouldn't be 9-0. So those lucky events don't affect your stats, but they affected the outcome of the game, their winning percentage and thus the perception that they are the best team in the league or at least better than 10th. - Andy

  7. Brian Burke says:

    Jacob-Good question. I don't know exact weights for DVOA, so it's hard to say for certain.

    A couple ideas, though--Looking at DVOA, they have BAL's defense ranked incredibly high. This has to be based on 2 things: their def ints and their def run eff.

    As I've mentioned elsewhere, def ints are primarily a function of 2 things: the QB throwing the ball and luck. So even though lots of def ints help explain past wins and tell us a lot about the QBs BAL has faced, they don't tell us anything about what their future def int rate will be.

    Def run eff is the least important facet of football. In simple terms, it correlates the least with regular season wins. I suspect DVOA may put an undue amount of weight on stopping the run. But that's just a guess.

    One thing we do know for sure about DVOA is that it overweights performance in the red zone. This is going to capture a tremendous amount of noise/randomness/luck. One lucky play in the red zone can make up for an entire half of sub-par play. I think that's poor modeling. DVOA will therefore match most people's intuitive sense of which teams are good and bad, but ultimately it will be overfit to the unique non-repeating circumstances of past events.

    But what I really hope is that I'm way off and I don't know what I'm doing! I hope the Ravens really are a top 10 team! GO RAVENS!

  8. Brian Burke says:


    I think the best way to answer your question is just to describe how I think about the epistemology of football.

    Let's say every team has a true but hidden "power rating" knowable only in the mind of God. Every snap, drive, and game reveals a small amount of evidence about what that "true" rating really is.

    Game-to-game variability in performance is enormous in the NFL, as it is in most sports. One game tells us almost nothing about a team. Two games tell us a little more than almost nothing. After 4, 6, or 9 games, we still only have a rough estimate of that true rating.

    Yes, teams can fundamentally improve over the course of the season (or more likely decline due primarily to injuries). But that "signal" of fundamental improvement or decline is overwhelmed by the "noise" of the game-to-game variability. So when we estimate the "true" rating of a team, I think we're far better off using as many games as possible to cancel out the noise and hear the signal.

    Here's an example. San Diego was 1-3 going into week 5 last year, with stats that indicated they would likely win only 4 games all season. Later in the season, when I calculated the strength of schedule for the team that played them in week 5, should I have used the 1-3 Chargers of the moment, or the 11-5 Chargers we knew they really were? Did they improve, or was a sample of 4 games too small to draw conclusions from?
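    The sample-size point can be made with a quick simulation. The 0.7 true win probability and the other numbers below are just illustrative assumptions, but they show how often a genuinely good team can look mediocre after four games:

```python
import random

def simulate_records(true_win_prob=0.7, games=4, trials=10_000, seed=1):
    """Estimate how often a team with the given true per-game win
    probability wins at most 1 of its first `games` games, i.e. how
    often a good team *looks* bad over a tiny sample."""
    rng = random.Random(seed)
    bad_starts = 0
    for _ in range(trials):
        wins = sum(rng.random() < true_win_prob for _ in range(games))
        if wins <= 1:
            bad_starts += 1
    return bad_starts / trials
```

    A team with a 0.7 true per-game win probability (roughly an 11-win team) starts 1-3 or worse about 8% of the time, which is why a 4-game record is such a noisy estimate of the "true" rating.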

  9. Anonymous says:

    I just found this site a few days ago and love it. One question, where are you able to get your data from? I'd like to try out a few ideas but haven't been able to find any data in a spreadsheet format.

  10. Anonymous says:

    hello Brian,

    Jacob asked about how you have the ravens 18th and football outsiders 4th. you said you dropped def ff rate, and that def int rate is not predictive
    (certainly, there will be differences)
    however, I have studied Football Outsiders' methodology, and don't think those types of values make that big a difference. (I realize there are other differences) also, you have san diego 3rd and FO has them 18th.

    you call your yardage values 'efficiency' but it seems to me that it is more a case of raw yardage (efficiency would include context).

    I suspect the big difference between yours and football outsiders is context.

    either way, keep up the good work

  11. Brian Burke says:

    John-Thanks. I agree with your analysis. I define efficiency very simply--yards per play, just like efficiency in cars is miles per gallon. I don't use totals in the rankings.

    I think the contextual comparisons in DVOA are fine. It's a good way to examine how teams did in the past on a play-by-play basis. But the contexts don't tend to repeat. They are the unique circumstances of previous games, opponents, situations, etc. So I don't think DVOA would make a good predictive model.

    When I mentioned above how DVOA overweights red zone performance, that's what I was getting at. That's going to capture and then leverage a lot of luck/randomness.

    When you add it all up, DVOA is a very round-about way of getting to points scored and points allowed. Points are just the culmination of all of those contextual outcomes. This comment thread and an email from another reader got me interested, so earlier this week I took every team's DVOA and compared it to each team's points scored/points allowed difference.

    They correlate at 0.95. You'd obviously expect a strong correlation, but that's insanely high. The other 0.05 might just be from the 'D' part--opponent adjustments. So in the end, DVOA is really telling us about points from previous games, something we already know.
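    The comparison described here amounts to a plain Pearson correlation between each team's DVOA and its point differential. A minimal sketch (the data passed in would be the 32 team values; nothing here reproduces the actual 0.95 figure):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```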

  12. Anonymous says:

    thanks Brian, your last paragraph says a mouthful.
    at a corr of .95, which I would have never suspected, it does seem to tell us something we already know.

