Game Probabilities Week 14

Win probabilities for week 14 NFL games are listed below. The probabilities are based on an efficiency win model explained here and here with some modifications. The model considers offensive and defensive efficiency stats including running, passing, sacks, turnover rates, and penalty rates. Team stats are adjusted for previous opponent strength.
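
As a rough illustration of the opponent-strength adjustment, here is a minimal Python sketch. It is not the site's actual code; the team names, passing-efficiency numbers, and the simple single-pass averaging scheme are invented for the example.

    # Minimal sketch of adjusting raw efficiency stats for opponent strength.
    # All numbers below are invented; a real implementation would pull the
    # season-to-date stats for every team.

    LEAGUE_AVG = 6.0  # hypothetical league-average passing efficiency (yds/att)

    # Raw passing efficiency gained (offense) and allowed (defense).
    raw_off = {"SD": 7.1, "OAK": 5.2, "BAL": 5.9, "WAS": 6.3}
    raw_def = {"SD": 6.4, "OAK": 6.8, "BAL": 5.1, "WAS": 6.0}
    opponents = {"SD": ["OAK", "BAL"], "OAK": ["SD", "WAS"],
                 "BAL": ["WAS", "SD"], "WAS": ["BAL", "OAK"]}

    adj_off, adj_def = {}, {}
    for team, opps in opponents.items():
        # Credit offenses that faced stingier-than-average defenses...
        def_faced = sum(raw_def[o] - LEAGUE_AVG for o in opps) / len(opps)
        adj_off[team] = raw_off[team] - def_faced
        # ...and defenses that faced better-than-average offenses.
        off_faced = sum(raw_off[o] - LEAGUE_AVG for o in opps) / len(opps)
        adj_def[team] = raw_def[team] - off_faced

    for team in sorted(adj_off):
        print(f"{team}: off {raw_off[team]:.1f} -> {adj_off[team]:.2f}, "
              f"def {raw_def[team]:.1f} -> {adj_def[team]:.2f}")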

Pwin  GAME         Pwin
0.09 OAK at SD 0.91
0.54 WAS at BAL 0.46
0.19 JAX at CHI 0.81
0.87 MIN at DET 0.13
0.31 HOU at GB 0.69
0.10 CIN at IND 0.90
0.53 ATL at NO 0.47
0.39 PHI at NYG 0.61
0.09 CLE at TEN 0.91
0.62 MIA at BUF 0.38
0.16 KC at DEN 0.84
0.65 NYJ at SF 0.35
0.04 STL at ARI 0.96
0.29 DAL at PIT 0.71
0.69 NE at SEA 0.31
0.27 TB at CAR 0.73


11 Responses to “Game Probabilities Week 14”

  1. Unknown says:

    Can you post how accurately your model has predicted game outcomes?

  2. Unknown says:

    Have you done any analysis on how defensive scheme (i.e., 3-4 vs. 4-3) impacts defensive rankings?

  3. Anonymous says:

    BUF game is in TOR.

  4. Anonymous says:

    Here is my analysis, which covers weeks 4 to 17 of 2007 and weeks 4 to 13 of 2008, and is in terms of the home team.

    Prediction range    Avg     W     L     W%
    90-99              93.0    21     3   87.5
    80-89              84.1    42     8   84.0
    70-79              74.2    18    20   47.4
    60-69              64.6    35    16   68.6
    50-59              54.2    20    17   54.1
    40-49              45.2    14    21   40.0
    30-39              34.2    14    22   38.9
    20-29              25.5     9    19   32.1
    10-19              15.1     4    17   19.0
    0-9                 7.7     4     8   33.3
    TOTAL                     181   151
    AVG                55.7               54.5

    CORRELATION BY GROUP .904

    Another way to look at it is that it is correct about 67% of the time.

    I hope the columns come out lined up correctly and readable.
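
    A minimal Python sketch of this kind of bucketed check, assuming you have a list of (predicted home win probability, home team won) pairs; the toy game list is invented, and statistics.correlation requires Python 3.10 or later.

        # Bucket predictions by 10%-wide probability ranges and compare the
        # nominal probability to the observed home-team win rate (toy data).
        from statistics import correlation  # Python 3.10+

        # (predicted home win probability, 1 if the home team won else 0)
        games = [(0.91, 1), (0.46, 0), (0.81, 1), (0.13, 0),
                 (0.69, 1), (0.90, 1), (0.47, 1), (0.61, 0)]

        buckets = {}  # lower bound of range -> (home wins, games)
        for p, home_won in games:
            lo = min(int(p * 100) // 10 * 10, 90)   # 0, 10, ..., 90
            wins, n = buckets.get(lo, (0, 0))
            buckets[lo] = (wins + home_won, n + 1)

        mids, rates = [], []
        for lo in sorted(buckets):
            wins, n = buckets[lo]
            mids.append(lo + 5)                      # midpoint of the range
            rates.append(100 * wins / n)
            print(f"{lo}-{lo + 9}: {wins}-{n - wins} ({rates[-1]:.1f}%)")

        # Correlation between predicted range and observed win rate by group.
        print("correlation by group:", round(correlation(mids, rates), 3))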

  5. Brian Burke says:

    Thanks, jarhead. My numbers seem to differ from yours slightly.

    2007 was about 72% correct. I can't remember the exact decimal. I tracked it and touted it publicly every week, which I'm trying not to do this year. That figure might exclude week 17 (which is notoriously unpredictable), which could explain any discrepancy.

    This year is 98-47-1 for a 68% accuracy so far.

    Although it doesn't seem like the system is doing as well as last year, keep in mind there are more upsets in some years than in others. I use the Vegas consensus picks as my benchmark for accuracy. The opening line is only 66% correct so far this year, and the updated line at kickoff is not much better at 67% correct.

    That's all I expect, that on average, I'll be slightly ahead of the consensus favorites.

  6. Brian Burke says:

    No, I haven't done any comparisons of 3-4 vs 4-3 defenses to date.

  7. Anonymous says:

    Our numbers are very close; I show this year at 95-47-1 with 2 no bets (50/50). That is 145 total games. I also show that there were 145 games this year from week 4 to 13 in my system database. The differences are not worth double-checking week by week to resolve. I must have an error or two in my records.

    My 2007 record, from week 4 to 16 (not 17; error in previous post), shows 128-62 with 2 no bets.

    What I find amazing is how good your predictions are by percentage group.

  8. Brian Burke says:

    I think part of the difference is that I'm counting both no-bets as correct. If you check the comments on those posts, you'll see how the model worked.

    You're right about the percentage group. The accuracy of a probability model is not just in its % correct, but also in its calibration. For example, in 60/40 games, I want to be wrong 40% of the time.

  9. Brian says:

    Brian,

    Did you ever consider adding more weight to recent performance?

    Correct me if I'm wrong, but I believe your system puts equal weight on all weeks. So in predicting outcomes for week 14, week 1 is just as important as week 13. Do you think, or is there any evidence, that a team's recent performance is more indicative of future performance than its performance earlier in the season?

  10. Brian Burke says:

    Brian, good point. Yes, I have. Last year's system included double weights for the 4 most recent games. Using that method, accuracy improved by a game, maybe two. So there is evidence, but it's a question of balance.

    I decided not to do that this year because one of my goals is simplicity. If a model is so complex that no one can understand it, it loses its usefulness. Behind all the regression mumbo-jumbo, my model is pretty straightforward: team efficiency stats plus HFA are weighted according to how they correlate with winning, adjusted for previous opponent strength, then converted into probabilities using a logarithm (a rough sketch of that conversion follows at the end of this comment).

    One other thing I might test in the future is considering only recent-week performance. My model seems no more accurate in the later weeks than it was in the early weeks, despite the smaller sample size available early on.
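
    As a rough illustration of how weighted efficiency stats plus HFA might be turned into a win probability, here is a minimal sketch assuming a logistic (log-odds) form; the weights, the handful of stat differentials, and the example values are invented and are not the model's actual coefficients.

        import math

        # Hypothetical weights on opponent-adjusted stat differentials (home
        # minus away) plus a constant standing in for home-field advantage.
        WEIGHTS = {"pass_eff": 0.45, "run_eff": 0.25, "turnover_rate": -1.2}
        HFA_CONST = 0.36

        def home_win_prob(diffs):
            """Convert stat differentials into a home-team win probability."""
            log_odds = HFA_CONST + sum(WEIGHTS[k] * v for k, v in diffs.items())
            return 1.0 / (1.0 + math.exp(-log_odds))  # logistic function

        # Example: home team better through the air, a bit worse on the ground,
        # with a slightly higher turnover rate (all values invented).
        print(round(home_win_prob({"pass_eff": 0.8, "run_eff": -0.3,
                                   "turnover_rate": 0.005}), 2))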

  11. Unknown says:

    I have spent an exhaustive amount of time building a predictive model for NBA games, and I can tell you that pulling season-long stats typically gives me results no more accurate than using only results from within a recent window of games. For the NBA, the most accurate window seems to be the last 20 games played. Proportionally speaking, that is nearly identical to using the last 4 games of an NFL season.

    Statistically, this is counterintuitive, since we usually understand that a larger sample size equals greater accuracy. But from a sports perspective, this might make sense, because teams change and evolve so rapidly. If an NFL team was blown out in Week 1 of this season, does that really have any validity with regard to this week's numbers? I could make a cogent argument that it does not.

    Grabbing a smaller, more recent sample size (like the last 4 games) also would make the model more responsive (too responsive??) to streaks and recent trends. For example, you currently have Washington as a slight favorite over Baltimore. I don't think this would be the case if you were only compiling data from the last 4 or 6 weeks. Washington, while certainly being a decent team, is still living off some of the statistical glory from their solid start to the season. Meanwhile, Baltimore seems to be picking up steam.

    All this being said, I have found that in the NBA, compiling data on the last 20 games, while more accurate than using season-long stats, is only MARGINALLY more accurate. This seems to be consistent with your findings from last year when you placed double weight on the 4 most recent games. Yes, it's better (more accurate), but not by much, and there is a great value (elegance) in simplicity.
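
    To make the options discussed in this thread concrete, here is a minimal sketch comparing a full-season average, a last-4-game window, and double weight on the most recent games for a single team stat; the per-game values are invented.

        # Compare three ways of summarizing one team's per-game efficiency:
        # the full-season average, a recent window, and double-weighting the
        # most recent games (all per-game values below are invented).

        def season_avg(games):
            return sum(games) / len(games)

        def window_avg(games, n=4):
            recent = games[-n:]
            return sum(recent) / len(recent)

        def recency_weighted_avg(games, n_recent=4, recent_weight=2.0):
            weights = [recent_weight if i >= len(games) - n_recent else 1.0
                       for i in range(len(games))]
            return sum(w * g for w, g in zip(weights, games)) / sum(weights)

        pass_eff = [7.3, 5.1, 6.0, 6.8, 5.5, 6.2, 4.9, 5.0, 4.6, 4.8, 4.4, 4.7]

        print("full season:", round(season_avg(pass_eff), 2))
        print("last 4 games:", round(window_avg(pass_eff), 2))
        print("double-weighted recent:", round(recency_weighted_avg(pass_eff), 2))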
