How Accurate Is The Prediction Model?

A frequent question around here is 'What's your prediction model's accuracy?' The answer is that it varies from year to year, depending on how the schedule works out, with some random chance thrown in. In some years there just happen to be more evenly matched games, and in others there are more mismatches.

What's more important, at least in terms of accuracy, is what's called calibration. For example, when the model says a game is a 60/40 game, I actually want the model to be "wrong" 40% of the time. I want an accurate estimate of the game odds more than I want the model to favor the eventual winner.

Fortunately, reader 'K Rich' has tracked the model's performance since 2007 and sent me a thorough spreadsheet, and the chart below illustrates the model's calibration results.



For the most part, the bulk of the calibration error appears to be due to sampling error--there simply aren't that many games in each bin to be definitive. The calibration line zig-zags from one side of the optimum line to the other, as we'd expect. On the other hand, there appear to be some trends: the home team is over-favored in mismatches where it is the stronger team and under-favored in mismatches where it is the weaker team. It's possible that home field advantage is even stronger in mismatches than the model estimates.
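
For readers who want to run the same check against their own spreadsheet, the binning behind a calibration chart is simple. Here is a minimal sketch in Python (the function and input names are hypothetical): it groups predicted home-team win probabilities into bins and compares each bin's average prediction to the actual home win rate.

    # Calibration check: bin predicted home-team win probabilities and
    # compare each bin's average prediction to the actual home win rate.
    def calibration_table(predictions, outcomes, bins=10):
        rows = []
        for b in range(bins):
            lo, hi = b / bins, (b + 1) / bins
            in_bin = [(p, o) for p, o in zip(predictions, outcomes)
                      if lo <= p < hi or (p == 1.0 and hi == 1.0)]
            if not in_bin:
                continue
            avg_pred = sum(p for p, _ in in_bin) / len(in_bin)
            actual = sum(o for _, o in in_bin) / len(in_bin)
            rows.append((lo, hi, len(in_bin), avg_pred, actual))
        return rows

    # In a well-calibrated model, predicted and actual rates track each other
    # in every bin, up to sampling error in sparsely populated bins.
    preds = [0.62, 0.55, 0.71, 0.48]   # made-up example inputs
    wins  = [1, 0, 1, 1]               # 1 = home team won
    for lo, hi, n, pred, actual in calibration_table(preds, wins):
        print(f"{lo:.1f}-{hi:.1f}  n={n}  predicted {pred:.2f}  actual {actual:.2f}")

Those sparse bins are also why a zig-zag around the optimum line, rather than a perfectly smooth curve, is the expected picture.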

15 Responses to “How Accurate Is The Prediction Model?”

  1. Giles says:

    Was this written/conceived before your latest NY Times article? Because this is similar to what you mentioned there.

    Are you going to run the numbers for more seasons to see if it is consistent and will you change your algorithm as a result?

    I'm interested in how the predictions shake out for the rest of this season, as there have been some crazy wins and losses so far.

  2. Anonymous says:

    A lot of people are curious not just about which games the model gets right, but about the games in which your model's opinion is vastly different from the conventional wisdom. The Vegas betting lines can be converted into percentage chances of winning the game straight up. I think a lot of us are curious about how often the model is right when you and the oddsmakers have really different opinions.

    For example, you gave Cleveland a 72% chance of winning where Vegas said about 40%. You had Arizona at 52% where Vegas had them around 25%.

    These cases are far more interesting than the ones like Oakland, where both said an upset was unlikely and one occurred.
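
    One common way to do the conversion mentioned above is to turn each American moneyline into an implied probability and normalize out the bookmaker's vig. This is a generic sketch with made-up lines, not the commenter's actual numbers:

        # Convert an American moneyline to an implied straight-up win probability.
        def implied_probability(moneyline):
            if moneyline < 0:                      # favorite, e.g. -150
                return -moneyline / (-moneyline + 100.0)
            return 100.0 / (moneyline + 100.0)     # underdog, e.g. +130

        # The two raw implied probabilities sum to more than 1 because of the
        # vig, so normalize them before comparing with a model's estimates.
        home, away = implied_probability(-150), implied_probability(+130)
        total = home + away
        print(f"home {home / total:.2f}, away {away / total:.2f}")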

  3. Jim A says:

    You should just submit your predictions/ratings to Todd Beck's Prediction Tracker at http://www.thepredictiontracker.com/

    That way you can have your model tracked independently against dozens of other prediction models.

  4. Tarr says:

    Just from eyeballing that, I'd say it appears as though the system is a bit overconfident. A linear fit of the actual win rates would have a lower slope than 1:1, which means your predictions would be more on-target if you nudged all the predictions a bit towards 50%.

    It also seems like you're very very slightly undervaluing home field, but that could just be my eyes playing tricks on me or an artifact of the small sample size.
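
    The adjustment suggested above amounts to shrinking every forecast toward 50% by a constant factor; a minimal sketch, where the 0.9 factor is purely illustrative and would in practice be fit from past seasons:

        # Shrink a predicted home-team win probability toward 0.5 to correct
        # for overconfidence; k < 1 pulls predictions in, k = 1 leaves them alone.
        def shrink_toward_even(p, k=0.9):
            return 0.5 + k * (p - 0.5)

        print(shrink_toward_even(0.80))  # 0.77
        print(shrink_toward_even(0.40))  # 0.41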

  5. Chuck Winkler says:

    Do you have an archive for this prediction model?

  6. Unknown says:

    Reduce confusion and plot either the greater-than-50% cases or the less-than-50% cases, since they must be complementary. However you represent it, linear regression would give the appropriate corrections, which, when applied to the existing data, would give the actual accuracy of the methodology.

  7. Brian Burke says:

    Not quite. The probabilities are for the home team. Not complementary.

  8. Dave says:

    "I actually want the model to be 'wrong' 40% of the time."

    I get this point, but while this displays the model's accuracy, it doesn't necessarily say much about its precision. For instance, I could give a model that says "Home team always has a 57% chance of winning" and it would backtest extremely well, and likely do very well in the future. A better way to compare models would be to judge their accuracy and somehow account for how often and how far each strays from the safety of 57%. In an extreme (although impossible) case, the best model would be one that always predicts 100% or 0% for the home team and is always right.

    Does that make sense? Is there some way to measure that aspect of your model? It might not be too meaningful in a vacuum, but it would be a good way to compare against other models.

  9. Benjamin Morris says:

    I blogged briefly about this here:
    http://skepticalsports.com/?p=839

    Unrelated, Dave: Your point is valid and important. I'm working on a calibration method for these types of models that rewards accurately making bold predictions. It's more difficult when the outcomes are binary (as with wins rather than winning percentage per season), but it's done all the time in finance and other quantitative fields.

  10. Jim Fisher says:

    Brian - have you ever looked at using the efficiency stats of the past 4 games for the prediction model, as opposed to the whole season? I was reading "Take Your Eye Off the Ball" and they mentioned in there that coaches game plan off the past 4 games of their opponents. Either they are operating inefficiently there by not using all the data available (which should be relatively easy these days), or they understand that the past 4 games give a more accurate picture of a team than what it did several months before. I guess the best way to evaluate would be to see which has a better correlation to game stats: efficiency stats over the whole season or over the past 4 games.

  11. Brian Burke says:

    Jim - Yes, in fact in late 2007 and all of 2008, the model overweighted the last 4 games (by about double). It turned out to be no more predictive than weighting the entire season evenly. My rule is, all things being equal, go with the simpler, more elegant method.
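
    For concreteness, the overweighting described above is just a weighted average of per-game efficiency values with the most recent few games counted roughly twice; a sketch with hypothetical numbers:

        # Weighted average of per-game efficiency values, counting the most
        # recent `recent_n` games about double.
        def recency_weighted_mean(values, recent_n=4, recent_weight=2.0):
            weights = [recent_weight if i >= len(values) - recent_n else 1.0
                       for i in range(len(values))]
            return sum(v * w for v, w in zip(values, weights)) / sum(weights)

        yards_per_attempt = [5.8, 6.1, 5.5, 6.4, 7.0, 6.8, 7.2, 6.9]  # made-up game-by-game values
        print(recency_weighted_mean(yards_per_attempt))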

  12. Benjamin Morris says:

    In my research, I've found that not only is immediately previous performance less reliable than overall performance, but it's not necessarily even more reliable than less recent performance. E.g., in basketball, the 3rd, 1st, and 2nd quarters of the season are actually more predictive of playoff success (in that order) than the 4th.

  13. Tarr says:

    Dave, if we're talking about overall prediction accuracy, a good measure would be root mean square deviation. So, for each game, take the predicted percentage chance given to the home team, subtract the result (100% or 0%, or 50% for a tie), and square the difference. Average the squared deviations over all predicted games, and take the square root of that average.

    Benjamin Morris, I'm not saying I would have predicted that, but I can make a reasonable-sounding justification for why 3rd>1st>2nd>4th quarter of the season makes a lot of sense for predicting NBA results. I wouldn't be surprised if the order came out the same for the NFL, for roughly the same reasons.
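
    The root-mean-square measure described at the top of this comment, as a minimal sketch (input names are hypothetical; outcomes are 1 for a home win, 0 for a loss, 0.5 for a tie):

        import math

        # Root mean square deviation between predicted home-team win
        # probabilities and actual outcomes (1 = home win, 0 = loss, 0.5 = tie).
        def rmse(predictions, outcomes):
            squared = [(p - o) ** 2 for p, o in zip(predictions, outcomes)]
            return math.sqrt(sum(squared) / len(squared))

        # Lower is better; a constant "home team always has a 57% chance" model
        # gives the baseline Dave mentions.
        print(rmse([0.62, 0.55, 0.71, 0.48], [1, 0, 1, 1]))
        print(rmse([0.57] * 4, [1, 0, 1, 1]))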

  14. Buzz says:

    Do you have any data showing whether the model is more accurate in the second half of the year than in the first couple of weeks, when there isn't as much data? Or does the model have enough data to be just as accurate in week 4 as it is in week 15? Obviously last week's results were a good indication that the early-week results are pretty good.

  15. Anonymous says:

    David, dumb question here, but do your predictions take into consideration the line, or is your analysis straight up, team vs team?
