Jay Cutler's Interceptions in Context

In my third installment this week at the Fifth Down Blog, I discuss Jay Cutler's interceptions. Although he threw 18 picks, the second most in the league in 2008, that number is misleading without context.

If you missed the second installment yesterday, don't worry. It was a rerun of my 'Vick is Underrated' article from a few weeks ago.


4 Responses to “Jay Cutler's Interceptions in Context”

  1. Will says:

    Brian, I left you a comment at the Fifth Down but I don't know if you saw it there. I know you can calculate a change in WP for a given play; this is how you came up with the best plays of the year for last season. Can you also calculate an average change in WP for a type of play? For example, can you find the average change in WP for a Cutler interception vs. a Favre interception vs. the league average, to see who throws more bad picks? I've long felt that many of Favre's interceptions equate to punts, as he throws it up deep in late-down, long-yardage situations. By the same token, it might be good to know which passers have the highest delta-WP per attempt, or which rushers most change their team's fortunes per rush.

    I know the stats you use to create the WP model are proprietary, but is there any way to publish the model parameters (the percent chance of winning for each combination of independent variables) so that we in the public can play with it without having to license the underlying statistics?

  2. Brian Burke says:

    Will - Great question. But first, let me address the proprietary issue. It's not really a matter of the data being proprietary; it's that the model can't be summarized with equations or simple algorithms.

    The model is basically a series of look-up tables for first-down WPs. The tables serve as anchor points, and the algorithm interpolates between the two most relevant tables. Then, for 2nd, 3rd, and 4th downs, I use some Markov process stuff to estimate the WPs. The tables themselves are based on raw winning % in various combinations of score, field position, and time remaining. The raw data can be very noisy, so the tables are smoothed using a variety of techniques: LOESS, regression, and just plain eye-balling it.
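    To make the interpolation step concrete, here's a rough sketch in Python. The anchor spacing, the table values, and the function name are all made up for illustration (here the anchors are spaced along time remaining); the real tables are far denser, and the Markov step for later downs is left out entirely.

        import bisect

        # Hypothetical anchor tables: at each time-remaining anchor (minutes),
        # map (score differential, yard line) to a smoothed first-down WP.
        ANCHOR_MINUTES = [5, 10, 15]
        WP_TABLES = {
            5:  {(3, 50): 0.72, (0, 50): 0.50, (-3, 50): 0.28},
            10: {(3, 50): 0.66, (0, 50): 0.50, (-3, 50): 0.34},
            15: {(3, 50): 0.62, (0, 50): 0.50, (-3, 50): 0.38},
        }

        def first_down_wp(score_diff, yard_line, minutes_left):
            """Interpolate first-down WP between the two nearest time anchors."""
            # Clamp to the range covered by the anchor tables.
            if minutes_left <= ANCHOR_MINUTES[0]:
                return WP_TABLES[ANCHOR_MINUTES[0]][(score_diff, yard_line)]
            if minutes_left >= ANCHOR_MINUTES[-1]:
                return WP_TABLES[ANCHOR_MINUTES[-1]][(score_diff, yard_line)]

            # Find the two anchors that bracket the current time remaining.
            hi_idx = bisect.bisect_left(ANCHOR_MINUTES, minutes_left)
            lo_t, hi_t = ANCHOR_MINUTES[hi_idx - 1], ANCHOR_MINUTES[hi_idx]
            lo_wp = WP_TABLES[lo_t][(score_diff, yard_line)]
            hi_wp = WP_TABLES[hi_t][(score_diff, yard_line)]

            # Linear interpolation between the two anchor tables.
            frac = (minutes_left - lo_t) / (hi_t - lo_t)
            return lo_wp + frac * (hi_wp - lo_wp)

        # Up 3 at midfield with 7.5 minutes left: halfway between the
        # 5- and 10-minute anchors, so WP is halfway between 0.72 and 0.66.
        print(round(first_down_wp(3, 50, 7.5), 3))  # 0.69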

    The best idea for sharing (so far) is the win probability calculator I built last season. You can use it to establish baseline parameters, evaluate the cost of interceptions, or run any other application you like. The only thing I ask is that authors who use it cite the reference.

    Ok. Great question on Favre and Cutler. You're getting a little ahead of me because I'm planning on publishing some neat stuff on individual player WPA (win probability added) this season.

    To make things a little easier on myself, I'll cite Broncos and Jets passing-game numbers, not necessarily Cutler's and Favre's, but I think they're identical for practical purposes. I'll also throw the league average at you.

    Denver's 18 INTs cost a total of -1.56 WPA (or, in a sense, lost 1.56 games). That averages to -.087 WPA/INT.

    New York's 23 INTs cost a total of -2.49 WPA. That averages to -.108 WPA/INT.

    You could say that the Jets' interceptions were roughly 25% more costly per pick than Denver's last year.

    For reference, there were 465 INTs in the league in 2008, costing a total of -46.98 WPA. That averages to -.101 WPA/INT. On average an interception costs a team a 10% chance of winning.
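    If you want to run this kind of breakdown yourself, the aggregation is simple once each interception has a WP delta attached. Here's a rough Python sketch; the records and WP values are invented purely to show the shape of the calculation, not the actual 2008 data.

        from collections import defaultdict

        # Each record: (team, WP before the pick, WP after the pick). Invented values.
        interceptions = [
            ("DEN", 0.55, 0.46),
            ("DEN", 0.30, 0.24),
            ("NYJ", 0.62, 0.49),
            ("NYJ", 0.48, 0.39),
        ]

        totals = defaultdict(float)
        counts = defaultdict(int)
        for team, wp_before, wp_after in interceptions:
            totals[team] += wp_after - wp_before   # negative delta = WP lost on the play
            counts[team] += 1

        for team in sorted(totals):
            avg = totals[team] / counts[team]
            print(f"{team}: total WPA {totals[team]:+.2f}, average {avg:+.3f} per INT")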

    Again, great question. I might turn this into a full post.

  3. Dave says:

    Brian, you should definitely turn that into a full post.

  4. Will says:

    Brian,

    I (and I think many readers) would be interested in more detailed analysis in this area. Baseball people have invented all kinds of metrics to compare players, and they try to evaluate the effectiveness of those metrics by how well they predict wins. I think WPA captures it all in a neat little nutshell, with lots of interesting applications.

    My apologies for not doing more background reading before I asked the model question. I assumed the WP model was an estimator over a large data space with some kind of k-nearest-neighbors sampling, but now that I think about the size of the database, that sounds pretty intense. The WP calculator is a great tool - I don't suppose you could make it available for batch processing (in your spare time...)?
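    Something like the following is what I have in mind for batch mode - a hypothetical win_probability() stand-in and invented game states, just to show the shape of the loop:

        # Toy placeholder WP function and invented plays; only the batch loop matters here.
        def win_probability(score_diff, yard_line, minutes_left):
            """Stand-in for a real WP lookup (time remaining ignored in this toy)."""
            lead_effect = 0.04 * max(-8, min(8, score_diff))
            field_effect = (50 - yard_line) / 500.0
            return max(0.0, min(1.0, 0.5 + lead_effect + field_effect))

        plays = [
            {"play_id": 1, "score_diff": 3, "yard_line": 50, "minutes_left": 7.5},
            {"play_id": 2, "score_diff": -3, "yard_line": 20, "minutes_left": 2.0},
        ]

        for play in plays:
            wp = win_probability(play["score_diff"], play["yard_line"], play["minutes_left"])
            print(play["play_id"], f"{wp:.3f}")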
