## Roundup 9/18/10

Here's a cool team ranking site. It's based on only score and home field advantage, but I like the algorithm, and the presentation is cool. It reminds me of Beat Graphs, which has a new look.

I really enjoy watching the NFL Films series America's Game. Now you can watch them on Hulu. Here's the Ravens' 2000 season. Man, am I glad Ray Lewis is on my team.

Florida Danny at Niners Nation does some extra homework on how bad NFL pre-season predictions really are.

A couple of interesting posts at Fifth Down. Week 1 winning teams make the playoffs 52% of the time, while week 1 losing teams make the playoffs only 23% of the time. (In case you think that this implies 75% of teams make the playoffs, like I did, that's not the case. 16 teams are 1-0, and half of them--8 teams--make the playoffs. 16 teams are 0-1, and a quarter of them--4 teams--make the playoffs. That's a total of 12.) There are two things at work: first, a 1-0 team has a numerical head start toward earning enough wins to qualify, and second, a week 1 win is an indication that the team is good.
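The bookkeeping in the parenthetical can be checked in a few lines. This is just a quick sketch using the 52% and 23% figures from the Fifth Down post:

```python
# Sanity check on the week 1 playoff numbers.
# In a 32-team league, 16 teams start 1-0 and 16 start 0-1.
winners_in = 16 * 0.52   # 1-0 teams that go on to make the playoffs
losers_in = 16 * 0.23    # 0-1 teams that go on to make the playoffs

print(round(winners_in))               # about 8 teams
print(round(losers_in))                # about 4 teams
print(round(winners_in + losers_in))   # 12 total, matching the 12-team playoff format
```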

Plus, here's a nice breakdown of the Giants' long-time bread & butter run play.

More New York Times: Freakonomics has an interesting post about innovation and imitation in football.

It's hard to believe that junk like this makes it through academic peer review. What the heck do peer reviewers do, anyway?

Don't forget to start number crunching and send your research/opinions/analysis to the re-launched Advanced NFL Stats - Community. It's 100% non-peer reviewed, unless you count the thousands of readers and their comments.

Football Outsiders, or whoever wrote this, needed over 2,300 words to try to explain regression to the mean, and it's mostly incoherent blather. It's a real disservice.

Here's one way to think about it: Whenever we observe a series of events, even if we observe every one of those events in the past, we are only observing a tiny fraction of all the other similar events that could potentially occur. Every set of data is only a sub-set, a mere sample of an infinite possible eternity. If we repeat the event infinitely, the frequency of the event must eventually approach its true mean. Therefore, future events will tend toward their mean. There. Just 77 words. The key to intuitively understanding nearly all statistical inferences is understanding that all data is (are) just a sample of what could happen. We can only infer a true mean from past data; the observed average is only a clue to what it really is.
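If you prefer to see it rather than read it, here's a minimal simulation sketch. It assumes, purely for illustration, that every team is a true .500 coin-flip team; the point is that teams with extreme observed records are mostly lucky samples, so their next sample falls back toward the true mean:

```python
import random

random.seed(1)
TRUE_MEAN = 0.5   # illustrative assumption: every team is truly average
GAMES = 16
TEAMS = 1000

# Simulate two independent 16-game seasons for 1000 identical teams.
season1 = [sum(random.random() < TRUE_MEAN for _ in range(GAMES)) for _ in range(TEAMS)]
season2 = [sum(random.random() < TRUE_MEAN for _ in range(GAMES)) for _ in range(TEAMS)]

# Take the teams that went 12-4 or better in season 1...
hot_next_year = [s2 for s1, s2 in zip(season1, season2) if s1 >= 12]

# ...and look at their winning percentage the following season.
# It lands near .500, not near .750, because the 12-win sample was luck.
print(sum(hot_next_year) / len(hot_next_year) / GAMES)
```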

Gregg Easterbrook often teases sites (including this totally awesome one) about hyper-specificity, such as going out to a ridiculous number of decimal places when presenting analytical results. One reason I tend to leave in the extra digit is because I get a mountain of email whenever probabilities don't perfectly add up to 1. Here's another really good reason. Helmet-knock: Tango.
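Here's a tiny illustration of the rounding complaint, using hypothetical numbers. Probabilities that genuinely sum to 1 can easily show a total of 101% once each one is rounded to a whole percent:

```python
# Three outcome probabilities that sum to exactly 1.000...
probs = [0.336, 0.336, 0.328]

# ...but rounded to whole percentages, they display as 34 + 34 + 33 = 101.
pcts = [round(100 * p) for p in probs]
print(pcts, sum(pcts))   # [34, 34, 33] 101
```

Keeping the extra digit (33.6%, 33.6%, 32.8%) avoids the inbox full of "your numbers don't add up" email.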

Are tight ends really "a rookie QB's best friend?"

Don't get too excited, Seattle fans.

Evidently Drew Brees really has a dark side. More quality reporting on the NFL here.

Tune in Sunday for live WP graphs, and don't forget advanced stats for individual players are already available for the 2010 season.

### 5 Responses to “Roundup 9/18/10”

1. Unknown says:

Florida Danny's article is really good stuff. I've wondered about prediction accuracies for a long time, and I think he has built an excellent template for future research.

2. Borat says:

Dear Brain:

"Gregg Easterbrook often teases sites (including this totally awesome one) about hyper-specificity, such as going out to a rediculous number decimal places when presenting analytical results."

That's a rediculous way to splells ridiculous!

3. Jonathan says:

For teams starting 2-0/1-1/0-2, the playoff probabilities go something like this:

63%/37%/13%

4. Anonymous says:

Where did you find the information on the algorithm for the power rank site? I couldn't find it.