How Accurate Are Preseason Projections?

Over the past few years, the quantity and sophistication of preseason college basketball projections have exploded. Ken Pomeroy has been publishing projections since 2010, while Dan Hanner took his lineup-based model to a new site this year. At the conference level, you can find projections based on anything from straightforward regressions to player-by-player analysis.

But how accurate are these projections? In particular, how do they compare to the preseason coach and media polls, which aggregate the opinions of league experts?

To test this, I’ve compiled preseason projections and polls from the last few seasons, benchmarking them against the final conference standings. (For example, Siena was picked to finish 10th in the MAAC last season, but actually finished 5th, for an error of five spots.) Here are three findings:
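The benchmark above can be sketched in a few lines. Only the Siena row comes from the text; the other teams and the choice of mean absolute rank difference as the error metric are illustrative assumptions.

```python
# Sketch of the accuracy metric: average absolute difference between
# predicted and actual conference finish. Siena's (10, 5) pair is from
# the article; the "Example" rows are hypothetical.
predictions = {
    "Siena": (10, 5),      # picked 10th in the MAAC, finished 5th
    "Example A": (3, 4),   # hypothetical team, off by one spot
    "Example B": (7, 7),   # hypothetical team, predicted exactly
}

errors = [abs(picked - actual) for picked, actual in predictions.values()]
avg_error = sum(errors) / len(errors)
print(avg_error)  # mean miss, in conference-standing positions
```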

1. Preseason projections are about as predictive as the polls.

The three projection systems I studied in 2013-14 were Pomeroy’s, Hanner’s, and one other site’s. All three models performed almost equally well, with an average error in predicted conference ranking of 1.85-1.88 positions across all D-I teams*. This is a crude measure of accuracy (missing by two spots means more in the eight-team Ivy League than in the 15-team ACC, for example), but earlier this week, Pomeroy came to a similar conclusion with a slightly more sophisticated test of the major prediction models.

The polls in 2013-14 were about as accurate, with an average error of 1.82 spots — slightly but not meaningfully better than the projection systems. The polls correctly predicted 46% of conference champions; projection systems hit on 46-51%.

The chart below compares the systems on the percentage of teams predicted within each level of accuracy. (The lines show a cumulative distribution: each system predicted ~28% of teams’ positions exactly, ~55% within one spot, ~75% within two spots, and so on.)


The polls have a slight edge in accuracy — correctly picking 76% of teams within two spots, compared to 72-74% for the projection systems — but for the most part, all four methods are very similar.
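A cumulative-accuracy curve like the one described above is straightforward to compute. The error list here is hypothetical, not the article's data; the point is just the "share of teams within k spots" calculation.

```python
# Sketch of the cumulative-accuracy curve: for each tolerance k, the
# share of teams whose predicted finish was within k spots of the
# actual finish. The errors list is made up for illustration.
errors = [0, 0, 1, 1, 2, 3, 5, 0, 2, 1]

def share_within(errors, k):
    """Fraction of predictions that missed by at most k positions."""
    return sum(e <= k for e in errors) / len(errors)

for k in range(4):
    print(f"within {k} spots: {share_within(errors, k):.0%}")
```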

2. Projection systems seem to be improving faster than the polls.

Three years ago, the distributions weren’t as close: the average error of Pomeroy’s 2011-12 forecasts was 1.95 spots, compared to 1.85 for the polls, and the polls nailed 30% of teams exactly versus Pomeroy’s 25%. The following season, however, Pomeroy’s projection system caught up with the polls.

I’m hesitant to put too much stock in this apparent trend — with only three years of data, I don’t know how much season-to-season variability should be expected. But it makes sense to me; all of the projection systems are still relatively young, and their creators are making adjustments annually. In theory, pollsters could make use of the same knowledge to keep pace with the projections, but I bet that happens more slowly.

3. The best available predictor is a combination of projections and polls.

In any modern “stats vs. experts” discussion, the cliché answer is that both are valuable — and at least in this case, it’s true. In 2013-14, combining the forecasts of projections and polls yielded an average error of 1.72 spots, better than either group individually. The difference is noticeable on the same cumulative probability chart:


Individually, the polls and projection systems correctly picked 28% of teams exactly and 55% within one position; combined, those rates were 33% and 61%, respectively. So when the projections and polls disagree widely — as for the teams below — an in-between prediction might be best.

Teams Pomeroy projected to finish higher than the polls did:

Team              Conference  Pomeroy Rank  Poll Rank
Tennessee         SEC         6th           13th
Creighton         Big East    4th           9th
Alabama           SEC         5th           10th
Delaware          CAA         5th           8th
North Dakota St.  Southland   2nd           5th


Teams the polls projected to finish higher than Pomeroy did:

Team        Conference  Pomeroy Rank  Poll Rank
Missouri    SEC         13th          7th
Auburn      SEC         14th          8th
Hofstra     CAA         8th           3rd
Monmouth    MAAC        10th          6th
Fort Wayne  Summit      5th           1st
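For teams like these, an in-between prediction can be sketched by averaging the two predicted ranks. Whether the article's combined forecast is a straight average is an assumption; the rank pairs are taken from the tables above.

```python
# One plausible way to combine forecasts: average the Pomeroy and poll
# ranks for each team. A straight average is an assumption, not
# necessarily the article's exact method.
pairs = {
    "Tennessee": (6, 13),   # (Pomeroy rank, poll rank)
    "Creighton": (4, 9),
    "Missouri": (13, 7),
}

combined = {team: (pomeroy + poll) / 2 for team, (pomeroy, poll) in pairs.items()}
print(combined)
```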



*Actually, not quite all D-I teams: Conference USA was excluded because it did not sponsor a preseason poll prior to this season, and independents were excluded for obvious reasons. For leagues whose polls were separated by division — such as the MAC, OVC and Big South — each division was treated as a separate league in this analysis. Where both coach and media polls were available, the coaches’ poll was used. Many historical projections obtained via
