Several college basketball analysts recently released preseason projections for the upcoming season. How well do these models forecast the final conference standings? Here are some guidelines based on recent history:
1. The top statistical models correctly predict ~25% of teams’ conference finishes exactly, and ~50% within one position.
I studied the prediction models of Ken Pomeroy, Dan Hanner and TeamRankings. The three systems rely on many of the same inputs, so perhaps it’s no surprise that they perform similarly well. In 2014-15, all three models correctly predicted the exact finish of 24-29% of teams (including ties), and called 49-50% within one slot.
Using the more rigorous metric of conference wins and losses, Pomeroy found a slight edge for Hanner and TeamRankings’ methods over his own. All three forecasted every team’s conference record with an average error of about 2.2 games, which is rather impressive.
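The accuracy metrics above — share of teams predicted exactly, share within one place, and average error — can be sketched in a few lines. The team names and finishes below are made up for illustration, not real projections:

```python
# Hypothetical predicted vs. actual conference finishes (1 = first place).
predicted = {"Team A": 1, "Team B": 2, "Team C": 3, "Team D": 4, "Team E": 5}
actual    = {"Team A": 2, "Team B": 1, "Team C": 3, "Team D": 5, "Team E": 4}

# Absolute place error for each team.
errors = [abs(predicted[t] - actual[t]) for t in predicted]

exact      = sum(e == 0 for e in errors) / len(errors)  # share predicted exactly
within_one = sum(e <= 1 for e in errors) / len(errors)  # share within one slot
mean_error = sum(errors) / len(errors)                  # average places missed

print(exact, within_one, mean_error)  # -> 0.2 1.0 0.8
```

The same structure works for the conference-record metric: swap predicted and actual places for predicted and actual win totals, and the mean absolute error is the "about 2.2 games" figure quoted above.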
2. The preseason coach and media polls predict conference standings about as well as statistical models.
Like the statistical models, the league polls were exactly correct for about one-quarter of teams, and within one place for half. Even though coaches and writers usually take very different approaches from the statistical models, they end up with similar accuracy.
Polls were substantially more accurate than projections as of about four years ago, but the methods have recently converged. You can pick your favorite of several possible explanations, such as: 1) Projections are improving, and will surpass polls in the future; 2) Projections are improving, and pollsters are learning from their models, so they will improve in tandem; 3) Both projections and polls are about as good as they’ll ever be, and any further movements are just noise.
3. 2014-15 was a bad year for both projections and polls.
For several years, the models and polls each had an average error of about 1.8 places in their conference standings projections. Last year, the average error jumped to 2.1.
There was some consolidation across leagues last year, which skews that statistic a bit (the expected error is greater when a conference has more teams). But across metrics and systems, 2014-15 was much harder to predict than 2013-14.
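The caveat about conference size can be checked with a quick simulation: even a forecaster guessing at random misses by more places per team, on average, in a bigger league, simply because there is more room to be wrong. This is a baseline sketch, not a model of the actual projections:

```python
import random

def avg_error_random_pick(n_teams, trials=20000):
    """Average per-team place error when a fixed preseason order is
    compared against a uniformly random final order."""
    total = 0.0
    for _ in range(trials):
        final = list(range(n_teams))
        random.shuffle(final)
        # Predicted order is simply 0, 1, ..., n-1.
        total += sum(abs(pred - act) for pred, act in enumerate(final)) / n_teams
    return total / trials

# A 16-team league produces a noticeably larger expected miss
# than a 10-team league, even with identical (random) skill.
print(avg_error_random_pick(10), avg_error_random_pick(16))
```

So when leagues consolidate into larger conferences, some of the jump in average error is mechanical rather than a sign that forecasters got worse.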
I wouldn’t read too much into a one-year trend — sometimes unlikely events happen, making good forecasts look bad. We saw unusual patterns in last season’s Patriot League, to pick one example close to our Big Apple hearts. (The largest individual error in most predictions came from Davidson, which won the Atlantic 10 after being picked anywhere from ninth to 13th.)
But if the projections and polls have another rough year in 2015-16, it may be worth wondering if there are any inherent forces (transfers? rule changes?) making college basketball less predictable.
*Note: Conference USA is excluded from all data because it has not sponsored a preseason poll, and independents are excluded for obvious reasons. For leagues whose polls were separated by division — such as the MAC and OVC — each division is treated as its own league. Where both coach and media polls were available, the coaches’ poll was used. Many historical projections obtained via MasseyRatings.com.