January 27, 2009
As recruiting takes over the landscape to an even greater extent than usual over the next week or so, we should also see the annual barrage of "snake oil" articles scoffing at the rankings that draw larger and larger audiences every year. In fact, they're already coming this year.
I was in this camp for a long time. One of the running ideas I had at my old site, actually, was to review old recruiting rankings and ruthlessly mock them for being so woefully off-base. Except that every time I tried, I ran into a wall: There were always as many (and usually more) players who "made it" among the top prospects as there were busts. Unlike some people, I find arguing "Recruiting rankings are stupid because Rhett Bomar" more than a little disingenuous; ditto "Recruiting rankings are stupid because Utah."
You can do anything you want anecdotally, especially with a subject as chicken-and-egg as recruiting. But I've become a believer over the last couple years because of the numbers -- that is, all of the numbers. Rivals, for example, uses a formula (no, I don't know the formula) to assign every team in the country a total score for its overall class every year, usually ranging from about 200 on the low end to just shy of 3,000 for the USCs and Floridas of the world. Here is each BCS conference team's total score for the last five years (the sum of the scores from 2004-2008). And here are those same five-year sums distributed according to overall winning percentage over the same period:
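The five-year-sum approach described above can be sketched in a few lines. A minimal sketch, with invented scores and records, since Rivals' actual formula and point totals aren't reproduced here; the team names and numbers are purely illustrative:

```python
# Hypothetical class scores and records -- Rivals' real formula isn't
# public, so every number here is made up for illustration.
class_scores = {              # annual Rivals-style class scores, 2004-2008
    "Powerhouse U":   [2800, 2650, 2900, 2750, 2850],
    "Middling State": [1400, 1500, 1350, 1450, 1400],
    "Bottom Tech":    [300, 250, 400, 350, 300],
}
records = {                   # (wins, losses) over the same five seasons
    "Powerhouse U":   (55, 10),
    "Middling State": (32, 30),
    "Bottom Tech":    (15, 44),
}

def five_year_sum(team):
    """Total recruiting points across the 2004-2008 classes."""
    return sum(class_scores[team])

def win_pct(team):
    wins, losses = records[team]
    return wins / (wins + losses)

# Sort by five-year recruiting points; if the rankings carry real
# information, winning percentage should rise roughly in step.
for team in sorted(class_scores, key=five_year_sum, reverse=True):
    print(f"{team:15s} {five_year_sum(team):5d} pts  {win_pct(team):.3f}")
```

The point of the exercise isn't any one team's line but the overall shape: summed over five classes, the noise in any single year's rankings washes out.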
For all the hits and misses, the big trend is clear, and speaks for itself: The average winning percentage steadily increases in lockstep with increased recruiting points. The 15 or 20 teams that separate themselves on an annual basis have no chance of a losing record over any sustained period of time; the bottom fifth or so, obviously, is more likely to come out below .500.
Remember, though, that that graph is of overall records, including mid-major and I-AA patticakes that even the worst BCS conference teams typically out-recruit. Without that kind of context, the big picture can be a bit of a mess because of wildly varying schedules and other inconsistencies that wreck head-to-head comparisons: SEC teams, for example, are clustered at the top of the rankings every year, and Big East teams at the bottom where the BCS conferences are concerned, and since these teams mainly play within their conferences, overachievers and underachievers are inevitable (somebody has to win the Big East). To get a more detailed look, I took the same five-year point totals from Rivals and applied them to all 332 games last year between teams from BCS conferences, broken down here according to how far apart the two teams in each game were in the rankings:
Bottom line: Based on the recruiting rankings of the last five years, the "more talented" team according to the gurus won almost two-thirds of the time in 2008, by a little more than a touchdown per game. Just as importantly, the difference became more obvious as the gap widened, exactly as you'd expect if the rankings are worth anything at all.
There was virtually no difference between teams that recruited within 2,000 points of one another over the preceding five years (or less than 400 points apart per year); as you might expect, the rankings weren't very useful for parsing talent gaps that small with so many other factors in play, and teams that found themselves bunched closely together in the rankings were generally in the same situation when they went head-to-head on the field.
Beyond that point, though, the class differences become too wide to bridge, and the higher-ranked teams begin to dominate. Teams that brought in an annual 400-1,000-point advantage over their opponent on any given weekend won two-thirds of the time last year, by 10 points per game; teams that "out-recruited" the opposing sideline by at least 5,000 points from 2004-08 won a whopping three-fourths of the time, by more than two touchdowns. In other words, for every Oregon State over USC and Ole Miss over Florida, there were three cases of Oklahoma over Baylor, LSU over Mississippi State and Ohio State over Northwestern. But you knew that.
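The game-by-game breakdown works like this: take each matchup, compute the gap in five-year recruiting points between the two teams, drop the game into a bucket by gap size, and check how often the "more talented" team won within each bucket. A minimal sketch with invented games and scores (the real 332-game dataset isn't reproduced here):

```python
# Each entry: (higher-ranked team's 5-yr points, lower-ranked team's
# 5-yr points, True if the higher-ranked team won). All invented.
games = [
    (13000, 12500, False),   # near-even matchup: close to a coin flip
    (13000, 11800, True),
    (12000,  9500, True),    # mid-sized gap
    (11000,  8500, False),
    (13500,  7000, True),    # 5,000+ point gap: chalk usually holds
    (13000,  6500, True),
]

# Bucket by the size of the recruiting-point gap.
buckets = {"<2000": [], "2000-5000": [], "5000+": []}
for hi, lo, hi_won in games:
    gap = hi - lo
    if gap < 2000:
        key = "<2000"
    elif gap < 5000:
        key = "2000-5000"
    else:
        key = "5000+"
    buckets[key].append(hi_won)

# Win rate of the higher-ranked team within each bucket.
for key, results in buckets.items():
    pct = sum(results) / len(results)
    print(f"gap {key:10s} higher-ranked team won {pct:.0%} of {len(results)} games")
```

With the real data, the pattern described above is that the win rate climbs steadily as the bucket's gap widens, from roughly a coin flip up to about three wins in four.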
So the rankings are definitely not precise enough to predict the national championship (or, unless you're talking about USC, even most conference championships). But they are especially good at grouping programs into classes that tend to hold up over time. They establish the ceiling and floor of a program's potential: If your team isn't a top-10 recruiter over at least a three- or four-year period, it's not going to be carrying off any crystal footballs, either.
A generalization, yes, but as far as generalizations go, it's solid enough to set an informed baseline for expectations until there are specific reasons to think otherwise. If you ask any prediction to do more than that, you probably lead a very disappointing life.
Wednesday: Recruiting stars and All-Americans.