'Death to the BCS': Nonsense rules

Editor's note: "Death to the BCS: The Definitive Case Against the Bowl Championship Series" looks at the ridiculous system used for determining college football's national champion that has long frustrated fans, coaches and sports journalists alike. The following is an excerpt from Chapter 11, "Nonsense Math." For another excerpt as well as a blog from Yahoo! Sports columnists Dan Wetzel and Jeff Passan, visit www.deathtothebcs.com.

Four years ago, a man named Hal Stern invited the nerds of the world to unite. Their enemy wasn't the typical scourge of jocks or acne but a faceless, inanimate entity that posed a threat to the credibility of all mathematicians.

Yes, the horror of the Bowl Championship Series – the mind-numbing way college football sets its "champion" – extends beyond the sporting universe and into that of the bespectacled and pocket-protected.

In an obscure math journal, Stern wrote an impassioned plea to the men whose ranking systems compose the computerized portion of the formula that determines who plays in the BCS championship game: Don't allow the BCS to corrupt you and the laws of mathematics.

"I am advocating a boycott of the Bowl Championship Series by all quantitative analysts," Stern wrote in his 2006 article, which has earned a cult following after the godfather of modern sports statistics, Bill James, jumped on the boycott bandwagon and urged his peers to do the same. Joining the BCS boycott are dozens of other analysts who agree this is an insult to applied mathematics.

Simply put, the computer formulas that the BCS employs to help select its championship game matchup aren't just attempting the near-impossible. They're barely rooted in a little thing called math. And as the BCS releases its first standings of the year Sunday – ones in which Ohio State, ranked No. 1 in all of the human polls, is expected to be fifth because of its computer score – the college football-viewing nation will again focus on the inadequacy of the formula that determines who gets to play for the flawed national title.

The computer rankings, for example, are not allowed to take into consideration margin of victory. A 63-0 victory is the same as a 6-3 win. Jeff Sagarin, the most famous of the computer rankers, calls his BCS rankings the "politically correct" version and says they're "less accurate" than another version he calculates. It includes margin of victory, and the BCS won't let him use it.

"You're asked to rank teams that don't play each other, that don't play long seasons, and you can't include margin of victory?" said Kenneth Massey, another of the handpicked mathematicians, who also provides a "better version" on his Web site. "It's a very challenging problem from a data-analysis standpoint. It does require sacrificing a bit of accuracy. It's not the best way to do it."

The entire point of the BCS using computerized ranking systems was to provide some sort of impartiality and balance out the two human polls. The computers count for one-third of a team's BCS score. Of course, the first time they tried, the computers didn't jibe with the humans, so the BCS changed the formula three years after it started. Same for the second time the computers failed to agree with the voters. And the third. When the math didn't satisfy its standards – prop up the big schools, stomp on the small ones – the BCS altered the formula.

"Stern's analysis was clearly right," said James, whose revolutionary work with baseball statistics was highlighted in the book "Moneyball" and who has since developed his own college football rating system. "This isn't a sincere effort to use math to find the answer at all. It's clearly an effort to use math as a cover for whatever you want to do. I don't even know if the people who set up the system are aware of that.

"It's just nonsense math."

Attempting to use computers to rank teams with little common data is one problem. The bigger issue is the system under which they operate, and how ignoring something so telling as margin of victory is far from the only blunder.

Take the actual computing itself. Every week, the six systems input scores, let the computers spit out the rankings and send them to the BCS. That's it. Nobody at the BCS double-checks the rankings. Only one of the six, Wes Colley, makes his formula fully public. Which leaves five systems open for corruption with no safety net. Massey once admitted that if offered $1 million to doctor his standings, "It would take a lot of willpower to refuse that, to be sure."

Rich boosters, forget that tailback recruit. Pool your money for this guy.

Massey earned a Ph.D. at Virginia Tech and now teaches mathematics at Carson-Newman College, and he's not the only scholar of the group. Sagarin is an MIT graduate who has ranked teams for USA Today since 1985, and Colley is a bowtie-wearing, Princeton- and Harvard-educated astrophysicist who tries to solve everyday problems like traffic and freight delays. Another doctor, Peter Wolfe, specializes in infectious diseases when he's not obsessing over college football, and political scientist Jeff Anderson and freelance sportscaster Chris Hester started compiling rankings from Hester's mother's computer 20 years ago.

Then there is Richard Billingsley. He is 59 years old and lives in Hugo, Okla. Unfailingly courteous, Billingsley speaks with a homespun voice that exudes calm. Though he's a stress-management expert for a living, Billingsley follows his passion for college football in obsessive ways. Starting in 1970, he set out to name a national champion for every season dating back to 1869, when Princeton and Rutgers split the two games played. (Billingsley's verdict: Princeton.) His institutional history of college football is unquestioned. There's just one snag.

"I'm not a mathematician," Billingsley said.

A nonmathematician who uses a numbers-based formula to rank teams. A nonmathematician who, accordingly, uses the previous year's rankings as a starting point for the next year's, even if a school graduates its quarterback, running back and middle linebacker, and loses its coach.

"I don't know that the powers that be even know what he's doing," Stern said. "I'm not saying he's bad. But … he's bad. It's clear it's not what the BCS should be doing."

Billingsley is unrepentant about using the previous season's results. He believes the past portends the future, even if the past is now playing in the NFL. The other computer systems that use preseason rankings take into account graduations, recruiting classes, and coaching changes – everything that matters.

"I'm not even a highly educated man, to tell you the truth," Billingsley said. "I don't even have a degree. I have a high school education. I never had calculus. I don't even remember much about algebra. I think everyone questions everything I do. Why is he doing that? Does he know what he's doing, a crazy kook in Oklahoma? I had a guy tell me in an email once that I'm a crazy Oklahoma hillbilly. Well, it's true, but it has nothing to do with my ranking skills."

The actual skill involved is suspect. A Dutch computer scientist named Martien Maas, who has never been to a college football game but compiles rankings in his spare time, analyzed amateur ranking systems for their accuracy in picking bowl games last year. He assumed the success rate of predicting the correct winners would be somewhere between 75 and 85 percent. The computers barely chose 50 percent of the games correctly.

And yet the BCS insists the computers are integral to the system. Mathematical ranking systems have been around since the 1920s, although the acceptance of independent, numbers-based analysis took years. When David Rothman, whose progressive rankings influenced Sagarin, wrote NCAA executive director Walter Byers asking that his system be adopted by the organization, he received the following response: "Mr. Rothman, we will never do standings at the NCAA and second, we will never do yours."

Byers retired in 1988. Ten years later, when the BCS was born, the system centered around three computer rankings: Anderson and Hester, Sagarin and the New York Times. The next year, the BCS added Billingsley, Massey, Rothman, the Dunkel Index and Herman Matthews. In 2001, it dumped the Times and Dunkel rankings and replaced them with Wolfe and Colley. A year later, Rothman and Matthews were gone after refusing to remove margin of victory from their formulas, and the BCS continued to impress Liz Taylor with its divorce rate.

The reasoning behind the decision to banish margin of victory before the 2002 season: The BCS didn't want teams that beat up on weaker opponents to be rewarded for doing so. Never mind that the BCS was actively corrupting the impartiality of its system. By mandating the removal of margin of victory, the BCS brought an issue patently tied to emotion – whether a blowout is right or wrong – into the machines it hired to be emotionless.

"Their action is crazy," Rothman told the Cincinnati Enquirer. "This makes the computer people look like hacks. It gives the impression of a lack of integrity."

To illustrate what he called "nonsense," Matthews removed margin of victory from the final rankings in which it was used, 2001, and sent the hypothetical results to the BCS. The starkest difference involved the University of Tennessee, which blew a chance at the national championship game by losing the SEC title to LSU and finished sixth in the BCS standings. It was Tennessee's second loss of the season, and yet without margin of victory, the Volunteers would have finished second in the BCS – ahead of one-loss teams from Nebraska and Oregon – and faced Miami in the championship game.

"That's really suspect," Matthews told the Knoxville News Sentinel. "Then Tennessee slaughtered Michigan [in the Citrus Bowl], but the Vols would have dropped from No. 2 to No. 3 while Michigan increased from No. 25 to No. 20. That's crazy."

All of this was in the name of sportsmanship, and nobody dislikes sportsmanship. The BCS neglected the numbers – the actual, objective data that a computer can measure – and the letters sent by Sagarin and Massey urging the BCS to allow them to keep their margin-of-victory rankings. It ignored the hypocrisy in letting the coaches and Harris Poll voters factor in margin of victory. It disregarded everyone who cares about the score of the game, which is pretty much anyone who watches. The computers were an easy scapegoat, and the BCS got rid of Rothman and Matthews because they refused to flout their mathematical principles.

Some of the computer rankers even parroted the illogical message.

"A significant but hard-to-measure factor in comparing teams is sportsmanship," Wolfe wrote on his Web site. "Running up the score is generally looked on as evidence of bad sportsmanship, behavior which should not be encouraged or rewarded."

The statistical community guffaws at Wolfe's concern with blowout victories while lamenting the BCS's decision. Anyone with elementary math skills, let alone someone with the ability to write a program that ranks every college football team, could figure out a way to limit margin of victory's effect – say, by capping it so that a thirty-point win counts the same as a seventy-point win. A 2003 article in The American Statistician devoted more than 5,000 words to how removing margin of victory compromised the rankings; its analysis showed the two best algorithms were Rothman's and Matthews', both discarded by the BCS.
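
To see how trivial such a cap would be, here is a minimal sketch in Python; the 30-point cap and the function name are our own illustration of the idea, not any ranker's actual formula.

```python
def capped_margin(winner_points, loser_points, cap=30):
    """Return margin of victory, capped so blowouts stop earning extra credit.

    With a 30-point cap, a 63-0 rout counts the same as a 33-3 win,
    while a 6-3 squeaker still looks like a close game to the algorithm.
    """
    return min(winner_points - loser_points, cap)

# Illustrative scores only.
print(capped_margin(63, 0))  # 30 -- the rout is capped
print(capped_margin(33, 3))  # 30 -- same credit as the rout
print(capped_margin(6, 3))   # 3  -- close games stay close
```

A rating system could feed that capped number into its model instead of the raw score and keep most of the information the BCS threw away.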

"It's about respecting and accepting what the math tells you," James said. "If it tells you Boise State is better than the teams that have the opportunity to play for the championship, what are you going to do?

"Well, if Boise State ever finishes first, they'll change [the formula] a fourth time."

James isn't exaggerating. The BCS really has tweaked its formula three times. Its original version used computer rankings, human-poll rankings, a strength-of-schedule component, and number of losses. In 2001, the BCS added bonus points for quality wins. That wasn't good enough, so in 2002, it changed its quality-win formula and removed margin of victory. And after USC ended 2004 at No. 1 in the AP poll despite not playing for the BCS championship, the whole BCS system was blown up to de-emphasize the computers.

Fall guys once, fall guys always.

Even the mathematicians warned the BCS the new system was untenable. "They looked at what we were trying to do and said … we're asking them to do an impossible job with imperfect tools," BCS consultant Kevin O'Malley told the Riverside (California) Press-Enterprise.

So, naturally, they went ahead with it anyway. The diluted computer rankings are determined rather simply. Each of the six systems submits its top 25 teams, with the No. 1 team receiving 25 points and the No. 25 team getting one. For each team, the BCS drops the highest and lowest of the six scores to discard potential outliers, adds the four remaining numbers, divides the sum by 100 – the maximum possible – to get a percentage, and averages that percentage with the team's percentages from the coaches' poll and the Harris Poll.
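
For readers who want the arithmetic spelled out, here is a minimal sketch in Python of how that computer component could be tallied and blended with the human polls, assuming the procedure described above; the function names and sample numbers are ours, for illustration only.

```python
def computer_percentage(rankings):
    """Turn six computer rankings (1 = best, 25 = worst, None = unranked)
    into a team's BCS computer percentage."""
    # A No. 1 ranking is worth 25 points, a No. 25 ranking is worth 1, unranked is 0.
    points = sorted(0 if r is None else 26 - r for r in rankings)
    # Drop the lowest and highest scores to discard potential outliers.
    trimmed = points[1:-1]
    # Four perfect 25s sum to 100, so dividing by 100 yields a percentage.
    return sum(trimmed) / 100.0

def bcs_average(computer_rankings, coaches_pct, harris_pct):
    """Average the computer percentage with the two human-poll percentages,
    each component counting one-third."""
    return (computer_percentage(computer_rankings) + coaches_pct + harris_pct) / 3.0

# Illustrative numbers only: a team ranked No. 1 by four computers, No. 2 and No. 4 by the others.
print(bcs_average([1, 1, 1, 1, 2, 4], coaches_pct=0.98, harris_pct=0.97))  # 0.98
```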

The computer guys do it because they love the challenge of competing against other minds, sort of like a science fair for adults. Otherwise, the fringe benefits are fringy. It's a good conversation starter. It's a license to brag when you get something correct, like Anderson and Hester did in 2008, when they ranked Utah second before its bowl victory against Alabama. None of the other computers had Utah higher than fourth. Even the Utes' coach, Kyle Whittingham, voted them fifth.

It's not for the money. The BCS pays only a nominal sum, not close to enough for the rankers to quit their jobs. And it's not for the swag. Colley, a longtime Alabama fan, wanted to attend the 2009 SEC championship game between Florida and the Crimson Tide. He figured a couple tickets wouldn't be much trouble.

"I emailed the SEC BCS liaison," Colley said, "and he just laughed at me."

How fitting. The BCS laughing at the computers. Its computers. The men who run the computer rankings for the BCS don't dare complain, their loyalty admirable if misguided, and that is where it goes wrong for Hal Stern, where any chance of the boycott dies.

To them, the computer rankings are a chance to matter, and that's something they hold on to dearly. They see it as a privilege, no matter how corrupt the organization, how shady the leadership, how unpopular it is – or they are – among fans. The fame and renown intoxicate them. The BCS chose them over more than a hundred others whose rankings appear on Massey's Web site, and it's good to feel important.

"We're part of history," Massey said, and he chewed on that idea for a moment, wondering whether it really is better to be a part of history that nobody supports than not to be part at all. He emerged from his quick philosophical debate with a compromise that seems downright shocking for someone employed by the BCS.

"I would like a playoff," Massey said.

He's not the only one.

"It's hard to argue with a 16-team playoff," Colley said.

Massey and Colley, and their ranking peers for that matter, are like so many others. They are good and smart people with noble intentions, and they work for bosses who make them look bad. The BCS is too strong a force. The nerds aren't ready for a revolution.