Ranking College Football Teams

If you aren’t happy with where your team stands in the national rankings, why not come up with a different ranking scheme that puts it in a higher slot?

That’s what happened when physicists at the University of Michigan devised a novel way to rank college football teams, based on the mathematics of networks. On Oct. 31, Michigan ranked 21st in the so-called BCS standings. According to the new ranking scheme, it should have held 9th place, despite three losses.

The Bowl Championship Series (BCS) was established in 1998 to determine the U.S. national champion for college football. BCS officials use a mathematical formula that blends together rankings representing the views of various panels of human judges and outputs from several computer-based schemes.

Week by week, the BCS formula generates standings, and the national champion is the winner of the final bowl game, which matches the number 1 and number 2 teams in the year-end standings.

On several occasions, the BCS formula has collided with human expectations, leaving many fans convinced that the wrong teams played for the national championship. So, just about every year, researchers tinker with the BCS formula or propose new schemes that promise improved, easier-to-understand results.

A key difficulty is that there are 119 teams in Division I-A of the National Collegiate Athletic Association (NCAA), but each plays only 10 to 13 games, so no team meets more than a small fraction of the others. Moreover, some teams play games against much weaker (or stronger) opponents than others do. And the results of individual games can be contradictory: A may beat B, B beat C, and C beat A.

Physicists Juyong Park and M.E.J. Newman aimed for a ranking system that's fast and easily understood by fans (in contrast to the cumbersome, opaque BCS formula). They based their ranking method on the notion that, if A beats B and B beats C, then A has, in effect, also beaten C, even if A never actually plays C. Hence, the method counts both direct wins (beating a team outright) and indirect wins (beating a team that beat another team).

“In addition to a real, physical win (loss) against an opponent, an indirect win (loss) . . . should also be considered indicative of a team’s strength (weakness),” Park and Newman remark in a paper published on Oct. 31 in the Journal of Statistical Mechanics: Theory and Experiment.

At the same time, a direct win counts for more than an indirect win. A team’s ranking is then based on the sum of the direct and indirect wins.

Park and Newman picture the college football schedule as a network (or graph), in which the vertices represent colleges and there’s a line (or edge) joining two colleges if they play against each other during a given season. An arrow on each edge points to the winner of any given match, producing what mathematicians call a directed graph.

Direct losses and wins of a team in this network correspond to edges (with arrows) running directly to and from that team, and indirect losses and wins correspond to “directed paths of length 2” in the network, to and from the team. It’s also possible to take into account longer, though less important paths of the form A beats B beats C beats D, and so on.
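To make that picture concrete, here is a minimal sketch (in Python with NumPy; the toy schedule and the matrix convention are illustrative assumptions, not taken from the paper) of the A-beats-B-beats-C example. The results are encoded as a matrix in which entry (i, j) is 1 when team i beat team j; row sums then count direct wins, and the square of the matrix counts the length-2 paths that stand for indirect wins.

```python
import numpy as np

# Toy schedule: A beat B, B beat C, and A never played C.
teams = ["A", "B", "C"]

# Results matrix for the directed graph of games:
# entry [i][j] = 1 means team i beat team j (one possible convention).
A = np.array([
    [0, 1, 0],   # A beat B
    [0, 0, 1],   # B beat C
    [0, 0, 0],   # C won nothing
])

ones = np.ones(len(teams))

direct_wins = A @ ones            # row sums: games won outright
indirect_wins = (A @ A) @ ones    # directed paths of length 2: wins over teams your victims beat

for name, d, ind in zip(teams, direct_wins, indirect_wins):
    print(f"{name}: {d:.0f} direct win(s), {ind:.0f} indirect win(s)")
# A: 1 direct win(s), 1 indirect win(s)   (A beat B, and B beat C)
# B: 1 direct win(s), 0 indirect win(s)
# C: 0 direct win(s), 0 indirect win(s)
```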

“A particularly nice property of these indirect wins is that a direct win against a strong opponent—a team that has itself won many games—is highly rewarding, giving you automatically a large number of indirect wins,” Park and Newman say. “Thus, when measured in terms of indirect wins, the ranking of a team automatically allows for strength of schedule.”

Park and Newman found a rule of thumb that allows them to figure out how much to discount indirect wins. The researchers can then compute a “score” for every team, in effect summing a team’s direct wins and, with a certain discount for each step, its indirect wins.

“The method has an elegant mathematical formulation in terms of networks and linear algebra that is related to a well-known centrality measure for networks,” the researchers say.
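Reading that description as a Katz-style centrality calculation, one possible sketch of the score (again in Python; the exact recursion, the wins-minus-losses convention, and the particular discount value are assumptions for illustration, not the authors' own formula or code) looks like this:

```python
import numpy as np

def network_win_scores(A, alpha=None):
    """Score teams from a results matrix A, where A[i, j] = 1 if team i beat team j.

    Win score:  w = A·1 + alpha·A·w   =>   w = (I - alpha·A)^(-1) · A·1,
    i.e., direct wins plus indirect wins discounted by alpha per extra step.
    Loss score: the same recursion on the transposed matrix.
    Total score: wins minus losses (an assumed convention).
    """
    n = A.shape[0]
    if alpha is None:
        lam_max = max(abs(np.linalg.eigvals(A)))
        # Assumption: any discount below 1/lam_max keeps the series finite;
        # real schedules contain cycles, so lam_max > 0 in practice.
        alpha = 0.9 / lam_max if lam_max > 0 else 0.5
    I, ones = np.eye(n), np.ones(n)
    wins = np.linalg.solve(I - alpha * A, A @ ones)        # direct + discounted indirect wins
    losses = np.linalg.solve(I - alpha * A.T, A.T @ ones)  # direct + discounted indirect losses
    return wins - losses

# Toy example from above: A beat B, B beat C.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
print(network_win_scores(A, alpha=0.5))  # A ranks first, C last
```

With only 119 teams, solving this small linear system takes a fraction of a second, which helps explain why a ranking built this way can be both fast and transparent.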

In comparing the output of their scheme with BCS rankings of previous years, Park and Newman found that the two systems generally give comparable results. Overall, the agreement between the two systems was quite good, particularly at the top. In cases where the systems differed, the Park-Newman rankings agreed more closely with the computer polls than with the human judges.

“There is a reasonable match between our rankings and the official ones,” the physicists say. “Given the simplicity of our method, it is pleasantly surprising that the rankings are in such agreement with other far more complicated algorithms.” For comparisons, see http://www.umich.edu/news/index.html?Releases/2005/Nov05/football.

This year’s standings are a bit more scrambled. Here’s a comparison of how the top 10 fared in the BCS and Park-Newman rankings on Oct. 31.

Rank   BCS                   Park-Newman
 1     Southern California   Texas
 2     Texas                 Penn State
 3     Virginia Tech         Virginia Tech
 4     Alabama               Wisconsin
 5     UCLA                  Alabama
 6     Miami (Fla.)          Southern California
 7     Penn State            UCLA
 8     LSU                   Texas Christian
 9     Florida State         Michigan
10     Ohio State            Ohio State

Of course, that was before Virginia Tech, UCLA, and Florida State lost games on the following weekend.

Even so, the Park-Newman system suggests that the University of Southern California is overrated. Michigan apparently gets a boost from beating Penn State, despite losses to Wisconsin (#4), Notre Dame (#22), and Minnesota (#19). So, winning against a strong opponent can help a team a lot in the rankings, and losing against a strong opponent doesn't hurt as much (though winning would be better).

“We believe that the combination of sound and believable results with a strong common-sense foundation makes our method an attractive ranking scheme for college football,” Park and Newman conclude. “A method such as ours reduces the extent to which the calculations must be tuned to give good results while at the same time retaining an intuitive foundation and mathematical clarity that makes the results persuasive.”

Of course, college football could go to a playoff system instead, pitting, say, the top four or eight teams against one another in a season-ending tournament until one emerges as champion. Even then, there's plenty of room for controversy, particularly over which teams would deserve to be among the elite eight.