The BCS and its predecessors were created with the goal of crowning an undisputed national champion. The previous system had two perceived problems: the two polls didn't always agree (creating split national championships), and both polls might be wrong (creating an illegitimate national champion). The first point is indisputable, given the split championships of 1990, 1991, and 1997. The second deserves examination: how often did the best team in the country fail to get the title (or at least half of a split title)?

We can look into this question by comparing poll champions with my historical computer rankings. I will consider two sets of years. First are years with a clear-cut champion: only one undefeated and untied team from a major conference, only one undefeated major team, or only one once-defeated team. During the BCS years, for example, there has been a clear-cut champion every season, and in every season both polls have picked the "correct" national champion. The BCS is designed to create clear-cut champions, so this is clearly a point in its favor.
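For concreteness, the clear-cut test can be written out in a few lines. The sketch below is purely my illustration (the record format and the handling of ties among once-defeated teams are invented for this example), not code from the actual ranking system, but it captures the rule as stated above:

    # A sketch of the "clear-cut champion" test described above. The record
    # format and tie handling are illustrative assumptions, not code from
    # the actual ranking system.
    def clearcut_champion(records):
        """records: list of (team, losses, ties) for major-conference teams.
        Returns the lone clear-cut champion, or None for a 'tough' year."""
        tiers = [
            [t for t, losses, ties in records if losses == 0 and ties == 0],
            [t for t, losses, ties in records if losses == 0],
            [t for t, losses, ties in records if losses == 1 and ties == 0],
        ]
        for tier in tiers:
            if len(tier) == 1:
                return tier[0]   # exactly one candidate: clear-cut champion
            if tier:
                return None      # two or more candidates at this level: tough year
        return None              # no unbeaten or once-beaten major team at all

    # 1997 (pre-BCS): two unbeaten major teams, hence a tough year.
    print(clearcut_champion([("Michigan", 0, 0), ("Nebraska", 0, 0)]))      # None
    # 1990: Georgia Tech was the lone unbeaten major team (one tie).
    print(clearcut_champion([("Georgia Tech", 0, 1), ("Colorado", 1, 1)]))  # Georgia Tech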

Prior to the BCS, there were 31 years of post-bowl AP polls (1965, 1968-1997) and 24 years of post-bowl UPI polls (1974-1997). The AP years include 15 clear-cut champions, while the UPI years include 12. In these seasons, the only clear-cut mistakes were the AP's selection of Colorado for the 1990 championship and the UPI's selection of USC in 1974. Georgia Tech went 11-0-1 against a schedule only moderately easier than that of Colorado, which went 11-1-1. The 1974 UPI vote was even more mind-boggling: Oklahoma went 11-0, USC went 10-1-1, and Oklahoma played the more difficult schedule. Indeed, while Colorado at least ranked #2 in my 1990 rankings, USC was only the fifth-best team of 1974 -- Alabama, Michigan, and Auburn were all more deserving of the title. Thanks to the two-poll system, however, all clear-cut winners have at least gotten a share of the title.

[Update 1/6/05: an astute reader, William Myers, has pointed out that the UPI excluded teams on probation, thus explaining the 1974 discrepancy.]

Before heading to the tougher years, it might also be worth looking at the results of various computer ranking systems. My rankings confirm the clear-cut winner in all 15 seasons. Massey's rankings put Florida State ahead of Miami in 1987, but get the other 14 correct. Howell's rankings put Pitt and FSU ahead of Georgia in 1980, but the other 14 are correct. Billingsley's rankings give Colorado the 1990 title, but are correct in the other 14 cases. Wilson's and Sorensen's ranking systems both get only 12 of the 15 clear-cut cases correct; this certainly calls into question the validity of those systems. (Links to these historical ranking systems can be found at Wilson's rsfc site.)

Now on to the tougher years, which include 16 years with post-bowl AP polls and 12 with post-bowl UPI/CNN/etc. polls. The AP picked my #1 team in nine of those seasons, my #2 team five times, and my #3 team once. The coaches had a considerably better record, picking the top team nine times and the second-best team the other three times. Finally, of the twelve tough-to-rank seasons in which both polls were conducted after the bowl games, the polls split three times (the coaches agreed with me in all three), both polls picked my #1 team six times, and both picked my #2 team three times.
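The bookkeeping behind these tallies is simple enough to sketch. The table below is deliberately incomplete -- only the three seasons discussed in the next paragraph are filled in, and the data layout is my own invention for this illustration:

    # A sketch of the tallying described above. Only the three seasons in
    # which both polls disagreed with my rankings are entered; the other
    # nine both-poll tough seasons would be filled in the same way.
    from collections import Counter

    # season -> (my rank of the AP champion, my rank of the coaches' champion)
    tough_seasons = {
        1983: (2, 2),   # both polls took my #2 (Miami over Auburn)
        1989: (2, 2),   # both polls took my #2 (Miami over Notre Dame)
        1994: (2, 2),   # both polls took my #2 (Nebraska over Penn State)
        # ... remaining nine seasons omitted
    }

    ap_tally = Counter(ap for ap, _ in tough_seasons.values())
    coaches_tally = Counter(uc for _, uc in tough_seasons.values())
    # A split year is one in which the two polls crowned different teams
    # (and hence teams at different ranks in my system).
    splits = sum(1 for ap, uc in tough_seasons.values() if ap != uc)

    print("AP champions by my rank:", dict(ap_tally))
    print("Coaches' champions by my rank:", dict(coaches_tally))
    print("Split championships:", splits)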

It is one thing to detect disagreement; it is another to determine which side is right and which is wrong. The three years in which my ranking disagrees with both polls are 1983, 1989, and 1994. In 1983 I picked Auburn (over Miami); in 1989 I picked Notre Dame (again over Miami); and in 1994 I picked Penn State (over Nebraska). In all three cases, Massey's rankings (which are also based on a sound statistical model) agree with my selection, which perhaps indicates that the voters were applying non-statistical criteria. The fact that Billingsley's non-statistical approach reproduces two of those three poll champions further points in this direction. In the other three cases (the split seasons), the common wisdom that the coaches understand the game best would seem to apply, as the coaches made the statistically correct calls (Massey's rankings agree with mine in those years as well).

In summary, the polls appear extremely good at selecting national champions in years with a clear-cut winner -- much better than several computer rankings. Only one computer ranking (of the handful known to have historical rankings) fares better than the polls in clear-cut cases, and two others perform as well as the polls. In about a quarter of the other seasons (three of the twelve with two post-bowl polls), the polls disagree with statistical rankings, albeit in a somewhat rational (i.e., predictable) way; in these years, the question of the national champion remains unanswered. In another quarter (the three split seasons), the title was shared and went at least in part to the statistically better team, and in the remaining half (six seasons) both polls selected the best team as the champion.



Note: if you use any of the facts, equations, or mathematical principles introduced here, you must give me credit.

copyright ©2003 Andrew Dolphin