We now head into the meat of the ranking system and look at the following question: how much can you learn about two teams from one game? As you will see, this is equivalent to the reverse problem: what are the odds that a particular game result would be produced, given the ratings of the two teams?


Let's look at the easy case first. For some applications, it is desirable to rate teams solely based on their wins, losses, and opponent quality (including schedule strength). In this case, using the definitions, the probability of team A (rated a) beating team B (rated b) at home equals:

   P(A win|a,b) = CP(a-b+h)

where CP is the cumulative probability function of the standard normal distribution and h is the opponent strength adjustment for a home game. Likewise, the probabilities of team A winning on the road or at a neutral site, respectively, are equal to:

   P(A win|a,b) = CP(a-b-h)
   P(A win|a,b) = CP(a-b)

To simplify, I will define "dr" as the location-adjusted rating difference (a-b+h for a home game, a-b-h for a road game, and a-b at a neutral site), so that the probability of a team beating its opponent is merely:

   P(A win|dr) = CP(dr)

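As a concrete sketch in Python -- assuming CP is the cumulative distribution function of the standard normal (consistent with the sigma units used later on this page), with purely illustrative ratings:

```python
import math

def cp(x):
    """Cumulative probability of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def win_probability(a, b, h=0.0):
    """Probability that the team rated a beats the team rated b.
    h is the home adjustment: +h at home, -h on the road, 0 at a
    neutral site."""
    dr = a - b + h  # location-adjusted rating difference
    return cp(dr)

print(win_probability(0.0, 0.0))         # evenly matched, neutral site: 0.5
print(win_probability(1.0, 0.0, h=0.3))  # one-sigma edge plus home advantage
```
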
That was easy, wasn't it? Unfortunately it gets tougher in a hurry when scores are added to the equation. The main reason is that I have designed the ranking to worry most about who is the best team, rather than the total score of any potential game between two teams. As such, it is a little awkward (and a lot of math), but we press on with that warning.

Let's start out talking soccer or hockey, where one possession either results in a score or doesn't, and the only possible score is one point. If each team has N possessions, and the probabilities of scoring on any one possession equal xa and xb, respectively, the odds of them scoring sa and sb times in the game equal:

   P(sa|xa,N) = (xa^sa) ((1-xa)^(N-sa)) N! / ( sa! (N-sa)! )
   P(sb|xb,N) = (xb^sb) ((1-xb)^(N-sb)) N! / ( sb! (N-sb)! )

Converting these equations from possession scoring odds xa and xb to expected scores ma and mb (xa=ma/N, xb=mb/N), one gets:
   P(sa|ma,N) = (ma^sa) ((N-ma)^(N-sa)) N! / ( N^N sa! (N-sa)! )
   P(sb|mb,N) = (mb^sb) ((N-mb)^(N-sb)) N! / ( N^N sb! (N-sb)! )
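
A direct numerical check of these binomial formulas (Python sketch; the possession count and expected score here are illustrative):

```python
import math

def p_score(s, m, n):
    """Probability of scoring s times in n possessions, given an
    expected score m, i.e. a per-possession scoring rate of m/n."""
    x = m / n
    return math.comb(n, s) * x**s * (1.0 - x)**(n - s)

# A team expected to score 3 times over 90 possessions:
probs = [p_score(s, 3.0, 90) for s in range(91)]
print(sum(probs))                              # all possible scores sum to 1
print(max(range(91), key=lambda s: probs[s]))  # most likely score: 3
```
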

For the data I use in my rankings, the number of possessions is not known. Instead, I have to assume that the percentage of non-scoring possessions is essentially constant in a particular sport. Defining that value as "F", substituting, and combining the two equations to compute the probability of both scores being produced, the above equations become:

   P(sa,sb|ma,mb) = C ma^(sa/F) e^(-ma/F) mb^(sb/F) e^(-mb/F)

where "C" is a multiplicative constant hiding terms that can be ignored later.
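
Each team's factor has the shape of a Poisson probability in the scaled score s/F. A minimal Python sketch, with F = 1 chosen purely for illustration (which reduces to an ordinary Poisson):

```python
import math

def p_one(s, m, f=1.0):
    """Poisson probability of score s given expected score m, with the
    counts scaled by f (f = 1 is an ordinary Poisson)."""
    k, lam = s / f, m / f
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1.0))

def p_game(sa, sb, ma, mb, f=1.0):
    """Probability of the score (sa, sb) given expected scores (ma, mb)."""
    return p_one(sa, ma, f) * p_one(sb, mb, f)

print(p_game(2, 1, 2.0, 1.0))  # ordinary double Poisson when f = 1
```
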

The probability that team A will beat team B, counting a tie as half a win for now, equals:

   P(A win|ma,mb) = 0.5 sum(i=0,inf) P(i,i|ma,mb)
                    + [ sum(i=1,inf) sum(j=0,i-1) P(i,j|ma,mb) ]
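
Truncating the infinite sums at a score cap makes this directly computable. A Python sketch using the same Poisson score model (F = 1 and the expected scores are illustrative):

```python
import math

def poisson(s, m):
    return math.exp(-m) * m**s / math.factorial(s)

def p_win(ma, mb, cap=40):
    """P(A wins | expected scores ma, mb), a tie counting as half a win;
    the infinite sums are truncated at score `cap`."""
    total = 0.0
    for i in range(cap + 1):
        for j in range(cap + 1):
            p = poisson(i, ma) * poisson(j, mb)
            if i > j:
                total += p
            elif i == j:
                total += 0.5 * p
    return total

print(p_win(2.0, 2.0))  # evenly matched: 0.5
print(p_win(3.0, 1.0))  # the better team wins well over half the time
```
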

Recalling the equation for win-loss evaluation, we see that the probability that team A wins also equals:
   P(A win|dr) = CP(dr)

Setting these two equations equal, we get a relation between ma, mb, and dr. This can be used to express mb as a function of ma and dr, which I define as mb(ma,dr). Using this relation to rewrite the previous game outcome probability function, we get:

   P(sa,sb|ma,dr) = C ma^(sa/F) e^(-ma/F) mb(ma,dr)^(sb/F) e^(-mb(ma,dr)/F)

Again, we don't care (at the moment) about the total score, just how convincing the win is. So we can marginalize ma out by integrating over all possible values (zero to infinity) to get:
   P(sa,sb|dr) = C integral(ma=0,inf) ma^(sa/F) e^(-ma/F) mb(ma,dr)^(sb/F) e^(-mb(ma,dr)/F) dma

There we have it: the odds that teams with a rating difference of dr will play a game in which they score sa and sb times, respectively.
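
Brute force, the whole chain can be sketched numerically: solve for mb(ma,dr) by bisection against the win-probability relation, then integrate ma out on a grid. This Python sketch uses the same Poisson score model with F = 1, deliberately small caps and grids, and illustrative numbers throughout -- it is not the fitting code actually used for the rankings:

```python
import math

def poisson(s, m):
    return math.exp(-m) * m**s / math.factorial(s)

def cp(x):
    """Cumulative probability of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_win(ma, mb, cap=20):
    """P(A wins | expected scores), a tie counting as half a win."""
    total = 0.0
    for i in range(cap + 1):
        for j in range(cap + 1):
            p = poisson(i, ma) * poisson(j, mb)
            total += p if i > j else 0.5 * p if i == j else 0.0
    return total

def mb_of(ma, dr):
    """Solve p_win(ma, mb) = CP(dr) for mb by bisection."""
    target = cp(dr)
    lo, hi = 1e-6, 12.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if p_win(ma, mid) > target:  # mb too small: A still wins too often
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def p_score_given_dr(sa, sb, dr, step=0.5, ma_max=8.0):
    """Unnormalized P(sa, sb | dr): integrate ma out on a simple grid."""
    grid = [step * k for k in range(1, int(ma_max / step) + 1)]
    return step * sum(poisson(sa, ma) * poisson(sb, mb_of(ma, dr))
                      for ma in grid)

# A 3-1 result is more likely to come from the stronger team:
print(p_score_given_dr(3, 1, 0.5) > p_score_given_dr(3, 1, -0.5))
```
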

Computing this is not as easy as it looks, but fortunately there is an excellent approximation that can be made:

   P(sa,sb|dr) = C NP(G(sa,sb)-dr)
   G(sa,sb) = (sa-sb)/sqrt(F*(sa+sb+1))

where NP is the normal (Gaussian) probability function. In other words, G(sa,sb) equals the impressiveness of the win, measured in sigma, and translates directly into rating differences (dr).
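
A quick Python sketch of G (the variance factor F is illustrative here; in practice it would be calibrated per sport):

```python
import math

def g(sa, sb, f=1.0):
    """Impressiveness of a result in sigma: the score margin divided by
    its expected standard deviation, sqrt(f * (sa + sb + 1))."""
    return (sa - sb) / math.sqrt(f * (sa + sb + 1))

# The same two-goal margin is more impressive in a low-scoring game:
print(g(3, 1))  # 2/sqrt(5)  ~ 0.894
print(g(6, 4))  # 2/sqrt(11) ~ 0.603
```
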

One interesting fact seen here is that a team can be ranked as accurately by a mismatch as by a game against an equal-quality opponent. The reason is that, if a team is two sigma better than its opponent it will usually play like it is about two sigma better, 16% of the time it will play like it is more than three sigma better, and 16% of the time it will play like it is less than one sigma better (and 2.5% of the time will play like it is worse, i.e. will lose). In other words, if you can identify how well a team played compared to its opponent -- regardless of how well matched the teams are -- you can rank both teams accurately.

To test this assertion, I examined predicted vs. actual outcomes of games between closely-matched and unevenly-matched opponents. Indeed, the mismatches were predicted just as well as the close matches.


Note: if you use any of the facts, equations, or mathematical principles on this page, you must give me credit.

copyright ©2001-2003 Andrew Dolphin