here. Uruguay’s victory is the second-rightmost upset, meaning that of all of the upsets in the data-set, this game had the second-highest discrepancy in strength between the two teams. The match with the highest? Japan v South Africa in 2015. Put in rugby terms, my model would have expected South Africa to be better than Japan by 29 points, while today Fiji were expected to be better than Uruguay by 27. The table below shows all the RWC upsets since 2003, in descending order of expected winning margin, where today’s match appears second. We could therefore conclude that Uruguay’s wonderful victory, which set the tournament alight, is the second-biggest upset in the history of the World Cup*, just ahead of Tonga’s defeat of France in 2011. But doesn’t this all rather miss the point? By reducing matches to a single number or a ranked list, we strip them of the context that gives them meaning. While it is valuable to take a temperature check which can protect us from overreacting in an age of hot takes, I’m not sure I’d be comfortable saying that today was a bigger, or less big, or indeed better or worse upset than Japan v South Africa. For me, it’s the emotions that days like today evoke that make them memorable. So can we say that this was the biggest giant-felling in all World Cup history? No. But have I ever cried while listening to a post-match interview before? Also no. Finally, the Uruguayan captain didn’t let his speech pass without recognising the people of Kamaishi, whose painful, inspiring story is outlined beautifully here. If you haven’t read it already, you really should.

* at least since the rankings were first published in 2003, from which my analyses are derived.
they’ve made a film about it. A proper one. For cinemas and everything. It’s widely acknowledged as “one of the greatest shocks in the history of the tournament”, but just how big a shock was it? Could it have been anticipated? And what are the chances of it happening again if the two sides meet in the knockout stages of the 2019 tournament? In my last post I made the case that the World Rugby Rankings represent a useful (if imperfect) proxy for the relative strength of any two teams, and can be used to contextualise performances. With a small amount of manipulation, we can begin to understand the likelihood of Japan’s victory, and by extending this approach we can assess likely outcomes of future matches, and even entire tournaments. As per my previous post, I took the results from the four previous World Cups, plotted the points differences from each match against the corresponding difference in world ranking points, and then plotted a trend line through the data. I’ve used a curved line because it fits the data better than a straight line, and because it is pragmatically sensible: closely ranked teams tend to produce close matches, while large ranking differences tend to mean someone’s in for a hiding. The trend line represents the points difference we would expect in a match between two teams given their relative difference in ranking points. The data above this line represent matches where the higher-ranked team performed better than expected. Conversely, the data below the line illustrate where the higher-ranked team performed less well than expected. Finally, the data below the horizontal axis represent upsets, where the lower-ranked team won. At the start of the 2015 RWC, South Africa had 13.09 more ranking points than Japan (85.15 – 72.06). Reading off our trend line, we would therefore have expected the Springboks to beat Japan by about 29 points.
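The trend-line step can be sketched with a simple polynomial fit. The numbers below are illustrative placeholders rather than the real 2003–2015 match data, but the shape is the same: a second-degree fit lets the expected margin steepen as the ranking gap grows.

```python
import numpy as np

# Illustrative (ranking-point gap, winning margin) pairs -- placeholder
# values, not the actual RWC 2003-2015 results.
rank_gap = np.array([1.2, 3.5, 5.0, 8.1, 10.4, 13.1, 16.0, 20.2, 25.5])
pts_diff = np.array([3.0, 5.0, 9.0, 15.0, 22.0, 29.0, 38.0, 52.0, 70.0])

# Fit a curved (quadratic) trend line rather than a straight one:
# closely ranked teams produce close games, big gaps produce hidings.
coeffs = np.polyfit(rank_gap, pts_diff, deg=2)
trend = np.poly1d(coeffs)

# Expected margin for the 2015 South Africa v Japan gap of 13.09 points
expected_margin = trend(13.09)
```

Reading the fitted curve at a gap of 13.09 then gives an expected margin in the high twenties, in line with the ~29 points quoted above (the exact value depends entirely on the data you fit).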
If we take all of the World Cup upsets since 2003 and compare them similarly, this is indeed the biggest upset by this measure. Those of you of a morbid disposition might like to hunt down your own personal heartbreak from the table below, where I’ve listed all of the upsets ranked from biggest to smallest. A more instructive approach is to assess the prediction error of each match, comparing the actual match points difference with what was predicted. If we sort our table by this metric, Japan v South Africa (2015) is actually only the fourth most remarkable reversal since 2003, behind France v Argentina (2011, bronze medal match), Argentina v Ireland (2015, QF) and Ireland v Australia (2011, pool stage).
What were the chances of that happening?

[Author’s note: From here on in, I’m going into more methodological detail than I normally would. I’m aware that this won’t be of interest to everyone, but I want to be clear about where the numbers in other posts come from. I think transparency is important in prediction, and I want people to know what method I’ve employed when they’re interpreting any predictions I make. Budding analysts might find the approach useful in their development, and equally, readers might well share superior approaches.]

If we look at the differences between our match predictions and actual points differences for all matches, not just the upsets, we can start to do some pretty cool stuff. And by “cool” I mean of course “deeply uncool to most people, but potentially interesting to some.” Plotting all of these prediction errors in a histogram (below), we can see that the data are more or less normally distributed around zero (which means there’s not a systematic bias in the model – this is good), and have a certain amount of spread. This is our prediction uncertainty. I’ve used the same colour coding as before – all results with a negative prediction error represent matches where the higher-ranked team performed less well than expected. In some cases (but not all) this results in an upset. If we design a population of random numbers such that it replicates the characteristics of this data set, we can create a data-set of likely outcomes for future matches. All we do here is take numbers from our artificial population (mean = 0, SD = 17) and add them to the points difference predicted by our trend line for that match-up. Repeating this process many times for a match, we get a realistic distribution of possible match results around our predicted score-line. The proportion of predicted wins and losses for each team in a match will then vary according to this distribution’s shape and the distance of its mid-point from zero.
We can calculate the chance of victory for either side by simply counting all the occurrences either side of zero, and dividing those numbers by the number of iterations. For reference, I used 1,000,000 iterations (below that, my results weren’t that repeatable from run to run). Applying this approach to Japan v South Africa (RWC 2015), we get a distribution of possible results around a likely scoreline of South Africa by 29. Approximately 5% of simulated results have a negative points difference, in other words giving Japan a 5% (or 1 in 20) chance of victory. While this is definitely heavily weighted in South Africa’s favour, it’s considerably better than the “this is never going to happen” chance I was still giving Japan at half-time.
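A minimal sketch of this Monte Carlo step, using the mean-zero, SD-17 error population described above (the margin of 29 is the trend-line prediction for the 2015 match):

```python
import random

def upset_probability(predicted_margin, error_sd=17.0,
                      iterations=1_000_000, seed=42):
    """Estimate the lower-ranked team's chance of winning by adding
    normally distributed prediction error to the trend-line margin and
    counting the simulated results that land the wrong side of zero."""
    rng = random.Random(seed)
    upsets = sum(1 for _ in range(iterations)
                 if predicted_margin + rng.gauss(0, error_sd) < 0)
    return upsets / iterations

# Japan v South Africa, RWC 2015: expected Springbok margin of 29
p_japan = upset_probability(29)
```

The analytic shortcut `statistics.NormalDist(29, 17).cdf(0)` gives about 4.4%, so the simulated figure settles close to the “approximately 5%” quoted above.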
Prediction Time

We can, of course, use exactly the same method for prediction. If Japan were to meet South Africa in a re-run of this game in the quarter-finals of the 2019 RWC (which is not that fantastical a proposition), today’s World Rugby ranking points give Japan an improved chance of staging an upset. Comparing the two visualisations, we can see that South Africa’s most likely winning margin has reduced from 29 to 23, but also that the distribution has shifted in favour of Japan. This means that a higher proportion of simulated matches result in a Japanese victory (11%), and the chance of an upset (assuming this match were played) has improved from a 20-1 shot to about 9-1. Now, none of this accounts for Japan’s (presumed) home advantage, nor does it factor in the Bokke being far less likely to underestimate the Brave Blossoms a second time around. What it does mean, though, is that the gap between the two nations has narrowed considerably since that memorable day in Brighton. Anyway, this is all very good, and hopefully of interest to some. Lots of people expand this approach to construct simulations of entire tournaments, using the fixture list and draw to run tens or hundreds of thousands of iterations of the tournament. Adding up the outcome stats gives the relative likelihood of various outcomes (champion, semi-finalist, pool winner, etc.). While this is entertaining, for me the power is in its provision of a base-line from which we can explore other effects.
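Extending the single-match simulation to a whole tournament is mostly bookkeeping. The sketch below runs a hypothetical four-team knockout: the ranking points are placeholders, and the fitted trend line is replaced by a simple linear stand-in (2.2 points of margin per ranking point of gap), so treat the outputs as illustrative only.

```python
import random
from collections import Counter

# Placeholder ranking points for a hypothetical semi-final line-up
ratings = {"New Zealand": 92.1, "Wales": 89.4, "England": 88.1, "Japan": 79.3}

def match_winner(a, b, rng, sd=17.0):
    # Linear stand-in for the fitted trend line, plus the same
    # normally distributed prediction error as before
    margin = 2.2 * (ratings[a] - ratings[b]) + rng.gauss(0, sd)
    return a if margin > 0 else b  # draws ignored for simplicity

def knockout_champion(rng):
    finalist_1 = match_winner("New Zealand", "Japan", rng)
    finalist_2 = match_winner("Wales", "England", rng)
    return match_winner(finalist_1, finalist_2, rng)

rng = random.Random(1)
champions = Counter(knockout_champion(rng) for _ in range(100_000))
# champions now maps each team to its title count over 100,000 brackets
```

Dividing each team’s title count by the number of iterations gives the relative likelihood of each outcome, exactly as described above.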
A note of caution

To round this out I want to highlight the inherent assumptions in my approach, and what they mean for interpreting the results.
- That the underlying model accurately represents each competitor nation. We have previously established that the rankings provide a reasonably representative generic model of outcome v relative team strength, with a certain level of variance. However, France have a reputation for being unpredictable. Meanwhile, Argentina might reasonably claim to be consistent over-performers. If these and other team-specific effects are present, they are not factored into this approach, and the predictions are weaker for it.
- That home advantage isn’t a factor. Is it a benefit? Is it a disadvantage? Likely different for any given team. Either way, it isn’t accounted for here. Maybe one to explore further in another post.
- That what happened before predicts what happens next. The data set we have used here is at a minimum four years old. A quarter of the data-set is sixteen years old. In that time, squads, players, coaches and even the laws themselves have changed, many times over. It is entirely possible that we’ve developed a model on a set of data that no longer reflects the nature of the game.
It would be fair to say the World Rugby rankings have been getting some attention in the last few weeks. Wales (potentially briefly) ended New Zealand’s near-decade-long run as the Best Rugby Team in all the World, and a lot of people have been getting quite sniffy about it. With Ireland, England, Wales and New Zealand all potential holders of the coveted-but-apparently-not-coveted-because-the-rankings-are-meaningless number one spot during August, considerable shade has been cast on the validity of the rankings, not least from World Rugby Vice President Agustin Pichot, who this week called the rankings “ridiculous”. Coincidentally, at the time of writing Argentina are ranked 11th in the World. Because I like this sort of thing, I’ve been exploring how the historic rankings (from immediately prior to previous editions of World Cups) have stacked up against actual tournament and match results, to tease out if there is value to be had in them, and what this might tell us about likely outcomes of the 2019 RWC. As the rankings were first published in 2003, I’ve concentrated on the 2003, 2007, 2011 and 2015 tournaments. Over the next few weeks I’ll be sharing how well the rankings have previously predicted tournaments and matches, what they can tell us about upsets, and how today’s rankings can be used to predict likely outcomes of the 2019 Rugby World Cup. I might even put my head on the block and make some predictions.
“All models are wrong but some models are useful.” – George E. P. Box in Statistical Control: By Monitoring and Feedback Adjustment

Contrary to Pichot’s opinions, when compared to actual World Cup performance, the pre-tournament rankings have proven a reliable barometer of the general world order since they were introduced in 2003. The 2003, 2011 and 2015 tournaments were all won by the team ranked number 1 at the outset (England, New Zealand and New Zealand respectively). Only the 2007 edition differed, with South Africa beginning the tournament ranked 4th. At the individual match level, the rankings are impressively powerful indicators of likely match outcomes. Even without accounting for home advantage, the higher-ranked team won 86% of World Cup matches: of 192 matches, there were 165 victories for the higher-ranked side, two draws (Japan v Canada on both occasions; 2007 and 2011) and 25 upsets (13%). Breaking the data down further and looking at individual tournaments, the rankings’ prediction rate is also consistently good.
- 2003 – 89.6% (43/48 matches correctly predicted; 5 upsets)
- 2007 – 75.0% (36/48 matches correctly predicted; 11 upsets; one draw)
- 2011 – 89.6% (43/48 matches correctly predicted; 4 upsets; one draw)
- 2015 – 89.6% (43/48 matches correctly predicted; 5 upsets)
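As a quick sanity check, the per-tournament figures above tally exactly with the overall numbers quoted earlier (165 wins, two draws and 25 upsets across 192 matches):

```python
# (correct predictions, upsets, draws) per tournament, from the list above
tournaments = {
    2003: (43, 5, 0),
    2007: (36, 11, 1),
    2011: (43, 4, 1),
    2015: (43, 5, 0),
}

wins = sum(w for w, _, _ in tournaments.values())
upsets = sum(u for _, u, _ in tournaments.values())
draws = sum(d for _, _, d in tournaments.values())
matches = wins + upsets + draws  # 48 matches in each of four tournaments

prediction_rate = wins / matches  # 165 / 192, roughly 86%
```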