
The Glass Slipper Story: Analyzing the Madness in the 2013 NCAA Tournament


Cinderella showed up early and often during the first weekend of the 2013 NCAA Tournament. Florida Gulf Coast stole the show with their glass slippers, becoming the first-ever 15 seed to reach the Sweet 16. But don’t let that overshadow what happened in the West Region: Wichita St and La Salle both arrived in a pumpkin-turned-carriage, and now the Shockers are a game away from the Final Four! And don’t forget about Harvard just because the clock struck midnight on them first. They were at the ball, too! Madness indeed.

In the world of statistics, we have another word for this “madness.” It’s called variation. In the quality world, variation is usually bad. You don’t want there to be large differences in the length of the part you’re making or in the weight of the bag of food you produce. But in the world of sports, variation makes for great entertainment. If the better teams always won, what would be the point of watching? We’d have to rename it March Monotony. Luckily, that’s not the case, as this year’s tournament clearly shows. But how much “madness” did we actually have this year? And was it the most ever?

How to Judge the Madness

Normally, people judge the madness in the tournament by looking at the seeds. Here is a list of the craziest Sweet 16s by seed. 2013 ranks 5th. But using the seeds doesn’t quite tell us the whole story. For example, Minnesota was an 11 seed this year, but was actually favored over 6-seed UCLA. And Oregon was a lot better than your average 12 seed, as they were 26-8 on the season and won the Pac-12 tournament. So I’m going to ignore seeds and instead use the probability that one team has of beating the other.

I calculated the probabilities using a regression model that takes the rankings of both teams and returns the probability that one beats the other. I used the Sagarin Ratings for the rankings, since I determined earlier in the year that they were the most accurate ranking system. You can get all the data here. Now let’s get to the statistics!

For each of the first 48 games in the NCAA tournament (I didn’t count the First Four), I calculated the probability one team had of beating the other. Then I grouped the probabilities into categories (50 to 59, 60 to 69, and so on). So if there are 10 games in the “80 to 89” category, you would expect the favorite to win 8 or 9 of those games. Now we can compare what the model predicted to what actually happened.
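If you want to play along in Python rather than Minitab, here’s a rough sketch of that probability-and-binning step. The post doesn’t spell out the regression equation, so the logistic curve, its scale parameter, and the column names below are illustrative assumptions rather than the model I actually fit.

```python
# A minimal sketch of the probability-and-binning step, NOT the post's exact model.
# Assumptions: a logistic curve on the Sagarin rating difference gives each
# favorite's win probability; the "games" data frame and its column names
# (favorite_rating, underdog_rating, favorite_won) are hypothetical.
import numpy as np
import pandas as pd

def win_probability(favorite_rating, underdog_rating, scale=0.1):
    """Logistic curve on the rating gap; 'scale' would come from the fitted regression."""
    gap = favorite_rating - underdog_rating
    return 1 / (1 + np.exp(-scale * gap))

games = pd.DataFrame({
    "favorite_rating": [92.1, 88.4, 85.0, 90.3],   # made-up Sagarin-style ratings
    "underdog_rating": [83.5, 86.0, 82.2, 79.9],
    "favorite_won":    [1, 0, 1, 1],
})

games["prob"] = win_probability(games["favorite_rating"], games["underdog_rating"])

# Group the probabilities into the post's categories (50 to 59, 60 to 69, ...)
bins = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
labels = ["50 to 59", "60 to 69", "70 to 79", "80 to 89", "90 to 99"]
games["category"] = pd.cut(games["prob"], bins=bins, labels=labels, right=False)

# Compare expected wins (sum of probabilities) to observed wins in each category
summary = games.groupby("category", observed=True).agg(
    games_played=("favorite_won", "size"),
    expected_wins=("prob", "sum"),
    observed_wins=("favorite_won", "sum"),
)
print(summary)
```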

[Bar chart: observed vs. expected winning percentage for the favorites in each probability category]

We can clearly see why there was no shortage of Cinderellas this year! Only 54% of the teams favored to win between 60% and 69% of the time actually won. And things weren’t any better for the teams favored to win 70% to 79% of the time, as only 58% of them won. That’s a lot of upsets! The 70% category in particular seems pretty low, so let’s break it down further.

There were 12 games in the “70 to 79” category, and the favorite’s average probability of winning was 74%. Of those 12 games, the favorite won only 7. The 5 upsets were Ole Miss, Harvard, Wichita St (twice...the computers loved Pitt), and Florida Gulf Coast (over San Diego St; their win over Georgetown was the lone upset in the “80 to 89” group). So how unlikely was it that only 7 favorites would win in this group of games? We can use a probability distribution plot to find out.

[Distribution plot: binomial distribution for 12 games with a 74% chance the favorite wins, with the 5-upset outcome shaded]

This plot shows the probability of each possible outcome if 12 games were played with the favorite having a 74% chance of winning each one (I know the probability varied from game to game, but I’m using a constant value to satisfy the assumptions of the binomial distribution). The red area shows that the probability of exactly 5 upsets is 11.43%.
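You can also verify the numbers on the plot directly from the binomial distribution. Here’s a quick sketch; it assumes SciPy is available, and the values shown in the comments are rounded.

```python
# A quick check of the numbers shown on the distribution plot, using the same
# simplification: 12 games, each with a constant 74% chance that the favorite wins.
from scipy.stats import binom

n, p = 12, 0.74

print(binom.pmf(7, n, p))   # exactly 7 favorites win (5 upsets): about 0.114
print(binom.pmf(9, n, p))   # the most likely outcome, 9 favorites: about 0.26
print(binom.pmf(11, n, p))  # only 1 upset: about 0.11, nearly the same as 5 upsets
```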

That’s pretty low, but look at the rest of the plot. The most likely outcome is that 9 favorites would win, and even that only has a probability of about 25%. And having 5 upsets is just as likely as having only 1 upset. If only 1 of these upsets had occurred, sports analysts would have been saying this tournament was boring because nothing unpredictable happened. But in fact, that would have been just as unpredictable as what did happen!

This all just goes to show how hard it is to predict the tournament. Even if you’re given 12 games and told the favorite will win 74% of the time, you have no better than a 25% chance of picking the correct number of upsets. And even if you do, you then have to be lucky enough to pick the specific games where the upsets will occur.

Why do you have to be so difficult, Cinderella?

But back to the data: we saw that there was a high number of upsets in both the “60 to 69” and “70 to 79” categories this year. Let’s see if the same thing happens in other tournaments that had high seeds in the Sweet 16.

You Get a Glass Slipper! And You Get a Glass Slipper! And You Get a Glass Slipper!

We keep hearing that this is the wackiest Sweet 16 since 2000, so let’s check out that year. 2000 had only two 1 seeds, one 2 seed, and one 3 seed advance to the Sweet 16. Surely the observed probabilities for the higher categories will be crazy here (and again, these numbers are only through the first weekend of the tournament).

[Bar chart: observed vs. expected winning percentage for the favorites in each probability category, 2000 tournament]

Wait, what is this? I thought everybody was getting glass slippers...I’ve been misled! This chart shows the tournament went almost exactly as we’d expect. And the two highest groups show there was actually a lack of upsets!!!!

This just shows how seeds don’t tell the whole story. The two 8 seeds that beat the 1 seeds were Wisconsin and North Carolina, not exactly who you would think of as Cinderellas (both teams went on to reach the Final Four). The three 3 seeds that lost all did so to 6 seeds: not exactly major upsets. And there were only two double-digit seeds in the Sweet 16 (both of them 10 seeds).

In fact, according to the Sagarin Ratings, there were only 11 upsets in the first two rounds of the 2000 NCAA tournament, compared to 15 this year. So before we go any further, let’s see if any other year can beat 2013 on sheer number of upsets. I’ll stick to years at the top of the list of craziest Sweet 16s.
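Here’s one way that tally could be reproduced in Python. It’s only a sketch: the “games” data frame and its column names are hypothetical stand-ins for however the game results and Sagarin ratings are actually stored.

```python
# A sketch of the upset tally, assuming one row per tournament game with a "year"
# column and Sagarin-based ratings for the winner and loser. Column names are
# hypothetical, not the post's actual data layout.
import pandas as pd

def count_upsets(games: pd.DataFrame) -> pd.Series:
    """An upset = the team with the lower Sagarin rating wins the game."""
    upset = games["winner_rating"] < games["loser_rating"]
    return upset.groupby(games["year"]).sum()

# Example with a few made-up rows
games = pd.DataFrame({
    "year":          [2013, 2013, 2000, 2000],
    "winner_rating": [78.2, 91.0, 89.5, 80.1],
    "loser_rating":  [85.6, 84.3, 83.0, 86.4],
})
print(count_upsets(games))
```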

[Tally: number of upsets in the first two rounds, by tournament year]

Sagarin Ratings only go back to 1999, so I couldn’t do any earlier years. 2001 and 2012 are lower on the list, but I added them because 15 seeds beat 2 seeds in both years. I also added 2009 as a fun comparison because it was the chalkiest Sweet 16 ever.

We see that no year is able to top the 15 upsets that occurred in this tournament. In addition, the 2010 tournament (6th on the list of craziest Sweet 16s) had the same number of upsets as the Sweet 16 with the most chalk! That just goes to show the 2010 tournament had some terrible seeding.

When we break the statistics down by our categories, there is only one year that had an upset in the “90 to 99” group (2012, when Norfolk St beat Missouri). And no other year had more upsets than 2013 in the “80 to 89” group. But do any other years have 2013 beat when it comes to upsets in the “60 to 69” and “70 to 79” categories? Or have we just witnessed the craziest opening weekend ever (well, since 1999)? The next chart will tell us.
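Before we look at that chart, here’s how the year-by-category breakdown could be computed with a simple cross-tabulation. Again, this is just a sketch: the little “games” table below is made up, standing in for the real game-by-game data.

```python
# A sketch of the year-by-category breakdown, with a hypothetical "games" table
# that already carries each game's probability category and an upset flag (1/0).
import pandas as pd

games = pd.DataFrame({
    "year":     [2013, 2013, 2013, 2000, 2000],
    "category": ["60 to 69", "70 to 79", "70 to 79", "60 to 69", "80 to 89"],
    "upset":    [1, 1, 0, 0, 0],
})

# Rows = tournament year, columns = probability category, cells = upset count
upsets_by_year = pd.crosstab(games["year"], games["category"],
                             values=games["upset"], aggfunc="sum")
print(upsets_by_year)
```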

[Bar chart: number of upsets in each probability category, by tournament year]

A few other years come close to 2013 in the “60 to 69” group, but none beat it. And no other year even touches the 5 upsets that have occurred in the “70 to 79” category. I think that does it. The clock just struck midnight, and 2013 is the last one dancing. So congrats Florida Gulf Coast and company, you’ve made 2013 the craziest Sweet 16 ever. I just hope you saved some glass slippers for next year! 

Photograph by Glamhag.  Licensed under Creative Commons BY-SA 2.0.

