This is Part 2 of a two-part post about conference scheduling in college sports. I submitted a version of this for inclusion in this year's Evolution of Sport competition at the Sloan Sports Analytics Conference in March. Since they didn't accept it, I decided to post it here. Part 1 can be found here.
Since college football won't be very topical in March, I want to close this discussion with an example from college basketball. In college basketball, the goal is a berth in the 68-team, single-elimination tournament. Of those 68 bids, 31 are reserved for the winners of the respective conference championships. The remaining 37 are handed out by a selection committee on the basis of each team's body of work during the season. Since no one can watch every game, the committee gets some help from the Ratings Percentage Index, or RPI.
Now, again, the NCAA doesn't spell out every detail of the RPI, and there are various bonuses for winning on the road and so on, but strength of schedule accounts for roughly 50 percent of it. And wouldn't you know it: the RPI is very consistent from year to year, with a correlation coefficient of about 0.75.
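To make that 50 percent concrete, here's a minimal sketch of the commonly published RPI weighting: 25% a team's winning percentage, 50% its opponents' winning percentage (the strength-of-schedule piece), and 25% its opponents' opponents' winning percentage. This is a simplification — the data format is invented for illustration, and the NCAA's home/road weighting and bonus adjustments are deliberately left out.

```python
def win_pct(team, results, exclude=None):
    """Winning percentage for `team`, optionally excluding games
    against `exclude` (opponents' records are conventionally computed
    without their games against the team being rated)."""
    games = [(opp, won) for opp, won in results[team] if opp != exclude]
    if not games:
        return 0.0
    return sum(won for _, won in games) / len(games)

def rpi(team, results):
    """Simplified RPI: 25% WP + 50% opponents' WP + 25% opponents'
    opponents' WP. `results` maps team -> list of (opponent, won)
    tuples, with won as 1 or 0. Home/road weighting is omitted."""
    wp = win_pct(team, results)
    opps = [opp for opp, _ in results[team]]
    owp = sum(win_pct(o, results, exclude=team) for o in opps) / len(opps)
    oowp = 0.0
    for o in opps:
        o_opps = [q for q, _ in results[o]]
        oowp += sum(win_pct(q, results) for q in o_opps) / len(o_opps)
    oowp /= len(opps)
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp
```

Half of that number is outside your control on game day — which is exactly why who you schedule matters so much.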
What's at stake? Well, if you miss the NCAAs, there's always the NIT, and I've seen teams estimate that they can clear about $25,000 to $50,000 from a good NIT run. But make the NCAAs, and Forbes estimates that your conference gets just under $2 million per game to distribute to its members. Remember that year VCU went to the Final Four? That brought in a little under $9 million for the Colonial Athletic Association.
So yes, scheduling is very important in college basketball, and I've read articles that suggest the Missouri Valley Conference punishes members that schedule soft out-of-conference games that bring down their strength of schedule. But could they be doing more?
Here we have an example of another mid-major conference, the Atlantic 10, from the 2010-2011 season. The previous year, 5 of the A-10's 14 teams won at least 19 games. That's usually good enough to be a borderline tournament team, but the A-10 only got 3 bids. In 2010-2011, the A-10 had 6 teams with at least 23 wins. And they got ... the exact same number of bids, for the exact same teams. Here you can see their conference strength of schedule as compared to their previous season's performance, and no surprise: winning doesn't correlate with the next season's strength of schedule. Note that each team played 16 conference games: you play everyone once, and then three teams twice. For the record, the teams in gold made the NCAAs in 2011, and the teams in blue missed out.
But if we take last year's performance into account, we see a drastic improvement in the average opponent's ranking for each of the top six schools – over a 60% improvement, in fact. And again, with the exception of the outlier of Fordham, strength of schedule matches up better with last year's play all around. Now, this won't guarantee you more tournament bids, but it will improve your rankings and your public perception, and that might just be enough to get you off that dreaded bubble.
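The scheme above can be sketched as a simple pairing problem: every team plays everyone once, and the conference assigns the three repeat games, preferring matchups between last year's best teams. Here's a greedy sketch under stated assumptions — the win totals are illustrative, not the actual 2009-10 A-10 records, and, like the post's own back-of-the-envelope schedules, a greedy pass isn't guaranteed to be optimal or even to fill every team's quota.

```python
from itertools import combinations

# Illustrative prior-season win totals for the 14 A-10 teams
# (invented numbers, for demonstration only).
wins = {
    "Temple": 29, "Xavier": 26, "Richmond": 26, "Rhode Island": 26,
    "Dayton": 25, "Saint Louis": 23, "Charlotte": 19,
    "Duquesne": 16, "George Washington": 16, "St. Bonaventure": 15,
    "UMass": 12, "La Salle": 12, "Saint Joseph's": 11, "Fordham": 2,
}

def pick_repeat_games(wins, repeats_per_team=3):
    """Greedy sketch: assign the repeat games to the matchups with the
    highest combined prior-year win totals, so good teams inflate each
    other's strength of schedule. Each team gets at most
    `repeats_per_team` repeats; home/away balance is ignored."""
    quota = {t: repeats_per_team for t in wins}
    pairs = sorted(combinations(wins, 2),
                   key=lambda p: wins[p[0]] + wins[p[1]],
                   reverse=True)
    chosen = []
    for a, b in pairs:
        if quota[a] > 0 and quota[b] > 0:
            chosen.append((a, b))
            quota[a] -= 1
            quota[b] -= 1
    return chosen
```

A real conference office would want something stronger than this greedy pass (a degree-constrained matching solved with an optimizer, say, plus travel and home/away constraints), but even this crude version clusters the strong teams' extra games together instead of spreading them at random.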
Now obviously this is not a finished product. The calculations and the sample schedules I used were strictly back-of-the-envelope stuff: there's no accounting for home-field advantage, which could be useful, and I'm almost positive the results I generated aren't optimal. And I'll admit, this won't work for every conference. You'll notice I didn't include any of the Mountain West Conference teams because there's no wiggle room: there are nine teams in the conference in football, so every team plays every other team once. That's it.
But if you're the commissioner of a college sports conference – and, with all the turmoil lately, you might be one and just not know it yet – consider all your options before you set that schedule. Take a good long look at that filet mignon. And then take a good long look at those cheeseburgers.