This is Part 2 of a two-part post about conference scheduling in college sports. I submitted a version of this for inclusion in this year's Evolution of Sport competition at the Sloan Sports Analytics Conference in March. Since they didn't accept it, I decided to post it here. Part 1 can be found here.
Since college football won't be very topical in March, I want to close this discussion with an example from college basketball. Now, the goal in college basketball is a spot in a 68-team, single-elimination tournament. Of those 68 bids, 31 are reserved for the winners of the respective conference championships. The remaining 37 are handed out by a selection committee on the basis of each team's body of work during the season. Since no one can watch every game, the committee gets some help from the Ratings Percentage Index, or RPI.
Now, again, the exact formula for the RPI isn't spelled out (there are various bonuses for winning on the road and so on), but strength of schedule is estimated to account for about 50 percent of it. And wouldn't you know it: the RPI is very consistent from year to year, with a correlation coefficient of about 0.75.
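For reference, the baseline formula that's commonly cited weights a team's own winning percentage at 25 percent, its opponents' winning percentage at 50 percent, and its opponents' opponents' winning percentage at 25 percent. Here's a minimal sketch of that baseline on a made-up set of results (the team names are borrowed for flavor, the results are invented, and the road-win bonuses and other adjustments are ignored):

```python
from collections import defaultdict

# (winner, loser) results for a tiny made-up round-robin
games = [
    ("Xavier", "Temple"), ("Xavier", "Richmond"), ("Xavier", "Fordham"),
    ("Temple", "Richmond"), ("Temple", "Fordham"),
    ("Richmond", "Fordham"),
]

wins = defaultdict(int)
losses = defaultdict(int)
opponents = defaultdict(list)
for winner, loser in games:
    wins[winner] += 1
    losses[loser] += 1
    opponents[winner].append(loser)
    opponents[loser].append(winner)

def win_pct(team):
    played = wins[team] + losses[team]
    return wins[team] / played if played else 0.0

def opp_win_pct(team):
    opps = opponents[team]
    return sum(win_pct(o) for o in opps) / len(opps)

def rpi(team):
    # 25% own winning pct, 50% opponents', 25% opponents' opponents'.
    # (The real formula also excludes a team's own games from its opponents'
    # winning pct and adjusts for home/road wins; skipped here for brevity.)
    oowp = sum(opp_win_pct(o) for o in opponents[team]) / len(opponents[team])
    return 0.25 * win_pct(team) + 0.50 * opp_win_pct(team) + 0.25 * oowp

for team in sorted(opponents, key=rpi, reverse=True):
    print(f"{team:10s} RPI {rpi(team):.3f}")
```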
What's at stake? Well, if you miss the NCAAs, there's always the NIT, and I've seen teams estimate that they can clear about $25,000 to $50,000 from a good NIT run. But make the NCAAs, and Forbes estimates that your conference gets just under $2 million per game to distribute to its members. Remember that year VCU went to the Final Four? That brought in a little under $9 million for the Colonial Athletic Association.
So yes, scheduling is very important in college basketball, and I've read articles that suggest the Missouri Valley Conference punishes members that schedule soft out-of-conference games that bring down their strength of schedule. But could they be doing more?
Here we have an example of another mid-major conference, the Atlantic 10, from the 2010-2011 season. The previous year, 5 of the A-10's 14 teams won at least 19 games. That's usually good enough to be a borderline tournament team, but the A-10 only got 3 bids. In 2010-2011, the A-10 had 6 teams with at least 23 wins. And they got ... the exact same number of bids, for the exact same teams. Here you can see their conference strength of schedule as compared to their previous season's performance, and no surprise: winning doesn't correlate with the next season's strength of schedule. Note that each team played 16 conference games: you play everyone once, and then three teams twice. For the record, the teams in gold made the NCAAs in 2011, and the teams in blue missed out.
But if we take last year's performance into account, we see a drastic improvement in the average opponent's ranking for each of the top six schools – over a 60% improvement, in fact. And again, with the exception of Fordham (an outlier), strength of schedule matches up better with last year's play all around. Now, this won't necessarily guarantee you more tournament bids, but it'll improve your rankings and your public perception, and that might just be enough to get you off that dreaded bubble.
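To give a flavor of how a conference office could pick those repeat pairings deliberately, here's a rough sketch that frames the choice as a small integer program: every team gets exactly three repeat opponents, and the objective rewards pairing last year's strong teams with each other. To be clear, this is not the calculation behind the chart above; the 14 teams and prior-year win totals are placeholders, and it leans on the open-source PuLP solver.

```python
import itertools
import pulp

# Placeholder prior-year win totals for a hypothetical 14-team conference
prior_wins = {f"Team{i + 1}": w for i, w in
              enumerate([26, 25, 24, 23, 22, 20, 18, 16, 15, 13, 12, 10, 9, 7])}
teams = list(prior_wins)
pairs = list(itertools.combinations(teams, 2))

# x[(a, b)] = 1 if teams a and b play each other a second time
x = pulp.LpVariable.dicts("repeat", pairs, cat="Binary")

prob = pulp.LpProblem("repeat_schedule", pulp.LpMaximize)
# Objective: reward pairing last year's strong teams with each other
prob += pulp.lpSum(prior_wins[a] * prior_wins[b] * x[(a, b)] for a, b in pairs)
# Every team plays exactly three repeat games
for t in teams:
    prob += pulp.lpSum(x[p] for p in pairs if t in p) == 3

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for a, b in pairs:
    if x[(a, b)].value() > 0.5:
        print(f"{a} plays {b} twice")
```

A real version would also have to juggle travel, TV windows, and home/road balance, but the skeleton is the same: the repeat games are a knob the conference controls.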
Now, obviously this is not a finished product. The calculations and the sample schedules I used were strictly back-of-the-envelope stuff: there's no accounting for home-court or home-field advantage, which could be useful, and I'm almost positive the results I generated aren't optimal. And I'll admit, this won't work for every conference. You'll notice I didn't include any of the Mountain West Conference teams because there's no wiggle room: there are nine teams in the conference in football, so every team plays every other team once. That's it.
But if you're the commissioner of a college sports conference – and, with all the turmoil lately, you might be one and just not know it yet – consider all your options before you set that schedule. Take a good long look at that filet mignon. And then take a good long look at those cheeseburgers.
Sunday, February 10, 2013
Thursday, February 7, 2013
Optimizing Conference Scheduling for Tournament Selection: Part I, College Football
This is Part 1 of a two-part post about conference scheduling in college sports. I submitted a version of this for inclusion in this year's Evolution of Sport competition at the Sloan Sports Analytics Conference in March. Since they didn't accept it, I decided to post it here. Part 2 is due Monday.
In 2008, the Boise State Broncos of the Western Athletic Conference were ranked 9th in the final BCS standings. That same year, the TCU Horned Frogs of the Mountain West Conference were ranked 11th. Now, in part because neither school was in one of the power conferences like the SEC, both teams were passed over for the most prestigious bowls, and met in the Poinsettia Bowl. Both teams earned a payout of $750,000.
The next season, Boise State finished 6th in the BCS standings, and TCU finished 4th. This time, they met in the Fiesta Bowl, one of the four games in the Bowl Championship Series, and earned a payout of $18 million each. Same teams, very similar regular seasons, 24 times more money. 24 times! That's the difference between a filet mignon with crab meat on top at Smith and Wollensky, and two cheeseburgers – no fries – at McDonald's.
And that's just the monetary side. It doesn't even count the national exposure for recruiting, or the increase in freshman applications that typically follows athletic program success.
So, naturally, if you work for a school like Boise State or a conference like the Mountain West, you want to know, "What can I do to improve my chances to get into the biggest bowl games and get that BCS money?" My talk will describe how conferences can improve their members' chances by stacking their conference strength of schedule.
Wednesday, January 16, 2013
How Much Is a Win Worth to an NBA Team?
Last month, I used J.C. Bradbury's free agent valuation method to determine how many wins the Red Sox expected Mike Napoli and Shane Victorino to contribute to the team in 2013. That worked fine, but suppose we want to build a similar model for the NBA. Again, we'll use the basic system Bradbury outlines in "The Baseball Economist" (ch. 13). Here, Bradbury found a relationship between revenue, wins, and the size of the city a franchise plays in.
All three of those variables are readily available. For city size, we'll use the population of the metropolitan statistical area (MSA) each team plays its home games in, as reported in the 2010 U.S. Census*. Revenue is available through Forbes' Business of Basketball listings. That data is almost exactly a year old, which suggests it covers the 2010-2011 season and not the recent lockout-shortened 2011-2012 season. That's better for our purposes; I don't want the compressed schedule and reduced number of games to interfere with my results.
* - And the Canadian equivalent for Toronto, with the hope that the two have very similar methodologies.
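To sketch what that model looks like once the numbers are collected, you can regress revenue on wins and metro population and read the marginal value of a win off the wins coefficient. The figures below are placeholders rather than the actual Forbes and Census values, and Bradbury's own specification isn't this simple, but the mechanics are the same:

```python
# Simplified sketch of the revenue-on-wins-and-population regression described above.
# The numbers here are placeholders, not the actual Forbes/Census figures; a real
# run would use all 30 teams.
import numpy as np

# (wins in 2010-11, MSA population in millions, revenue in $ millions)
data = np.array([
    [62, 19.0, 226.0],   # hypothetical big-market contender
    [52,  9.5, 185.0],
    [44,  6.1, 150.0],
    [37,  4.3, 139.0],
    [24,  2.1, 109.0],   # hypothetical small-market lottery team
])
wins, population, revenue = data[:, 0], data[:, 1], data[:, 2]

# Design matrix: intercept, wins, population
X = np.column_stack([np.ones_like(wins), wins, population])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)

intercept, dollars_per_win, dollars_per_million_people = coef
print(f"Estimated marginal value of a win: ${dollars_per_win:.2f} million")
print(f"Estimated value of a million residents: ${dollars_per_million_people:.2f} million")
```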
Monday, December 10, 2012
Evaluating MLB Signings, Part 2: Madness
Last time out, we asked how good the Napoli and Victorino signings were for the Boston Red Sox. Using J.C. Bradbury's method, we established that we need to do the following:
1. Figure out how much a win is worth,
2. Figure out how much an individual player contributed to his team's wins, and
3. Convert that number of wins into a dollar value.
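Strung together with made-up numbers for the first two steps, the whole exercise ends in one multiplication and one comparison:

```python
# Toy walk-through of the three steps, with made-up numbers for steps 1 and 2.

dollars_per_win = 4.5e6       # step 1 (hypothetical): value of one marginal win
player_wins_added = 3.2       # step 2 (hypothetical): wins the player adds

# Step 3: convert wins to dollars and compare with the contract's average annual value
player_value = dollars_per_win * player_wins_added
contract_aav = 39e6 / 3       # e.g., Napoli's reported 3-year, $39 million deal

print(f"Estimated annual value: ${player_value / 1e6:.1f}M vs. ${contract_aav / 1e6:.1f}M AAV")
```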
Thursday, December 6, 2012
Evaluating MLB Signings, Part 1: Methods
After their 2012 season went down in flames, the Boston Red Sox were active during the recent winter meetings, signing 31-year-old first baseman/catcher Mike Napoli to a 3-year, $39 million contract, and 32-year-old outfielder Shane Victorino to a 3-year, $37.5 million contract. The moves were modest when compared to past offseasons, but the question remains: will the Sox get value from their new acquisitions?
Thursday, May 31, 2012
Tangled in the Rigging: Defending the NBA Draft Lottery
The NBA conference finals bring with them one of the best sideshows in sports: the NBA draft lottery, in which 14 grown men stand around awkwardly for half an hour to figure out how a bunch of ping pong balls bounced. We*, the viewing audience, are treated to a half-hour special containing some 15 minutes of talking heads speculating wildly, 2 minutes of commissioner David Stern reading franchise names, and 5 minutes of awkward interviews with team representatives. Fascinating.
* - Maybe "we" is the wrong pronoun; I mean, I didn't watch it.
But while the presentation of the lottery may not be especially compelling, the lottery itself sure is. The lottery teams (i.e., those that miss the playoffs) are ranked in inverse order of record, with the worst teams receiving the best chances at a high pick. So the team with the worst record has a 25% chance of winning the lottery, the second-worst team has a 19.9% chance, and so on down to the 14th-worst team (the best team to miss the playoffs), which has a 0.5% chance. The whole list of probabilities for this year's draft is available here.
Some have argued (with varying degrees of seriousness) that the lottery system is rigged*, and point to the fact that the worst team in the league hasn't won a lottery since the Orlando Magic won and picked Dwight Howard in 2004. But I want to stress this again, because it's important: the team with the highest probability will still lose the lottery (i.e., not get the first overall pick) 75% of the time.
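If you want to convince yourself of that, simulate the drawing. Here's a quick sketch using the weights cited above (250 combinations out of 1,000 for the worst record down to 5 for the 14th lottery team; the in-between values are the commonly reported ones for this format):

```python
# Monte Carlo sketch of the first-pick drawing, using the lottery weights
# cited above (index 0 is the team with the worst record).
import random

combinations = [250, 199, 156, 119, 88, 63, 43, 28, 17, 11, 8, 7, 6, 5]
trials = 200_000

worst_team_wins = 0
for _ in range(trials):
    winner = random.choices(range(14), weights=combinations, k=1)[0]
    if winner == 0:
        worst_team_wins += 1

print(f"Worst team won the #1 pick in {worst_team_wins / trials:.1%} of simulations")
```

Run it and the worst team lands the top pick about a quarter of the time, which is the whole point: a 25% chance is a long way from a sure thing.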
* - Maybe "we" is the wrong pronoun; I mean, I didn't watch it.
But while the presentation of the lottery may not be especially compelling, the lottery itself sure is. The lottery teams (i.e., those that miss the playoffs) are ranked in inverse order of record, with the worst teams receiving the best chances of a high pick. So the team with the worst record has a 25% chance of winning the lottery, the second-worst team has a 19.9% chance of winning the lottery, and so on down to the 14th-worst team (the last team out of the playoffs) who has a 0.5% chance of winning the lottery. The whole list of probabilities for this year's draft is available here.
Some have argued (with varying degrees of seriousness) that the lottery system is rigged*, and point to the fact that the worst team in the league hasn't won a lottery since the Orlando Magic won and picked Dwight Howard in 2004. But I want to stress this again, because it's important: the team with the highest probability will still lose the lottery (i.e., not get the first overall pick) 75% of the time.