With Selection Sunday a little over a week away, conference tournaments represent the last chance for teams to improve their tournament résumes. For the nation’s best, they’ll look to improve their standing in hopes of earning a No. 1 seed. But for the bubble teams, a deep tournament run could be the difference between the Big Dance and the NIT.
As you know, projecting tournament teams (aptly called “Bracketology” by ESPN) has become an art form. ESPN’s head “bracketologist,” Joe Lunardi, has a history of correctly projecting the now-68-team field and has averaged less than one miss over the past eight seasons. Last season, Lunardi incorrectly projected only one team. ESPN loves to trumpet Lunardi’s accuracy, though once you take away the 31 automatic bids and the 25-30 teams who “should be in,” the only real difficulty lies with the last four or so teams in. Regardless, Lunardi’s record is still noteworthy, especially when you consider he’s not a member of the tournament selection committee.
So how are Lunardi and other bracketologists so accurate? Well, advanced metrics play an important role, especially a team’s Ratings Percentage Index (RPI).
What is RPI?
There’s much talk about the RPI this time of year, but it’s unclear to most what the rating really means. The NCAA instituted the RPI in 1981 as a method for selecting the field of what was then 64 tournament teams. It began as a way of selecting just the at-large teams (those without an automatic bid), but it is now also used in seeding tournament teams.
The basic formula for RPI is the following:
- 25 percent a team’s winning percentage
- 50 percent the average win percentage of opponents
- 25 percent the average win percentage of opponents’ opponents
I could bore you with a mathematical example, but 1) I’d probably mess it up and 2) they do it better here.
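That said, the weighting itself is simple enough to sketch in a few lines of code. The numbers below are made up for illustration, and this skips the wrinkles in the NCAA’s actual formula (such as weighting road wins more heavily than home wins):

```python
def rpi(wp, owp, oowp):
    """Combine the three components with the 25/50/25 weighting.

    wp   -- the team's own winning percentage
    owp  -- opponents' average winning percentage
    oowp -- opponents' opponents' average winning percentage
    """
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Hypothetical team: wins 70% of its games, opponents win 55%,
# opponents' opponents win 50%.
print(round(rpi(0.70, 0.55, 0.50), 4))  # 0.575
```

Notice that the team’s own record contributes only the first term; the other two terms, three-quarters of the total, describe its schedule.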
How Important Is The RPI?
This is up for debate.
It seems like in years past, the RPI was all we heard about when comparing teams’ tournament résumés (something along the lines of “Team A has a Top 40 RPI, but too many losses to teams ranked 100-150 in the RPI”). In recent years, many in the CBB community have begun to distance themselves from the rating or to point out its inconsistencies (a major one being that teams with a high strength of schedule tend to finish high in the RPI rankings, regardless of record).
In recent years, the RPI has been a pretty fair indicator of seeding at the top of the tournament. This is a quick chart of the Top 10 teams in the final RPI rankings last year and their seeding in the tournament:
Last year was one of the better ones for the RPI Top 10. It included all four #1 seeds, all four #2 seeds, and two of the #3 seeds. Nearly perfect.
But when you get down to the bubble teams, it’s clear that many more factors come into play. Last year was the first time that the NCAA actually released its list of the last four teams in and out of the tournament. The phrase comes up quite often on ESPN (if you haven’t noticed, tune in next Saturday – the teams will change a few times throughout the day). Take a look at the last four in and the last four out and their RPIs (in order of committee decision – bottom teams are last in/first out):
The numbers seem cut and dried – the last four in are ranked higher than the last four out. But there’s more to it once you see the other teams with high RPIs that missed the tournament (courtesy of ESPN.com):
Marshall was the lone team with a higher RPI than the last four in, but numerous teams with higher RPIs were ruled out before the committee’s last four out.
So the short answer is that the RPI is highly regarded by analysts and the committee, but it’s not the end-all, be-all. The “eye test” still rules, but the RPI can help the committee break a stalemate between bubble teams.
Problems With The RPI
A quick Internet search of the heading above will bring forth numerous criticisms of the RPI: it doesn’t contain a human element, it doesn’t factor in things like margin of victory, and so on. But criticisms such as these can cut either way – adding too much of a human element or devaluing teams that win close games can create problems of its own.
But one notable flaw with the RPI is in the formula itself. Scroll back up and look at it one more time and notice that 75 percent of the formula is based on the quality of a team’s opponents rather than the success of the team itself. This is widely believed to be the reason that Iona was chosen for last year’s tournament over Drexel. Iona’s non-conference SOS was 43rd to Drexel’s 222nd – a sizable difference, which would certainly show up in the RPI rankings. But a closer look shows that not only did Drexel have better wins than Iona, Iona also had worse losses. Unfortunately, that was only a quarter of the equation.
So in theory, a team could load up its non-conference schedule with good teams, win a few but lose a bunch, and still come out with a high RPI. A team like NC State went 0-8 against the RPI Top 50 last season and still earned a tournament bid, mostly because it played good teams. It’s a little strange to think that the actual result is only 25 percent of the equation.
RPI of Current Bubble Teams
The RPIs of this year’s bubble teams (according to Lunardi) show that the correlation becomes much less direct the further down the rankings we go (RPI rankings as of March 9 at 2:30 a.m.):
It should be noted that 1) these RPI rankings are still subject to change, as there are still a few more games to play, and 2) Lunardi’s projections aren’t necessarily 100% accurate. But there’s still quite a discrepancy here. An RPI Top 50 team such as Southern Miss is currently on the outside, while a team like Virginia is in with an RPI of 70. It’s also interesting that two of the four teams “out” are from one conference (the SEC). That should be some tournament to watch. But the lesson here is that while the RPI may be the most important metric used by the committee, its influence wanes the further down the list of eligible teams you go.