A ranking system that doesn't account for margin of victory isn't particularly useful as a predictor of future results. It also hurts teams that play weaker opponents. Since conference play makes up the bulk of a team's schedule, the teams that play the weakest schedules tend to be from the weakest conferences. The most obvious example this season is Belmont, currently 53rd in the RPI. Playing in the Atlantic Sun, the Bruins had just three games this season against respectable opponents. All of these games—two against Tennessee and one against Vanderbilt—were on the road, and while Belmont was competitive in each, they lost all three. In their other 31 games, Belmont played as a tournament team should, winning 30, with 25 of those wins coming by double digits.
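To see why the RPI can't reward a season like Belmont's, it helps to look at the formula itself. As commonly published, the RPI is a weighted blend of three winning percentages and nothing else; the specific input values below are illustrative, not Belmont's actual numbers.

```python
# Sketch of the standard RPI formula as commonly published. The NCAA has
# tweaked details over the years (e.g., weighting road wins), so treat
# this as illustrative rather than the committee's exact computation.

def rpi(wp, owp, oowp):
    """Rating Percentage Index.

    wp   -- the team's own winning percentage
    owp  -- its opponents' winning percentage
    oowp -- its opponents' opponents' winning percentage

    Margin of victory appears nowhere in the formula: a 1-point squeaker
    and a 40-point rout are identical inputs.
    """
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# A team that wins 30 of 34 games can still rate modestly if its
# opponents (and their opponents) have losing records -- hypothetical
# strength-of-schedule numbers for a team in a weak conference:
print(round(rpi(wp=30 / 34, owp=0.45, oowp=0.48), 4))  # 0.5656
```

Half of the rating is the opponents' winning percentage, which is why a dominant team from a weak league is structurally capped no matter how thoroughly it wins.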
It's the kind of success that any team currently projected as an eight- or nine-seed would be expected to have against a similar schedule. Yet because of the RPI, Belmont wouldn't have been considered for an at-large bid due to its lack of quality wins. Fortunately, Belmont won its conference tournament title game by 41 points to receive an automatic bid. Still, most observers are projecting the Bruins as a 12- or 13-seed—quite a bit lower than they'd deserve if the NCAA seeded teams on the basis of how good they were rather than who they played.
There's a reasonable debate to be had about how much to reward a team, like Belmont, that beats up on inferior competition. However, there are smart ways to include margin of victory in a ranking system while continuing to keep sacred the ultimate value of winning the game. A system could be devised, for instance, that ignores how a team performs during the parts of a game that had no impact on the outcome. If a team is up 30 at the half, it should be irrelevant whether they ultimately win by 10 or 50. In this way, a system would treat a game the way players do—winning is the goal, but building a big lead is preferable to needing heroics in the final seconds.
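One crude way to sketch this idea is to credit margin of victory only up to a cap, so that piling on past a comfortable lead earns nothing extra. The cap of 10 points below is an arbitrary illustration, not any actual system's parameter.

```python
# A hedged sketch of margin-of-victory with a ceiling: winning still
# matters most (the sign of the margin is preserved), but a 50-point
# win counts no more than a comfortable one. The cap value is arbitrary.

def adjusted_margin(points_for, points_against, cap=10):
    """Return the score margin, clamped to the range [-cap, +cap]."""
    margin = points_for - points_against
    return max(-cap, min(cap, margin))

# Winning by 10 and winning by 50 are treated identically:
print(adjusted_margin(80, 70))   # 10
print(adjusted_margin(120, 70))  # 10
# Losses are clamped symmetrically:
print(adjusted_margin(60, 85))   # -10
```

A more faithful version of the in-game idea (ignoring garbage time once the outcome is decided) would need play-by-play data, but the clamp captures the same principle: build a big lead, then stop accumulating credit.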
Even if the NCAA opposes the inclusion of any context in their ratings system of choice, there are still better systems out there than the RPI. For starters, I'd direct them to the work of Jeff Sagarin or Kenneth Massey. Both Sagarin and Massey look exclusively at outcome and location and, unlike the RPI, are rooted in solid mathematical theory. Massey, for one, has an elegant way of incorporating strength of schedule. Not only does he include the location of the game, but he rewards a team more for playing a few elite teams and a few poor teams than for playing a series of mediocre teams. It's easier for a bubble team to rack up a good record against a bunch of middling squads, and Massey's system recognizes this, unlike the RPI. (And for what it's worth, Sagarin and Massey both give Belmont its due, ranking the Bruins 35th and 34th respectively.)
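The mathematical machinery behind a system like Massey's can be sketched in a few lines. The core idea (a least-squares rating, which Massey's published work is built on) is that every game becomes one equation relating the two teams' unknown ratings, and the whole schedule is solved simultaneously—so strength of schedule falls out of the math rather than being bolted on. The toy below uses outcomes only, with no location adjustment, and is my illustration of the technique, not Massey's actual implementation.

```python
# Toy least-squares rating in the spirit of Massey's method. Each game
# adds the equation rating[winner] - rating[loser] ~ 1 (outcome only),
# and we solve for the ratings that best fit every game at once. Real
# systems add home-court adjustments and more; this sketch omits them.
import numpy as np

def least_squares_ratings(games, n_teams):
    """games: list of (winner_index, loser_index) pairs."""
    A = np.zeros((len(games) + 1, n_teams))
    b = np.zeros(len(games) + 1)
    for row, (w, l) in enumerate(games):
        A[row, w], A[row, l], b[row] = 1.0, -1.0, 1.0
    A[-1, :] = 1.0  # extra equation: ratings sum to zero (pins the scale)
    ratings, *_ = np.linalg.lstsq(A, b, rcond=None)
    return ratings

# Transitive results -- team 0 beats 1, 1 beats 2, 0 beats 2 -- so the
# ratings come out in order 0 > 1 > 2:
games = [(0, 1), (1, 2), (0, 2)]
r = least_squares_ratings(games, 3)
print(np.argsort(-r))  # indices from strongest to weakest: [0 1 2]
```

Because every rating depends on every other rating through the shared equations, beating a strong opponent moves a team more than beating a weak one, which is exactly the strength-of-schedule sensitivity the RPI approximates so clumsily.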
The RPI may be just one of many tools the committee uses to do its job, but it's clearly a very important one. It's also a tool that was invented in the age of punch cards. Since then, we've learned a lot more about what makes a good team, and more sophisticated methods have been developed to quantify that goodness. Perhaps someday the NCAA will take advantage of this information.