Soccer player ratings are confusing, erratic, useless … and great.

The stadium scene.
Sept. 7, 2011, 12:26 PM

Lionel Messi Goes to 11

Soccer's player ratings are confusing, erratic, and useless. More sports should have them.

Jozy Altidore #17 of the United States advances the ball against Team Panama during the CONCACAF Gold Cup match at Raymond James Stadium on June 11, 2011 in Tampa, Florida.
Why is Jozy Altidore rated so differently among soccer analysts?

The U.S. men's soccer team lost 1-0 to Costa Rica last Friday, but the result masked some strong individual play from the Americans as the players tried to implement new coach Jurgen Klinsmann's ball-control strategy. But which players stood out? It depends whom you ask. ESPN's Jeff Carlisle rated Jozy Altidore one of the team's top performers on the night, giving him a 7 on soccer's 1-to-10 player-rating scale and noting his "excellent link play." SI.com's Steve Davis gave Altidore his lowest grade of the night, a 4, writing that the powerful forward never came close to scoring.

For Altidore to earn such vastly different ratings, you'd think he gave a truly confusing performance—perhaps he dribbled Maradona-like through the Costa Rican defense only to Ronny Rosenthal the ball off the crossbar. Or maybe he was an asset in the attacking run of play but a liability on defense and set pieces. But no—Altidore played a typically Altidorean game in which he often got to the right place at the right time but lacked the touch to finish off his chances.

Jozy Altidore's conflicting player ratings have less to do with the player than with the scale. The game of rating players from 1 to 10 is a subjective mess with no statistical value—a poor stand-in for real statistics in a game desperate for them. Despite its flaws, the scale seems to be growing in popularity.


Player ratings first appeared in the late 1970s in England. To stand out from the glut of soccer magazines in the U.K., newcomer Match hired a London-based news agency called Hayters to provide in-depth statistics for every British match, all the way down to the Scottish Second Division. Each writer was tasked with compiling the Match Facts, which included an Entertainment Rating for each game, as well as the Player Ratings, in which the starters and substitutes were given a score between 1 and 10.

Match's readership took off, and the publication soon brought Match Facts in-house. The mag also began awarding a Matchman of the Month award to the player in each division with the highest average player rating, presenting the trophies at stadiums around the country. It was a great marketing tool, says then-editor Mel Bagnall, and the subjectivity of the ratings only added to their appeal. "Match [was] the currency for debate in playgrounds, on terraces and in the pubs," he says.

Other soccer rags, including Shoot and Football Weekly, soon started offering similar athlete appraisals. Over the ensuing decades, the practice has gone worldwide. Nowadays, every soccer game of consequence—and many of no consequence—is followed by sheets of 1-to-10 player ratings from reporters and bloggers.

These ratings wouldn't be so popular if soccer fans had more stats to bat around than goals and assists. While a few advanced statistics are slowly rolling out, soccermetrics are way behind those of baseball, basketball, and even (American) football. John Godfrey of the New York Times' Goal blog says that the current stats are both limited and misleading: "A striker might score two goals in a match, but one of them could be a sloppy deflected shot off a corner kick and the other could be through a penalty kick that another player earned." In that case, Godfrey says, the player ratings are a good way to add some context, and to give the players' teammates the plaudits they deserve.


While the raters don't usually come to a consensus, they do agree when someone turns in a spectacularly bad performance. The analysts from the New York Times, Washington Post, ESPN.com, and SI.com all said that Jonathan Bornstein, the much-maligned left back, was the worst player on the field during the United States' Gold Cup loss to Mexico. Breaking with the unspoken tradition of giving all players a 3 or higher simply for competing, several analysts gave him a 2. Godfrey gave him a 1.

As Match's Bagnall hinted, controversial ratings are good for business, and that's especially true in an era when pageviews are directly related to profitability. "I get many more emails commenting on my grades than I do about all the more in-depth stuff I write put together," says ESPN.com's Leander Schaerlaeckens. "Which is probably the point." The soccer player ratings are the equivalent of NFL power rankings and the Heisman experts' poll—an ultimately meaningless exercise that exists mostly to rile up fans.

For soccer followers, the player ratings are also just confusing. SI.com's Davis consistently gives Altidore lower scores than ESPN.com's bloggers do, and he has recently started to rate most players lower. He gave Altidore a 3 and midfielder Jose Torres a 4 for their performances against Belgium on Tuesday. ESPN.com's Schaerlaeckens gave them a 5 and a 7.5, respectively.

While part of it may be that Davis thinks that Altidore is a worse player than Schaerlaeckens does—and a comparison of their ratings over the past couple of years gives that impression—it's also true that different raters have different standards. "I, for one, think players should be graded within the context of their own ability and surroundings," Schaerlaeckens says. "Others seem to take a wider approach."

That simple 1-to-10 scale, then, has to do a lot of work. A rater must evaluate a player's performance in a particular game. He also must compare that performance with those of the player's teammates, weigh it against how the player did in previous games, and place it somewhere in the context of the worldwide level of play. If you're judging both Lionel Messi and Jonathan Bornstein on the same point system, you run out of room for intra-player nuance pretty quickly. The result is a mess of precise but entirely useless numbers with no internal or external consistency.

While the player ratings are statistically worthless, they are vehicles for useful information. Beyond the simple grade, each player gets a bit of evaluative exposition. The best commentators pay attention to the player's role all over the pitch, noting off-the-ball marking and runs, and give readers fresh perspectives that they couldn't have gotten elsewhere. Jack Bell of the New York Times, for example, noted that winger Robbie Rogers didn't just fail to convert any touches into scoring opportunities against Belgium—he never got into position to receive any passes at all.

We're missing that level of individual analysis in American football, where valuable players are ignored because they don't produce easily understandable statistics. I never hear about the play of Broncos right guard Chris Kuper, even though the quality of his blocks could be the key to a Denver victory or defeat. A tip for pageview-chasing NFL writers: Start doing postgame player ratings. Fans won't be able to stop themselves from clicking, and they'll debate the scores until it's time to start the next game. And who knows, they might even learn something.