If there’s one lesson to be learned from the technology of the early 21st century, it’s that data aggregation, the force driving information empires like Google and Facebook, is a double-edged sword. On the one hand, it can help us better understand our world by arranging oceans of abstract information into something more tangible, useful and easy to swallow. But the dark side is that in our zeal to make sense of all this stuff, we oftentimes fail to account for one equally important factor: context.
Such was the mistake of popular media reviews aggregator Metacritic. The site normally pulls and averages review scores from popular film, TV, music and videogame sites to draw a larger picture of quality in the entertainment industry. But when it began to rate individual developers based on the “metascores” of the games they were involved in, Metacritic was met with some well-deserved backlash from members of the videogame development community.
Observes TIME:
In his role as writer, Ken Levine couldn’t have influenced Freedom Force vs. the Third Reich as much as he did BioShock, where he served as creative director. Under Metacritic’s system, both scores affect his career average equally.
It’s possible—hell, probable, even—that Peter Molyneux learned tons more about narrative and player agency in the three years between Black & White and the first Fable game. They’re different kinds of games, too: One’s a real-time strategy game, and the other’s a third-person RPG/action hybrid. But again, the scores don’t reflect that.
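The flaw TIME describes is easy to sketch: a plain average treats every credit identically, no matter what role the developer actually played. Here's a rough illustration in Python; the scores, roles and weights below are made-up examples, not Metacritic's actual data or methodology:

```python
# Naive career "metascore": every game counts equally,
# regardless of the developer's role on it.
def naive_average(scores):
    return sum(scores) / len(scores)

# A context-aware version might discount credits where the
# developer had less creative influence. These role weights
# are hypothetical, purely for illustration.
ROLE_WEIGHT = {"creative director": 1.0, "writer": 0.4}

def weighted_average(credits):
    """credits: list of (role, metascore) pairs."""
    total = sum(ROLE_WEIGHT[role] * score for role, score in credits)
    weight = sum(ROLE_WEIGHT[role] for role, _ in credits)
    return total / weight

credits = [("writer", 66), ("creative director", 96)]
print(naive_average([s for _, s in credits]))  # 81.0 -- role-blind average
print(round(weighted_average(credits), 1))     # 87.4 -- context shifts it
```

Even this toy version only patches one dimension (role); genre, era and team size would each need their own weighting, which is exactly the contextual rabbit hole a single number can't capture.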
Thankfully, Metacritic pulled the feature earlier this week. Just to be clear here, no one’s saying that developers shouldn’t be subject to criticism based on the projects they involve themselves with. But Metacritic’s obsession with quantifying everything represents a recurring problem in today’s info-hungry tech landscape: We lack an effective way of contextualizing data.
That’s where AIs come in. Unlike our human brains, computer systems are quite good at holding a lot of information and outputting it quickly. Finding relevancy and context? Not so much. But AIs like Watson are a baby step toward fixing that, because they’re able to generate output based on an understanding (or rather, simulated understanding) of relevancy with regard to the subject at hand.
Game making, like many things, is far too complex a process to chart empirically without a means of intelligently processing an enormous variety of factors. Among them: studio size, game genre and direction, project timescale, budget… The list could go on forever until we’ve basically simulated life itself.
But sites like Metacritic don’t really worry about stuff like that. Instead, their purpose remains deeply entrenched in the illusion that quality can be accurately represented by a 1-100 score. And the illusion works fairly well, evidenced by the many RSS-skimming internet fanboys who take videogame scores as gospel rather than taking the time to, you know, read a review.
Thar be dragons here, Metacritic. Turn back while you still can.
Via TIME Techland