One thing I’m not a fan of is when rules are seemingly bent every which way for purposes that seem to be less about the greater good and more about self-promotion. Now, I’ll be the first to admit that sometimes bending the rules can lead to good things, especially when the rules weren’t in the ballpark of fair to begin with. I did that plenty when I was a software engineer: not breaking the rules outright, but bending them in ways I knew perfectly well weren’t the intent of the rule. What’s crazy is that the rules within journalism seem to be more defined and more grounded in common sense than anything else I’ve run across. With some rules it’s clear where the line is, why you should never cross it, and what the original intent of the rule was. And when you see people sorta go against the spirit of those rules, it can be irritating as hell, because you know that they know better. That seems to be the case for me with The Washington Post and Metacritic.
Let’s start where I actually began. Some of you may know that I do YouTube as my “primary” job and do TechRaptor stuff as well. So I constantly hunt for indie games to request keys for my channel, and I came across a game called Red Goddess: Inner World while browsing Steam’s upcoming games. I try to do some research before covering a game, and I noticed that it first came out on the PlayStation 4. With that information, I went to Metacritic and looked at some of the review scores. This is actually a little different for me; usually the games I cover don’t have any reviews when they first come out, but for some odd reason I decided to take a look at the scores this time, and one review … well, it sorta stood out on the front page.
Now, if you go to more reviews, the next lowest was a 30 from GameSpot. But this really intrigued me. What made it this bad? It’s very rare to see a score that low, especially from big publications, so I went over and decided to read the full review … and that’s where things got troubling. Why? Because the full review didn’t have a review score. You can check it out here if you want to look for yourself. I looked up and down, far and wide, and there was no review score in sight.
Now, do reviews need review scores to be effective? Not at all. In fact, I’d argue that they sometimes cause more harm than good. They can be useful for judging games against each other, but games are too complicated and too vast in their nature to really boil down to a mathematical formula. It’s part of the strength of video games that the experience can be so different from player to player. But with that said, this review was being listed on Metacritic with a score that seemingly appeared from nowhere, and considering that Metacritic has such an influence on the industry and is really about review scores in general … this was troubling. What was worse was that this wasn’t a one-off situation. In fact, most if not all of The Washington Post’s reviews didn’t have scores on the actual review side. From Majora’s Mask 3DS to Bloodborne, none of these games had review scores.
Isn’t that a problem in Metacritic’s system? I mean, it’s pretty clear from their About section that a review must be scored: “Metacritic’s mission is to help consumers make an informed decision about how to spend their time and money on entertainment. We believe that multiple opinions are better than one, user voices can be as important as critics, and opinions must be scored to be easy to use.” And yet in this case, the score is an afterthought in the review itself. There’s no way to cross-reference the score, to confirm that yes, this truly represents the reviewer’s verdict, or that there was no mistake. Because guess what: mistakes happen, even in simple data entry.
Now look, scores have their faults and can be overrated, like I said. People like TotalBiscuit, with their first impressions or scoreless reviews, can be just as beneficial as a scored review, if not more so. But understanding what went into a score, and in particular how big of an impact something may have made, can be valuable information. I for one have experimented with scores myself at times (not on TechRaptor, due to the guidelines), but this number seemingly appeared out of nowhere. So I’ve contacted The Washington Post and Metacritic in an attempt to get answers, but as with most stories of this nature, I haven’t heard anything back after a few days. Given my history with getting answers on certain topics, I’m not exactly expecting any sort of response at this point.
And what’s more fascinating is that The Washington Post seems to be the lone wolf in this situation. Just for the sake of argument, I went through a bunch of the front page reviews that were listed, for games like Lost Dimension, King’s Quest, and Victor Vran (which, by the way, I highly recommend). Publications from all over met the requirements, from the Financial Post to GamingTrend to RageQuit, which was in a foreign language, and yet lo and behold I could still read the score. The Washington Post was the only one that did not put the score up for anyone to see, although I will admit a couple of publications could have done a better job of making the score easier to find.
But what really bothers me about this has to do with TechRaptor’s own attempt to get onto Metacritic. We missed the last deadline to submit, and that’s on us. But the questions that were posed were very specific about the kind of information you needed to provide for them to make a judgment:
6. What scoring system (if any) does your site employ? (i.e., 5 star, 4 star, 1 – 100, 1 – 10, Letter Grade, etc.):
6a. Please provide your publication’s “Scoring Explanation” if you maintain one (i.e., 10 = Excellent…. 5 = Average…..0 = Abyssmal)
6b. Has your site ever changed the score on a review once it has been published? If so, please explain the circumstances surrounding each such instance.
8. Please list the titles of the last 20 games your publication has reviewed (include SINGLE version primarily tested, link to review, and SCORE for each):
It’s pretty apparent that some of the requirements would be … well, the scores themselves. So why isn’t this a problem for The Washington Post? Did they get in when the requirements were different? And why can’t they provide the score in a simple form on the article in question? Why are the basic elements of Metacritic seemingly not followed, on something that could easily be added to the original review? I mean, you made a score. Why not put it in the review? And if you don’t like your readers being exposed to a score … why put it on Metacritic in the first place?
Now, is The Washington Post more well known than TechRaptor, and going to get more hits and views for Metacritic than us? Absolutely, there’s no arguing that; they’ve got a bigger reach and more pull. Could that be why they are given a pass on an issue like this? It’s definitely possible. But Metacritic, don’t you care about the quality of the reviews in question? If they can’t meet the simple standards that others are meeting, why have them on there? You’ve got plenty of publications, TechRaptor included, that are seemingly meeting your standards, and yet you let problems like this slide without a scratch. It makes people wonder, myself included, whether you truly believe your mission statement quoted above, or whether there’s a more … monetary reason that you’re letting this go. But hey, in the end, you’ve got to stay up and running, right?