
Game Companies Can Serve Communities or Customers, But Rarely Both

An ex-moderator explains how company policies and procedures protect the worst abusers at the expense of players and employees.
'Poker Game' by Cassius Marcellus Coolidge

CW: This article contains discussions and descriptions of online harassment and abuse.

I used to be a moderator for a popular online card game. Which game doesn’t really matter for what I want to talk about, which is “the Money Hose.” The Money Hose is why game companies have always been okay with turning a blind eye to a small sliver of the gaming population that relishes every opportunity to piss in the pool. It’s why so many Rainbow Six Siege players were stunned when they received bans for using hate speech in chat. They’re a population that’s used to being tolerated and even catered to.

And they are being catered to. I know because, as a moderator, I saw all the ways that toxic players were allowed to dodge consequences for their actions, escalate their misconduct toward other players and moderators, and still remain customers in good standing. There were lots of ways the people operating my game could have fixed or at least seriously mitigated most of the problems afflicting our community. The problems were obvious and straightforward. Like they were with SuperCreep. You have to hear about SuperCreep before you really understand the influence of the Money Hose.

My colleagues and I had a lot of interactions with SuperCreep, who would serially harass individuals. When I started moderating, this player was a known issue. Other moderators would share horror stories of having dealt with them for years. One such story was about a period of time where they would message moderators and ask what color panties they were wearing.

'Magic: The Gathering Online' screenshot courtesy of Wizards of the Coast

Most of the moderators made some sort of comment about “He’s a weird one” or “He’s crazy.” The behavior was written off and ignored as much as possible. The reason? Well, SuperCreep spent lots of money on the game. An obscene amount. When SuperCreep stepped out of line and got too creepy, he was quickly muted. If it was particularly vulgar, it went up the chain to a manager… and usually still went to Mute Town.

One particularly disturbing series of incidents with SuperCreep was when they decided that another player was their wife. This player repeatedly told SuperCreep to leave them alone. SuperCreep would then proceed to follow them into any match they were playing and go on and on about how they were such a “good little wifey” for winning games.

SuperCreep’s target reported the problem to the moderators in chat. Our process, as set by management, was to message the player back with an automated chunk of text that links them to the support portal and how to file a conduct report. We could not do anything from that private message window. No investigations could be started until we got that email in the support queue. So from the start, there are hurdles for harassment targets.

This target probably replied to that chunk of automated text, as most players do. Reporters usually say something akin to “Please help me” or, “Please make this stop.” We could only say “Sorry to hear that this is happening, please report it to the link above so we can investigate it.” Targets probably see this as “This mod is lazy and doesn’t want to do their job” which is not the case. We can’t because management says so. Even though we’re right there in chat, we’re not able to start helping.

'Gwent' screenshot courtesy of CD Projekt Red

Once we get that email, we look into both victim and harasser. The victim is checked to make sure they aren’t trying to leverage the moderators to attack someone. I handled the first report from SuperCreep’s victim, so I know that they didn’t include screenshots or a game replay number, so we couldn’t do anything. The game I worked on had no way to search through or for specific replays. We needed the reference number to see what happened. So as per management, I made a note about it on SuperCreep’s file, forwarded the report to a manager, and sent a canned response of “Thanks for reporting. We can’t tell you any results of our investigation, but we take it seriously.”

Normally, once we can prove there has been an issue, we send a soft warning to the offender, which is just a way of saying “don’t do that again”. The next step would be a hard warning which is just another message saying “knock it off” if they keep it up. After that comes a 30 minute mute, followed by a one-day mute after that. Three-day mutes come after that, but required management approval, as did bans in general (three-day bans and mutes were usually handed out directly by management).
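The ladder above is simple enough to sketch in code. What follows is a hypothetical illustration of the policy as described, not the company's actual tooling; the function and step names are invented.

```python
# Hypothetical sketch of the escalation ladder described above.
# Step names and durations follow the article; everything else is invented.

ESCALATION_LADDER = [
    ("soft_warning", None),          # "don't do that again"
    ("hard_warning", None),          # "knock it off"
    ("mute", 30 * 60),               # 30-minute mute (durations in seconds)
    ("mute", 24 * 60 * 60),          # one-day mute
    ("mute", 3 * 24 * 60 * 60),      # three-day mute (needs manager approval)
]

# Three-day mutes required management sign-off; bans were handled by
# management entirely and sit beyond the end of this ladder.
NEEDS_MANAGER = {3 * 24 * 60 * 60}

def next_action(prior_infractions: int):
    """Return the next step on the ladder for a player's nth infraction."""
    step = min(prior_infractions, len(ESCALATION_LADDER) - 1)
    action, duration = ESCALATION_LADDER[step]
    requires_manager = duration in NEEDS_MANAGER
    return action, duration, requires_manager
```

Note how the ladder tops out at a mute: anything harsher had to leave the moderators' hands, which is exactly the bottleneck described below.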

In the case of SuperCreep, though, they had such a long infractions list that, once we had a screenshot proving the harassment, they were already well past the point of even three-day mutes and bans. This actually slowed things down: before they could be punished for their harassment, management had to review the case and then issue the mute or ban of the appropriate duration. At a minimum, SuperCreep was looking at a three-day mute.

So instead of waiting for help, the target personally blocked SuperCreep.

In most cases this is where the story ends. The harasser gets a mute or a ban. When they come back, their victim has blocked them. They then toe the line for a bit, and then start their negative behavior again.

Except, SuperCreep was determined to keep harassing this target. They found a loophole and continued their harassment, because players blocking another player didn’t work against people observing a game. This was a software problem, and we, the moderators, could do nothing about it. So management decided that SuperCreep was getting a three-day mute while they looked into a more long term solution.

The mute ended and SuperCreep came back and continued their harassment. They would message their victim as an observer, and they would lament in open chat that their “wife” wouldn’t talk to them, trying to bait other players into asking what was wrong so they could use that conversation to harass the victim further. A ban would have helped the harassed player. A permanent ban would have prevented anyone else from being targeted by SuperCreep.

'Duelyst' screenshot courtesy of Bandai Namco

The company finally issued a long-term mute after multiple three-day mutes. SuperCreep then started heckling moderators again. Once unmuted, they went back to being periodically weird. Months later SuperCreep finally got permanently muted. Not banned. Muted.

SuperCreep was a known issue and yet the company chose money over banning the player and making the game safer for everyone else. It was clear that the safety and emotional well-being of the victims were not valued as much as the money coming from SuperCreep.

The specifics of SuperCreep’s case were unique, but they were by no means the only serial harasser or toxic player on the game that would get mutes rather than bans. And most, if not all, of my coworkers at the time were thrilled, because we were all just so damned tired of dealing with this player’s bullshit. What we didn’t realize was that it was also management’s bullshit.

We should have known better, of course. They were up-front about their priorities and values before we started. During my introduction to the company I was told that our goal as moderators was to, and I quote, “embiggen the money hose.” To let people throw money at the game as much as they wanted. Our primary objective was to help facilitate that.

To put a fine point on this, moderators often also processed refund requests for in-game purchases when not dealing with chat moderation and tech support. And oddly enough, the rules and processes for that were much more cut-and-dried than those of moderation. They could be boiled down to: nine times out of 10, give players back their store credit. If the player wanted a chargeback, that went up to management.

'The Fire of Rome' by Hubert Robert

All the while that we were “embiggening the money hose,” the moderation staff was paid minimum wage. When I left the company, I was making a little over $10 an hour, and effectively less once you factor in paying for my own electricity and internet. All but two of the staff were nowhere near the office in California, and thus worked from home, on our own equipment. The only provided equipment was for in-office use only. Many of my coworkers were stay-at-home moms earning extra money while taking care of their kids. We’d occasionally get sent promotional items and t-shirts with the company logo on them. This was in lieu of reinvesting in the staff with raises, equipment reimbursement, etc.

The company would also run a Skype call every week to tell the staff how much money it was making. There were also no holidays off, being an online service and all. There was definitely a Thanksgiving where I had my laptop and a spare monitor next to the dinner table and had to explain to my in-laws why.

On top of this, we had several players who relished insulting moderators.

Moderator harassment had a much higher threshold (as dictated by management) than that of a player being harassed. Meaning that the toxic players would have to consistently and maliciously harass a moderator before any action was taken. I know my coworkers and I had more than one player cuss us out for not refunding them or banning an opponent immediately, etc. In some instances, it escalated to threats of violence by the players.

Actual repeated threats of violence were met with one-day mutes and sent up to managers which commonly resulted in a longer mute. Permanent bans were practically mythical.

Threats of self-harm were met with a block of automated text that gave a suicide prevention hotline number and an immediate one-day ban.

So someone jokingly says “This sucks, I should just kill myself”—ban for a day. Someone else says some foul, vile shit and threatens to kill another player or a moderator—mute.

While muted they can still play the game, talk to moderators, buy items from the shop, trade, etc. They just can’t post in chat. They can still contribute to the money hose.

'Hearthstone' screenshot courtesy of Blizzard

You know who does get banned with zero tolerance? Scammers, hackers, cheaters, and gold sellers. If you harm the company’s bottom line directly, you get smote instantly. Companies are upfront about how many bans they have issued for “Real Money Transactions” and hacking.

By making a point of quickly removing scammers and cheaters, companies send a very clear message about the steps they’re taking to protect the integrity of the game and its economy. By being silent on what is being done about harassers, they send a very mixed message about how they will protect the community. Are they serious about addressing toxicity, or not?

And this highlights the biggest problem with rules enforcement and community improvement. If we aren’t allowed to ban, then there is no real punishment for toxic players. Muting a toxic player does nothing to teach them to behave. They can still play. They could still harass moderators, and they did, because the mute cut them off from other players but not from us.

In regards to the Ubisoft bans, it should be noted that in the game I worked for, the swearing and hate speech policies were incredibly easy to abide by. Players needed to actively try to violate these rules. If you said something and the swear filter caught it and blocked it, it was fine. The swear filter was on by default for all players (though they could choose to disable it for themselves), and it also blocked a wide swath of hate speech (which really isn’t hard to define, contrary to what Reddit says).

Here’s the thing: So many players felt it was their right to offend people. They would add @’s or !’s to their vulgarities and hate speech to make sure that people saw what they said. Further, by substituting characters in those words, they showed that they understood the rules and knew that they shouldn’t say it. But I cannot tell you the number of times I had players argue with me about it.
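Those @ and ! substitutions are exactly what a naive exact-match filter misses, which is why chat filters typically normalize look-alike characters before checking a word against the blocklist. Here is a minimal sketch of that idea; the character map and the blocklist entry are placeholders, not anything from the game I worked on:

```python
import re

# Hypothetical look-alike map; a real filter would cover far more
# substitutions (and multi-character tricks like spacing letters out).
LOOKALIKES = str.maketrans({
    "@": "a", "!": "i", "1": "i", "3": "e", "0": "o", "$": "s",
})

BLOCKED = {"badword"}  # placeholder for the real blocklist

def violates_filter(message: str) -> bool:
    """Normalize substituted characters, then check each word
    against the blocklist."""
    normalized = message.lower().translate(LOOKALIKES)
    words = re.findall(r"[a-z]+", normalized)
    return any(word in BLOCKED for word in words)
```

The point of the normalization step is the one made above: a player who types “b@dw0rd” has demonstrated they know the word is against the rules, and the filter can still catch it.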

Let’s talk about arguments in bad faith for a second. The number one response these violators would use is “Well, the First Amendment says I can.”

Let’s be clear: the First Amendment applies to the government. When you create an account for anything, be it Twitch or Steam or any other service under the sun, you are agreeing to the rules of that service. You are saying “Yes, I will play by the rules that I’m just skimming over and not reading.” Once you do that, you are in their house and you abide by their rules. It is thus their right and responsibility to enforce the speech rules of their game.

'Rainbow Six: Siege' screenshot courtesy of Ubisoft

The trouble is, by going out of its way to avoid meaningfully punishing players for using that speech, the company was also saying that it wasn’t that big a deal. Not enough to get you banned. Hate speech wasn’t anything worth taking seriously. Not like account selling, or other victimless crimes that they do enforce the rules on.

So the problems our team of moderators faced were on multiple fronts. We had players who felt it was their right to offend and/or didn’t respect the rules that they agreed to upon making their account.

On the other end we had our hands tied by management. We weren’t there to serve players, or deal with bad actors. We were there to service and maintain—“embiggen”—the Money Hose. And that meant not being granted the agency to deal with problems, which in turn further undermined the authority of the moderators.

Players’ uncertainty about what we were doing became its own problem. People who reported problems would send emails asking what had happened or why we hadn’t helped. And because the harasser came back and continued to be toxic, they concluded that the moderators simply hadn’t helped.

They were never told anything that was done. So from their perspective, I imagine they thought “Why do I even bother reporting this? Why do I play this game?”

This sense of futility propagates bad player behavior because a reporting player may go “Well, screw it. They didn’t get punished so I’m gonna be a jerk too.” Which compounds this type of behavior, because now more people normalize it in the game. And we are seeing this happen as people try to defend slurs as ‘normal’ parts of the community they are in. This should never be the case.

To reiterate, we cannot post banned players up like heads on pikes. It makes them as much a target for harassment as their victims. But permanently banning the player, and making sure they have trouble coming back by denying them a new account tied to the same email, credit card number, etc., are steps that can be taken to improve these communities. Creating an expectation and standard for community behavior is another step. Making it clear that the stance is “If you are toxic, we will show you the door” is crucial.
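The account-level checks just described can be sketched roughly as follows. This is a hypothetical illustration; the field names and hashing scheme are assumptions, and a real system would handle payment data with far more care than a bare hash:

```python
import hashlib

def fingerprint(value: str) -> str:
    """Store only a hash of identifying fields, never the raw values."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# Hypothetical set of fingerprints collected from permanently banned accounts.
banned_fingerprints = {fingerprint("banned@example.com")}

def registration_allowed(email: str, card_last_digits: str = "") -> bool:
    """Reject a new account that reuses any identifier tied to a banned one."""
    candidates = [fingerprint(email)]
    if card_last_digits:
        candidates.append(fingerprint(card_last_digits))
    return not any(fp in banned_fingerprints for fp in candidates)
```

None of this makes ban evasion impossible, but it raises the cost of coming back, which is the practical goal.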

Ubisoft rolling out bans is a good start, but every company needs to follow suit.

As we see with some of the responses to Ubisoft’s bans, it is difficult to make someone have a moment of introspection and go “Oh, wait. I did do something awful”. But I can tell you for sure that muting them is not the answer.

If a ban or multiple bans don’t get through to this player, the companies in question are wasting their time and money on them. They are letting their active players get abused by these openly hostile players. They are letting their employees get burned out on these emotional time sinks.

And if serious action is not taken, these hostile players feel like they’ve won. They become empowered. They feel like they have control over the community and the game itself. This undermines the moderators and the company itself. It puts people's livelihoods and safety at risk. Over a game that's supposed to be fun.

'Guild Wars 2' concept art courtesy of ArenaNet

And there is only so much that the players/community can do. The community can say “Hey that’s not ok” but then they are drawing a target on themselves for abuse. They can report the problems, but it still comes down to the company to enforce the rules. It is rarely their intent, but every time a company sets a precedent or an example by not enforcing its own rules about player conduct, it is saying it doesn’t care about the community.

Which is a problem because online games or “live” games are communities far more than they are stories. The customer can’t always be right because you’ve got thousands or maybe even millions of them and they need to roughly get along, which means fostering a healthy community. But if you put the Money Hose above the people in your community, then eventually it’s going to poison the well for everyone: players, moderators, and developers alike.

Have thoughts? Swing by Waypoint’s forums to share them!