‘Valorant’ Will Use Your Voice to Train AI to Detect ‘Disruptive Behavior’

The popular first-person shooter will soon be listening to everything you say in voice chat.
Image: Riot Games art.

Starting July 13, Valorant will begin listening to and recording in-game voice communications with the goal of training a language model to help it identify toxic chat. The system will only work on North American/English-only servers. Riot aims to launch the language model later this year in an effort to clamp down on toxicity in the first-person shooter.

As in any first-person shooter that lets players talk to each other, the voice chat in Riot Games’ Valorant can be toxic. It’s also incredibly hard to moderate. When someone drops a racial slur in text chat, a clear log is maintained that mods can look through later. But doing the same for voice chat demands processing and storage power that just isn’t practical. “Voice chat abuse is significantly harder to detect compared to text (and often involves a more manual process),” Riot Games said in a February 2022 blog post.

Riot first indicated it would do something about abusive voice chat in February 2022. “Last year Riot updated its Privacy Notice and Terms of Service to allow us to record and evaluate voice comms when a report for disruptive behavior is submitted, starting with Valorant,” it said at the time. “Please note that this will be an initial attempt at piloting a new idea leveraging brand new tech that is being developed, so the feature may take some time to bake and become an effective tool to use in our arsenal. We’ll update you with concrete plans about how it’ll work well before we start collecting voice data in any form.”

Now we know what that brand-new tech is: some kind of language model that automatically detects toxic voice chat and stores it for later evaluation. The updated terms of service applied to all of Riot’s games, but it said its current plan was to use Valorant to test the software solution before rolling it out to other games.

The ability to detect keywords from live conversations is not new, however. Federal and state governments have been using similar systems to monitor phone calls from prisons and jails for at least several years—sometimes with the ability to automatically disconnect and report calls when certain words or phrases are detected.     

Riot Games did not share details of the language model and did not immediately respond to Motherboard’s request for comment. According to a post announcing the training of the language model, this is all part of “a larger effort to combat disruptive behavior” that will allow Riot Games to “record and evaluate in-game voice communications when a report for that type of behavior is submitted.”

The updated terms of service had some more specifics. “When a player submits a report for disruptive or offensive behavior in voice comms, the relevant audio data will be stored in your account’s registered region and evaluated to see if our behavior agreement was violated,” the TOS said. “If a violation is detected, we’ll take action. After the data has been made available to the player in violation (and is no longer needed for reviews) the data will be deleted, similar to how we currently handle text-based chat reports. If no violation is detected, or if no report is filed in a timely manner, the data will be deleted.”

Riot Games said it would only “monitor” voice chats if a report had been submitted. “We won’t actively monitor your live game comms. We’ll only potentially listen to and review voice logs when disruptive voice behavior is reported,” it said in a Q&A about the changes. That is still monitoring, though, even if it’s not active. What this probably means is that a human won’t listen to it unless there’s a report—but that doesn’t mean a computer isn’t always listening.
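Riot hasn’t published any implementation details, but the report-gated flow its TOS describes could be sketched roughly like this. Everything below is illustrative guesswork: the class names, fields, and function are invented for this sketch, and the real system’s logic is unknown.

```python
from dataclasses import dataclass

# Hypothetical sketch of the report-gated review flow described in the TOS:
# an automated model may evaluate audio continuously, but a human moderator
# only sees a log after a player files a report. Names are illustrative;
# Riot has not disclosed its actual design.

@dataclass
class VoiceLog:
    match_id: str
    flagged_by_model: bool   # automated toxicity model's verdict
    reported: bool = False   # did another player file a report?

def queue_for_human_review(logs: list[VoiceLog]) -> list[str]:
    """Return match IDs a human moderator would review; the rest are deleted,
    mirroring the TOS language about deleting unreported or clean audio."""
    review = []
    for log in logs:
        if log.reported and log.flagged_by_model:
            review.append(log.match_id)   # kept until the review completes
        # otherwise: no report filed, or no violation detected -> delete
    return review

logs = [
    VoiceLog("m1", flagged_by_model=True, reported=True),
    VoiceLog("m2", flagged_by_model=True, reported=False),
    VoiceLog("m3", flagged_by_model=False, reported=True),
]
print(queue_for_human_review(logs))  # only "m1" reaches a human
```

The point of the sketch is the distinction Riot’s Q&A glosses over: the gate controls which audio a *person* hears, not whether a computer processes it in the first place.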

The only way to avoid this, Riot Games said, was to not use its in-game voice chat systems. Monitoring starts on July 13. “Voice evaluation during this period will not be used for disruptive behavior reports,” Riot Games said in the post announcing the project. “This is brand new tech and there will for sure be growing pains. But the promise of a safer and more inclusive environment for everyone who chooses to play is worth it.”