If you’ve ever used voice chat in an online multiplayer game, you know it can be a minefield of expletives, trolling, slurs, and threats. It isn’t always that way, but it’s safe to say nearly everyone has experienced it at least once. For its popular shooter VALORANT, Riot Games has decided enough is enough with the toxicity. Riot already moderates text chat and follows up on user-reported abuse, but it’s now ready for the next step: starting July 13th, Vanguard, the game’s anti-cheat system, will begin moderating your in-game voice chat.
That’s right, Riot will record your in-game voice chat. To start, the system will merely listen in order to improve its language recognition and accuracy. Sometime later this year, active moderation will go live in beta. So how will this fight toxicity and abuse? When a player is reported for verbal abuse or toxic behavior, the system will analyze the recorded voice data to determine which of Riot’s policies, if any, were violated. If a violation is found, a penalty will be issued, ranging from a mute in chat to a ban from the game, and lasting anywhere from hours or days to permanent.
Of course, there are concerns here, especially when it comes to privacy. Collecting and storing voice data is not something to be taken lightly. Riot has addressed this, stating that the data will be used only for moderating in-game chat for toxic behavior and that the company will protect it as if it were its own. Riot doesn’t intend to sell the data or ship it off anywhere for any other use. That’s nice to hear, but the chance of personal data like this somehow leaking is higher than anyone wants to admit. Still, valid concerns or not, this change does not appear to be going away. If you don’t want your voice recorded, you can opt not to use the in-game voice chat. It’s not ideal, but it does let you keep playing without giving up your personal data.
Image from: Riot Games