Riot Games and Ubisoft have announced “Zero Harm in Comms,” a joint research project that uses artificial intelligence to detect and prevent toxicity in in-game chats across their various titles.
The press release about the initiative describes the project as aiming to create a “cross-industry shared database and labeling ecosystem” for in-game data that will train AI moderation tools to “detect and mitigate disruptive behavior.”
“Disruptive player behaviors is an issue that we take very seriously but also one that is very difficult to solve. At Ubisoft, we have been working on concrete measures to ensure safe and enjoyable experiences, but we believe that, by coming together as an industry, we will be able to tackle this issue more effectively,” said Yves Jacquier, executive director of Ubisoft La Forge, in the press release.
With “Zero Harm in Comms,” the two developers plan to explore the technological foundations for future industry collaboration and to establish an ethics and privacy framework for the project, according to the announcement.
Riot and Ubisoft hope that by pooling data from their games, Riot’s competitive titles and Ubisoft’s diverse catalog, the resulting database will cover every type of player and in-game behavior needed to train the new AI system.
The “Zero Harm in Comms” project is still in its early stages, but Riot and Ubisoft have committed to sharing their findings from the initial phase with the gaming industry next year, “no matter the outcome.”
Riot Games announced in April 2021 that it would begin monitoring in-game voice chats on North American Valorant servers, teasing the start of this AI initiative.
The developer provided an update on the initiative in February 2022, saying it had chat-banned more than 400,000 accounts in January 2022 alone.