As the tussle between Activision and the FTC over the Microsoft acquisition continues, the Call of Duty publisher has announced a partnership with the California Institute of Technology (Caltech) to counter online toxicity through advanced AI-based analysis software.
The initiative, promoted by Activision Publishing, involves Caltech researchers in creating a new automated moderation tool for the comments and videos that users exchange during a game session, whether through social platforms or through the messaging services integrated into the software ecosystems of PCs, consoles and mobile devices.
The new technology, under development by a team of researchers and data engineers led by AI expert Anima Anandkumar and political scientist Michael Alvarez, will use artificial intelligence to detect toxic behavior and messages involving trolling, racism, sexism, doxing, insults and 'generic harassment'.
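For context, moderation systems of this kind typically score each message with a classifier trained on labeled examples of abuse. The sketch below is not the Activision/Caltech tool, which has not been released; it is a minimal illustration of AI-based toxicity detection, assuming the publicly available `unitary/toxic-bert` model from the Hugging Face Hub and an arbitrary 0.9 flagging threshold.

```python
# Minimal illustration of AI-based toxicity scoring on chat messages.
# NOT the Activision/Caltech system: 'unitary/toxic-bert' and the 0.9
# threshold are stand-in assumptions for the sake of example.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

chat_messages = [
    "gg everyone, great match!",
    "uninstall the game, you worthless bot",
]

for msg in chat_messages:
    # The pipeline returns the top label and its score, e.g.
    # {'label': 'toxic', 'score': 0.97}; label names depend on the model.
    result = classifier(msg)[0]
    flagged = result["label"] == "toxic" and result["score"] > 0.9
    print(f"{msg!r} -> {result['label']} ({result['score']:.2f}) flagged={flagged}")
```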
According to Activision and the Caltech researchers, the software will be developed over the next two years. Once completed (not before the end of 2024), the AI-based system will not operate autonomously: it will instead be made available to the US publisher's moderators to support their work through real-time analysis of 'negative trends' in the themes and terms most commonly used in toxic behavior.
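The 'real-time analysis of negative trends' described above can be pictured as a sliding-window tally of terms drawn from flagged messages, surfaced to human moderators rather than acted on automatically. The following sketch is purely hypothetical: the 15-minute window, the naive word splitting and the function names are assumptions for illustration, not details of the announced system.

```python
# Hypothetical sketch of surfacing 'negative trends' to human moderators:
# terms from flagged messages are tallied over a sliding time window.
from collections import Counter, deque
import time

WINDOW_SECONDS = 15 * 60  # assumed 15-minute trend window

events: deque[tuple[float, str]] = deque()  # (timestamp, term) pairs

def record_flagged_terms(message: str) -> None:
    """Store each word of a message flagged as toxic, with a timestamp."""
    now = time.time()
    for term in message.lower().split():
        events.append((now, term))

def trending_terms(top_n: int = 5) -> list[tuple[str, int]]:
    """Return the most frequent terms among recently flagged messages."""
    cutoff = time.time() - WINDOW_SECONDS
    while events and events[0][0] < cutoff:  # drop events outside the window
        events.popleft()
    return Counter(term for _, term in events).most_common(top_n)

# A moderator dashboard would poll trending_terms() and present the
# results for human review, in line with the human-in-the-loop design
# the article describes, rather than acting on them automatically.
```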