Online gaming is massive: more people play games than watch movies or listen to music. By the end of 2021, an estimated 2.8 billion gamers will have generated revenues of $189.3 billion playing online games.
Ubiquitous among children and young adults, games have become “increasingly complex, diverse, realistic, and social”. Games do a great deal of good: they aid cognitive development and promote teamwork and skills development. Playing across cultures and geographies exposes players to diversity and helps isolated people make social connections.
But instead of influencing players in positive ways, rising online toxicity puts both gamers and industry players at risk.
Harassment, violent speech and toxic behaviour are rampant and steadily rising, particularly in digital game experiences, threatening brands and business revenues.
In 2020, 81% of US players of online multiplayer games said harassment had shaped their experience. A third of players who have experienced toxicity avoid certain games, and 22% stop playing altogether. The true figures could be higher, because younger players do not see the value of reporting toxicity, are afraid to, or feel they are somehow to blame for being victimised.
The FairPlay Alliance, a global coalition of gaming professionals and companies committed to developing quality games, champions the idea that “every player deserves a fair, safe, and inclusive space to play.”
But with millions of players online, monitoring every problematic interaction becomes ever harder. Artificial intelligence (AI) can sift and sort massive amounts of data, making toxicity easier to find and act on.
Research by StopBullying.gov shows that banning players doesn’t change the ecosystem. What does is active moderation where adults intervene and address toxic behaviour on digital properties.
But human moderation is psychologically taxing, impossible to scale, inefficient and inconsistent. Why torture humans by exposing them to horrific video or audio when advances in natural language processing and AI can deliver better systems and results that are kinder to humans?
OTO’s SafeVox solution uses acoustic and speech analytics to keep online gaming and community spaces safer. SafeVox analyses voices to discern conversational risk, and delivers acoustic-intelligence insights so human moderators can make better, faster decisions while minimising their own exposure to toxicity.
If you’re curious about how SafeVox works and what it can do, join OTO by registering for the Google for Startups Accelerator: Voice AI - Demo Day on Thursday, May 20th, 2021 at 12:30pm EDT. OTO will be part of a virtual showcase that will include innovative and impactful voice technologies from across the US.
SafeVox enables greater understanding of humans and social interactions in virtual spaces. By building flexible rule sets, every conversation in a digital ecosystem can be rapidly benchmarked and then ranked in terms of severity. This more easily enables human content moderators to review potential abuse, starting from the most problematic.
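To make the idea concrete, here is a minimal, purely illustrative sketch of rule-set-driven triage. None of this is OTO’s actual implementation: the `Rule`, `Conversation`, `severity` and `triage` names, the signals and the weights are all invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str
    weight: float
    check: Callable[[Dict], bool]  # fires when the rule matches a conversation's signals

@dataclass
class Conversation:
    id: str
    signals: Dict  # e.g. {"negative_emotion": 0.9, "shouting": True}

def severity(conv: Conversation, rules: List[Rule]) -> float:
    """Benchmark a conversation against the rule set: sum the weights of fired rules."""
    return sum(r.weight for r in rules if r.check(conv.signals))

def triage(conversations: List[Conversation], rules: List[Rule]) -> List[Conversation]:
    """Rank conversations so moderators start from the most problematic."""
    return sorted(conversations, key=lambda c: severity(c, rules), reverse=True)

# A flexible rule set: add, remove or reweight rules per digital ecosystem.
rules = [
    Rule("high_negative_emotion", 0.6, lambda s: s.get("negative_emotion", 0.0) > 0.7),
    Rule("shouting", 0.3, lambda s: bool(s.get("shouting"))),
]
convs = [
    Conversation("calm-lobby", {"negative_emotion": 0.2, "shouting": False}),
    Conversation("heated-match", {"negative_emotion": 0.9, "shouting": True}),
]
queue = triage(convs, rules)  # review queue, most severe first
```

Ranking rather than binary flagging is the point: every conversation gets a severity score, and moderators work down the queue from the top.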
OTO has made major advances in extracting and creating value out of these often cryptic cues, using proprietary algorithms in our next-generation acoustic engine. OTO’s aim is to cut down the work of human content moderators by 90% and to improve the overall efficacy of content moderation.
At a time when games launch to millions of users simultaneously, and multiplayer titles use audio conversations to drive immersive, emotional gameplay, AI lets brands build systems that assess toxicity levels in real time and intervene as toxicity unfolds.
When a conversation is flagged, a human moderator can find information about the game session and listen to the recording. OTO provides an augmented non-linear review experience that allows the moderator to move directly to the problematic area. This saves further time, as the moderator does not need to listen to the whole recording in sequence to understand what happened.
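A rough sketch of what non-linear review means in practice, with made-up segment data and a hypothetical `review_queue` helper (not OTO’s API): the flagged spans of a recording are surfaced worst-first so the moderator can seek straight to them.

```python
# Hypothetical flagged segments from one game session:
# (start_sec, end_sec, toxicity_score)
segments = [
    (0, 30, 0.10),
    (30, 45, 0.92),
    (45, 120, 0.05),
    (120, 140, 0.78),
]

def review_queue(segments, threshold=0.5):
    """Keep only the flagged spans, worst first, so a moderator can seek
    straight to them instead of listening to the recording end to end."""
    flagged = [s for s in segments if s[2] >= threshold]
    return sorted(flagged, key=lambda s: s[2], reverse=True)

for start, end, score in review_queue(segments):
    print(f"jump to {start}s-{end}s (score {score:.2f})")
```

Here the moderator reviews 35 seconds of audio instead of 140: the two low-scoring spans never enter the queue.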
Some interactions are automatically flagged, such as when a minor is in conversation with adults. Europe, the USA, and other jurisdictions have very strict laws on curbing child bullying and harassment, placing the onus on the gaming company to manage this.
When there is a disparity in the polarity of vocal and emotional outbursts between the older and the younger participants, OTO adjusts the toxicity score accordingly, ensuring human moderators quickly move to examine the interaction and act on it. If one gamer laughs while another in the same space exhibits a negative emotional response, SafeVox picks this up as an indicator of possible bullying.
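As a hedged sketch of how such a polarity-disparity adjustment might work (the valence representation, the 0.5 disparity threshold and the 0.3 boost are invented for illustration, not OTO’s scoring):

```python
def adjusted_toxicity(base_score, emotions):
    """
    emotions: list of (is_minor, valence) pairs, valence in [-1, 1]
    (negative = distress, positive = laughter/enjoyment).
    When the adults trend positive while the minors trend negative --
    e.g. one player laughing while another reacts with distress --
    boost the score so moderators reach the interaction sooner.
    """
    adult_valence = [v for is_minor, v in emotions if not is_minor]
    minor_valence = [v for is_minor, v in emotions if is_minor]
    if not adult_valence or not minor_valence:
        return base_score  # no adult/minor mix, nothing to compare
    disparity = (sum(adult_valence) / len(adult_valence)
                 - sum(minor_valence) / len(minor_valence))
    if disparity > 0.5:  # adults upbeat while the minor is distressed
        return min(1.0, base_score + 0.3)
    return base_score

# An adult laughing (valence 0.8) while a minor shows distress (-0.6):
score = adjusted_toxicity(0.4, [(False, 0.8), (True, -0.6)])
```

The boost pushes the interaction up the review queue without auto-banning anyone; a human still makes the call.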
Gender bias is also tackled. When OTO picks up a rapid-fire interaction between male and female gamers, combined with high negative-emotion scores, it raises a toxicity ‘red flag.’ This is critical to enabling gender equality in gaming, because studies show that, given the same game time, women are as skilled as or more skilled than men.
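An illustrative version of that red-flag heuristic (the turn representation, 5-second window and 0.7 emotion threshold are assumptions made for the example, not OTO’s parameters):

```python
def gender_bias_flag(turns, window_sec=5.0, neg_threshold=0.7):
    """
    turns: list of (timestamp_sec, speaker_gender, negative_emotion),
    with negative_emotion in [0, 1]. Raise a red flag when speakers of
    different genders alternate rapidly (gaps under window_sec) while
    negative emotion runs high.
    """
    for (t1, g1, e1), (t2, g2, e2) in zip(turns, turns[1:]):
        rapid = (t2 - t1) < window_sec
        cross_gender = g1 != g2
        heated = max(e1, e2) >= neg_threshold
        if rapid and cross_gender and heated:
            return True
    return False

# A rapid, heated exchange between a male and a female speaker:
flagged = gender_bias_flag([(0.0, "m", 0.2), (2.0, "f", 0.9)])
```

All three conditions must hold at once, so ordinary cross-gender banter, or a heated but slow-paced exchange, does not trip the flag.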
But research shows women adopt coping mechanisms, such as avoiding voice chat, to hide their gender, and this puts them at a disadvantage. “Such experiences discourage women and girls from playing, which means they are less likely to gain the cognitive benefits of gaming such as spatial rotation skills, which are associated with success in technological career paths—an area in which there is already rampant gender disparity,” reports Wired Magazine.
The OTO engine is far more efficient, and thus cheaper, to run at scale than most speech-to-text approaches: its computational overhead is much lower, and it is language-agnostic. In the first phase of the roll-out, a human moderator will always review a flagged interaction before it is acted upon. In the future, the solution will benefit from increasing automation and proactive flagging by AI.
OTO offers gaming companies, live streaming platforms, and game ecosystems a simpler, smarter and faster way to make online games safer for everyone.
The impact of toxicity lingers far beyond digital experiences and has real-life consequences. Research by the Anti-Defamation League reveals that 16% of gamers who experience harassment become less social, 14% feel isolated, and 11% have depressive or suicidal thoughts. Only 9% of these players ask for help.
As our lives become increasingly digital and social, toxicity will have a material and meaningful impact on both humans and the brands that create and publish online experiences. OTO’s vision is to reduce online toxicity to create safe spaces where humans can connect, collaborate and play.
Our dream is to realise empathy at scale: to remake the virtual world as a kinder, more secure place where humans can reach their true potential, while efficiently addressing the growing harm, and alarm, of toxicity.