Many technical challenges still need to be resolved before a working metaverse can be assembled. But above all, companies like Meta need to create safe and fun environments for users, and metaverse moderation is the key to making that a reality.
In recent months, we’ve heard a lot about the “metaverse”. Facebook has already presented elements of it as the “successor to the mobile Internet”. There are different definitions of what a “metaverse” is, but it is broadly a series of interconnected, avatar-focused digital spaces where you can do things you can’t in the physical world.
Today, Facebook offers products such as Horizon Home, a social platform that helps people create and interact with each other in the metaverse. However, there is still a lot we don’t know, and one of the most important open questions is: how do you moderate the metaverse? It matters, because content moderation on social media platforms is now a major legislative topic in almost all G20 countries. This article explains why metaverse moderation is important and the different ways to achieve it.
Metaverse moderation to resolve multiple issues
Many issues threaten the future of the metaverse. That’s why we need to focus on setting up moderation in this space.
Sexual harassment on the rise
According to Meta, a beta tester last year reported something very disturbing: her avatar had been groped by a stranger in Horizon Worlds. Meta’s internal review of the incident concluded that she should have used a tool called “Safe Zone”, part of a set of safety features built into Horizon Worlds: a protective bubble users can activate if they feel threatened. Note that in Facebook’s Horizon Worlds, up to 20 avatars can gather at a time to hang out, explore, and build the virtual space.
According to the American site The Verge, the victim explained that she was groped, and, worse, that other people present encouraged the behavior. Vivek Sharma, Vice President of Horizon, called the event “absolutely unfortunate”.
This is not the first time a user has experienced this type of behavior in virtual reality, and sadly it was not the last. Just recently, Jane Patel, co-founder and vice president of metaverse research firm Kabuni Ventures, shared a harrowing experience: her avatar in the metaverse was allegedly sexually assaulted and violated by other users.
“They virtually gang-raped my avatar and took photos while I was trying to escape,” she said.
Child Safety in the Metaverse
Titania Jordan, Chief Parent Officer of Bark Technologies, a parental-control app meant to keep children safe online and in real life, said she is particularly concerned about what children will encounter in the metaverse. Abusers, she warned, could target children through in-game messages or by speaking to them through headsets.
Recently, Callum Hood, head of research at the Center for Countering Digital Hate, spent several weeks recording interactions in the game VRChat, where people can create virtual communities, party in virtual clubs, or meet in virtual public spaces. Oculus considers the game safe for teenagers.
However, over an 11-hour period, Mr. Hood recorded more than 100 problematic incidents in VRChat, some involving users who said they were under 13 years of age. In many cases, avatars made sexual and violent threats against minors. In another case, someone tried to show explicit sexual content to a minor.
Misinformation in the Metaverse
BuzzFeed News, an American internet media company, set up its own private world, called “Qniverse”, to test Meta’s virtual-reality moderation. It concluded that content banned on Instagram and Facebook does not appear to be banned in Horizon Worlds.
BuzzFeed filled the Qniverse with phrases that Meta has publicly promised to remove from Facebook and Instagram (e.g., “Covid is a scam”). Yet even after the team reported the world, multiple times, through Horizon’s user-reporting feature, the phrases were found not to violate Meta’s VR content policy.
Racism in the Metaverse
In one post, an anonymous Facebook employee reported not having a “good time” using the social VR app Rec Room on an Oculus Quest headset: another user kept repeating a racial slur. The employee tried to report it, but did not know the offender’s username.
In an email, Rec Room CEO and co-founder Nick Fajt said that a player using the same racial slur had been banned after reports from other players. Fajt believes the banned player is the same person the Facebook employee complained about.
Theo Young, 17, said he started noticing more toxic behavior, including homophobic language, in the social lobbies of Echo VR last spring. Young stopped playing after watching other players harass someone.
“I pretty much dropped the game after that experience. It just wasn’t fun anymore,” he explained.
Online harassment has become a major problem
According to a study published this year by the Pew Research Center, 4 out of 10 American adults have experienced online harassment, and those under 30 are not only more likely to be harassed but also to suffer more severe abuse. Meta declined to say how many reports of harassment or hate speech Oculus has received.
A 2019 study of virtual-reality harassment by Oculus researchers also found that the definition of online harassment is highly subjective and personal, but that virtual reality’s sense of presence makes harassment feel more “severe”.
Different metaverse moderation systems
As part of metaverse moderation, there are several techniques, some of which have already been adopted.
Muting the player
This is a moderation system widely used by metaverse players: it simply stops you from receiving a given player’s audio. Riot, the publisher of League of Legends (a game known for its toxicity problems), ran experiments on the subject, muting voice chat between opposing teams.
The resulting exchanges were measured as 33% less toxic. However, muting also left victims more isolated, and in the end it degrades the game experience and how long players stay.
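The muting mechanism above can be sketched as a simple relay filter on the server side: audio from a speaker is simply not delivered to anyone who has muted them. The class and method names here are illustrative, not any real platform’s API.

```python
# Illustrative sketch of per-player muting in a voice session.
# The server relays each speaker's audio only to players who
# have not muted that speaker.

class VoiceSession:
    def __init__(self, players):
        self.players = set(players)
        self.muted = {}  # listener -> set of speakers they have muted

    def mute(self, listener, speaker):
        self.muted.setdefault(listener, set()).add(speaker)

    def unmute(self, listener, speaker):
        self.muted.get(listener, set()).discard(speaker)

    def relay_targets(self, speaker):
        """Players who should receive this speaker's audio packets."""
        return {
            p for p in self.players
            if p != speaker and speaker not in self.muted.get(p, set())
        }


session = VoiceSession({"ana", "ben", "cas"})
session.mute("ana", "ben")          # ana no longer hears ben
print(session.relay_targets("ben"))  # ben's audio now reaches only cas
```

Note that the mute is one-directional and per-listener: the muted player keeps talking and is heard by everyone else, which is exactly why victims can end up isolated while the rest of the room carries on.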
Activating the space bubble
This is a feature specific to virtual-reality worlds: it prevents other users from coming within a close limit (usually about 1 m) of your avatar, which reduces the risk of virtual physical aggression. Meta shipped this solution in a recent Horizon update under the name “Personal Boundary”.
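A minimal sketch of such a boundary check, assuming avatars are points in a 2D plane and using the roughly 1 m radius mentioned above (the function name and the push-back behavior are illustrative assumptions, not Meta’s actual implementation):

```python
# Sketch of a "personal boundary" enforcement step: if two avatars get
# closer than their combined boundary radius, push the moving avatar
# back out to the minimum allowed distance.
import math

BOUNDARY_RADIUS = 1.0  # metres around each avatar


def enforce_boundary(pos_a, pos_b):
    """Return a corrected position for avatar B relative to avatar A."""
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    dist = math.hypot(dx, dy)
    min_dist = 2 * BOUNDARY_RADIUS  # both bubbles must be respected
    if dist >= min_dist or dist == 0:
        return pos_b  # far enough apart (or exactly overlapping: no direction)
    scale = min_dist / dist
    return (pos_a[0] + dx * scale, pos_a[1] + dy * scale)


# An avatar trying to stand 1 m away gets pushed back to 2 m.
print(enforce_boundary((0.0, 0.0), (1.0, 0.0)))  # → (2.0, 0.0)
```

In a real engine this check would run every frame inside the physics step, but the geometry is the same: the system never lets the distance between two avatars drop below the sum of their bubbles.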
The player rank or status system
What if we only interacted with trustworthy people? That is the idea behind status or rating systems. VRChat’s trust system moves in this direction: users can filter their social interactions according to other players’ status. It is a true “à la carte” moderation system, close to social networks where you can restrict what you see to your friends’ interactions.
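Rank-based filtering in the spirit of VRChat’s trust system can be sketched as follows; the rank names and their ordering are assumptions for illustration, not VRChat’s actual tiers:

```python
# Sketch of a trust-rank filter: each viewer chooses a minimum rank,
# and only players at or above that rank are shown/heard.

RANKS = ["visitor", "new_user", "user", "known_user", "trusted_user"]


def visible_players(players, my_threshold):
    """Return names of players whose rank meets the viewer's threshold.

    players: list of (name, rank) pairs; my_threshold: a rank in RANKS.
    """
    cutoff = RANKS.index(my_threshold)
    return [name for name, rank in players if RANKS.index(rank) >= cutoff]


lobby = [("ana", "visitor"), ("ben", "trusted_user"), ("cas", "user")]
print(visible_players(lobby, "user"))  # → ['ben', 'cas']
```

The design trade-off is visible even in this toy version: a strict threshold protects the viewer but also hides every legitimate newcomer, which is why such systems usually let each user pick their own cutoff rather than imposing one globally.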
User reporting
Reporting has no direct effect on its own: it alerts the “decision makers” to disruptive behavior in the virtual world. The report can sometimes be quite specific, but acting on it remains the decision maker’s responsibility, and the person who filed the report gets little follow-up.
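The reporting flow described above can be sketched as a queue that human decision makers drain; all identifiers here are illustrative. Note how the limitation from the text shows up in the code: filing a report changes nothing for the reporter, it only enqueues work for a moderator.

```python
# Minimal sketch of a user-report pipeline: reports wait in a queue for
# human "decision makers"; the reporter gets no automatic outcome.
from collections import deque
from dataclasses import dataclass


@dataclass
class Report:
    reporter: str
    target: str
    reason: str
    status: str = "pending"


class ReportQueue:
    def __init__(self):
        self._queue = deque()

    def file(self, reporter, target, reason):
        """Reporter only learns that the report was filed, nothing more."""
        report = Report(reporter, target, reason)
        self._queue.append(report)
        return report

    def next_for_review(self):
        """A human moderator pulls the oldest pending report, if any."""
        return self._queue.popleft() if self._queue else None
```

A usage example: `q = ReportQueue(); q.file("ana", "ben", "verbal abuse")` enqueues the report, and nothing happens to "ben" until a moderator calls `q.next_for_review()` and decides on a sanction.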
Expelling the user from space
A ban prevents players with disruptive behavior from returning to the game. The measure can be temporary or permanent. However, this solution risks destroying communities.
Evicting harassers from the community altogether is one possible route. But in virtual reality, where communities are still small, educating and rehabilitating offenders is worth considering. A closer look at online toxicity figures shows that some victims of harassment are harassers themselves. If every user with disruptive behavior were permanently banned, we would risk seeing virtual-world communities shrink, little by little.
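The temporary-versus-permanent distinction can be sketched as follows, with an injectable clock so the expiry logic is testable; this is an illustration of the idea, not any platform’s actual ban system:

```python
# Sketch of a ban list supporting temporary bans (with an expiry time)
# and permanent bans (no expiry).
import time


class BanList:
    def __init__(self, clock=time.time):
        self._clock = clock
        self._bans = {}  # player -> expiry timestamp, or None if permanent

    def ban(self, player, duration=None):
        """duration in seconds; None means a permanent ban."""
        self._bans[player] = None if duration is None else self._clock() + duration

    def is_banned(self, player):
        if player not in self._bans:
            return False
        expiry = self._bans[player]
        if expiry is None:
            return True  # permanent
        if self._clock() < expiry:
            return True  # temporary ban still in effect
        del self._bans[player]  # ban has lapsed: clean up and readmit
        return False
```

A temporary ban that silently expires, as here, is the “moderate” variant the text describes: it removes the offender for a while without permanently shrinking the community.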
Artificial intelligence to combat VR harassment
Meta is exploring ways to let users retroactively record incidents on its VR platform, and it is studying how best to use artificial intelligence to tackle harassment in virtual reality, said Meta spokesperson Kristina Milian. However, the company cannot record everything people do in VR; doing so would violate their privacy.
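“Retroactive” recording of only the last few moments is commonly approximated with a rolling buffer that persists nothing unless the user files a report. The sketch below is built on that assumption and is not Meta’s implementation; the buffer size and frame model are illustrative.

```python
# Sketch of retroactive incident capture: a fixed-size rolling buffer
# keeps only the most recent frames in memory. Nothing is saved unless
# the user explicitly reports, which limits the privacy impact.
from collections import deque


class RollingRecorder:
    def __init__(self, max_frames=300):  # e.g. ~10 s of audio/events at 30 fps
        self._frames = deque(maxlen=max_frames)

    def capture(self, frame):
        """Called continuously; the oldest frame is silently overwritten."""
        self._frames.append(frame)

    def export_on_report(self):
        """Only when the user reports do the buffered frames get saved."""
        clip = list(self._frames)
        self._frames.clear()
        return clip


recorder = RollingRecorder(max_frames=3)
for frame in range(5):
    recorder.capture(frame)
print(recorder.export_on_report())  # → [2, 3, 4], only the recent past
```

The design choice matters for the privacy tension the text raises: because the buffer is bounded and volatile, evidence of a recent incident can still be attached to a report without the platform recording everything everyone does.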
Moderation in the metaverse is a difficult mission
The metaverse will be even harder to moderate than Meta’s current platforms. It inherits existing content-moderation problems and amplifies them. In a VR/AR world, a moderator must keep an eye not only on the content people post but also on their behavior: monitoring and moderating what people say and what they do. Bad behavior in virtual reality is hard to track, because incidents happen in real time and are usually not recorded.
Meta’s Chief Technology Officer (CTO), Andrew Bosworth, has acknowledged that it is almost impossible to moderate how users speak and behave in the metaverse. He outlined ways the company could try to tackle the problem, but experts told The Verge that monitoring billions of interactions in real time would require enormous effort and may not even be possible.