Conspiracy theories go unmoderated in Facebook's metaverse

A study of metaverses shows that they remain unsafe spaces for users. Their moderation systems fail to prevent hateful and dangerous speech.

We already know that the metaverse can be a dangerous place. In December 2021, a user of Horizon Worlds, the metaverse of Meta, the parent company of Facebook, reportedly suffered a virtual sexual assault through her avatar, allegedly being groped by other users. Our reporter Nicolas Lellouche, who spent a week in the metaverse, also witnessed bad behavior in a number of alternative worlds, including insults, acts of violence, and attempted sexual assault.

These problems are not isolated incidents: the Sum Of Us association published a comprehensive report on May 31, 2022, listing numerous moderation problems on metaverse platforms, particularly Meta's. In addition to safety issues for users, the authors write that conspiracy theories are poorly moderated in the metaverse.


The metaverse is not a safe place // Source: Canva

A QAnon server on Horizon Worlds

The report’s authors explain that extremist content is reportedly very common in the metaverse. They cite in particular the example of journalists from Buzzfeed, who set up, on Horizon Worlds, a private server entirely dedicated to fake news. Nicknamed “Qniverse”, in reference to the American conspiracy theory QAnon, the group hosted hateful comments.

On this server, Buzzfeed journalists freely published a large amount of fake news, about the alleged “theft” of the 2020 American election, or even about the origin of the Covid pandemic, claiming that the virus was created from scratch. Posts by Alex Jones, one of the leading American conspiracy theorists, claiming that Joe Biden could be a pedophile and that a caste of reptilians secretly rules the world, were also shared without issue.

The reporters deliberately used terms that are usually moderated by Facebook, including QAnon references, which are typically removed quickly from the social network. Yet for 36 hours, the server went unnoticed by Horizon’s moderation teams. The journalists then had to report certain posts multiple times before the moderators responded, and were told that nothing had been found that violated the platform’s terms of use.

Meta fails to effectively moderate a group sharing hateful content in the Horizon Worlds metaverse // Source: Canva

“No” to moderation in the metaverse

How can such a failure be explained, when comments like these usually earn a ban on Facebook? In their article, the reporters offered several hypotheses, such as the small size of the server, or the fact that its members did not interact with anyone outside the group. Still, their experiment exposes the limits of the moderation currently in place in Horizon Worlds, and these problems are likely to grow as the number of users increases.

Instead of learning from its mistakes, Meta is repeating them in the metaverse, conclude the authors of the Sum Of Us report. “Meta has no specific plan for how it intends to moderate dangerous content and behavior, such as hate speech and misinformation,” they state. They also recall that Andrew Bosworth, Chief Technology Officer at Meta, himself admitted in an internal message that moderation in the metaverse is “almost impossible”. A message suggesting that conspiracy theories are not about to be moderated anytime soon.
