With the recent launch of Meta's Horizon Worlds, the future of virtual reality is apparently here. As it turns out, that future is full of sexual harassment, just like the real world.
As reported by MIT Technology Review, the first Metaverse app from Meta — formerly Facebook — is already attracting bad actors. Predictably, the company with a long history of moderation failures is having trouble keeping users in line.
Sexual harassment is already active in Meta's Horizon Worlds Metaverse
On November 26th, just before the service launched, a Horizon Worlds beta tester reported an uncomfortable experience inside the Metaverse: another user had begun groping her inside the application.
After the incident was reported, Meta's internal review concluded that the player should have done more to protect herself. Specifically, the company said she should have activated the "Safe Zone" tool.
The Safe Zone tool lets a user isolate themselves inside a protective bubble in Horizon Worlds. While inside the bubble, other users cannot talk to, touch, or otherwise interact with them at all.
However, by leaning on this tool, Meta is essentially giving up on moderating its Metaverse app. Rather than punishing wrongdoers, the company is shifting the burden of avoiding sexual harassment onto the victims.
Meta is unable to moderate anything
Meta's issues with moderation have long plagued services like Facebook and WhatsApp. However, a service as vast and interactive as the Metaverse demands even more moderation work, and Meta knows this.
Leaked internal reports from Meta describe a toxic VR future that will be nearly impossible to moderate properly. While executives have called for "Disney levels" of moderation, the current reality appears to be no moderation at all.