Facebook, now Meta, has always had a moderation problem. Hate speech, including antisemitism, is rife on the company's social media platform. But hateful content is profitable, so why would the company remove it? And with plans for a new, all-encompassing metaverse VR platform, how will moderation work in the future?
Facebook's Metaverse VR will be the most toxic platform yet
As reported by the Financial Times, incoming Meta CTO Andrew Bosworth believes the new platform will be incredibly toxic. An internal memo from March 2021, obtained by the outlet, shows Bosworth describing a platform rife with bullying.
According to the outlet, Bosworth said bullying and bad behaviour are worsened in a fully interactive environment. Mix internet anonymity with full immersion, he believes, and the result is an instant “toxic environment”.
To combat this, the Meta executive wants “almost Disney levels of safety” in the metaverse. However, he believes that actually introducing effective moderation tools “at any meaningful scale is practically impossible”.
Bad moderation will kill the company
In the memo, Bosworth expressed worries that poor moderation would harm the company. Facebook has been under more pressure than ever to tackle hate speech on its platform, yet until recently the company repeatedly failed to take action.
If the toxicity of a metaverse VR platform is as bad as Bosworth expects, it will be an “existential threat” to the company, one that could drive “mainstream customers from the medium entirely”.
Multiple Facebook employees have discussed the challenges of actually building the metaverse. Most notably, legendary game programmer and VR chief John Carmack has little faith that Facebook's first-generation attempt will succeed.