Facebook parent company Meta is working hard on The Metaverse, a dystopian virtual world for work, play and relaxation. Many have warned that the virtual world will harm society, eroding any reliable sense of shared reality.
In a wild new development, Meta, the company behind The Metaverse push, has disbanded the team tasked with finding “potential harms to society” in its products. For example: The Metaverse.
Meta disbands society safety team
As revealed in a report by the WSJ, Meta has completely axed the team responsible for discovering harm in upcoming products. The team’s role was to help eliminate any dangers in products such as VR headsets, social media, artificial intelligence and The Metaverse.
According to the report, the team consisted of “roughly two dozen engineers, ethicists and others”. Every new product would go through the team, which would consult with privacy specialists, academics and more to discern potential dangers.
Known as the Responsible Innovation Team, this group was integral to every Meta product launch. However, the team has now been disbanded, with the company taking a different approach.
Spokesperson Eric Porterfield explained that the affected employees may find roles elsewhere in the company. However, the company believes “safe and ethical product design resources were better spent on more issue-specific teams”.
Cutting out the ethics
Meta’s issues with ethics are not a new development for the tech giant. For example, the company is still dealing with the fallout of the Cambridge Analytica scandal.
However, with the rise of The Metaverse and AI, ethical dilemmas have consistently surrounded the company. A recent issue is Metaverse moderation: with Meta calling the virtual world “impossible to moderate”, many have called out the danger of creating such an immersive experience.
Nevertheless, The Metaverse is going ahead, and reports of racism, sexual harassment/groping and more are constantly surfacing. But will the company do anything to actually mitigate this?