Meta's new deepfake policies barely fight AI misinformation

Image: Joe Biden on a phone next to a robot with blue eyes

Breakdown

  • Meta has recently been dealing with a flood of misinformation and deepfake content generated with artificial intelligence.
  • The company plans to implement a labelling system, similar to X's Community Notes, to mark AI-generated content and misinformation.
  • Meta's Oversight Board has demanded that the labels also cover user-generated misinformation.

Meta has finally launched an offensive against the AI-generated and deepfake content that has been running rampant on Facebook. However, the new policy opts for a lenient, informative approach.

The shift in policy focuses on labelling AI-generated or misleading content and attaching disclaimers rather than removing it outright.

Monika Bickert, Meta's Vice President of Content Policy, wrote in a blog post that Meta and its subsidiaries will start adding a "Made with AI" label to AI-generated content around May 2024. The label will apply to all media formats, including images, video, and audio.

She added that the labels will have different tiers of severity and can also be applied to non-AI-generated content, with a particular focus on misleading material or edits that spread misinformation. The system is reminiscent of X's Community Notes feature; however, it is not yet clear how Meta intends to judge content.

Meta has previously mentioned an automated system that can identify AI-generated content from certain companies through invisible markers embedded in the image. However, there has been no update on its implementation.
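The "invisible markers" Meta refers to are most likely provenance metadata embedded in the file, such as a C2PA content-credentials manifest or the IPTC `DigitalSourceType` value `trainedAlgorithmicMedia`. As a rough, hypothetical illustration of the idea (not Meta's actual system, which is not public), a naive checker could simply scan a file's raw bytes for those marker strings:

```python
# Naive illustration only: a real provenance check would parse the C2PA/XMP
# metadata structures properly. This sketch just scans raw bytes for known
# marker strings that AI tools may embed in exported media.

# Assumed marker strings (hypothetical shortlist, not exhaustive):
AI_MARKERS = [
    b"c2pa",                     # C2PA content-credentials manifest label
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType for AI-generated media
]

def looks_ai_generated(data: bytes) -> bool:
    """Return True if any known AI-provenance marker appears in the bytes."""
    return any(marker in data for marker in AI_MARKERS)

# Usage: read an image file and check it.
# with open("photo.jpg", "rb") as f:
#     print(looks_ai_generated(f.read()))
```

A byte scan like this is easy to evade by stripping metadata, which is why provenance schemes pair embedded manifests with invisible watermarks baked into the pixels themselves.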

Meta aims to launch its "high-risk" label with immediate effect as elections draw near around the world. Political campaigns have already begun using AI, and AI-generated images run rampant in online discussion forums.

Meta's Oversight Board has been highly critical of the company's current rules and policies on media. The board cited a video of Joe Biden that had been altered to show him behaving inappropriately and making questionable remarks about the Ukraine drafts.

In response, the Oversight Board demanded back in February that Meta expand its policies so that non-AI-generated content also falls under the misleading-content labels.

Meanwhile, Meta has been struggling to handle AI-generated content and deepfakes. Jenna Ortega and many other celebrities have had their likenesses exploited by companies and advertising agencies using deepfake technology to depict them in explicit content.

The new policy makes no mention of how Meta plans to deal with these companies and the campaigns that abuse its advertising system. Hopefully, Meta can implement a way to shut down such campaigns swiftly.
