AI Journalist Brands Innocent Man As Sexual Predator

A robot man in a suit typing on a computer in a dimly lit office
Credit: StealthOptional

The technology industry continues to allocate increasing resources to artificial intelligence each year, but there remains a significant lack of safety measures and regulations for AI. While AI offers numerous benefits to humanity, it also has the potential to cause substantial harm.

Most people use the best AI chatbots, like the free ChatGPT 4o, for harmless tasks such as editing text or summarizing long articles and YouTube videos. However, for every positive use of AI, such as enabling a country star to release a new song after years of inactivity, there is an equally troubling one, from people generating AI nudes of celebrities to sell on eBay to Google's AI advising users to drink their own urine.

Once again, there's fresh reason to fear the imminent AI takeover. While the "Godfather of AI" warns that the technology could lead to human extinction, there are equally alarming examples of AI misuse happening right now. One recent case involves an Irish DJ and talk show host who was falsely labeled a sexual predator by an AI-generated news article.

In an investigation by The New York Times (via Futurism) into an AI news site called BNN Breaking, one story stands out: an article claiming that Irish broadcaster Dave Fanning was facing trial for sexual misconduct. Fanning was not the broadcaster involved in the trial, but the AI-generated article falsely named him, presenting him to the site's readers as a potential sexual predator. Matters worsened when MSN, Microsoft's media aggregation site, promoted the story to a far larger audience. It was featured for several hours before being taken down, but by then the damage was done.

The article's appearance on a seemingly legitimate news site, and its promotion by MSN, owned by Microsoft, a company deeply invested in AI through products like Copilot Pro and the upcoming and controversial Recall feature, led Fanning to sue both Microsoft and BNN Breaking for defamation. Even though the article was only live for a short time, his reputation had already been tarnished.

Microsoft is already under scrutiny, especially after one of its engineers warned the FTC about Copilot generating harmful content. This incident will add further pressure on one of the world's largest tech companies to implement safety measures around its AI tools. For now, it stands as yet another example of AI being a problem rather than a tool.
