An AI designed 40,000 new chemical weapons in 6 hours


One of the most impressive uses of artificial intelligence is generating thousands of different solutions to a single problem. But what happens when that strength is turned to nefarious purposes?

AI used to create chemical weapons

As reported by The Verge, a team of researchers took an artificial intelligence program designed to discover helpful drugs and twisted it toward harm. Instead of designing drugs to treat illnesses and cure disease, the software was asked to do the opposite.

The researchers altered how the AI scored the toxicity of chemicals. Instead of avoiding toxicity, the modified model would actively seek it out, guided by the knowledge of chemical dangers it had already learned.
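In rough terms, the change amounts to flipping the sign on a toxicity term inside the generator's scoring function. The sketch below is purely illustrative: the function names (predicted_toxicity, score_candidate) and values are hypothetical stand-ins, not the researchers' actual software, which is not public.

```python
# Conceptual sketch of the "flip the switch" idea described above.
# All names and values here are hypothetical placeholders.

def predicted_toxicity(molecule: str) -> float:
    """Stand-in for a learned toxicity predictor returning a score in [0, 1]."""
    # In a real system this would be a trained ML model scoring a candidate
    # structure; here we fake a value purely for illustration.
    return 0.9 if "phosphono" in molecule else 0.1

def score_candidate(molecule: str, bioactivity: float, avoid_toxicity: bool = True) -> float:
    """Score a generated molecule for the optimizer to rank.

    Normal drug-discovery use penalizes predicted toxicity; flipping the
    sign turns the same predictor into a reward for toxic compounds.
    """
    tox = predicted_toxicity(molecule)
    toxicity_term = -tox if avoid_toxicity else tox  # the flipped "switch"
    return bioactivity + toxicity_term

# Same candidate, two objectives: the only difference is the sign on toxicity.
print(score_candidate("methyl-phosphono-thioate", 0.8, avoid_toxicity=True))   # scores low
print(score_candidate("methyl-phosphono-thioate", 0.8, avoid_toxicity=False))  # scores high
```

The point the researchers make is that no new capability is needed: the same predictor that steers a generator away from dangerous compounds can, with a trivial change of objective, steer it toward them.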

In just six hours, the program generated a mind-bending number of candidate chemical weapons that could wreak havoc on humans. Around 40,000 chemical substances were proposed, a result that shook the researchers.

Most importantly, through sheer iteration the software managed to design chemicals similar to VX, regarded as one of the most potent nerve agents ever developed. According to the CDC, VX is toxic via skin contact and inhalation; the agency states that “any visible VX liquid contact on the skin, unless washed off immediately, would be lethal.”


A cause for concern?

Fabio Urbina, lead author of the researchers’ paper, explained that the study was designed to alert the scientific community to the potential misuse of machine learning. The researchers had been invited to the Convergence conference to discuss AI and its implications for bioweaponry.

“We got this invite to talk about machine learning and how it can be misused in our space,” said Fabio Urbina. “It’s something we never really thought about before. But it was just very easy to realize that as we’re building these machine learning models to get better and better at predicting toxicity in order to avoid toxicity, all we have to do is sort of flip the switch around and say, ‘You know, instead of going away from toxicity, what if we do go toward toxicity?’”

Urbina explained that the team wasn’t sure whether to publish their findings, in case someone used them for wrongdoing. After much discussion, however, the team decided it was for the best.

“We looked around, and nobody was really talking about it,” he said. “But at the same time, we didn’t want to give the idea to bad actors. At the end of the day, we decided that we kind of want to get ahead of this. Because if it’s possible for us to do it, it’s likely that some adversarial agent somewhere is maybe already thinking about it or in the future is going to think about it. By then, our technology may have progressed even beyond what we can do now.”
