One of the most impressive uses of artificial intelligence is to generate thousands of different solutions to a single problem. But what happens when the strength of AI is used for nefarious purposes?
AI used to create chemical weapons
As reported by The Verge, a team of researchers took an artificial intelligence program designed to create helpful drugs and twisted it for evil. Instead of designing drugs to treat illness and cure disease, the software was asked to do the opposite.
The researchers inverted the part of the AI's model that scored the toxicity of chemicals. Instead of steering away from toxicity, the retrained model would actively seek it out, guided by its already learned knowledge of chemical dangers.
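To get a feel for how little has to change, here is a toy sketch of that "flipped switch". This is not the researchers' actual software; the predictor functions and the hill-climbing generator are stand-ins invented for illustration, with a numeric list playing the role of a molecule. The point is only that the same optimization machinery produces opposite outcomes when the sign on the toxicity term flips.

```python
import random

random.seed(0)

# Hypothetical stand-ins for trained ML predictors. In the real study these
# scored actual molecular structures; here a "molecule" is a list of floats.
def predicted_bioactivity(mol):
    return -sum((x - 0.5) ** 2 for x in mol)

def predicted_toxicity(mol):
    return sum(x ** 2 for x in mol)

def score(mol, toxicity_weight):
    # toxicity_weight = -1.0 mimics normal drug design (penalize toxicity);
    # flipping it to +1.0 is the inversion the researchers describe.
    return predicted_bioactivity(mol) + toxicity_weight * predicted_toxicity(mol)

def mutate(mol):
    # Perturb one position slightly, like proposing a small structural change.
    out = list(mol)
    out[random.randrange(len(out))] += random.gauss(0, 0.1)
    return out

def generate(toxicity_weight, steps=5000):
    # Simple hill climbing: keep any mutation that improves the score.
    mol = [random.random() for _ in range(8)]
    for _ in range(steps):
        cand = mutate(mol)
        if score(cand, toxicity_weight) > score(mol, toxicity_weight):
            mol = cand
    return mol

safe = generate(toxicity_weight=-1.0)     # steered away from toxicity
harmful = generate(toxicity_weight=+1.0)  # same machinery, sign flipped
print("predicted toxicity, normal run: ", round(predicted_toxicity(safe), 3))
print("predicted toxicity, flipped run:", round(predicted_toxicity(harmful), 3))
```

Running the sketch, the flipped run drives the predicted toxicity far higher than the normal run, even though not a single line of the search code changed, only one coefficient.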
In just six hours, the program generated a mind-bending number of chemical weapon candidates that could wreak havoc on humans: around 40,000 chemical substances, a result that shook the researchers.
Most alarmingly, through sheer iteration the software managed to generate chemicals similar to VX, widely regarded as one of the most potent nerve agents ever developed. According to the CDC, VX is toxic through skin contact and inhalation: "any visible VX liquid contact on the skin, unless washed off immediately, would be lethal."
A cause for concern?
Fabio Urbina, lead author of the researchers' paper, explained that the study was designed to inform the scientific community about the potential misuse of machine learning. The researchers had been invited to the Convergence conference to discuss AI and its implications for bioweaponry.
"We got this invite to talk about machine learning and how it can be misused in our space," said Fabio Urbina. "It's something we never really thought about before. But it was just very easy to realize that as we're building these machine learning models to get better and better at predicting toxicity in order to avoid toxicity, all we have to do is sort of flip the switch around and say, 'You know, instead of going away from toxicity, what if we do go toward toxicity?'"
Urbina explained that the team wasn't sure whether to publish its findings, in case someone used them for wrongdoing. After much discussion, however, the team decided that publishing was for the best.
"We looked around, and nobody was really talking about it," he said. "But at the same time, we didn't want to give the idea to bad actors. At the end of the day, we decided that we kind of want to get ahead of this. Because if it's possible for us to do it, it's likely that some adversarial agent somewhere is maybe already thinking about it or in the future is going to think about it. By then, our technology may have progressed even beyond what we can do now."