Artificial intelligence is one of the most significant emerging technologies of our time. However, many worry that powerful AI could end up destroying the world, and AI scientists are well aware of this belief. In fact, many of them share it.
AI scientists and dangerous AI
A report by New Scientist (paywall) revealed that around a third of AI scientists believe the technology is dangerous, with the potential to cause a global catastrophe.
The study was conducted by Julian Michael of New York University's Center for Data Science. Michael asked a sample of 327 AI scientists whether AI could end the world, and a large number of them claimed that it absolutely could.
Of all respondents, 36% claimed that artificial intelligence has the power to wreak catastrophe on Earth, while a massive 73% believed that AI will undoubtedly change society due to increased automation. Some respondents believed the technology could even lead to nuclear catastrophe.
Center for a New American Security’s Paul Scharre explained: “If it was actually an all-out nuclear war that AI contributed to, there are plausible scenarios that could get you there. But it would also require people to do some dangerous things with military uses of AI technology.”
However, not all of the respondents believed that AI's dangers revolve around nuclear disaster. Instead, they believe that AI is dangerous, but in a less severe way than nuclear devastation.
“Concerns brought up in other parts of the survey feedback include the impacts of large-scale automation, mass surveillance, or AI-guided weapons,” said the survey’s creator. “But it’s hard to say if these were the dominant concerns when it came to the question about catastrophic risk.”
Military AI is an issue
AI scientists are not alone in their wariness of the dangers that artificial intelligence holds; the general public is too. Nevertheless, militaries are still developing advanced AI technologies for use in warfare.
In recent years, military AI has even evolved into predictive technologies. For example, the American military has been working on a Minority Report-style AI system to predict war events.
Furthermore, AI is being paired with robotics for warfare. Military defence company Ghost Robotics has designed a robot dog with a rifle attached to its back. Other military robots must undergo “common sense training” before being used in the field.
While the United Nations has attempted to restrict the use of military AI and robotics, the intergovernmental body has been unsuccessful. As it stands, AI and robotics in the military are here to stay.