AI is heading towards a Terminator future, claims ethicist

With AI and robotics becoming the norm, many fear we are heading into a Terminator future. One ethicist believes that such a future could arrive sooner than we think, at least if we keep making sweeping advances with these technologies.

While an actual dystopian future like The Terminator's is a bit far-fetched, we are certainly closer to one than ever. Should we be worried about robots going haywire, or have we simply watched too many movies about evil robots?

A Terminator future is…possible?

Philosopher and ethicist William MacAskill addressed the possibility of a Terminator-like future in an interview with the Big Issue, amongst other, more grounded concerns. Though MacAskill raises some interesting points about how to think about the future, his comments on a possible Terminator scenario are a bit much.

“There is one aspect of The Terminator that I think is correct. In the Terminator universe, they start developing more and more powerful AI systems,” explains MacAskill. “And then they create this one – Skynet – that’s particularly powerful. Skynet realises that humanity is a threat to itself, and therefore takes defensive action. It’s a kill or be killed scenario.”

With robots now being mass-produced, not to mention our reliance on AI for even trivial matters, MacAskill might have a point. Hopefully, corporations never come to rely completely on robotics, as we've seen enough sci-fi movies to know how that ends.

What can we do about it?

Before anyone thinks about fighting the machines, William MacAskill argues that we should weigh the consequences of any action before taking it. Acting rashly could see people become overly aggressive towards robots, which is exactly the kind of escalation that leads to a Terminator or Matrix future.

“There are many, many problems in the world, many things that impact the long term. How do you prioritise among them?” asks MacAskill. “Even asking the question, that’s the most important starting point. But what I argue is that when we take an action, we should think in terms of: what’s the probability that we can make a difference? And then secondly, how good or bad would the result of that difference be? You can use that to start to prioritise among these different challenges.”

Here’s hoping that things never get that bad. At the very least, let’s hope we’re all long gone before it even becomes a possibility.
