Humanity was destined to break The Three Laws of Robotics


The Three Laws of Robotics are a long-beloved sci-fi concept. Created by I, Robot writer Isaac Asimov in 1942, the fictional rules were believed to be a guide to creating robots that would benefit humanity.

With products like the Tesla Bot attempting to commercialise the household humanoid robot, these laws are more relevant than ever. However, as per usual, humanity has taken science fiction's cautionary tale and ignored it completely.

What are The Three Laws of Robotics?

In Asimov’s 1942 short story Runaround — later collected in I, Robot — the Three Laws of Robotics are rules that all aware robots must follow. These rules are designed to keep robots subservient to humanity and to protect humans from any harm that robots could cause.

The rules are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
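The three laws form a strict priority order: each law only applies when it does not conflict with the laws above it. As a toy illustration only — the boolean inputs are hypothetical placeholders for judgements no real robot could actually compute — the ordering can be sketched in Python:

```python
def permitted(causes_human_harm, ordered_by_human, endangers_robot):
    """Toy sketch of the Three Laws as a strict priority check.

    Each argument is a hypothetical, pre-computed judgement about a
    proposed action; the function just encodes the precedence order.
    """
    # First Law outranks everything: never permit harm to a human.
    if causes_human_harm:
        return False
    # Second Law: obey a human order, unless the First Law forbade it above.
    if ordered_by_human:
        return True
    # Third Law: self-preservation, subject to the first two laws.
    if endangers_robot:
        return False
    return True
```

Under this ordering, a direct human order to do something risky is still permitted (the Second Law outranks the Third), which is exactly the tension Asimov exploits in his stories.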

These rules are already shown to be unstable in Asimov’s original story. In the story, robot SPD-13 is helping to restore the photo-cell banks that power a mining station’s life support. To do this, SPD-13 must retrieve selenium.

However, upon reaching the selenium, SPD-13 realises that it is dangerous to him. Prior to the mission, the robot’s laws had been altered to strengthen the Third Law, as the machine was too expensive to lose. The robot sees danger, yet it can’t disobey its direct order, and failing its task would result in humans coming to harm; its AI brain stalls, unable to continue.

Read More: Engineered Arts CEO warns against pairing AIs with military robotics

Are these rules used in robot and AI development?

Asimov’s Three Laws of Robotics have been proposed as a guideline for the development of robots and AI for almost 80 years. However, as both fields become more complex and varied, those rules are falling by the wayside.

For example, in 2011, the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council created a revised ruleset. Consisting of five rules and seven messages, the guidelines state that robots should not be designed to kill people, should serve human goals, should protect their own safety, and should not emotionally manipulate humans or be indistinguishable from them.

The laws of robotics have also made their way into AI development. In 2016, Microsoft CEO Satya Nadella revealed the company’s rules for AI. As one of the largest companies in the world, Microsoft may have published the most influential guidelines. They are as follows:

  1. "A.I. must be designed to assist humanity", meaning human autonomy needs to be respected.
  2. "A.I. must be transparent", meaning that humans should know and be able to understand how it works.
  3. "A.I. must maximize efficiencies without destroying the dignity of people".
  4. "A.I. must be designed for intelligent privacy", meaning that it earns trust by guarding users' information.
  5. "A.I. must have algorithmic accountability so that humans can undo unintended harm".
  6. "A.I. must guard against bias", so that it does not discriminate against people.

Read More: Artificial Intelligence is creating patents faster than humanly possible, breaking the law

The Laws of Robotics are being ignored

Despite their prominence in ethical discussions and pop culture, the three laws of robotics are being completely ignored — purposefully. As a ruleset that has never been written into law, the guidelines do not have to be followed, and that means they never will be.

For example, the main rule — a robot may not injure a human being or, through inaction, allow a human being to come to harm — is being grossly violated. With automated drones, robot dogs with back-mounted rifles and future robotics designed for warfare, this guideline will likely never be followed.

Even Microsoft’s AI regulations, a set of rules created by a company using AI for profit, cannot be followed, at least by its rivals. The last rule, guarding against bias, is a huge issue across every major AI platform, from GPT-3 to GANs to Facebook’s new AI, which Meta AI researchers themselves deemed unusably racist and toxic.


And yet, not much is being done to combat this. On the “killer robotics” side, there are some protests, including at the UN. However, as countries like America, China and Russia refuse to give up killer robots, enforcing the first rule is impossible.

For AI, the situation is arguably more egregious, with companies like Google firing AI ethics researchers who highlight the dangers of current, biased AI.

Why humanity was destined to break the rules

Humanity was destined to break The Three Laws of Robotics for two reasons: money and power. The robotics industry is ballooning in size across the industrial, military and even commercial sectors.

With the Tesla Bot and robot pets aiming to become part of the everyday person’s daily routine, these laws are more integral than ever. However, if the AI powering self-driving cars still can’t decide whether or not to ram into a crowd, how can we trust the AI powering commercial robots to stay unbiased in its tasks?