Powerful AI must align with human values to remain safe, says Google


Tech giant Google is one of the top three companies for artificial intelligence, fuelled by its DeepMind subsidiary. However, the company has warned that powerful AI needs to be taught human values so it does not wipe us out.

Google v Powerful AI

Speaking at Code 2022, Google and Alphabet CEO Sundar Pichai explained that AI needs to be developed with humanity in mind. However, he did not explain how that can be achieved.


As reported by PCMag, Pichai’s statements on the development of AI were rather basic. However, the CEO did say that more will be done to safeguard humans from Google's artificial intelligence as the tech advances.

Pichai explained that it’s “important that we develop AI aligned with human values." This means that powerful AI should be designed to emulate human ethics.

However, Google can be seen as hypocritical on this stance. After all, the tech giant has been criticised for its poor handling of AI bias. Furthermore, the company was recently embroiled in the massive LaMDA scandal, in which one engineer claimed the company had created a sentient program.

Read More: Japanese conspiracy theory claims robots are killing dozens of scientists

A lack of ethics

Many experts have pushed back against Google’s uncomfortable stance on AI ethics. Despite numerous issues with its AI technology, Google allegedly takes a rather relaxed view of one of the industry’s biggest problems: bias.

AI bias is prominent across the entire industry. Because biased datasets underpin the majority of AI programs, many have pointed out the risks of massive neural nets. In fact, the problem is so prominent that the White House has proposed an AI Bill of Rights.


However, Google’s issues run deeper than datasets. In multiple instances, the company has allegedly fired ethics researchers for speaking out against problems with Google AI projects.

One of the most famous examples is Timnit Gebru. Gebru has repeatedly called Google’s research processes dangerous, alleging that the company cares more about improving its technology than keeping humans safe.

At the panel, Pichai acknowledged that Google has made mistakes in the past. Hopefully, that means future work on powerful AI will be better attuned to human safety.