AI researchers need to “stop and calm down for a second”, says ex-Google engineer


The ethics of artificial intelligence has long been a staple of science fiction. Now that the technology is in front of us, however, it is a conversation many AI researchers would rather avoid. Ex-Google engineer Timnit Gebru believes it needs to happen now.

Speaking to Wired, Gebru discussed the current issues with artificial intelligence. With human bias informing algorithms, Gebru has noticed flagrant issues in AI products from major companies, and she’s not happy about it.

AI ethics must be part of AI research


After being forced out of Google, Gebru began speaking publicly about the problems with current AI research. Even while at Google, she sought to prevent real-world harm caused by artificial intelligence. For example, she describes “terrifying” AI programs that assess someone’s likelihood of committing a crime again.

At Google, her role was to address dangerous issues in the company’s products. Eventually, though, Google excluded her and her team from necessary conversations. The tech giant wanted continuous expansion, “larger and larger… models”, but it didn’t want to confront the problems that come with that growth.

“We had to be like, ‘Let’s please just stop and calm down for a second so that we can think about the pros and cons and maybe alternative ways of doing this,’” Gebru told Wired.


A result of industry trends

Gebru’s comments reveal the tech industry’s current priority: beating the competition by whatever means necessary. Instead of pausing for ethical roadblocks, companies push the technology forward; if a competitor improves, you must improve again, even faster. The AI researcher explained that there’s no thought given to which AI systems should be built and which shouldn’t.

She said:

“The incentive structure is not such that you slow down, first of all, think about how you should approach research, how you should approach AI, when it should be built, when it should not be built. I want us to be able to do AI research in a way that we think it should be done—prioritizing the voices that we think are actually being harmed.”
