Employees at Google have pushed back against the company’s ChatGPT competitor, Google Bard. Over the past few months, they have begged the company not to release the inaccurate, lying AI to the general public.
According to documents obtained by Bloomberg, many workers at Google are unhappy with the state of the AI software. Employees at the tech giant claim that it is dangerous and a “pathological liar”.
Google’s move to release Bard before it was ready has been deemed an “ethical lapse” in the company’s AI development. However, the company has shown disregard for AI ethics for years, including by firing AI ethics leads such as Timnit Gebru.
During testing, Google Bard was found to give users incorrect, even life-threatening, information. When one user asked the AI how to land a plane, the tool’s advice would have led to a crash. Another user received scuba-diving information that was likely to “result in serious injury or death.”
Employees fought against releasing an inaccurate product, but Google, unwilling to fall behind ChatGPT and Microsoft’s AI-powered Bing, pushed the Bard release forward.
Within Google, AI ethics has taken a backseat to staying relevant. Management has reportedly decided that risky technology such as Bard can be released to everyone as long as it is labelled an experiment, even if the mainstream public using it is unaware of the risks the technology poses.
After all, while many people are using ChatGPT and its alternatives to write university essays, SEO content and more, the technology actually knows very little. As a result, it produces endless misinformation, and Google Bard is following the same path, with plans to incorporate it directly into the world’s most popular search engine.
Employees are still facing obstacles to developing ethical AI at Google. Just like the rollout of Bard, in-development projects at the company are often rushed to release while managers fight against any ethical safeguards.
According to the Bloomberg report, employees have fought to work on fairness in machine learning algorithms. However, management has cut off attempts to make the company’s AI products fair and ethical, claiming that time and effort spent on those areas hampers the “real work” that generates profit.
This has continued with Bard. The report claims that Google’s AI governance lead, Jen Gennai, overruled her own team’s risk evaluation, which concluded that Bard’s performance was “next to useless” as well as dangerous.
It’s worth noting that Google Bard is not the only AI service that is inaccurate and dangerous; they all are. However, Google’s unwillingness to fix its ongoing AI ethics issues, especially after its CEO said it would do so, proves that those in the AI field are not looking to do good by humanity, just to profit.