Robots are quickly becoming more prominent in hospitals. With medical robots now able to perform complex surgeries, many are excited for machines to speed up healing across the world.
Just like any form of advanced robotics, medical robots make use of high-end AI to function. However, as is common with AI, dataset biases are becoming increasingly problematic.
Are medical robots racist or sexist?
An international study by researchers, including scientists at Johns Hopkins, has found evidence of racial and gender bias within medical robotics, a finding the researchers consider highly dangerous.
The study found evidence that medical robots can show bias in operational decisions. Much as surgeons weigh success rates when deciding whom to operate on, a robot will pick between people simply by looking at their faces.
The study reads:
“A robot operating with a popular Internet-based artificial intelligence system consistently gravitates to men over women, white people over people of color, and jumps to conclusions about people's jobs after a glance at their face."
These biases are taught to robotic tools through the datasets they are trained on. Massive neural networks are built by humans who bring their own biases to the information they collect; those biases then carry through into the finished model, whether accidentally or on purpose. But what does this mean for humanity?
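The mechanism is easy to sketch. As a purely hypothetical toy illustration (not the study's actual pipeline), the snippet below fits a trivial "model" that simply memorises the most common label per group; whatever skew is baked into the training data comes straight back out as the model's rule:

```python
from collections import Counter

# Hypothetical toy data: each record is (group, label).
# The dataset over-represents positive labels for group A
# and negative labels for group B.
train = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

def fit_majority(data):
    """A naive 'model' that memorises the most common label per group."""
    by_group = {}
    for group, label in data:
        by_group.setdefault(group, Counter())[label] += 1
    # The learned rule is just the majority label for each group.
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = fit_majority(train)
print(model)  # {'A': 1, 'B': 0} -- the dataset's skew becomes the rule
```

Real neural networks are vastly more complex, but the principle is the same: a model trained on skewed data has no way to distinguish the skew from a genuine pattern.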
There’s a risk for the majority of people
Andrew Hundt, one of the scientists working on the study, explained that the biases found in datasets reflect an increasingly common flaw within these systems. Furthermore, companies using these datasets, even within the medical field, have largely failed to address the risks.
"The robot has learned toxic stereotypes through these flawed neural network models,” Hundt explained. "We're at risk of creating a generation of racist and sexist robots but people and organizations have decided it's OK to create these products without addressing the issues."
One of the reasons AI biases are making their way into more and more medical robots is the datasets that companies are willing to use. Like any industry, many companies look for the cheapest R&D possible. This means that some firms simply pick free, readily available neural networks off the internet, the same internet that has produced AI models trained on 4chan comments.
Ethics don’t matter to companies
AI ethics is a vast and complicated subject, and one that many companies are not taking seriously enough. With the same datasets reused across countless products, the same issues permeate multiple industries.
Even worse, major AI companies are not working hard enough to mitigate these issues. Some appear to be moving in the opposite direction, with companies like Google firing ethics researchers instead.