While artificial intelligence has helped transform mathematics and create life-changing pharmaceuticals, it's still not entirely trusted. With many worried about AI's entrenched biases, or fearing the creation of a world-ending, Ultron-esque program, it may be up to a group of ethical hackers to restore trust.
Experts call for ethical hackers to fix AI
As reported by TechXplore, experts from the University of Cambridge have argued that large AI models need to be stress-tested by ethical hackers. These hackers would be well placed to address some of the biggest issues with modern artificial intelligence.
For starters, researchers at the Centre for the Study of Existential Risk suggested that major companies recruit red-team hacking crews to complete “bias bounties”. Much like bug bounties for software exploits, these bounties would pay hackers for finding dangerous biases in AI models, giving them a financial incentive to help fix the software’s problems.
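To make the idea concrete, here is a minimal, hypothetical sketch (not from the article or the researchers) of the kind of automated check a bias-bounty hunter might run against a model's predictions. It computes the demographic parity gap: the difference in positive-prediction rates between two groups. All names, data, and the flagging threshold are illustrative assumptions.

```python
# Hypothetical bias-bounty audit sketch: measure the gap in positive-
# prediction rates between two groups (demographic parity difference).

def positive_rate(predictions, groups, group):
    """Share of positive predictions (1s) received by members of `group`."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_gap(predictions, groups):
    """Absolute gap in positive rates between the two groups present."""
    a, b = sorted(set(groups))
    return abs(positive_rate(predictions, groups, a)
               - positive_rate(predictions, groups, b))

# Toy audit: the model approves 3/4 of group "A" but only 1/4 of group "B".
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # prints 0.50

if gap > 0.2:  # hypothetical bounty-worthy threshold
    print("potential bias found -- file a bounty report")
```

In a real bounty programme the red team would probe a deployed model with many such slices of inputs, then report any slice where outcomes diverge sharply between groups, much as a bug-bounty hunter reports a reproducible exploit.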
The experts argue that this approach would rely less on regulation and more on companies' own sense of responsibility. They also note that holding this moral high ground matters to large companies if they want to keep the public sold on AI products.
If ethical issues continue to plague AI, the general population may suffer a “crisis of trust”: large sections of the public could reject artificial intelligence outright, causing adoption of the technology to collapse.
The backlash has already begun
The Centre for the Study of Existential Risk researchers argue that we are already starting to see a “tech-lash” against artificial intelligence. Lead author Dr. Shahar Avin claims that the backlash “can be all encompassing: either all AI is good or all AI is bad.”
He continued: “Governments and the public need to be able to easily tell apart between the trustworthy, the snake-oil salesmen, and the clueless. Once you can do that, there is a real incentive to be trustworthy. But while you can't tell them apart, there is a lot of pressure to cut corners.”
Avin argues that the majority of AI developers want to create safe programs. However, without concrete testing it's often unclear how a program will affect the public, and that testing frequently happens only after full release rather than in a closed environment.
Avin argues that ethical hackers should play “the role of malign external agent”. This means they “would be called in to attack any new AI, or strategise on how to use it for malicious purposes, in order to reveal any weaknesses or potential for harm.”
As AI becomes more prominent in everyday life, this form of adversarial testing could be integral to protecting the public. In any case, making artificial intelligence more secure and safe can only be a good thing for humanity.