The late Stephen Hawking said AI “will either be the best thing that’s ever happened to us, or it will be the worst thing”. Years after his passing, the United Nations is looking to stomp out AI programs that threaten human rights.
United Nations vs Dangerous AI
As reported by Al Jazeera, the UN has called for a moratorium on dangerous artificial intelligence programs, citing the lack of regulation surrounding AI.
The United Nations isn't stopping the development of AI outright. However, it is hoping to block the “sale and use of artificial intelligence (AI) systems that threaten human rights”. The UN then plans to introduce safeguards to limit the use of dangerous AI software going forward.
In a press release, UN High Commissioner for Human Rights Michelle Bachelet explained the reasons behind the moratorium. She said:
“We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact.
The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society.”
AI bias in the real world
Artificial intelligence relies on human input, which means human bias inevitably seeps into supposedly impartial software. This is why “human rights guardrails” need to be enforced on all artificial intelligence. Bachelet said:
“AI systems... determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online.
The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real. This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks.”
We're at a pivotal moment for the future of artificial intelligence. AI is currently smart enough to be widely used, but not smart enough to drive itself down a straight road. In the coming years, AI will get rapidly smarter, environment be damned.