The United States Department of Defense has a rocky history with artificial intelligence. Controversy has followed the department everywhere, from Minority Report-style predictive systems to hiring game developers to secretly build war software. In response, the department is now establishing AI ethics guidelines.
As reported by MIT, the Department of Defense is attempting to rekindle trust with the American public. With this in mind, the department wishes to create transparent AI guidelines that its contractors must abide by.
What are the AI ethics guidelines for the Department of Defense?
The new ethical guidelines for artificial intelligence have been labelled the “responsible artificial intelligence” guidelines. These rules will restrict developers crafting military AI programs from creating “unethical” software.
DoD Defense Innovation Unit’s Bryce Goodman said: “There are no other guidelines that exist, either within the DoD or, frankly, the United States government, that go into this level of detail.”
The new guidelines will back every new piece of AI-powered tech coming out of the DoD. Every project must be built around key questions: “Who might use the tech? Who could the tech harm? How will the tech harm people?” However, it is not stated what will happen to projects that fail to meet the guidelines.
Killer Robots not included
Goodman admits that AI ethics guidelines are not going to immediately bring trust to the department. “It’s important to be realistic about what guidelines can and can’t do,” he explained. “There are going to be people who will never be satisfied by any set of ethics guidelines that the DoD produces because they find the idea paradoxical.”
As MIT notes, the current guidelines do not even attempt to tackle one of the most controversial areas of AI technology: robotics. With AI-powered weaponry and killer robots being developed for war, where are their restrictions?
Goodman states that restricting robotics is not in the DoD’s purview. Instead, guidelines on those technologies rest with higher authorities. For now, the DoD can only make sure the AI software it runs is impartial.