Microsoft kills sales of Emotion Analyser AI after realising the dangers


Emotion analysing artificial intelligence is an emerging technology, with companies like Zoom horrifically pitching their versions to schools. However, as emotion analysing AI "gets good", many are realising the dangers of the fledgling software, including tech giant Microsoft.

Microsoft kills sales of Emotion Analysing AI

In a press release on Tuesday, Microsoft announced that they will no longer be selling Emotion Analysing AI technology. The company notes that the tech is too dangerous to become commonplace.

Microsoft’s research has resulted in AI software that can detect the gender, age and emotions of a person via facial recognition. However, the technology has often been deemed racist due to dataset biases, as white male faces vastly dominate the data used to train its neural network.

The tech giant announced that it would be restricting access to its AI services. This is to avoid “subjecting people to stereotyping, discrimination, or unfair denial of services.”

Furthermore, Microsoft explained that Emotion Analyser AI software is far more limited than many believe. They said:

“These efforts raised important questions about privacy, the lack of consensus on a definition of ‘emotions,’ and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics.”

The decision to retire the dangerous AI is part of Microsoft’s “Responsible AI Standard”. Under this policy, the technology moves to “Limited Access”, allowing Microsoft to continue its research without selling the products commercially.


The backlash towards emotion AI

Emotion-based artificial intelligence has been a controversial topic over the past few years. In fact, the technology has been frequently protested by human rights groups.

Organisations including the American Civil Liberties Union, Electronic Privacy Information Center and Fight for the Future have all protested the technology, especially its use against children.

Multiple human rights groups have argued that “facial expressions can vary significantly and are often disconnected from the emotions underneath such that even humans are often not able to accurately decipher them.” Essentially, human emotions are not binary, and computers can and will mislabel them.
