AI with human thinking developed by Fujitsu and MIT


In its current form, the intelligence of “artificial intelligence” is intrinsically tied to its training datasets. While programs like the Megatron Transformer have access to many millions of internet pages to build their knowledge, they're still not AI with human thinking.

If these programs did have human thinking, they would be able to learn beyond their datasets. However, according to researchers at Fujitsu and MIT, AI with human thinking isn't just a future prospect; it's already here.

Fujitsu and MIT create AI with human thinking

As reported by TechRadar Pro, researchers at Fujitsu and the MIT Center for Brains, Minds and Machines (CBMM) have created the next generation of artificial intelligence. In an attempt to improve AI image recognition accuracy, the researchers created a model that can learn on its own.

The team claim the new intelligence program has the same thinking process as humans. Essentially, the program can handle “out-of-distribution” data: information drawn from situations that fall outside its training datasets.

Just like the human brain, the AI program teaches itself from context what objects are and what their purpose is. Of course, this introduces a wider margin for error, as the AI can only compare new inputs against information it already knows.
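
To make the “out-of-distribution” idea concrete, here is a minimal, hypothetical sketch (not the Fujitsu/MIT model): a toy classifier is trained on one distribution of data and then scored on data drawn from a shifted distribution. The class centres, noise levels and scikit-learn classifier are illustrative assumptions only; the point is simply the accuracy gap that appears once inputs fall outside the training set.

# Illustrative sketch only: a conventional classifier evaluated on data
# it was trained on versus data from a shifted ("out-of-distribution") source.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# In-distribution training data: two object classes centred at -1 and +1.
X_train = np.concatenate([rng.normal(-1.0, 0.5, (500, 2)),
                          rng.normal(+1.0, 0.5, (500, 2))])
y_train = np.array([0] * 500 + [1] * 500)
model = LogisticRegression().fit(X_train, y_train)

# Test set drawn from the SAME distribution the model was trained on.
X_iid = np.concatenate([rng.normal(-1.0, 0.5, (200, 2)),
                        rng.normal(+1.0, 0.5, (200, 2))])
y_iid = np.array([0] * 200 + [1] * 200)

# Out-of-distribution test set: the same two classes, but shifted and
# noisier, standing in for inputs the model never saw during training.
X_ood = np.concatenate([rng.normal(0.5, 1.2, (200, 2)),
                        rng.normal(2.5, 1.2, (200, 2))])
y_ood = np.array([0] * 200 + [1] * 200)

print("in-distribution accuracy:    ", model.score(X_iid, y_iid))
print("out-of-distribution accuracy:", model.score(X_ood, y_ood))

Running this, the in-distribution score sits near 100% while the out-of-distribution score drops sharply, which is the kind of gap the Fujitsu and MIT researchers say their approach is meant to close.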

The start of flexible AI

One of the most important aspects of an AI that learns in this manner is its potential to combat dataset bias. Intentional or unintentional biases often make their way into the datasets used to train artificial intelligence.

These biases are incredibly problematic and have an adverse effect on marginalised groups. The problem is so well known that White House officials are attempting to create an AI Bill of Rights to fight it.

Dr. Tomaso Poggio, a professor in MIT's Department of Brain and Cognitive Sciences, believes this new form of AI is essential to combating bias. He said:

“There is a significant gap between DNNs and humans when evaluated in out-of-distribution conditions, which severely compromises AI applications, especially in terms of their safety and fairness. The results obtained so far in this research program are a good step [towards addressing these kinds of issues].”
