Mar 28, 2019 ● Tim Sandle
How People Can Teach AI To See Like Humans

A new study demonstrates that humans can think like computers

The research indicates that artificial intelligence is narrowing the gap between the visual abilities of people and machines, with computers edging closer to how people think.

The focus of the research is to investigate whether people can think like computers, rather than to find ways to get computers to think more like people. The Johns Hopkins University scientists adopted this approach in order to better understand how artificial intelligence systems perceive the world around them.
Such an understanding is important for driving improvements, such as in the way self-driving cars operate. For example, if a self-driving car interprets a series of scribbles as a fence, this can result in an accident. To overcome this, humans need to understand why the artificial intelligence controlling the autonomous vehicle ‘sees’ a series of scribbles as a fence in that particular instance.
According to lead researcher Professor Chaz Firestone: “Most of the time, research in our field is about getting computers to think like people. Our project does the opposite—we’re asking whether people can think like computers.”


While AI systems are superior to humans at running calculations and at storing and retrieving data, visual recognition has long been a challenge. Although neural networks have improved, it remains possible to present them with images that they either fail to recognize or interpret incorrectly. This presents safety risks and a possible avenue for hackers to disrupt AI systems.
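The article does not detail how such deceptive images are made, but a minimal sketch, assuming PyTorch and using a tiny untrained classifier as a hypothetical stand-in for the large trained networks the researchers studied, shows the general idea: random noise is optimized until the model confidently reports a chosen label, yielding the kind of nonsense pattern described above.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in classifier; the study itself used large trained networks.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

noise = torch.rand(1, 1, 28, 28, requires_grad=True)  # start from random pixels
target = torch.tensor([3])                            # arbitrary target class
optimizer = torch.optim.Adam([noise], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    # Nudge the pixels toward whatever makes the model report the target class.
    loss = nn.functional.cross_entropy(model(noise), target)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        noise.clamp_(0, 1)  # keep pixel values in a valid image range

probs = nn.functional.softmax(model(noise), dim=1)
print("model's label:", probs.argmax(dim=1).item())
print("confidence:   ", round(probs.max().item(), 3))
```

The point of the sketch is that the “image” never has to look like anything to a human: the optimization only cares about the model’s output.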

To test the differences between human and machine perception, the researchers looked at how humans and machines classify adversarial images (nonsense patterns that machines recognize as familiar objects). In each trial, people were shown one of these images alongside two labels, one being the computer’s real conclusion and the other a random answer, and were asked which label the computer had chosen. Across eight experiments covering five diverse adversarial image sets, the human volunteers correctly anticipated the machine’s preferred label, even where the images were unrecognizable to human eyes. This suggests that human intuition is a reliable guide to when machine misclassification is likely to occur.
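To make the forced-choice setup concrete, here is a minimal sketch of how one such trial could be assembled; the label list and the machine_label() stub are hypothetical placeholders for a real classifier’s output.

```python
import random

LABELS = ["armadillo", "bagel", "fence", "pinwheel", "school bus"]

def machine_label(image):
    """Stand-in for a real classifier's top prediction on an adversarial image."""
    return "fence"  # hypothetical: the model calls the scribbles a fence

def make_trial(image):
    preferred = machine_label(image)  # the computer's real conclusion
    foil = random.choice([l for l in LABELS if l != preferred])  # a random answer
    options = [preferred, foil]
    random.shuffle(options)  # so position gives nothing away
    return options, preferred

options, answer = make_trial(image=None)
print("Which label did the computer pick?", options)
print("Ground truth:", answer)
```

A volunteer who picks the ground-truth answer at better-than-chance rates is, in effect, anticipating how the machine sees.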
The research has been published in the journal Nature Communications, under the title “Humans can decipher adversarial images.”


This article originally appeared in Digital Journal 
