Trained neural nets perform much like humans on classic psychological tests

Today we get an answer thanks to the work of Been Kim and colleagues at Google Brain, the company’s AI research division in Mountain View, California.

The researchers have tested various neural networks using the same gestalt experiments designed for humans.

And they say they have good evidence that machines can indeed perceive whole objects using observations of the parts.

Kim and co’s experiment is based on the triangle illusion shown in the figure.

They first create three databases of images for training their neural networks.

The first consists of ordinary complete triangles displayed in their entirety.

The next database shows only the corners of the triangles, with lines that must be interpolated to perceive the complete shape.

This is the illusory data set.

When humans view these types of images, they tend to close the gaps and end up perceiving the triangle as a whole.

“We aim to determine whether neural networks exhibit similar closure effects,” say Kim and co.

The final database consists of similar “corners” but randomly oriented so that the lines cannot be interpolated to form triangles.

This is the non-illusory data set.

By varying the size and orientation of these shapes, the team created almost 1,000 different images to train their machines.
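
To make the three stimulus sets concrete, here is a minimal sketch of how such images could be generated. It assumes an equilateral triangle drawn on a small grayscale canvas, with either full edges (complete), short stubs at each corner (illusory), or corner stubs rotated by a random angle so they no longer line up (non-illusory). The canvas size, stub length, and size/orientation ranges are illustrative assumptions, not the paper’s exact settings.

```python
# Illustrative sketch only: generate "complete", "illusory", and
# "non_illusory" triangle stimuli. Canvas size, stub length, and the
# size/orientation ranges are assumptions, not the paper's parameters.
import math
import random
from PIL import Image, ImageDraw

IMG_SIZE = 128  # assumed grayscale canvas size

def triangle_vertices(cx, cy, radius, angle):
    """Vertices of an equilateral triangle centred at (cx, cy)."""
    return [(cx + radius * math.cos(angle + k * 2 * math.pi / 3),
             cy + radius * math.sin(angle + k * 2 * math.pi / 3))
            for k in range(3)]

def draw_stimulus(kind, radius=40, angle=0.0, corner_frac=0.25):
    """kind is 'complete', 'illusory', or 'non_illusory'."""
    img = Image.new("L", (IMG_SIZE, IMG_SIZE), 255)
    draw = ImageDraw.Draw(img)
    cx = cy = IMG_SIZE / 2
    verts = triangle_vertices(cx, cy, radius, angle)
    frac = 1.0 if kind == "complete" else corner_frac  # full edge vs. stub
    for i, (x0, y0) in enumerate(verts):
        # In the non-illusory set, each corner is rotated by a random angle
        # so the fragments can no longer be interpolated into a triangle.
        rot = random.uniform(0, 2 * math.pi) if kind == "non_illusory" else 0.0
        for j in (i - 1, i + 1):       # draw towards both neighbouring corners
            x1, y1 = verts[j % 3]
            dx, dy = (x1 - x0) * frac, (y1 - y0) * frac
            rx = dx * math.cos(rot) - dy * math.sin(rot)
            ry = dx * math.sin(rot) + dy * math.cos(rot)
            draw.line([(x0, y0), (x0 + rx, y0 + ry)], fill=0, width=2)
    return img

# Vary size and orientation to build roughly 1,000 images per condition.
datasets = {kind: [draw_stimulus(kind,
                                 radius=random.uniform(25, 50),
                                 angle=random.uniform(0, 2 * math.pi))
                   for _ in range(1000)]
            for kind in ("complete", "illusory", "non_illusory")}
```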

Their approach is to train a neural network to recognize ordinary complete triangles and then to test whether it classifies the images in the illusory data set as complete triangles (while ignoring the images in the non-illusory data set).

In other words, they test whether the machine can fill in the gaps in the images to form a complete picture.

They also compare the behavior of a trained network with the behavior of an untrained network or one trained on random data.
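
A hedged sketch of that test, reusing draw_stimulus and the datasets dictionary from the sketch above, might look like the following: train a small convolutional classifier to separate complete triangles from an assumed negative class (blank canvases here), then compare its mean "triangle" probability on the illusory and non-illusory sets. The toy architecture, the choice of negatives, and the training schedule are all assumptions made for the example, not the authors’ setup.

```python
# Illustrative closure test (assumptions throughout): a toy CNN trained to
# recognise complete triangles, then probed on the two fragment sets.
# Reuses IMG_SIZE, Image, and `datasets` from the sketch above.
import numpy as np
import torch
import torch.nn as nn

def to_tensor(imgs):
    """Stack grayscale PIL images into an (N, 1, H, W) float tensor in [0, 1]."""
    arr = np.stack([np.asarray(im, dtype=np.float32) / 255.0 for im in imgs])
    return torch.from_numpy(arr).unsqueeze(1)

model = nn.Sequential(
    nn.Conv2d(1, 8, 5), nn.ReLU(), nn.MaxPool2d(4),
    nn.Conv2d(8, 16, 5), nn.ReLU(), nn.MaxPool2d(4),
    nn.Flatten(),
    nn.Linear(16 * 6 * 6, 2),  # 6x6 feature map after two conv/pool stages on 128x128 input
)

# Assumed training data: complete triangles as positives, blank canvases as
# negatives (the paper's actual training set may well differ).
pos = datasets["complete"][:100]
neg = [Image.new("L", (IMG_SIZE, IMG_SIZE), 255) for _ in range(100)]
x = to_tensor(pos + neg)
y = torch.tensor([1] * len(pos) + [0] * len(neg))

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):                    # a few full-batch epochs for a toy run
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()

# Closure probe: does the trained net call illusory fragments "triangle"
# more often than the randomly rotated, non-illusory fragments?
# Re-running this block with an untrained copy of the model (or one trained
# on random labels) gives the baseline comparison described above.
with torch.no_grad():
    for kind in ("illusory", "non_illusory"):
        test = to_tensor(datasets[kind][:100])
        p = torch.softmax(model(test), dim=1)[:, 1].mean().item()
        print(f"{kind}: mean triangle probability = {p:.2f}")
```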

The results make for interesting reading.

It turns out that the behavior of trained neural networks shows remarkable similarities to human gestalt effects.

“Our findings suggest that neural networks trained with natural images do exhibit closure, in contrast to networks with randomized weights or networks that have been trained on visually random data,” say Kim and co.

That’s a fascinating result.

And not just because it shows how neural networks mimic the brain to make sense of the world.

The bigger picture is that the team’s approach opens the door to an entirely new way of studying neural networks using the tools of experimental psychology.

“We believe that exploring other Gestalt laws—and more generally, other psychophysical phenomena—in the context of neural networks is a promising area for future research,” say Kim and co.

That looks like a first step into a new field of machine psychology.

As the Google team put it: “Understanding where humans and neural networks differ will be helpful for research on interpretability by enlightening the fundamental differences between the two interesting species.”

The German experimental psychologists of the early 20th century would surely have been fascinated.

Ref: arxiv.org/abs/1903.01069: Do Neural Networks Show Gestalt Phenomena? An Exploration of the Law of Closure
