Tech by VICE

Computers Might Just 'See' Like Humans After All

Computers are getting better at mimicking the brain.

by Jordan Pearson
Apr 29 2016, 1:00pm


We're made of meat and they're made of silicon, but according to a new study, humans and computers might actually "see" using the same mechanisms.

When you break it down, all vision really is, physiologically speaking, the transformation of light into electrical pulses that are then processed in stages by different parts of the brain. Sounds a lot like a computer, doesn't it? But computers aren't as good at reliably "seeing" and recognizing objects as humans are, at least not yet.

According to some folks, this is because the brain simply isn't like a computer at all. Others, like computer scientists Timothée Masquelier and Saeed Kheradpisheh, believe that computers may already work a lot like the human vision system—we just haven't taken the right lessons from biology and applied them to machines yet.

"One of the main goals of artificial intelligence is to replicate different aspects of human behavior including vision," Masquelier and Kheradpisheh wrote me in an email. "Despite the huge complexity of analyzing the constantly changing visual world, we can quickly and accurately recognize surrounding places and objects, and plan actions accordingly."

Masquelier and an international team of researchers hailing from France, Iran, and Australia recently compared one of the most advanced deep learning systems out there—such systems mimic the brain with layers of digital "neurons" that "learn" to complete tasks by rearranging themselves—to human vision. The paper is available on the arXiv preprint server and has not yet been peer reviewed.
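The "layers of digital neurons" idea can be sketched in a few lines of code. This is a toy illustration, not the researchers' model: each layer re-encodes the output of the one before it, loosely like the successive stages of visual processing, and the random weights stand in for weights a real network would learn from training data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A simple nonlinearity: each "neuron" fires only above zero.
    return np.maximum(0.0, x)

def make_layer(n_in, n_out):
    # Random weights stand in for connections "learned" during training.
    return rng.normal(scale=0.1, size=(n_in, n_out))

# Three stages of processing, like a (very) shallow deep network.
layers = [make_layer(64, 32), make_layer(32, 16), make_layer(16, 4)]

def forward(image_vector):
    activation = image_vector
    for w in layers:
        # Each stage transforms the previous stage's output.
        activation = relu(activation @ w)
    return activation

fake_image = rng.random(64)   # stand-in for a flattened 8x8 image
scores = forward(fake_image)  # one score per candidate object class
print(scores.shape)
```

In a trained network, the final scores would indicate which object the network "sees"; here they are just the output of random weights.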


They concluded that machines and humans experience greater difficulty recognizing the same kinds of images—when objects were rotated in three dimensions, for example—suggesting that deep learning networks actually follow similar "internal mechanisms" to human vision. Knowing this, the team hopes that we can take lessons from the brain's visual processes and use them to make computer vision better.

If these computer models do indeed work like the brain, then they could also be used to test hypotheses about how the brain works, and to learn more about ourselves via our inhuman creations.

"It is true that [neural networks] may appear magic, as any machine learning algorithm, in the sense that we tell the machine what to do without explaining how to do it, and the machine figures it out by itself—but when we take a closer look, there is nothing magic," Masquelier and Kheradpisheh wrote. "What occurs in the brain is far more mysterious."

But we do understand quite a bit about how the brain processes information, even if our picture isn't quite complete, and that knowledge can be used to improve computers, Masquelier and Kheradpisheh said. For example, researchers could model the data "spikes" caused by individual static inputs so that motion over time is taken into account (which would improve a computer's ability to process video), and implement feedback loops and the ability to pay closer "attention" to one part of an image over another.
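The "attention" idea can be shown with a minimal sketch: weight one region of an image more heavily than the rest before summarizing it, so downstream processing is dominated by that region. The particular weighting scheme here is hypothetical, chosen only to make the effect visible.

```python
import numpy as np

# A 4x4 toy "image" with pixel values 0..15.
image = np.arange(16, dtype=float).reshape(4, 4)

# Attention map: every pixel counts, but the upper-left 2x2 patch
# (where a hypothetical object sits) counts four times as much.
attention = np.ones_like(image)
attention[0:2, 0:2] = 4.0
attention /= attention.sum()  # normalize to a probability map

# Attended summary vs. a plain average over the whole image.
attended_summary = (image * attention).sum()
plain_mean = image.mean()
print(attended_summary, plain_mean)
```

The attended summary is pulled toward the values in the emphasized patch, while the plain mean treats every pixel equally; real attention mechanisms learn where to place that weight rather than having it hand-set.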

It's anybody's guess as to how well these biologically inspired tricks will really work, but when you're dealing with black boxes—biological or mechanical—sometimes you have to take a risk.
