“Seeing eye to eye” is an expression of harmony, but do different people literally see the same things in the external world? “The short answer is – no,” says Dr. Liron Gruber. “Even the same person sees the same thing differently each time they look at it,” adds Prof. Ehud Ahissar.
Gruber and Ahissar built on earlier findings by Weizmann mathematicians, headed by Prof. Shimon Ullman, who had established that a computer algorithm was much worse than humans at interpreting image fragments. In an earlier study, the two showed that, contrary to the widely accepted view, the human eye does not work like a camera taking passive snapshots. In the new study, Gruber and Ahissar teamed up with computer scientist Ullman to put human vision to the test.
The researchers recorded and timed human eye movements, then simulated the resulting activity of neurons in the retina. These activity patterns varied not only with different eye movements but also depending on whether or not people managed to recognize the object in the picture. On average, recognition required the eyes to scan four different points in the picture; at each point, the eyes drifted locally in all directions for several hundred milliseconds. The results indicated that the interaction between eye movements and the viewed object is critical to recognition.
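To make the viewing pattern described above concrete, here is a minimal sketch in Python of a gaze trajectory with roughly four fixation points and a few hundred milliseconds of local drift at each. It is not the authors' code, and every parameter value (image size, drift duration, step size, sampling rate) is an illustrative assumption.

```python
# Illustrative sketch of the reported scanning pattern: about four fixation
# points per image, each followed by small random drift of the gaze position.
import numpy as np

rng = np.random.default_rng(0)

IMG_SIZE = 256          # image is IMG_SIZE x IMG_SIZE pixels (assumed)
N_FIXATIONS = 4         # "four points" scanned on average, per the study
DRIFT_MS = 300          # several hundred milliseconds of drift per fixation
SAMPLE_RATE_HZ = 1000   # one gaze sample per millisecond (assumed)
DRIFT_STEP_PX = 0.2     # drift step size in pixels per sample (assumed)

def simulate_scanpath():
    """Return an array of (x, y) gaze positions over the whole viewing episode."""
    gaze = []
    n_samples = int(DRIFT_MS * SAMPLE_RATE_HZ / 1000)
    for _ in range(N_FIXATIONS):
        # Jump (saccade) to a new point in the picture.
        center = rng.uniform(0.2 * IMG_SIZE, 0.8 * IMG_SIZE, size=2)
        # Drift locally in all directions around that point.
        steps = rng.normal(scale=DRIFT_STEP_PX, size=(n_samples, 2))
        gaze.append(center + np.cumsum(steps, axis=0))
    return np.concatenate(gaze)

scanpath = simulate_scanpath()
print(scanpath.shape)   # (N_FIXATIONS * samples per fixation, 2)
```

Because the fixation points and drift steps are drawn at random, no two runs of this sketch produce the same trajectory, mirroring the observation that no two viewings follow the same path.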
“The retina doesn’t create copies of the outside world [...] Rather, human vision is an active process that involves interactions between the external objects and eye movements,” Ahissar says. “The eyes of different people follow different paths when viewing the same thing, and even the eyes of the same person never copy the same trajectory, so in a way, each time we look at something, it’s a one-off experience.” Says Gruber: “[...] light picked up by each receptor in the retina changes in intensity with every eye movement. The resultant patterns of neuronal activity can be interpreted and perhaps stored by the brain.”
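Gruber's point about receptor inputs can be illustrated with a second short sketch, continuing the toy example above: a small grid of "receptors" rides along the gaze position, and each one samples a different pixel of a completely static image at every time step. The image, receptor grid and drift trajectory here are simplified assumptions, not the study's model.

```python
# Even with a static image, the light reaching each "receptor" changes over
# time, because the gaze position never stands still.
import numpy as np

rng = np.random.default_rng(1)

image = rng.random((256, 256))                 # a static "scene" (random here)
receptor_offsets = np.array([(dx, dy)          # 5x5 grid of receptors around gaze
                             for dx in range(-2, 3)
                             for dy in range(-2, 3)])

def receptor_signals(image, scanpath):
    """Intensity over time at each receptor as the gaze drifts across the image."""
    h, w = image.shape
    signals = np.empty((len(scanpath), len(receptor_offsets)))
    for t, gaze_xy in enumerate(scanpath):
        # Each receptor looks at the pixel under (gaze position + its offset).
        coords = np.clip(np.rint(gaze_xy + receptor_offsets).astype(int),
                         0, [w - 1, h - 1])
        signals[t] = image[coords[:, 1], coords[:, 0]]
    return signals

# A single drifting fixation around the image center.
drift = np.cumsum(rng.normal(scale=0.2, size=(300, 2)), axis=0) + 128.0
print(receptor_signals(image, drift).std(axis=0).mean() > 0)   # True: inputs vary
```

The resulting time-varying signals, not a frozen snapshot, are what the brain would have to interpret, which is the sense in which each viewing is a one-off experience.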
These findings represent a new direction in the search for the neural code which, unlike the ubiquitous genetic code, probably varies from one brain region to another. Gruber and Ahissar have shown that the retinal code results from a dynamic process in which the brain interacts with external reality. These findings also explain why it takes time to recognize a blurred object or to figure out an optical illusion. As human vision becomes better understood, it may be possible to develop efficient artificial aids for the visually impaired and to help robots catch up with humans at recognizing objects under challenging conditions.