Far from being the static nerve cells researchers believed them to be, capable of performing only a single function, motion-sensitive neurons rapidly shift back and forth between two ways of collecting information about moving objects, scientists at the Salk Institute for Biological Studies have found.
In the March 1 issue of the journal Neuron, the scientists say their study overturns the prevailing notion that there are two distinct types of motion-sensitive neurons: those that integrate features belonging to a single moving object (a process called “integration”) and those that distinguish between features belonging to different objects in the visual field (known as “segmentation”). The Salk researchers found that neurons can do both, but not at the same time.
“Depending on what passes through the visual field, these motion-sensitive neurons can do either task, switching from one to the other within milliseconds,” says the study’s lead researcher, Gene R. Stoner, Ph.D., a neuroscientist in the Vision Center Laboratory at the Salk Institute. “The properties of these nerve cells are much more complicated than we thought, but this complexity underlies an unexpectedly elegant and efficient use of the brain’s limited resources.”
The findings are just the latest to suggest that sensory neurons are more “plastic” than previously believed, says co-author Thomas Albright, director of the Vision Center Laboratory. “One of the tenets of sensory neurobiology over the last 40 years has been that neurons are hard-wired, that there is not a lot of room for change,” he says. “That view is being increasingly eroded due to evidence that these cells can change the way they represent information based on sensory input.”
The goal of researchers like Stoner, Albright and Xin Huang, first author and a former postdoctoral researcher in the Vision Center Laboratory, is to understand how the complete three-dimensional visual “picture” seen by humans is formed by millions of neurons that view the world only through their “receptive fields,” the small portions of the visual image that individual neurons “see” or respond to.
That gives rise to what researchers call the “aperture” problem – the question of how the brain makes sense of the restricted view provided within each neuron’s receptive field.
That problem is especially acute in the perception of motion: How can the speed and direction of one or more moving objects be understood if each neuron sees a discrete piece of the picture through its small aperture? “Cells are only seeing one thing and can’t interpret what part of the object it is and how it is moving,” explains Albright.
To find answers, neuroscientists study motion-selective neurons in the middle temporal area of the primate visual cortex. Research to date has suggested that there are two separate sets of nerve cells in this area and that they use opposing processes to put the image together. One is integration, in which neurons pool the information they have on a single object in order to “see,” for example, a moving car. The other is segmentation, in which neurons distinguish between features that belong to different objects, such as the relative motions of two moving cars. Vision is thus an act of dynamic interpretation requiring highly flexible processing.
In this study, the responses of individual neurons of monkeys were recorded while those monkeys watched stimuli displayed on a video monitor. Stimuli extended beyond each neuron’s “receptive field.” Although the motion within these receptive fields was ambiguous due to the limited view of each neuron, these neurons were able to access information provided by other neurons and recover the true direction of motion. This property disappeared, however, when the stimulus outside the receptive field was not part of the same perceptual object as that inside the receptive field. Under these conditions, the same neurons implemented segmentation: the motions of the different features were represented independently.
This shows that neurons quickly adapt to the computational problems they are facing, says Stoner, adding that this same type of adaptive mechanism could underlie processing of other visual attributes, such as brightness or depth.
The study was supported by the Howard Hughes Medical Institute and the National Institutes of Health.