Neurons, depth and motion

As part of my training in electrophysiology, I have focused on designing, conducting, and analysing experiments that assess neuronal responses to the motion and depth of objects on a screen.

For this I trained macaque monkeys to sit in front of a screen, to push a button to interact with the task, and to tolerate the polarised glasses needed to show objects on the screen at different depths (similar to 3D movies).

While the animal sat ready to push the button whenever a small square at the centre of the screen dimmed, clouds of moving dots were flashed across the screen in rapid succession, in order to map the neurons' receptive fields and characterise their sensitivity to motion and depth.

According to a few studies available in the literature, the Medial Superior Temporal area (MST) in the macaque brain is responsible for integrating motion and depth information to infer self-motion. In the video on the right, for example, when the player moves sideways while looking at the cross in the centre, the tree and the flower move in opposite directions.

Hence the intuition that Roy and colleagues had in 1992: a single neuron could in principle act as a self-motion detector if it selectively responded to leftward motion for objects in near space AND to rightward motion for objects in far space (see plots above and schematics on the right)!
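This intuition can be made concrete with a toy illustration (not the study's actual model): a hypothetical neuron whose stylised firing rate is high only for the two combinations that are consistent with sideways self-motion. All names and numbers here are invented for illustration.

```python
# Toy illustration of the Roy et al. (1992) intuition: a hypothetical
# neuron that fires strongly only for near+leftward or far+rightward
# stimuli, the joint-tuning signature expected of a self-motion detector.

def self_motion_response(direction: str, depth: str) -> float:
    """Return a stylised firing rate (spikes/s) for a motion/depth combo."""
    preferred = {("left", "near"), ("right", "far")}
    baseline = 5.0   # spontaneous activity (arbitrary value)
    driven = 40.0    # response at the two preferred combinations (arbitrary)
    return driven if (direction, depth) in preferred else baseline

# Print the joint tuning "map": only two hotspots appear.
for direction in ("left", "right"):
    for depth in ("near", "far"):
        print(direction, depth, self_motion_response(direction, depth))
```

Note that such a neuron responds to a *conjunction* of motion and depth, which is exactly why a purely additive tuning model could not capture it.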

And this is exactly what I set out to find during my PhD in Systems Neuroscience at the University of Göttingen! While the animal sat in front of a large, back-projected screen, wearing polarised lenses, with a straw next to his mouth for drinking, a button under his hand, and a recording chamber implanted on his head, focused on his dim-detection task, I acquired neuronal signals from single cells in area MST of his brain while a cloud of dots changed very rapidly in motion direction and depth.

Thanks to a technique called Reverse Correlation, I was able to reconstruct, for each neuron, which combination of motion and depth produced the highest response, thus identifying the neuron's joint selectivity for motion and depth.
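The idea behind reverse correlation can be sketched in a few lines of Python. This is a minimal, self-contained simulation, not the study's actual pipeline: the stimulus is a rapid sequence of motion/depth bins, spikes are simulated Poisson counts, and the bin sizes and latencies of the real experiment are ignored.

```python
import numpy as np

# Minimal sketch of reverse correlation, assuming the stimulus is a rapid
# sequence of (motion direction, depth) combinations and spikes are counted
# in a fixed window after each frame. All parameters are illustrative.

rng = np.random.default_rng(0)
n_dirs, n_depths, n_frames = 8, 5, 20_000

# Random stimulus sequence: which (direction, depth) bin was shown per frame.
stim = rng.integers(0, n_dirs * n_depths, size=n_frames)

# Simulated spike counts: a hidden "neuron" prefers bin 13.
rate = np.where(stim == 13, 2.0, 0.2)
spikes = rng.poisson(rate)

# Reverse correlation: average spike count per stimulus bin, i.e. a
# spike-weighted stimulus histogram reshaped into a joint tuning map.
counts = np.bincount(stim, weights=spikes, minlength=n_dirs * n_depths)
presentations = np.bincount(stim, minlength=n_dirs * n_depths)
tuning_map = (counts / presentations).reshape(n_dirs, n_depths)

# The peak of the map recovers the preferred motion/depth combination.
print(np.unravel_index(tuning_map.argmax(), tuning_map.shape))
```

Because each frame is brief and the sequence is random, many combinations can be probed within a single recording session, which is what makes the rapid-flashing paradigm efficient.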

I found that area MST contains several types of neurons that encode depth and motion differently. Some have a very clear selectivity for a specific combination (such as neuron A, tuned to downward motion in the middle/far space), some selectively encode only motion (C) or only depth (D), and some have two hotspots (B), exactly as expected of self-motion neurons!

In order to quantify this statistically, I ran a series of Generalised Additive Models (GAMs) of gradually increasing complexity and evaluated how well each one captured the response of each neuron!
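The logic of the comparison can be sketched with a simplified stand-in: ordinary least squares with linear terms instead of the full GAM smooths, fitting an additive model (motion + depth) against a multiplicative one that adds a motion×depth interaction, and comparing explained variance. The data-generating numbers are invented for illustration.

```python
import numpy as np

# Simplified stand-in for the GAM comparison, assuming Gaussian noise and
# linear terms rather than smooths: does adding a motion*depth interaction
# explain meaningfully more variance than the additive model alone?

rng = np.random.default_rng(1)
motion = rng.uniform(-1, 1, 500)   # e.g. signed motion-direction axis
depth = rng.uniform(-1, 1, 500)    # e.g. near (-) to far (+) disparity
# A purely additive simulated neuron: no true interaction term.
response = 1.0 + 2.0 * motion - 1.5 * depth + rng.normal(0, 0.3, 500)

def explained_variance(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

ones = np.ones_like(motion)
additive = np.column_stack([ones, motion, depth])
multiplicative = np.column_stack([ones, motion, depth, motion * depth])

r2_add = explained_variance(additive, response)
r2_mult = explained_variance(multiplicative, response)

# For this additive neuron the interaction term adds essentially nothing.
print(round(r2_add, 3), round(r2_mult, 3))
```

In the real analysis the same question is asked with GAM deviance instead of R², but the logic is identical: a nested richer model always fits at least as well, so what matters is whether the improvement is significant.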

Of the 194 neurons recorded, 46 (24%) seemed to be optimally captured by the multiplicative interaction model. On the other hand, the variance explained by this model (in GAMs better known as deviance) did not significantly improve on the additive model. Moreover, the multiplicative model would account for any arbitrary combination of motion and depth, not only self-motion selectivity. When we inspected the neurons, we found only one (neuron B) that could realistically function as a self-motion detector.

The implications of this study are:

  1. Self-motion is not a prevalent feature encoded in area MST

  2. MST neurons provide all the necessary building blocks for self-motion computation

  3. The functional neural substrate for self-motion perception therefore likely lies in areas beyond MST

In conclusion: even in one of the highest areas of the visual processing hierarchy, visual information is encoded by overlapping neuronal populations but is not yet computationally integrated to form more sophisticated percepts or functions.