By Tunmise Olayinka
Behavior in the visual world necessitates reciprocal feedback between the environment and the observer. To act, an animal needs to 1) sense the outside world, 2) compute upon this percept, and 3) generate an optimal response.
The long-held canonical view of the primary visual cortex (V1) is that its major role lies only within steps 1 and 2; that is, it functions as the initial computational gateway for processing visual sensory stimuli. This view has been supported by the spatial organization of neurons in V1: they share a topographic map of responses that in turn reflects the spatiotemporal structure of the visual input they process.
However, Vijay Mohan K. Namboodiri and others in the audacious lab of Marshall G. Hussain Shuler have now posited that V1 may play a broader, more instructive function; viz., in the direction of visually-responsive actions. They knew that, in visually-cued tasks, V1 encoded the learned interval between the stimulus and reward, which in turn correlated with the action-response. The temporal consistency of this correlation led them to ask: is the learned timing they see in V1 used solely for sensory processing, or does it play a governing role in generating actions and directing behavior?
Namboodiri et al. specifically ask these questions in rats, using a visually-cued interval timing task. In the task, the rats attempt to optimally time an action (when to lick a spout) in order to receive the maximal reward: water. The longer the rat patiently waits after the visual stimulus, the more reward it gets. However, this holds only up to a point: delays longer than the target stimulus-lick interval receive no reward. Thus, within this task, the rats must compute an optimal timing for their licks, instead of simply waiting arbitrarily long. With this task, Namboodiri and colleagues can attempt to answer their focal questions: 1) can we see representations of the timing-delay between stimulus and reward across V1 neurons, and 2) do these representations instruct the action itself, by computing a prediction of the elapsed time from stimulus to reward?
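The task's incentive structure can be sketched as a simple function. This is a minimal illustration of the rule described above, not the paper's actual reward schedule; the linear ramp, the target interval, and the reward magnitude are all illustrative assumptions.

```python
def reward(wait_time, target=1.5, max_reward=10.0):
    """Hypothetical reward schedule for the timing task.

    Reward grows the longer the rat waits after the visual stimulus,
    but waiting past the target stimulus-lick interval yields nothing.
    (Parameter values and the linear ramp are illustrative, not from
    the paper.)
    """
    if wait_time > target:
        return 0.0          # waited too long: reward is forfeited
    return max_reward * (wait_time / target)  # patience pays, up to the target
```

Under a schedule like this, neither impulsive licking nor indefinite waiting is optimal; the rat maximizes water only by licking as close to the target interval as possible without overshooting it.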
With this design, they evaluated neurons in V1 one at a time (i.e., via single-unit recordings), finding that they had a variety of receptivities. Some neurons were entrained to the mean expected interval between the stimulus and the reward, as predicted, while others instead represented the interval between the stimulus and the rat's response itself; i.e., they were timed to the actual action (nosepoke entry), rather than to the predicted delay-from-stimulus of the impending reward. Importantly, these visuotemporal representations correlated with the rats' behavior: 'interval-timing' neurons were only noted in V1 when the rat successfully responded in a visually-timed manner. In contrast, on non-visually timed trials, V1 neurons showed a consistent delay from nosepoke entry, independent of the visual stimulus. Only 2% (7/351) of neurons in early training showed significant action-timing (on the order of the false positive rate), supporting the idea that this 'action-timing' is indeed a computation on the interval itself (viz., the wait time-reward contingency), and not simply a response time-locked to the action, which a priori would not require visual feedback from V1.
If activity in V1 were entirely top-down, driven by the action itself, their hypothesis was that it would present throughout V1, with no selectivity between visually timed and non-visually timed trials. However, if V1 played a role in specifically generating and instructing the action, one would expect corresponding task-based selectivity, with activity in V1 correlating with the action on visually-timed trials but not on non-visually timed trials.
In addition, the intervals represented by these visually-timed, directive neurons were expected to demonstrate a trial-by-trial correlation between the neural representation of the interval and the action. The interval-representing neurons would reflect this timing in their firing profile, modulating their firing in correlation with the mean expected delay. So to instruct a delayed lick, for example, an interval neuron would simply increase the duration of its firing response. Similarly, the responding population (the action-timed neurons decoding the activity of these interval neurons) would in turn modulate its firing with respect to the timing of the lick: the later they fire, the later the lick.
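The encode-decode logic described above can be caricatured in a toy sketch. This is my own simplification, not the authors' model: an "interval" unit sustains firing for the learned delay, and a downstream "action" unit triggers a lick when that sustained activity ends, so a longer-firing interval unit directly produces a later lick.

```python
def interval_unit(t, learned_delay):
    """Toy interval neuron: fires (1) while the learned
    stimulus-reward delay is being timed out, silent (0) after."""
    return 1 if 0 <= t < learned_delay else 0

def lick_time(learned_delay, dt=0.01, t_max=5.0):
    """Toy action-timed readout: scan forward in time and 'lick'
    at the first moment the interval unit falls silent."""
    t = 0.0
    while t < t_max:
        if interval_unit(t, learned_delay) == 0:
            return t          # interval representation expired: act now
        t += dt
    return None               # never acted within the trial window
```

The point of the caricature is the direction of the dependency: stretching the interval unit's firing duration shifts the lick later, mirroring the trial-by-trial correlation (and its one-way predictability) that the authors report.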
Namboodiri and his colleagues not only observed this behavior, but also found that they could predict the response of the action-timed neurons from the visually timed neurons, and only in that direction. Altogether, these results seem to demonstrate V1's role in instructing timed action.
However, while suggestive, these results were only that: merely suggestive. The denouement of the experiment was to validate their hypothesis on the instructive nature of visually-timed neurons by seeing if they could modulate this very instruction. Using the glorious power of optogenetics, they were able to consistently shift the firing, and thus the timing, of the action. To delve further into a mechanistic explanation, they built a reduced computational model. Therein, they demonstrated that this interval (the average delay between the predictive cues and the reward) could be locally generated and represented in V1.
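One way to get intuition for how an interval could be generated locally, without importing a clock from elsewhere, is with a single recurrent unit. The sketch below is an assumption of mine for illustration, not the paper's reduced model: a leaky unit with self-excitation `w` holds stimulus-evoked activity for longer as `w` grows, so a purely local synaptic weight can set how long the response outlasts the cue, i.e., represent a learnable interval.

```python
def decay_duration(w, threshold=0.1, dt=0.01):
    """Time for a leaky unit's activity to fall below threshold after
    a unit-strength stimulus pulse, given self-excitation 0 <= w < 1.

    Euler integration of dr/dt = -r + w*r: stronger recurrence slows
    the decay, so tuning the local weight w tunes the represented
    interval. (A toy mechanism, not the authors' model.)
    """
    r, t = 1.0, 0.0
    while r > threshold:
        r += dt * (-r + w * r)   # leak opposed by recurrent excitation
        t += dt
    return t
```

Because the decay constant is `(1 - w)`, a modest change in a single local weight stretches or compresses the unit's persistent response, which is the flavor of mechanism one needs for an interval to be "locally generated and represented" rather than inherited from upstream timing signals.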
So does V1 play an instructive role in stereotyped timing behavior on visually-cued tasks? This paper definitely motivates that idea. For visually-cued tasks, they showed that some neurons seemed to encode the interval, while others correlated with the action. They then showed that the antecedent firing response of the interval neurons could predict the population firing of the action-timed neurons. They not only showed they could directly perturb this effect (by modulating the visually timed interval neurons), but could also validate it mechanistically within a computational model. Though one might argue that further elucidation of the nature of the interval representation is warranted (e.g., do these neurons represent the entire duration of the interval, or its endpoints/expiry?), altogether their results strongly suggest an excitingly novel and instructive role for the primary visual cortex.
Tunmise Olayinka is a third-year MD-PhD candidate at UCSD, currently in the labs of Bradley Voytek & Alysson Muotri.