What if neural prosthetics could enhance our sensory experiences, transmit them across thousands of miles to someone else like a text message, or even play back what we dreamed the night before?
That’d be cool.
A whole host of talented neuroscientists, including Dr. Geoff Boynton, are interested in developing our ability to reconstruct perceptual experiences from brain imaging. This problem involves a core issue in neuroscience: neural decoding, or understanding how neurons code for a stimulus and then working backwards from a neural signature to the stimulus that generated it. Even for exceedingly simple stimuli, decoding can be a challenge, but Dr. Boynton has made strides in understanding more complex visual representations.
One problem, well known to those who work with fMRI, is the plethora of seemingly contradictory results from labs asking the same questions. While many scientists are quick to use fMRI, relatively little is known about how the blood oxygen level dependent (BOLD) signal correlates with underlying neuronal activity. Early on, Geoff Boynton took an interest in this problem, developing a linear response model that most fMRI analyses build on today. The methodology of fMRI remains an interest for Dr. Boynton (3), since only with a clear idea of what fMRI shows us can we develop robust decoders.
As an example, a few years ago Geoff Boynton and our own Dr. John Serences set out to reconcile two sets of findings: in primate unit recordings, motion perception is almost entirely mediated by the middle temporal (MT) area, while human fMRI studies implicate several additional areas in motion encoding (1). They asked subjects to identify the direction of random dot patterns that fell, unbeknownst to them, into two categories: ambiguous (0% coherence) and unambiguous (50 or 100% coherence), as shown below:
Indeed, Drs. Boynton and Serences found that only in the unambiguous case did all the higher-order areas seem to encode motion, while MT was the only area still predictably active during the ambiguous trials.
This may mean that in the unambiguous case, the other visual areas were encoding other relevant features of the coherent stimulus. Since those features are likely eliminated in the ambiguous trials (the stimulus being incoherent), this study confirms MT as a specialized “hub” for motion-selective encoding, and suggests this may be due to the convergence of inputs on MT.
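To get a feel for this kind of voxel-pattern decoding, here is a minimal sketch with simulated data. The study itself used real fMRI responses and its own analysis pipeline; everything below (the voxel counts, noise levels, and the simple nearest-centroid classifier) is made up purely for illustration.

```python
import numpy as np

# Hypothetical sketch: decoding motion direction (left vs. right) from
# voxel activity patterns with a nearest-centroid classifier.
# All data here are simulated; a real study would use fMRI voxel responses.

rng = np.random.default_rng(0)
n_voxels = 50

# Assume each direction evokes a distinct (unknown) mean voxel pattern.
pattern_left = rng.normal(0, 1, n_voxels)
pattern_right = rng.normal(0, 1, n_voxels)

def simulate_trials(pattern, n_trials, noise=1.0):
    """Each trial = the direction's mean pattern plus measurement noise."""
    return pattern + noise * rng.normal(0, 1, (n_trials, n_voxels))

train_left = simulate_trials(pattern_left, 20)
train_right = simulate_trials(pattern_right, 20)
test_left = simulate_trials(pattern_left, 20)
test_right = simulate_trials(pattern_right, 20)

# "Training": estimate the mean pattern for each direction.
centroid_left = train_left.mean(axis=0)
centroid_right = train_right.mean(axis=0)

def decode(trial):
    """Label a trial by which centroid it correlates with more strongly."""
    r_left = np.corrcoef(trial, centroid_left)[0, 1]
    r_right = np.corrcoef(trial, centroid_right)[0, 1]
    return "left" if r_left > r_right else "right"

correct = sum(decode(t) == "left" for t in test_left) + \
          sum(decode(t) == "right" for t in test_right)
accuracy = correct / 40
print(f"decoding accuracy: {accuracy:.0%}")
```

The logic mirrors the motion study: if a region's voxel pattern lets a classifier predict the stimulus direction better than chance, that region carries information about motion. For the ambiguous (0% coherence) trials, only patterns from MT would support above-chance decoding.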
So, getting back to reading out my dreams: how can we put together this type of information relating the spatial distribution of voxel activation to perceptual experience? Dr. Boynton’s lab is interested in receptive field models that can identify which natural image a subject was looking at from patterns of voxel activity (2). This schematic from Dr. Jack Gallant’s lab at UC Berkeley is quite useful for understanding how this works:
This older model worked well, with 92% identification accuracy for one subject (shown below) and 72% for another (2). There is clearly enhanced correlation along the diagonal, and newer models fare even better (4).
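The identification scheme can be sketched in a few lines. The idea: an encoding model predicts each voxel's response to every candidate image, and the viewed image is identified as the one whose predicted pattern best matches the measured pattern. The feature vectors, linear weights, and noise level below are all invented for illustration; the real models fit receptive-field-style features to actual fMRI data.

```python
import numpy as np

# Hedged sketch of identification-style decoding: predict voxel patterns
# for every candidate image, then match the measured pattern against them.
# All quantities here are simulated stand-ins.

rng = np.random.default_rng(1)
n_images, n_features, n_voxels = 120, 30, 80

# Hypothetical feature vectors for each candidate image
# (e.g., outputs of a bank of visual filters).
features = rng.normal(0, 1, (n_images, n_features))

# Hypothetical linear encoding weights mapping features to voxel responses.
weights = rng.normal(0, 1, (n_features, n_voxels))

# Model-predicted voxel pattern for every candidate image.
predicted = features @ weights

# Simulate a measurement: the subject views image 42, and we record its
# predicted pattern corrupted by measurement noise.
viewed = 42
measured = predicted[viewed] + 3.0 * rng.normal(0, 1, n_voxels)

# Identification: pick the candidate whose prediction correlates best
# with the measured pattern.
correlations = [np.corrcoef(measured, p)[0, 1] for p in predicted]
identified = int(np.argmax(correlations))
print("identified image:", identified)
```

Plotting the correlation of each measured pattern against each predicted pattern is exactly what produces the diagonal structure in the accuracy matrix above: the correct image usually wins, so the largest correlations cluster where measured and predicted indices match.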
I am rather excited to see what comes out of the Boynton lab in the coming years. A noninvasive way to obtain an image of the brain’s perceptual state would be hugely influential across neuroscience, and would also form the basis for future neural prosthetics. Can I put my name on some sort of wait list?
Come to CNCB this Tuesday at 4pm to hear Dr. Boynton’s talk: “I can get that song out of your head: Decoding perceptual representations with retinotopic and tonotopic maps.”
Stephanie Nelli is a first year UCSD PhD student rotating in the Multimodal Imaging Lab
(1) Serences, J. T., & Boynton, G. M. (2007). The representation of behavioral choice for motion in human visual cortex. The Journal of Neuroscience, 27(47), 12893-12899.
(2) Kay, K. N., Naselaris, T., Prenger, R. J., & Gallant, J. L. (2008). Identifying natural images from human brain activity. Nature, 452(7185), 352-355.
(4) Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641-1646.