What if neural prosthetics could enhance our sensory experiences, transmit them across thousands of miles to someone else like a text message, or even play back what we dreamed the night before?

That’d be cool.

[Image: Neuron's neural decoding issue, notably published after (1)]

A whole host of talented neuroscientists, including Dr. Geoff Boynton, are interested in developing our ability to reconstruct perceptual experiences from brain imaging. This problem hinges on a core issue in neuroscience, neural decoding: understanding how neurons encode a stimulus, then working backwards from a neural signature to the stimulus that generated it. Even for exceedingly simple stimuli, decoding can be a challenge, but Dr. Boynton has made strides in understanding more complex visual representations.

One problem, well known to those who work with fMRI, is the plethora of seemingly contradictory results from labs asking the same questions. While many scientists are quick to utilize fMRI, relatively little is known about how the blood oxygen level dependent (BOLD) signal correlates with underlying neuronal activity. Early on, Geoff Boynton was interested in this problem, developing the linear response model that most fMRI analyses build on today. fMRI methodology remains an interest of Dr. Boynton's (3), since only with a clear idea of what fMRI actually shows us can we develop robust decoders.
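To make "linear response model" a bit more concrete, here is a minimal sketch of the linear-systems logic: the predicted BOLD signal is just the neural activity time course convolved with a hemodynamic response function (HRF). The gamma-variate HRF and all parameter values below are illustrative assumptions on my part, not the original model's fitted values.

```python
import math
import numpy as np

def gamma_hrf(t, tau=1.2, n=3, delay=2.0):
    """Gamma-variate HRF; tau, n, and delay are illustrative choices."""
    ts = np.maximum(t - delay, 0.0)
    return (ts / tau) ** (n - 1) * np.exp(-ts / tau) / (tau * math.factorial(n - 1))

tr = 1.0                                   # sampling interval (s)
t = np.arange(0, 30, tr)
neural = np.zeros_like(t)
neural[4:10] = 1.0                         # a 6 s boxcar of neural activity

# Linearity means the BOLD prediction is simply neural activity (*) HRF
bold = np.convolve(neural, gamma_hrf(t), mode="full")[: len(t)] * tr
```

The payoff of linearity is that responses to complex stimulus sequences can be predicted by summing shifted, scaled copies of the response to a brief event.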

As an example, a few years ago Geoff Boynton and our own Dr. John Serences set out to reconcile why motion perception seems almost entirely mediated by the middle temporal (MT) area in primate unit recordings, while human fMRI studies implicate additional areas in motion encoding (1). They asked subjects to identify the direction of (unbeknownst to them) two categories of random dot patterns, ambiguous (0% coherence) and unambiguous (50% or 100% coherence), as shown below:

[Image: random dot patterns (RDPs)]
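For intuition, here is a minimal sketch of how such random dot patterns are typically generated: on each frame, a "coherence" fraction of dots steps in the signal direction while the rest are replotted at random. The dot count, speed, and replotting rule are my own illustrative assumptions, not the study's stimulus code.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_dots(xy, coherence, direction_deg=0.0, speed=0.02):
    """Advance an RDP one frame: a `coherence` fraction of dots moves in the
    signal direction; the remaining noise dots are replotted at random."""
    xy = xy.copy()
    signal = rng.random(len(xy)) < coherence
    theta = np.deg2rad(direction_deg)
    xy[signal] += speed * np.array([np.cos(theta), np.sin(theta)])
    xy[~signal] = rng.random((np.count_nonzero(~signal), 2))  # noise dots
    return np.mod(xy, 1.0)  # wrap dots around the unit aperture

dots = rng.random((200, 2))               # 200 dots in a unit square
dots = step_dots(dots, coherence=0.5)     # one frame of 50%-coherent motion
```

At 0% coherence every dot is a noise dot, so the pattern carries no net motion signal at all, which is what makes the ambiguous condition ambiguous.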

Indeed, Drs. Boynton and Serences found that only in the unambiguous case did all the higher-order areas appear to encode motion; MT was the only area still predictably active during the ambiguous trials.

[Image: decoding accuracy by brain region in the two coherence conditions]
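Accuracies like those in the figure typically come from cross-validated pattern classification: for each region of interest (ROI), train a classifier to predict the motion direction from the trial-by-trial voxel pattern, then test it on held-out trials. Below is a minimal sketch with simulated data standing in for real voxel patterns (so accuracy will sit near chance here); the ROI names, trial counts, and voxel counts are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in data: (trials x voxels) response patterns per ROI,
# plus the motion direction shown on each trial.
rng = np.random.default_rng(1)
rois = {"V1": rng.normal(size=(120, 80)), "MT+": rng.normal(size=(120, 60))}
direction = rng.integers(0, 2, size=120)   # two possible motion directions

for name, voxels in rois.items():
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, voxels, direction, cv=5)
    print(f"{name}: decoding accuracy = {acc.mean():.2f}")  # ~0.50 for pure noise
```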

This may mean that in the unambiguous case, other visual areas are encoding other relevant features of the coherent sensory stimulus. Since those features are likely eliminated in the ambiguous trials (the stimulus being incoherent), the study confirms MT as a specialized "hub" for motion encoding, and suggests this may be due to the convergence of inputs on MT.

So, getting back to reading out my dreams: how can we put together this type of information relating the spatial distribution of voxel activation to perceptual experience? Dr. Boynton's lab is interested in receptive field models that can identify which natural image a subject was looking at from voxel activity alone. This schematic from Dr. Jack Gallant's lab at UC Berkeley is quite useful for understanding how this works:

[Image: the Gallant lab's 2008 model for identifying natural scenes (2)]

This older model worked well, with 92% identification accuracy for one subject (shown below) and 72% for another. The enhanced correlation along the diagonal is obvious, and newer models fare even better (4).

[Image: identification matrix for one subject; entry a_ij is the correlation between measured voxel activity for image i and model-predicted voxel activity for image j]
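The identification step the matrix depicts is simple once the model's predictions exist: correlate each measured voxel pattern against the predicted pattern for every candidate image, and guess the image whose prediction correlates best. Here is a sketch in that spirit, with simulated stand-ins for both the measured and model-predicted patterns (all sizes and noise levels are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n_images, n_voxels = 120, 500
predicted = rng.normal(size=(n_images, n_voxels))   # model-predicted patterns
measured = predicted + rng.normal(scale=2.0, size=predicted.shape)  # noisy "data"

# a[i, j] = corr(measured pattern for image i, predicted pattern for image j)
a = np.corrcoef(measured, predicted)[:n_images, n_images:]
guesses = a.argmax(axis=1)                 # best-correlating prediction per row
accuracy = (guesses == np.arange(n_images)).mean()
print(f"identification accuracy: {accuracy:.2f}")
```

With enough voxels, even fairly noisy measurements identify the correct image reliably, which is why the diagonal of the real matrix stands out so clearly.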

I am rather excited to see what comes out of the Boynton lab in the coming years. This noninvasive way to obtain an image of the brain's perceptual state would be hugely influential across neuroscience, and would also be the basis for the future of neural prosthetics. Can I put my name on some sort of wait list?

Come to CNCB this Tuesday at 4pm to hear Dr. Boynton's talk: "I can get that song out of your head: Decoding perceptual representations with retinotopic and tonotopic maps."

Stephanie Nelli is a first-year UCSD PhD student rotating in the Multimodal Imaging Lab

References

(1) Serences, J. T., & Boynton, G. M. (2007). The representation of behavioral choice for motion in human visual cortex. The Journal of Neuroscience, 27(47), 12893-12899.

(2) Kay, K. N., Naselaris, T., Prenger, R. J., & Gallant, J. L. (2008). Identifying natural images from human brain activity. Nature, 452(7185), 352-355.

(3) Boynton, G. M. (2011). Spikes, BOLD, attention, and awareness: a comparison of electrophysiological and fMRI signals in V1. Journal of Vision, 11(5), 12.
 

(4) Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641-1646.
