California Dew

 

You’re staring steely-eyed at the camera when your friend hurls a beer at a wall to glance it toward you. You want to reach out, catch that beer, crack it open, and let it spray for the camera. Maybe you’ll get a million YouTube views. Maybe you’ll get sponsored by Old Milwaukee. Maybe this is the day you finally make it in the world. But first you have to reach for that beer.

C’mon, brain, you can do it! But how do you do it? Well, if you close your eyes, you’ll probably miss the beer. No million YouTube views. So we need information from the eyeballs. But you don’t necessarily have to be looking directly at the beer. Any juggler knows that you don’t need to dart back and forth with your eyes to juggle three balls – you can fixate on a point in space and juggle just fine. And if you’ve ever caught a ball, you know that you can do it perfectly well without looking at your hands, or even having your hands in your field of view. In so many of the movements we make, there’s a beautiful coordination between our direction of gaze, the position of our arms and hands, and the position of the intended target we hope to catch, push, punch, or slide to unlock.

One way to think about the complexity of this coordination is in terms of reference frames. When you say “look left” or “look right,” what you mean is “look left relative to the reference frame of your body.” When you decide to look left, your body is pointing forward, and you want to look left of the body vector that’s pointing straight ahead. But while you’re looking left, you could just as well say that you’re looking straight ahead and your body vector is pointing right relative to your gaze vector.

Visual information is initially represented in a gaze-centered reference frame – your retina doesn’t know what the rest of your body is doing, so from its perspective, it’s always pointing dead ahead. But if you’re trying to reach your hand toward a target, your brain must transform this gaze-centered representation into a hand-centered one in order to direct a proper reach movement. Say you’re at the Kentucky Derby, and your eyes are tracking California Chrome along the track, your hands clutching your armrests in excitement. But you’re parched, and you want to grab that Mountain Dew in the seatback cup holder in front of you. No matter where you’re looking along the track, the movement your hand needs to make is the same, yet the information about where the Mountain Dew is relative to your center of gaze is constantly changing. And if your hand or the Mountain Dew were somewhere else, that’s no problem for the brain either. This sensorimotor transformation of reference frames is exquisitely flexible, exquisitely accurate, and happens entirely behind the scenes.
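To make the geometry concrete, here’s a minimal sketch in Python of that transformation, under heavy simplifications: everything lives in a flat 2-D body-centered frame, gaze is treated as a pure translation, and the eye and head rotations that make the real problem hard are ignored. All coordinates and variable names are invented for illustration.

```python
import numpy as np

# Toy 2-D positions, all expressed in a common body-centered frame.
eye_pos  = np.array([0.0, 30.0])   # where the eyes are
hand_pos = np.array([15.0, 5.0])   # where the hand is resting
target   = np.array([20.0, 10.0])  # the Mountain Dew

# What the retina reports: the target relative to the eye
# (gaze simplified to a translation, rotations ignored).
target_re_gaze = target - eye_pos

# What the motor system needs: the target relative to the hand.
# Recovered from the gaze-centered signal plus knowledge of where
# the eye and the hand sit in the body frame.
target_re_hand = target_re_gaze + (eye_pos - hand_pos)

assert np.allclose(target_re_hand, target - hand_pos)
print("Reach vector (hand-centered):", target_re_hand)  # [5. 5.]
```

Move eye_pos anywhere you like and target_re_hand doesn’t budge – that’s the gaze-invariance the Derby example is pointing at, and it’s exactly the quantity a hand-centered neuron would care about.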

What neural substrates underlie this transformation, and what can their organization tell us about how these brain areas plan, compute, transform, and direct information? Previous studies have led to a consensus that posterior parietal and frontal cortex are important for sensorimotor transformations of reference frames, yet two competing models have emerged for how subregions within these areas encode and represent relative space. In a hierarchical model, distinct populations of neurons encode separate representations of space, each centered on its own reference frame. In a contrasting model, different reference frames are not encoded in distinct subregions; instead, single areas encode mixed and even intermediate reference frames.

In a recent paper, Lindsay Bremner and Richard Andersen explore this question by obtaining single-unit recordings in a subregion of posterior parietal cortex in monkeys trained to reach toward a target after a ‘go’ signal. By systematically varying the starting position of the hand, the direction of gaze, and the location of the target, the authors hoped to understand how a reach target is encoded by neurons in posterior parietal cortex area 5d. Do area 5d neurons encode target position relative to hand position, target position relative to gaze direction, or hand position relative to gaze direction? Or do they encode the target location in a combination of these and intermediate reference frames within a single brain region?
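Before looking at their answer, it helps to see the logic of the design in miniature. Below is a toy simulation – my own sketch, not the authors’ actual analysis – of a hypothetical 1-D neuron whose tuning can be anchored anywhere between hand-centered and gaze-centered by a weight alpha. Every parameter here is invented.

```python
import numpy as np

def toy_neuron(targets, hand, gaze, alpha=1.0, sigma=10.0, peak=50.0):
    """Hypothetical 1-D neuron with Gaussian tuning to the target
    relative to an anchor interpolated between hand and gaze:
    alpha = 1 -> purely hand-centered, alpha = 0 -> purely
    gaze-centered, 0 < alpha < 1 -> an 'intermediate' frame."""
    anchor = alpha * hand + (1 - alpha) * gaze
    preferred = 20.0  # preferred reach vector, arbitrary
    return peak * np.exp(-((targets - anchor) - preferred) ** 2
                         / (2 * sigma ** 2))

targets = np.linspace(-40, 80, 7)

# Move the hand while gaze stays fixed: a hand-centered cell's
# tuning curve shifts along with the hand...
for hand in (0.0, 20.0):
    print(f"hand={hand:5.1f}:", np.round(toy_neuron(targets, hand, gaze=0.0), 1))

# ...but moving gaze while the hand stays fixed leaves it untouched.
for gaze in (0.0, 20.0):
    print(f"gaze={gaze:5.1f}:", np.round(toy_neuron(targets, hand=0.0, gaze=gaze), 1))
```

Repeating that dissociation across hundreds of cells and many permutations of hand, gaze, and target position is what lets you ask which reference frame – pure, mixed, or intermediate – a region actually uses.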

Examining the tuning curves of hundreds of neurons during different permutations of hand, gaze, and target locations, and controlling for important potential confounds not addressed in previous studies, Bremner and Andersen provided strong evidence for a nuanced target encoding scheme in area 5d. In conjunction with previous data from the Andersen lab, their results strongly suggest that distinct reference frames are most strongly encoded in different cortical areas, supporting the hypothesis that specific brain regions encode modular reference frames. In area 5d, for example, they discovered that the reach target is most strongly represented in a hand-centered reference frame. While this representation predominates, however, they found that area 5d neurons also encode mixed and intermediate reference frames, demonstrating that regional encoding is not entirely exclusive to specific reference frames, at least at the level of specificity examined. Their analytical methodology markedly improves on that of previous studies addressing similar questions, and I highly recommend diving into their paper for a more thorough account of the study.

Richard Andersen will be speaking in the Center for Neural Circuits and Behavior large conference room on May 6th, 2014 at 4:00 PM. His talk is entitled “Posterior parietal cortex in action.” Join us there!

Patrick is a first-year student in the UCSD Neurosciences Graduate Program

References

Bremner, L.R., & Andersen, R.A. (2012). Coding of the reach vector in parietal area 5d. Neuron, 75(2), 342–351.
