A Correlation Tug of War

When you think of codebreakers, you usually think of Alan Turing. But times have changed, and the hip new Enigma everyone is trying to crack these days is the neural code.

This week’s speaker is Dr. Alex Reyes. His work has established some fundamental properties of the relationships between neural spike trains and sensation, and of the implications these relationships have for neural codes. Much of his work has been focused on understanding how sensory information is represented in neural networks. Of particular interest to Dr. Reyes is the auditory cortex. Dr. Reyes’ work blends computational, theoretical, and experimental approaches to understand the general principles behind signal processing in cortical circuits.

A fundamental concept in signal processing is correlation. Measuring correlation is essentially asking, “How similar are these two signals?” If you’re not familiar with how one calculates correlation, a simple analogy is comparing drawings on sheets of paper: overlap the sheets and hold them up to the light. The darker the overlap, the more the drawings match up. Similarly, to measure the correlation between two signals, one overlays them, multiplies them point by point, and integrates (sums) the product; sliding one signal past the other and repeating the calculation gives the correlation as a function of time lag.
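If you prefer code to analogies, here is a minimal Python sketch of that overlap-and-sum idea, applied to made-up noisy signals rather than to any neural data:

```python
import numpy as np

def correlation(x, y):
    """Overlap-and-sum at zero lag: normalize each signal, multiply them
    point by point, and average (a discrete version of the integral)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return np.mean(x * y)

def cross_correlation(x, y, max_lag):
    """Slide one signal past the other and repeat the overlap-and-sum at each
    lag (np.roll wraps around the edges, which is fine for this toy example)."""
    return [correlation(np.roll(x, lag), y) for lag in range(-max_lag, max_lag + 1)]

# Two noisy copies of the same underlying signal (made-up data).
t = np.linspace(0.0, 1.0, 1000)
shared = np.sin(2 * np.pi * 5 * t)
sig_a = shared + 0.5 * np.random.randn(t.size)
sig_b = shared + 0.5 * np.random.randn(t.size)

print(correlation(sig_a, sig_b))                    # sizable: the signals match
print(correlation(sig_a, np.random.randn(t.size)))  # near zero: unrelated
print(max(cross_correlation(sig_a, sig_b, 50)))     # peak of the lagged version
```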

One question involving correlation that comes up frequently in neuroscience is, “How are correlations between neural signals relevant for coding information in spike trains?”

This is a huge question that is actively being debated and researched.  One of the reasons the debate is so persistent is pointed out in Dr. Reyes’ recent paper (Graupner and Reyes 2013).  It boils down to this: a wide range of correlation values between neural signals are measured in different parts of the brain, or even in the same part of the brain during different experiments.  Theoretical arguments have been put forward claiming that correlations enhance population coding, or claiming that they are detrimental to population coding.  Additionally, correlation patterns have been shown to change over time.

Clearly, understanding the mechanisms that shape neural correlations is a critical step in piecing together these apparently contradictory observations, and ultimately in working out what the “neural code” is.

Given the wide range of correlations observed, a natural question to ask is whether these correlations simply come from correlations in the input to the neurons. Graupner and Reyes (2013) investigated the events that lead to correlations between neurons that share synaptic inputs.

One might expect that neurons receiving similar (correlated) inputs would show correlated outputs.  Surprisingly, they found that neurons in the auditory cortex actively suppress input correlations to produce only weakly correlated outputs.  How could this be?

Graupner and Reyes performed their experiments in slices taken from the auditory cortex of mice. Of interest were pairs of pyramidal cells in layer IV: the “input” layer.  They did their recordings in media that had higher potassium concentrations to induce greater levels of spontaneous activity.  They noted two kinds of activity in their membrane potential recordings: low amplitude epochs and high amplitude epochs. They used these categories to epoch their data for separate analysis.

First, they analyzed the low amplitude epochs. A major goal of the paper was to look at the sub-threshold computations taking place on the correlated inputs. To do this, they measured the correlations between isolated inhibitory post-synaptic potentials (IPSPs), isolated excitatory post-synaptic potentials (EPSPs), and the composite of the two. To isolate the EPSPs and IPSPs, they did their recordings while holding the membrane potential at the reversal potential for the inhibitory or excitatory inputs, respectively. Thus, they were able to measure the correlations of the different kinds of input between pairs of neurons. Strikingly, they found that while the excitatory and inhibitory inputs were each correlated between the two neurons, the combination of the two (the sub-threshold membrane potential) was significantly less correlated than either kind of input alone! A similar effect was observed during the high amplitude epochs, where correlations were generally higher across the board.

The following figure from their paper captures this effect:

[Figure from Graupner and Reyes (2013)]

You can see that the red and green lines, representing only EPSP-EPSP correlations and only IPSP-IPSP correlations, are much higher than the blue, representing the actual membrane potential.

Somehow, the neurons were actively canceling out correlations in their shared inputs!

One possible explanation for this is that if the excitatory and inhibitory inputs are tightly coupled in time, then since they are of opposite sign, their combination will lead to cancellation of the correlations.  To test this hypothesis, Graupner and Reyes measured IPSCs and EPSCs to determine the correlations between the excitatory and inhibitory input currents and the relative timing of these inputs.
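Before looking at their data, it may help to see the hypothesis in action. The following toy simulation is my own sketch, not the authors’ model: two cells share a slow excitatory drive, and each receives inhibition that is a delayed, sign-flipped copy of that shared drive, while private synaptic noise goes uncancelled.

```python
import numpy as np

# Toy illustration of the cancellation hypothesis (my sketch, not the authors'
# model): excitation = shared slow network drive + private noise; inhibition
# tracks the shared drive with a delay and the opposite sign.
rng = np.random.default_rng(0)
n_samples, tau = 100_000, 20

def smooth(x):
    return np.convolve(x, np.ones(tau) / tau, mode="same")

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

shared = smooth(rng.standard_normal(n_samples))   # slow drive common to both cells

def cell(delay):
    exc = shared + 0.5 * smooth(rng.standard_normal(n_samples))
    inh = -np.roll(shared, delay) + 0.5 * smooth(rng.standard_normal(n_samples))
    return exc, inh, exc + inh                    # composite ~ sub-threshold potential

for delay in (1, 60):                             # tight vs sloppy E/I tracking
    e1, i1, v1 = cell(delay)
    e2, i2, v2 = cell(delay)
    print(f"delay={delay:>2}:  E-E {corr(e1, e2):+.2f}   I-I {corr(i1, i2):+.2f}   "
          f"E-I {corr(e1, i1):+.2f}   membrane {corr(v1, v2):+.2f}")
```

With a short delay the excitatory and inhibitory inputs stay strongly correlated with each other and across cells, yet the composite membrane signals end up only weakly correlated; with a long delay the cancellation fails and the membrane correlation climbs back up.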

Similar to the results obtained from measuring postsynaptic potentials, they found that excitatory-excitatory and inhibitory-inhibitory input currents were correlated between the neurons in each recording. As predicted by the hypothesis, the excitatory-inhibitory correlations were relatively strong and negative – indicating that the excitatory inputs are tracked by correspondingly opposite inhibitory inputs. Moreover, the time delay between excitation and inhibition was short and got shorter with increased activity. This indicates that inhibitory feedback happens on a very short time scale and so inhibition can effectively track excitation. This is consistent with the idea that cancellation of the correlations between these inputs leads to the decorrelation in sub-threshold membrane activity.

At this point, readers may notice that these conclusions are critically dependent on being able to isolate the excitatory inputs from the inhibitory inputs to each neuron.  In a perfect world, each synapse of each kind of input would have the same reversal potential, and the potential to which the neuron was held at the electrode would be the same throughout the neuron.  Graupner and Reyes very astutely point out that because these pyramidal cells are spatially extended, the assumption that the membrane potential is the same throughout the cells is probably false.  Thus, it is unlikely that they are truly isolating the individual kinds of inputs.  Therefore, the correlations they measured between the inputs are probably not the true correlations.  How are they able to believe their conclusions? Two words: computational neuroscience.

The next section of their paper contains a beautiful example of the use of computational neuroscience techniques to address experimental questions. Graupner and Reyes constructed a model to estimate the impact of the spatial extent of the neurons on their measured correlations. They set up a recurrent neural network to drive two test neurons. These neurons were either point neurons, or spatially extended neuron models with different kinds of spatial input distributions. Essentially performing the same experiment on the model, they found that the spatial extent of the neurons leads to an underestimate of the membrane correlations. However, the amount of this underestimate depends on which correlations they were measuring. When measuring excitatory input, the model indicated an underestimate by a factor of 8.2. When measuring inhibitory input, the model indicated an underestimate by a factor of 3.1. The exciting result was that the model indicated an underestimate by a factor of only 1.77 at the resting membrane potential, when both inputs were acting together.

This means that it is likely that even though they underestimated the correlations in the slice, the overall conclusions still stand.  Since the correlations at resting membrane potential were less underestimated than the individual inputs, the inputs still have higher correlations individually than together in the neurons and their conclusion still holds.  This is an excellent demonstration of how computational neuroscience offers tools to forge a reasoned path around experimental difficulties.
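If the correction-factor logic feels slippery, here is a quick sanity check in Python. The correction factors are the ones reported in the paper; the “measured” correlation values are hypothetical numbers chosen purely to illustrate the arithmetic:

```python
# Toy calculation: the correction factors are from Graupner & Reyes (2013);
# the "measured" correlations are hypothetical, and the true correlation is
# treated as roughly factor * measured.
correction = {"EPSP-EPSP": 8.2, "IPSP-IPSP": 3.1, "composite Vm": 1.77}
measured   = {"EPSP-EPSP": 0.09, "IPSP-IPSP": 0.12, "composite Vm": 0.05}  # made up

for key in correction:
    estimated_true = correction[key] * measured[key]
    print(f"{key:12s}  measured {measured[key]:.2f}  ->  estimated true {estimated_true:.2f}")

# Even after correction, the composite membrane-potential correlation stays
# well below the individual input correlations, so the decorrelation
# conclusion is, if anything, strengthened.
```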

So, what implications does this study have? For one, it demonstrates that this cortical circuit seems to be wired to decorrelate its inputs. Thus, the correlations observed between neurons in previous studies can’t always be explained by correlations in their input. The correlations are coming from somewhere, though! This suggests something deeper is happening, and future work must find where these correlations are generated and what they mean in the context of network activity. Graupner and Reyes suggest that modulators of overall network activity could shape these correlations.

For now, the tug of war between excitation and inhibition in our brains continues as we keep trying to break the neural code.

To hear from Dr. Reyes himself, be sure to come to his talk this Tuesday at 4 p.m. in the Center for Neural Circuits and Behavior Farquhar Conference Room. His talk is titled, “Mathematical and electrophysiological bases of maps between acoustic and cortical spaces.” Based on that title alone, the seamless integration of theory and experiment that permeates Dr. Reyes’ work is sure to be apparent!

Brad Theilman is a first-year Neurosciences student currently rotating with Dr. Eran Mukamel. When not processing signals from the brain, he is probably out searching for signals from satellites on his amateur radio. 


Graupner, M. and Reyes, A. (2013). Synaptic input correlations leading to membrane potential decorrelation of spontaneous activity in cortex. J. Neurosci. 33(38): 15075-15085. doi: 10.1523/JNEUROSCI.0347-13.2013.

Ode to the Circuitry of the Visual Cortex

If you decide to take a short noon walk

And pass the cliffs that mark the edge of Muir,

You’ll see the labs founded by Jonas Salk

A worthy home for a bright professor–

Ed Callaway, let me tell you ‘bout him!

For a masterfully shaped talk Tuesday

Be sure you don’t miss Of mice and monkeys:

A journey into the visual system

Where we’ll learn of the circuits that relay

Signals telling cortex what the eye sees.


Pursuing the mechanisms behind

Functional cortical activity,

He has pioneered mapping techniques and

Helped us understand connectivity.

To trace the complex circuits creating

The visual cortex, a new marker,

For cells connected synaptically,

Was created and has been worth the slaving.

The rabies virus vector was winner

Moving ‘cross synapses retrogradely.


But here’s the cool part kids, pay attention.

Without glycoprotein (G) the virus

Can only spread through direct connection

Thus mapping neural inputs, lucky us!

To know which inputs cause inhibition,

Cell-specific markers become crucial.

So the vector was further improved to

Make its envelope glycoprotein fit in

A receptor, not found in a mammal,

But expressed in the target cell by you!1



An example of targeting the Glycoprotein (G)-deficient rabies vector to infect cells where the TVA-receptor, which binds to an envelope glycoprotein on the rabies virus, is expressed.  Taking advantage of selective expression of Camk2a-Cre in CA1 pyramidal neurons, the rabies vector was injected into the hippocampi of Camk2a-Cre:TVA double transgenic mice to study which neurons have direct synaptic connections with the CA1 pyramidal neurons.  In A and B, fluorescent neurons have been infected with the rabies virus.2


Between electrophysiology,

Rabies vectors, and high-tech imaging

Callaway has many tricks up his sleeve.

He’s shown inputs to layer 2/3 exciting,

If they’re from layer 5, but not always

If they’re from layer 4 or layer 2/3.

Wow, that’s some fine-tuned specificity!3

If this whets your whistle, just wait a few days.

Both talk and food are complimentary

Tuesday, 4 PM at CNCB.

Kelsey is a first year student in the Neurosciences Graduate Program and started the MD/PhD Program in 2012.  She is doing her thesis work in Dr. Subhojit Roy’s lab.  Outside of work, she enjoys singing and soccer but struggles with iambic pentameter.

1.  Osakada F. and Callaway E.M. (2013). Design and generation of recombinant rabies virus vectors. Nature Protocols. 8, 1583-1601.

2.  Sun Y., Nguyen A.Q., Nguyen J.P., Le L., Saur D.,  Choi J., Callaway E.M., and Xu X. (2014). Cell-Type-Specific Circuit Connectivity of Hippocampal CA1 Revealed through Cre-Dependent Rabies Tracing. Cell Reports. 7.1, 269-280.

3.  Yoshimura Y., Dantzker J.L.M., and Callaway E.M. (2005). Excitatory cortical neurons form fine-scale functional networks. Nature. 433, 868-873.

Delicious and Nutritious: The Tale of a Multitasking Taste Receptor

Amid much controversy, the Drosophila Department of Agriculture recently published a new food pyramid and updated the serving size recommendations for various fruits. Members of the fly community across the nation are now struggling to adjust their diets, hoping these new regulations will result in healthier body weights, longer lifespan, and increased reproductive success.

Just kidding.

We live in a society in which the words “nutritious” and “healthy” are informed by decades upon decades of scientific research on what food components help us stay energized, keep our bones strong, boost our immune systems, lower cholesterol, protect against cancer, etc. Subject to innovations in research tools and data collection, these perceptions of nutrition have morphed over time. (For example, a diet based on Wonder Bread today seems entirely comical, but apparently it was once in vogue.)

Other organisms, however, in the absence of a Department of Agriculture, World Health Organization, or other such institutions, must rely on internal, biological systems for determining what foods will keep them alive and healthy (I know, mind blown). Studying the mechanisms of nutrition perception in model organisms may lead to a more complete understanding of the interactions between human metabolic processes and brain systems, many of which remain unknown.

This week’s seminar speaker, Dr. Hubert Amrein from Texas A&M, studies taste and internal nutrient perception in Drosophila. He recently published a study (Miyamoto et al. 2012) throwing a spotlight on a certain taste receptor also endowed with fantastic nutrition-sensing ability. The star of the show is Gr43a, a member of the gustatory receptor (GR) family. GRs are present on gustatory receptor neurons (GRNs), located in taste sensillae (basically Drosophila taste buds) on various external body parts such as the proboscis and legs.

Gr43a caught the interest of Dr. Amrein and his lab because it is one of few Gr genes to have orthologs across insect species; this conservation suggests that Gr43a might play a more important role than its evolutionarily dispensable Gr counterparts. Dr. Amrein and his team established that Gr43a is a specific, high-affinity fructose receptor present not only in the sensory neurons but also in the brain. But the fly discovered that the food was sweet the moment he extended his little proboscis towards it, so what is the purpose of having a gustatory receptor in the CNS??

As neuroscientists, we spend a lot of time thinking about neurons and synapses, how we create memories, move muscles, and observe our environment. Many of us tend to forget about the vasculature that winds its way through the brain delivering oxygen and sugar. It turns out that the Gr43a receptors in the brain act as a sensor for fructose circulating in the hemolymph (essentially a fancy name for Drosophila blood)! The authors found that after a fly feasts on the sugarlicious fruit on your kitchen counter, the circulating levels of other sugars (namely glucose and trehalose) remain relatively constant while fructose levels, which are normally quite minimal, shoot up until they are high enough to stimulate those Gr43a receptors in the brain. In this way, Gr43a stimulation acts as an indicator of a nutritious meal.

Delving further into the idea of Gr43a as a nutrition sensor, the authors used the capillary feeding (CAFE) assay, in which hungry flies had a choice between water and sorbitol, which is tasteless but nutritious. Wildtype flies preferred the sorbitol (somehow—aka via their Gr43a receptors—they just knew it was providing much-needed sugary goodness), while the Gr43a mutants, entirely nonplussed by both meal choices, showed no preference. Look at Gr43a, saving flies from their pangs of hunger. What a hero! But is he just a clutch hitter or does he also perform when flies are happily satiated and the game isn’t on the line?

The authors did a second CAFE assay with satiated flies and allowed them to choose among an array of nutritious, sweet-tasting sugars. Intriguingly, the Gr43a mutants went hog-wild and devoured 60-80% more sugar than the wildtype flies! However, when the assay was repeated with sweet, non-nutritious sugars (sugars that cannot be converted to fructose), there was no difference in feeding amount between the two groups, suggesting that in satiated flies, Gr43a acts to suppress rather than promote nutrient intake. So not only is our hero a lifesaver in sticky situations, he also endows flies with a modicum of self-control.

Could Gr43a’s awesome bidirectional powers arise from an ability to change valence depending on the feeding state? To address this question, the authors created flies that had temperature-sensitive TRPA1 ion channels in their Gr43a brain neurons. Effectively, if flies at ambient 23 °C were moved to above 25 °C, their TRPA1 channels opened, activating the neurons expressing Gr43a. This allowed control of Gr43a neurons completely independent of hemolymph fructose concentration. Flies were exposed to odor A at 29 °C (activated Gr43a-expressing neurons) and to odor B at 23 °C (inactive Gr43a neurons). Then they had to decide between A and B in a T-maze odor choice assay. Odor A was a magnet for hungry flies, but satiated flies wanted nothing to do with it (Fig. 7A-B). These results imply that, via the Gr43a receptor, fructose triggers a pleasant sensation in hungry flies and an aversive sensation in satiated flies (Fig. 7C)!


I should mention that it gets more complicated for mammals; in mice, fructose promotes feeding behavior while glucose suppresses it. So don’t go seeking all of the high fructose corn syrup you can find—it likely will not aid your food-related self-control as it does for our fly friends.

If you’re hungry for more on this fascinating topic, please come listen to Dr. Hubert Amrein’s talk this Tuesday at 4pm in the CNCB large conference room… It’s going to be sweet!

Catie Profaci is a first-year Neurosciences student currently rotating with Dr. Byungkook Lim. When she is not in lab, you likely can find her running barefoot on the beach, listening to NPR, cooking and consuming spicy food, or watching the Yankees game.


Miyamoto T., Song X., & Amrein H. (2012). A Fructose Receptor Functions as a Nutrient Sensor in the Drosophila Brain. Cell, 151(5): 1113-1125. DOI: http://dx.doi.org/10.1016/j.cell.2012.10.024

OMgp! Molecules that make Neuron Growth a Nogo

Beware: Semaphorin3A will repulse even the most determined of growth cones.

In the dark, cramped setting of the newborn brain, you slither along your guidepost. You have been slowly creeping on your chemically-determined path for what seems like ages wanting only to make a connection. As you stretch your greedy filopodia out, you hit something. An obstruction? A competing axon? Nothing obvious blocks you, yet you are suddenly forced backwards as your growth cone crumples. Only here do you finally identify the phantom forcing you back: the Sema3A clinging to your Plexin receptors.

Roman Giger, an associate professor and long-time researcher at the University of Michigan, works to identify and understand properties of “phantom inhibitors” such as Sema3A. By using knockout mice and subsequent structural and functional analyses, his studies have illuminated a wide array of molecules that have the capacity to inhibit axon growth, dendritic growth, and synapse formation. There is nowhere near enough space to delineate all of Dr. Giger’s findings (seriously, he has had 32 publications in 12 years, take a look! https://www.ncbi.nlm.nih.gov/pubmed), so I will focus on only two here: Semaphorin5A and the Nogo receptors (NgRs).

When axon tracts in the CNS are damaged, several biological mechanisms swing into action to inhibit axon regeneration. Astrocytes will soon begin to aggregate and link tightly together to form a physical obstruction called a glial scar; this tissue bunch also secretes an assortment of inhibitory molecules, and the surrounding damaged myelin contributes its own myelin-associated inhibitors (MAIs), all of which chemically discourage the busted axon from regrowing. For example, chondroitin sulfate proteoglycans (CSPGs), a mouthful of a scar-derived inhibitor, will inhibit axon regeneration by binding to their receptor (RPTPsigma) on the axon membrane. Yet knockout of RPTPsigma only incompletely disinhibits neurite growth from the injury site. This indicates that CSPGs must bind to another partner to do their repulsive dirty work… which is where NgR1/2/3 come in.

NgR1/NgR2/NgR3, three receptor subtypes found on the axon, are the well-known binding partners of an oligodendrocyte membrane protein and MAI that is aptly named Nogo. However, Dr. Giger noticed something strange: if you KO all three NgRs then axon regeneration is elevated, but if you KO these receptors in combinations (NgR1, or NgR2, or NgR1/NgR2, etc.), ONLY NgR1/NgR3 double-KO is sufficient to replicate the triple-KO’s level of regeneration. This implies that NgR1/3 inhibit neurite outgrowth by the same mechanism. Further studies revealed that NgR1/3 bind to CSPGs by a previously unknown binding site, mimicking RPTPsigma. When these three CSPG binding partners (RPTPsigma, NgR1, and NgR3) are KO’ed, crushed axons exhibit neurite outgrowth more extreme than even NgR1/3 double-KO; combined, these findings indicate that NgR1, NgR3, and RPTPsigma are all functionally redundant, and so are playing the same role in neurite inhibition.

Damaged axons are not the only target of molecularly-mediated growth inhibition. In his most recent study, Dr. Giger unveiled the ability of a membrane protein called Semaphorin5A (SEMA5A) to hinder excitatory synapse development. The Michigan scientist showed that when this molecule is KO’ed in mouse hippocampus, the density of dendritic spines (where most excitatory synapses form) is significantly increased compared to controls (Figure 2a-g). Based on an increase of PSD-95 (found at excitatory synapses) but not gephyrin (found at inhibitory synapses) in KO animals, SEMA5A’s effects were determined to be excitation-specific.

With structure affected, function would logically be affected too, right? This is what Dr. Giger finds by way of hippocampal patch clamp recordings: an increase in both the size and number of miniature excitatory currents (called “mEPSCs”) entering hippocampal neurons in SEMA5A-KO mice (Figure 2h[left], 2i-k). These overly exuberant currents can be silenced using CNQX (an AMPA receptor blocker), evidencing that this excitatory receptor’s response is increased in the KO condition (Figure 2h[right]).


These findings aren’t just exciting for neuroscientists. SEMA5A harbors SNPs (single nucleotide polymorphisms, or one-base genetic differences) that have been associated with autism, which will surely catch the ear of clinicians; autism patients typically have a lower expression level of SEMA5A compared to those who are non-autistic (1,2). This would, following what Dr. Giger has found, indicate that autism patients should have more spines than normal, right? Elevated spine densities have indeed been found in mouse models of autism, aligning perfectly with Giger’s findings (3). If SEMA5A-KO mice exhibit similar morphological phenotypes to autism mouse models, shouldn’t behavior be similar in these mice as well? It is: SEMA5A-KO mice show anti-social behavior, as exhibited by aversion to interaction with a “stranger” mouse, replicating what current autism mouse models have displayed (4). Dr. Giger’s SEMA5A-KO findings fit shockingly well with the current autism literature, implicating the gene as a strong candidate for further study of this disorder. SEMA5A research also shows us that stopping growth, in addition to initiating it, is essential for a healthy nervous system.

So, what’s stopping YOU? Roman Giger will be speaking at the Center For Neural Circuits and Behavior large conference room on October 28th, 2014 at 4:00 PM. His talk is entitled “Neural circuit assembly, plasticity, and repair in CNS health and injury”. Don’t inhibit yourself, come check it out!

 

Norah is a first-year student in the UCSD Neurosciences Graduate Program. She is currently rotating with Jared Young, working with a mouse model of bipolar disorder. She is adamant about psychiatric disorder research and has a penchant for bad puns.

 

1) Melin M, Carlsson B, Anckarsater H, Rastam M, Betancur C, Isaksson A, Gillberg C, Dahl N (2006). Constitutional downregulation of SEMA5A expression in autism. Neuropsychobiology 54:64-69.

2) Weiss LA, Arking DE, Daly MJ, Chakravarti A (2009). A genome-wide linkage and association scan reveals novel loci for autism. Nature 461:802-808.

3) Hutsler JJ, Zhang H (2010). Increased dendritic spine densities on cortical projection neurons in autism spectrum disorders. Brain Res. 1309:83-94.

4) Yang M, Silverman JL, Crawley JN (2011). Automated three-chambered social approach task for mice. Curr Protoc Neurosci Chapter 8:Unit 8 26.

5) Leslie J, Imai F, Zhou X, Lang RA, Zheng Y, Yoshida Y (2012). RhoA is dispensable for axon guidance of sensory neurons in the mouse dorsal root ganglia. Front. Mol. Neurosci.

6) Duan Y., Song J., Mironova Y., Ming G., Kolodkin A.L., & Giger R.J. (2014). Semaphorin 5A inhibits synaptogenesis in early postnatal- and adult-born hippocampal dentate granule cells. eLife, 3. DOI: http://dx.doi.org/10.7554/elife.04390

7) Dickendesher T.L., Mironova Y.A., Koriyama Y., Raiker S.J., Askew K.L., Wood A., Geoffroy C.G., Zheng B., Liepmann C.D., Katagiri Y., et al. (2012). NgR1 and NgR3 are receptors for chondroitin sulfate proteoglycans. Nature Neuroscience, 15(5): 703-712. DOI: http://dx.doi.org/10.1038/nn.3070

The Persistence of Memory

The Persistence of Memory by Salvador Dalí, 1931

In 1942, Jorge Luis Borges wrote about a boy named Ireneo Funes who, following a horseback riding accident, developed a phenomenal memory. He was able to recall entire volumes (in his non-native language) and recite them fluidly. In contrast to others with hypermnesia (see The Mind of a Mnemonist by A.R. Luria), what made Funes’ memory particularly curious was his inability to forget and the specificity of his memories. When he remembered, for example, a cloud in the sky, he not only remembered the specific cloud, but the time he viewed it, the direction of the wind, if he was hungry, what smells were in the air, etcetera. He even created distinct memories each time he saw an object at a particular angle. Sage from the left would be a distinct memory from Sage from the right.

Though fiction, this 70-year-old story brings to the fore an important question regarding memory formation: How do we remember some things and forget others? Surely, there is a tremendous amount of informational throughput: we have countless experiences during the course of even a single day. But why is it that we only remember a minority of these experiences?

Deep in the heart of Missouri, Kausik Si and his team are researching just this. To phrase the question from a more biological standpoint: “How does the altered protein composition of a synapse persist for years when the molecules that initiated the process should disappear within days?” (Majumdar et al., 2012; 515)

Proteins are recycled with a certain regularity (mostly through proteolytic processing by the proteasome and lysosome). Proteins that stick together and oligomerize (think of Legos being stacked together) can be immune to such degradation (a child would have a harder time eating a stack of Legos than an individual piece). This process has been explored in a pathological context – for example, beta-amyloid plaques and Alzheimer’s disease – but what if it also has physiological relevance?

Based upon previous studies with the sea slug (Aplysia), Si et al. hypothesized that oligomers of the Orb2 protein could provide a substrate for the persistence of memory.

Through careful biochemical experiments, they determined that Orb2 was expressed as a monomer (one Lego) and an oligomer (multiple Legos). These were often hetero-oligomers composed of splice variants of the orb2 transcript (different colored Legos). In fact, it seems as though the smaller Orb2a variant, which is very sparsely expressed, plays a catalytic role in the oligomerization of Orb2b. Disruption of Orb2a expression had no effect on memory acquisition; however, it produced a marked defect in memory retrieval after 48 hours – thus suggesting that Orb2a expression, and the subsequent oligomerization of Orb2, plays a causal role in the persistence of memory.

Figure 7

They performed two separate behavioral memory tests to show that this is a generalizable phenomenon. Let’s focus on the first one, because it is first and also ripe for humorous extrapolation.

In the “Male Courtship Suppression” task, males are exposed, repeatedly, to unreceptive females (guys, I think you can relate). Over time, these males, discouraged and probably in the throes of a melanogaster-existential crisis, suppress their haughty courtship and stand down. Flies with a mutation in the Orb2a isoform had no difficulties remembering being spurned in the first 36 hours (Figure 7C). But within 48 hours, they were back on the horse, pursuing the same unreceptive female (ladies, you may know the type).

This suggests that the Orb2a isoform is necessary for the persistence, and not the formation, of long-lasting memories.

While further work needs to be done to explore how these oligomers represent a memory at the level of a single neuron, as well as in a network of neurons, this work provides a novel pathway completely distinct from the well-studied activity-dependent immediate early genes. I encourage you all to come see Dr. Kausik Si’s talk at 4 PM in the Large Conference Room in the Center for Neural Circuits and Behavior!

Sage Aronson is a first-year Neurosciences student currently rotating in Roberto Malinow’s lab. He spends an inordinate amount of time on a bicycle and has a peculiar fondness for the word “spelunking.”

Borges, Jorge Luis. “Funes the Memorius.” Ficciones. Buenos Aires: Emecé Editores, 1956. N. pag. Print.

http://www4.ncsu.edu/~jjsakon/FunestheMemorious.pdf

Majumdar A., White-Grindley E., Jiang H., Ren F., Khan M.R., Li L., Choi E.M., Kannan K., Guo F., Unruh J., et al. (2012). Critical Role of Amyloid-like Oligomers of Drosophila Orb2 in the Persistence of Memory. Cell, 148(3): 515-529. DOI: http://dx.doi.org/10.1016/j.cell.2012.01.004

http://www.sciencedirect.com/science/article/pii/S0092867412000050

Behavioral/physiological tolerance and making regulations based on THC blood levels

crossposting from tlneuro.wordpress.com

___

Drug “tolerance” is a fairly simple concept (Wikipedia). It means that with successive exposure to a given drug, in many cases the same dose produces a reduced effect. This can be for any number of mechanistic reasons, including a change in the metabolism and/or excretion of that drug, a change in the number or sensitivity of the receptor sites through which the drug interacts with the body, or a change in the neuronal circuitry or physiological processes that are affected. Of course, things are complicated, since tolerance may or may not be produced depending on the behavioral or physiological measure in question, on the drug in question, on the circumstances (dose, frequency, etc.) of drug exposure, and a whole host of other factors. Drugs can even produce sensitization, which is a progressive increase in the effect with successive exposure.

Legalization of marijuana for medical use in many US states and the recent decriminalization of purely recreational marijuana use in Colorado and Washington states has been associated with an effort to define legal impairment, most typically in the context of a limit for operating an automobile. In WA, the decriminalization initiative set 5 ng THC per mL of blood as the “per se” limit for presumed impairment of the ability to operate an automobile. In Colorado, the State Senate passed a similar limit.

Leaving aside the question of what the limit should be, today I want to discuss a paper that makes some of the issues involved clearer and shows why there are not any straightforward answers.

Ginsburg BC, Hruba L, Zaki A, Javors MA, McMahon LR. Blood levels do not predict behavioral or physiological effects of Δ⁹-tetrahydrocannabinol in rhesus monkeys with different patterns of exposure. Drug Alcohol Depend. 2014 Jun 1;139:1-8.
[PubMed, Journal Site]

Ginsburg and colleagues report the relationship between blood levels of THC and effects on behavior and thermoregulation in rhesus monkeys. The key part of the paper is a comparison between a group of animals that had received twice-daily THC (1 mg/kg, s.c.) and animals that had received lower doses of THC (0.1 mg/kg, i.v.) only every 3 or 4 days. These are referred to as the Chronic and Intermittent exposure regimens/groups, respectively.

[Figure 1 from Ginsburg et al. (2014)]

The study examined the effect of a 3.2 mg/kg, s.c. dose of THC in each group. The primary outcome measures were rectal temperature (hypothermia is a classical effect of cannabinoids in laboratory models), response rate on a stimulus-termination operant procedure, and blood levels of THC. Response rate may not be the most complex behavior going, but it does tend to be sensitive to general intoxication level. As you can see in Figure 1, reproduced here, the groups differ in the effect of an identical THC dose on both temperature and behavior. The Chronic treatment group had minimal to no response to THC, whereas the Intermittent group had a significant drop in body temperature and a slowing of response rate. The key consideration was that there was no difference in the blood levels of THC between the groups. Thus, the tolerance that was observed cannot be due to metabolic tolerance, i.e., a change in the rate of drug metabolism and excretion. Importantly, this means that chronic and occasional users of marijuana being tested for possible DUI will not differ due to metabolism of the drug.

As I noted, this study does not really speak to what blood level would be associated with impaired human driving after THC. The behavioral measure is simply too distantly related for good inference, particularly since driving crashes are more about failures of attention and judgment than about physical control over the car. What it does show, however, is that a given THC blood level is fairly meaningless as a predictor of the impairment of a given individual without any knowledge of that person’s history of exposure to THC.

Hey, pass me a beer


California Dew

 

You’re staring steely-eyed at the camera, when your friend hurls a beer at a wall to glance it towards you. You want to reach and catch that beer, crack it open, and let it spray for the camera. Maybe you’ll get a million YouTube views. Maybe you’ll get sponsored by Old Milwaukee. Maybe this is the day you finally make it in the world. But first you have to reach for that beer.

C’mon, brain, you can do it! But how do you do it? Well, if you close your eyes, you’ll probably miss the beer. No million YouTube views. So we need information from the eyeballs. But you don’t necessarily have to be looking directly at the beer. Any juggler knows that you don’t need to be darting back and forth with your eyes to juggle three balls – you can fixate on a point in space and juggle just fine. And if you’ve ever caught a ball, you know that you can do that perfectly well without looking at your hands, or even having your hands in your field of view. In so many of the movements we make, there’s a beautiful coordination between our direction of gaze, the position of our arms and hands, and the position of the intended target we hope to catch, push, punch, or slide to unlock.

One way to think about the complexity of this coordination is in terms of reference frames. When you say “look left” or “look right,” what you mean is “look left relative to the reference frame of your body.” When you decide to look left, your body is pointing forward, and you want to look left of the body vector that’s pointing straight ahead. But while you’re looking left, you could also just as well say that you’re looking straight ahead, but your body vector is pointing right relative to your gaze vector.

Visual information is initially represented in a gaze-centered reference frame – your retina doesn’t know what the rest of your body is doing, so from its perspective, it’s always pointing dead ahead. But if you’re trying to reach your hand towards a target, your brain must transform and integrate this gaze-centered representation into a hand-centered reference frame in order to direct a proper reach movement. Say you’re at the Kentucky Derby, and your eyes are tracking California Chrome along the track, your hands clutching your armrests in excitement. But you’re parched, and you want to grab that Mountain Dew in the seatback cup holder in front of you. No matter where you’re looking along the track, the movement your hand needs to make is the same, yet the information about where the Mountain Dew is relative to your center of gaze is constantly changing. If your hand or the Mountain Dew were elsewhere, that’s no problem for the brain either. This sensorimotor transformation of reference frames is exquisitely flexible and exquisitely accurate, and it all happens behind the scenes.
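For the programmatically inclined, the bookkeeping behind that transformation can be sketched in a few lines of Python; the positions below are made-up 2-D vectors, not anything from the paper, but the vector arithmetic is the core of the gaze-to-hand conversion:

```python
import numpy as np

# Toy illustration (not from Bremner & Andersen): all positions are 2-D
# vectors in a common world frame; the names and numbers are made up.
gaze_fixation = np.array([4.0, 1.5])   # where the eyes are pointing (the horses)
hand_position = np.array([0.3, -0.2])  # hand on the armrest
target        = np.array([0.5,  0.1])  # the Mountain Dew in the cup holder

# Gaze-centered representation: where the target falls relative to fixation.
target_re_gaze = target - gaze_fixation

# Hand-centered representation: the reach vector the arm actually needs.
target_re_hand = target - hand_position

# Converting between the two only requires knowing where the hand sits
# relative to gaze: move the fixation point and target_re_gaze changes,
# but the required reach vector does not.
hand_re_gaze = hand_position - gaze_fixation
assert np.allclose(target_re_hand, target_re_gaze - hand_re_gaze)
print("reach vector (hand-centered):", target_re_hand)
```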

What are the identities of the neural substrates that underlie this transformation, and how can their organization inform how this area of the brain plans, computes, transforms, or directs information? Previous studies have led to a consensus that posterior parietal and frontal cortex are important for sensorimotor transformation of reference frames, yet two primary models have emerged with hypotheses as to how subregions within these areas encode and represent relative space. In a hierarchical model, distinct populations of neurons encode individual representations of space centered around separate reference frames. In a contrasting model, encoding of different reference frames does not occur in distinct subregions, but instead single areas encode mixed and even intermediate reference frames.

In a recent paper, Lindsay Bremner and Richard Andersen explore this question by obtaining single-unit recordings in a subregion of posterior parietal cortex in monkeys trained to reach toward a target after a ‘go’ signal. By systematically varying the starting position of the hand, the direction of gaze, and the location of the target, the authors hoped to understand how a reach target is encoded by neurons in posterior parietal cortex area 5d. Do area 5d neurons encode target position relative to hand position, target position relative to gaze direction, or hand position relative to gaze direction? Or do they encode the target location in a combination of these and intermediate reference frames within a single brain region?

Examining the tuning curves of hundreds of neurons during different permutations of hand, gaze, and target locations, and controlling for important potential confounds not previously addressed in other studies, Bremner and Andersen provided strong evidence for a nuanced target encoding scheme in area 5d. In conjunction with previous data from the Andersen lab, their data strongly suggest that distinct reference frames are more strongly encoded in different cortical areas, supporting the hypothesis that there exist modular reference frames encoded by specific brain regions. In area 5d, for example, they discovered that the reach target is most strongly represented in a hand-centered reference frame. However, while this representation is predominant, they found that neurons in area 5d also encode mixed and intermediate reference frames, demonstrating that regional encoding is not entirely exclusive to specific reference frames, at least at the level of specificity examined. Their analytical methodology strongly improves upon what has been performed in previous studies addressing similar questions, and I highly recommend diving into their paper for a more thorough account of their study.

Richard Andersen will be speaking at the Center for Neural Circuits and Behavior large conference room on May 6th, 2014 at 4:00 PM. His talk is entitled “Posterior parietal cortex in action.” Join us there!

Patrick is a first-year student in the UCSD Neurosciences Graduate Program

References

Bremner L. & Andersen R. (2012). Coding of the Reach Vector in Parietal Area 5d. Neuron, 75(2): 342-351.

Bacteria: a real pain in the…nociceptor?

If you’re over the age of 10, you’ve probably experienced the joys of having a pimple, and all the pain – physical and emotional – that goes along with it. But have you ever wondered why pimples hurt?

Typically we’ve assumed that the pain of an infection comes primarily from the inflammatory response your body produces to fight off the bacteria – cytokines, prostaglandins, and other mediators that activate nociceptors on pain-sensing neurons in your peripheral nervous system. Much like the misery of being feverish when you have the flu, this inflammatory pain is an unfortunate but necessary side effect of the immune response, protecting your body against invasion. But bacteria can be real jerks all on their own, and Dr. Clifford Woolf’s lab has uncovered evidence that some bacteria can directly activate nociceptors through N-formyl peptides and the pore-forming toxin alpha-haemolysin (aHL).

Dr. Clifford Woolf, of Harvard University, studies pain, regeneration and neurodegenerative diseases. Dr. Woolf uncovered the phenomenon of “central sensitization”, in which peripheral inflammation and tissue damage lead to sensitization of the nociceptive neurons in the dorsal horns of the spinal cord. This sensitization is mediated by NMDA receptors, and can be treated with opiates. His research on this subject is the driving force behind the current practice of treating pain early (for example, by giving morphine before surgery to preempt post-surgical sensitization). Dr. Woolf’s work has been key to better understanding mechanisms of human pain sensation, and plays an important role in the way that patient pain is treated in hospitals around the world.

In a recent study, his group looked directly at the molecular mechanisms of pain generation during Staphylococcus aureus infection. Never heard of S. aureus? Maybe you know it as MRSA – that’s right, the antibiotic-resistant form of this bacterium is the bane of many hospitals.

Dr. Woolf’s group injected the hindpaws of mice with S. aureus and, as you might expect if someone injected your foot with a bunch of nasty bacteria, the mice showed mechanical, heat, and cold hypersensitivity within one hour. This lasted for 48-72 hours, with a peak at six hours after infection (Fig. 1a). By examining the kinetics of immune activation, they found that tissue swelling did not correlate with pain (Fig. 1a), and the influx of immune cells and cytokines increased in infected tissue but did not correlate with hyperalgesia (Fig. 1b, c). The bacterial load, however, showed a similar time course as that of pain hypersensitivity, peaking at 6 hours and decreasing over time as myeloid cells ingested the bacteria (Fig. 1d). This pain time course doesn’t quite match up if the inflammatory response is causing the pain – but it does match up with the presence of bacteria at the injury site.


Figure 1: S. aureus infection induces pain hypersensitivity paralleling bacterial load but not immune activation.

The lab decided to examine whether key immune response pathways were necessary for S. aureus-induced pain using TLR2 and MyD88 knock-out mice, which removed the animals’ protection against S. aureus skin infection. The same mechanical and thermal hyperalgesia was seen, indicating that the pain response is not dependent on this immune activation. They also tried removing neutrophils and monocytes, important for immunity against the bacteria because they limit its survival and spread, by injecting a GR1 antibody before infection. This treatment resulted in an increase in mechanical and heat hypersensitivity, accompanied by a higher bacterial load. Finally, using NOD scid gamma (or, if you want to get extra-sciencey, NOD.Cg-Prkdcscid Il2rgtm1Wjl/SzJ) mice, which lack T cells, B cells, and natural killer cells, they saw that losing these lymphocytes did not alleviate the acute bacterial pain. This seems to indicate that the pain sensation associated with S. aureus injection is not dependent on the immune response.

The strong correlation between pain and bacterial load led Dr. Woolf’s group to examine whether or not bacteria interact directly with nociceptors by applying heat-killed S. aureus to dorsal root ganglia (DRG) sensory neurons. This induced a calcium flux response and action potential firing in a subset of neurons that also respond to capsaicin (the chemical that makes spicy peppers “hurt so good” in your mouth) (Fig. 2a,b).


Figure 2: N-formylated peptides activate nociceptors.

 

So how do the bacteria activate nociceptors? Woolf and his group targeted N-formylated peptides, bacterial molecules used by leukocytes to mediate immune chemotaxis during infection. Application of fMLF (E. coli-derived) and fMIFL (S. aureus-derived) both induced calcium flux in a subset of DRG neurons that also responded to capsaicin, similar to the response seen with bacterial application (Fig. 2e), and resulted in hyperalgesia when injected into mice (Fig. 2f).

FPR1 is the receptor that recognizes fMLF and fMIFL in immune cells, so the lab tried knocking it out in mice. Fpr1-/- mouse DRG neurons showed decreased calcium flux, and Fpr1-/- mice had reduced mechanical hyperalgesia after treatment with fMIFL relative to wild-type (Fig. 3g).


Figure 3: FPR1 effects on S. aureus pain response.

The lab also targeted aHL, a pore-forming toxin involved in tissue damage and bacterial spread. aHL can assemble pores in cell membranes allowing non-selective cation entry – which might be enough to depolarize cells. Like fMLF and fMIFL, aHL induced calcium flux in nociceptors on DRG neurons (Fig. 4a, b). When injected into mice, aHL induced pain behavior in a dose-dependent manner (Fig 4c). These effects did not involve voltage-gated calcium channels or large-pore cation channels, but did require external calcium – this would seem to indicate that the pores aHL assembles in the membrane are sufficient for depolarization. Knocking out aHL expression in S. aureus led to significantly less hyperalgesia than wild-type bacteria, indicating a robust role for aHL in pain during S. aureus infection.


Figure 4: aHL activates nociceptors and contributes to injection-induced hyperalgesia.

Dr. Woolf’s group neatly summarized the mechanisms by which bacteria directly activate nociceptors in the diagram below:


N-formyl peptides activate nociceptors by binding to FPR1 and inducing calcium flux, while aHL forms pores in the nociceptive cell membrane allowing cation exchange. These mechanisms both appear to be at play in mechanical nociceptive cells, but other mechanisms, especially those related to heat-sensitive cells, remain to be explored.

Finally, the lab opted to ablate (remove) the nociceptive cells responsible for the S. aureus pain response to examine the role of nociceptors in modulating the immune response. Ablation of these cells led to significantly increased tissue swelling with increased infiltration of neutrophils and monocytes at the infection site and enlarged lymph nodes – indicating that nociceptor ablation led to increased local inflammation. This hints at a role for nociceptors directly modulating immune activation, and bacteria may be directly activating the nociceptors as a means to increase immunosuppression and reduce the ability of the host to clear the pathogen.

So not only do the bacteria directly activate your pain receptors, but they also might be making it harder for your body to fight them off. Makes those bacteria sound extra evil, doesn’t it? Think about that the next time you have a sore pimple – or, if you are blessed with good skin, the next time you have a gnarly hangnail.

 

Be sure to check out Dr. Clifford Woolf’s talk, “Studying human pain in a dish”, at 4 PM on Tuesday, April 29th in the CNCB Large Conference Room, if you’d like to hear more on this subject!

 

Alison Caldwell is a first year student in the UCSD Neurosciences Graduate Program. She is currently rotating under Dr. Chitra Mandyam studying the effects of addiction on neuronal proliferation and morphology in the hippocampus. She can be found on Twitter at @alie_astrocyte


Source:

Chiu I.M., Heesters B.A., Ghasemlou N., Von Hehn C.A., Zhao F., Tran J., Wainger B., Strominger A., Muralidharan S., Horswill A.R., et al. (2013). Bacteria activate sensory neurons that modulate pain and inflammation. Nature, 501(7465): 52-57.

Developing Gain Control in Single Cortical Neurons

[image]

If you are reading this sentence, it is quite likely that you have heard of “gain control” in a neuroscience context. You may notice that the picture provided above has very little to do with the context in which this blog post shall discuss “gain control”. You may also notice that this blog post has a dry, technical, and boring title, which promises a fair amount of eventually enlightening but difficult-to-wade-through mathematics. Given the limited time and intellect of yours truly, however, there will be no equations in this blog post. Instead, a summary/teaser will be provided.

First, a definition of “gain control” according to Dr. Adrienne Fairhall (University of Washington, Seattle) and others in their 2013 Journal of Neuroscience article1:

“…a neural system’s mapping between inputs and outputs adjusts to dynamically span the varying range of incoming stimuli. In this form of adaptive coding, the nonlinear function relating input to output has the property that the gain with respect to the input scales with the [standard deviation] of the input.”

In other words, it is known that neurons can become more or less sensitive (in terms of the absolute input amplitude required to generate the same response) depending on how variable the stimuli are, i.e. how noisy the inputs happen to be. This gain-control property ensures that neurons extract the relevant information (e.g. determine whether someone in a chattering crowd is calling your name) from constantly varying stimuli in a consistently context-dependent manner (e.g. one has to shout louder in order to be heard when everyone else is, oddly enough, shouting). From both behavioral and neural perspectives, gain control has been investigated in different sensory modalities (visual/auditory)2,3 and in different animal models3,4. It is also known that mature single neurons are capable of gain control, based on electrophysiology3. But where does that capability come from?
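As a cartoon of what gain scaling means in practice, here is a toy Python sketch (not the model used in the paper) of a neuron whose input-output function rescales itself by the standard deviation of its input:

```python
import numpy as np

# A cartoon of gain scaling (not the model from Mease et al.): the neuron's
# input-output function adapts so that its gain is set by the standard
# deviation of the recent stimulus.
def adapted_rate(stimulus, sigma, max_rate=100.0):
    """Firing rate as a sigmoid of the stimulus expressed in units of its SD."""
    return max_rate / (1.0 + np.exp(-stimulus / sigma))

for sigma in (0.5, 2.0, 8.0):
    stim = sigma * np.random.randn(100_000)        # zero-mean input with SD sigma
    rates = adapted_rate(stim, sigma)
    print(f"SD={sigma:>4}: rate at +1 SD = {adapted_rate(sigma, sigma):5.1f} Hz, "
          f"mean rate = {rates.mean():5.1f} Hz")

# The response to "+1 SD of input" (and the mean output) is the same no matter
# how large the absolute input scale is -- that invariance is gain scaling.
```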


An example of a mature neuron with excellent gain control. The legend contains details that may or may not be of interest to you. (Fig. 2, Mease et al.)

Dr. Fairhall and her colleagues chose to investigate this problem with both in vitro recordings and biophysical models of single neurons. Recording from developing (E18-P1) and mature (P6-P8) mouse somatosensory cortex neurons revealed that mature neurons had better gain scaling than immature ones. In less ambiguous terms: for each neuron, the authors estimated the spike-triggered average stimulus (STA, roughly describing which input features drive spikes) and the input–output function relating the STA-filtered stimulus to spiking. The “symmetrized divergence” between pairs of these input–output functions, a measure of how different their shapes (which loosely translates to “gain”) are, was smaller for mature neurons than for immature neurons. The pairs were generated by applying two stimuli with different standard deviations to the same neuron, as well as by applying the same stimulus to different neurons of the same maturity. Therefore, not only were the mature neurons recorded in this study better at intrinsic gain scaling, they were also more consistent with one another than their immature counterparts, suggesting that an intrinsic and consistent developmental program underlies the improvement in gain control for this type of neuron.
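If the STA machinery is unfamiliar, here is a minimal Python sketch of how a spike-triggered average is computed, run on a synthetic toy neuron rather than on the recordings from the study:

```python
import numpy as np

# Minimal spike-triggered average (STA) on synthetic data; this is a sketch of
# the general technique, not the analysis pipeline of Mease et al.
rng = np.random.default_rng(0)
stimulus = rng.normal(0.0, 1.0, 60_000)            # white-noise input, 1 ms bins

# Toy neuron: its drive is an exponentially weighted average of the recent
# stimulus, and it spikes stochastically when that drive is high.
filt = np.exp(-np.arange(30) / 10.0)               # ~30 ms memory
drive = np.convolve(stimulus, filt)[: stimulus.size]
p_spike = np.clip(0.02 * (drive - 1.0), 0.0, 1.0)  # toy threshold-linear nonlinearity
spikes = rng.random(stimulus.size) < p_spike

# The STA is simply the average stimulus segment preceding each spike.
window = 50
spike_times = np.flatnonzero(spikes)
spike_times = spike_times[spike_times >= window]
sta = np.mean([stimulus[t - window:t] for t in spike_times], axis=0)
print(f"{spike_times.size} spikes; the STA rises toward the spike time, "
      "reflecting the neuron's preference for recent upward input")
```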

Top left: histogram showing the propensity of immature neurons to have higher pairwise symmetrized divergence (read: more variable gain scaling). Bottom left: voltage clamp for immature (P0) and mature (P7) neurons, showing an elevation of sodium currents during development. Right: each dot represents a neuron with color-coded maturity, again showing an elevation of sodium currents during development. (Fig. 3C-E, Mease et al.)

How might this program work, if it exists? Glad that you asked. The short answer, of course, is “not sure.” But Fairhall and colleagues had a promising clue1:

“We have shown previously that INa increases in density much faster than IK during early postnatal development (Picken-Bahrey and Moody, 2003a).”

Indeed, the authors turned to a biophysical model of single neurons (EIF, or exponential integrate-and-fire) in which a single parameter describes how fixed spike-generating kinetics interact with ion channel expression. Modifying this parameter (which is inversely proportional to the difference between the spiking threshold and the effective resting potential) implies a proportionate change in the ratio between sodium channel and potassium channel numbers, and this parameter alone was sufficient to bring about the improvement of gain scaling observed during in vitro cortical neuron development. In confirmation, using sodium and potassium channel blockers, Fairhall and colleagues found in organotypic culture that partial blockade of potassium channels improved gain scaling (less variability/better distribution coverage), whereas partial blockade of sodium channels did the opposite.
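For readers who want to see what such a model looks like, here is a bare-bones exponential integrate-and-fire neuron in Python; the parameter values are generic textbook numbers rather than anything fitted by Fairhall and colleagues, and the noisy input current is made up:

```python
import numpy as np

# A bare-bones exponential integrate-and-fire (EIF) neuron; parameter values
# are generic, not those of Mease et al.
def run_eif(I, dt=0.1, tau=10.0, E_L=-65.0, V_T=-50.0, delta_T=2.0,
            V_reset=-65.0, V_spike=0.0):
    """Euler-integrate dV/dt = (-(V - E_L) + delta_T*exp((V - V_T)/delta_T) + I)/tau.
    The gap between V_T and the effective resting potential is, loosely, the
    kind of knob the paper's single parameter turns (an analogy, not the
    fitted model)."""
    V = E_L
    spike_times = []
    for step, i_t in enumerate(I):
        dV = (-(V - E_L) + delta_T * np.exp((V - V_T) / delta_T) + i_t) / tau
        V += dt * dV
        if V >= V_spike:                 # spike detected: record time and reset
            spike_times.append(step * dt)
            V = V_reset
    return spike_times

rng = np.random.default_rng(1)
current = 16.0 + 6.0 * rng.standard_normal(50_000)   # noisy input current (a.u.)
print(f"{len(run_eif(current))} spikes in 5 seconds of simulated time")
```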


Two pharmacological manipulations of INa/IK change gain-scaling behavior in agreement with model results. Sample estimate of standard deviation (Fig. 5A-B, Mease et al.)

This series of encouraging results spurred Fairhall and colleagues to further apply the EIF model and test the model neurons for gain control abilities based on their sodium/potassium conductance ratios, as well as for conditions under which the model might fail. For the sake of brevity and clarity, however, this blog post will not go into further details.

Based on the results so far, Fairhall and colleagues proposed that the development of gain control in neurons of mouse somatosensory cortex (and, perhaps, beyond) may be a property intrinsic to the single neuron’s gradual self-mediated differential expression of ion channels. An alternative that remained undiscussed, however, is that the differential rate of ion channel production could be mediated in vivo by, say, astrocytic factors, or even dependent on nascent synaptic activities and subsequent calcium entry. Generally speaking, while manipulating only one parameter in a model with good fit seems to reproduce experimental results, it may also be a good idea to keep in mind that said parameter can be altered in different ways in vivo. Another important factor to consider, of course, is the relatively small sample size used in the experimental part of this study, which would in turn impact the model’s generalizability.

Mease et al., Fig. 7

As part of the UCSD Neurosciences Graduate Program Seminar Series, at 4:00pm on Tuesday, April 22, 2014, in the CNCB Large Conference Room, Dr. Adrienne Fairhall will give a talk on the computational properties of single neurons, as well as how they interact with network-level functions. Come for what might be a refreshingly basic perspective in this age of “map everything”.

Xi Jiang is a first year student in the UCSD Neurosciences Graduate Program. He is now a rotation student under the guidance of Dr. Mark Tuszynski, studying neural stem cell fate determination.

References:

1. Mease R.A., Famulare M., Gjorgjieva J., Moody W.J. & Fairhall A.L. (2013). Emergence of Adaptive Computation by Single Neurons in the Developing Cortex. Journal of Neuroscience, 33(30): 12154-12170.

2. Piëch V, Li W, Reeke GN, Gilbert CD. Network model of top-down influences on local gain and contextual interactions in visual cortex. Proc Natl Acad Sci U S A. 2013, 110(43):E4108-17.

3. Hildebrandt KJ, Benda J, Hennig RM. Multiple arithmetic operations in a single neuron: the recruitment of adaptation processes in the cricket auditory pathway depends on sensory context. J Neurosci. 2011, 31(40):14142-50.

4. Chen Y, Li H, Jin Z, Shou T, Yu H. Feedback of the amygdala globally modulates visual response of primary visual cortex in the cat. Neuroimage. 2014, 84:775-85.

Neuronal memory

Francis Crick’s astonishing hypothesis (1995) is that “You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.” Actually, more than a hypothesis, this is the basis of modern neuroscience. Understanding how the tiny cells in our brains can generate everything that is in our minds is what motivates the research of Dr. Ben Strowbridge, a professor of Neuroscience and Physiology/Biophysics at Case Western Reserve University. Dr. Strowbridge is particularly interested in the mechanisms that neurons use to remember things.

Dr. Strowbridge majored in Biology at MIT, graduating in 1984; it was there that he began to develop his passion for neurons and their amazing properties. He received his PhD in Neuroscience in Gordon Shepherd’s laboratory at Yale University, studying local circuits that mediate neural activity in the neocortex. He then moved to the University of Washington, first as a postdoc with Philip Schwartzkroin and then as an Assistant Professor, before settling into his current lab at Case Western Reserve in 1998. There he has been investigating the neural circuits of the hippocampus, a brain region crucial for the generation of many kinds of memory (he has also been interested in how neurons process and generate the sense of smell, but that is a subject for another day).

Dr. Strowbridge has been studying one particular type of memory called short-term memory (or working memory), the kind of memory that allows us to remember what we did seconds or minutes ago and, in this way, make sense of the world as a continuous story. Most of these things we will later forget, like what we ate for breakfast this morning, or that phone number we held in mind for a couple of seconds before it vanished.

As Francis Crick said, this kind of memory also needs to be physically stored somewhere inside our brains for the seconds or minutes that it lasts. The most famous theory, first proposed by Donald Hebb, states that short-term memories can be stored as reverberating activity circulating through networks of neurons, activity that fades after a certain period of time (Hebb 1949). Another possibility is that some neurons with exquisite intrinsic properties are able to keep firing for many seconds after the end of a stimulus and, in principle, could store information over this period.
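To make Hebb’s “reverberation” idea concrete, here is a toy sketch (entirely my own illustration with made-up numbers, not anything from Larimer and Strowbridge): a linear recurrent rate network whose feedback is tuned just below instability, so that a brief input leaves activity that echoes around the network and fades over a timescale much longer than any single cell’s membrane time constant.

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau, dt = 50, 0.02, 0.001                   # 50 units, 20 ms time constant, 1 ms steps

# Random recurrent weights, rescaled so the most persistent network mode
# decays with time constant tau / (1 - 0.95) = 0.4 s -- twenty times tau.
W = rng.standard_normal((n, n)) / np.sqrt(n)
W *= 0.95 / np.linalg.eigvals(W).real.max()

r = np.zeros(n)
activity = []
for t in range(5000):                           # 5 s of simulated time
    stim = rng.random(n) if t < 100 else 0.0    # 100 ms input pulse, then silence
    r = r + (dt / tau) * (-r + W @ r + stim)    # dr/dt = (-r + W r + input) / tau
    activity.append(np.linalg.norm(r))

# Activity outlives the stimulus by a second or two, then fades away.
for t_ms in (200, 1000, 3000):
    print(f"network activity at {t_ms / 1000:.1f} s: {activity[t_ms]:.4f}")
```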

Dr. Strowbridge and a graduate student in his lab, Philip Larimer, decided to look at the circuits in a specific part of the hippocampus called the dentate gyrus. Using slices of the rat brain, they searched for a cellular and/or network mechanism in this region that could hold information for at least a few seconds. Because they were working with brain slices in vitro, Larimer and Strowbridge (2010) had more control over what was going on and could use electrophysiology to record the activity of specific neurons while stimulating axons at precise locations.

Although they went looking for reverberating activity in neural networks, which would have supported Hebb’s theory, what they found instead was that a specific cell type, the semilunar granule cell (SGC), showed plateau potentials and kept firing for seconds after the end of the stimulus. Interestingly, this cell was first described by our godfather Santiago Ramón y Cajal about a century ago and had been almost entirely neglected since then.

The semilunar granule cell (SGC)
Left: Ramón y Cajal’s drawing of the guinea pig hippocampal dentate gyrus. A semilunar granule cell, highlighted by the arrow, is located in the inner molecular layer, right above the layer of granule cells (GC), the most common cell type in the dentate gyrus. Figure adapted from Cajal (1995). Right: Intracellular responses of an SGC to graded stimulation of the perforant pathway (PP, the main input to the dentate gyrus). Note the plateau potential and the persistent firing that lasts for seconds after stimulation. A GC response is shown below for comparison. Figure modified from Larimer and Strowbridge (2010).

The firing properties of these cells were then characterized and shown to depend on NMDA receptors and on specific voltage-gated calcium channels. After characterizing the SGCs, Dr. Strowbridge and colleagues also demonstrated that downstream neurons in the hilus of the dentate gyrus receive input from SGCs and show SGC-dependent persistent firing. Furthermore, they showed that the activity of these hilar neurons varies with the location of the stimulating electrode but is reliable for any given site. In other words, the persistent firing of these cells can discriminate between different stimuli based on their site of origin, and also on their temporal sequence (Larimer and Strowbridge 2010; Hyde and Strowbridge 2012).
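As a thought experiment for what “discriminating stimuli by their site of origin” could mean quantitatively, here is a small sketch with invented numbers (not the analysis in the papers): each trial is summarized by the persistent firing rates of a handful of hilar cells, and a simple leave-one-out nearest-centroid decoder asks whether the stimulation site can be read back out of those rates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical persistent firing rates (Hz) of 5 hilar cells over 20 trials
# per stimulation site.  The numbers are made up; the point is only how
# "discriminability" could be read off such recordings.
site_A = rng.normal(loc=[8, 2, 5, 1, 4], scale=1.0, size=(20, 5))
site_B = rng.normal(loc=[2, 7, 1, 6, 3], scale=1.0, size=(20, 5))

X = np.vstack([site_A, site_B])
y = np.array([0] * 20 + [1] * 20)

# Leave-one-out nearest-centroid decoding of the stimulation site.
correct = 0
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    c_A = X[keep & (y == 0)].mean(axis=0)
    c_B = X[keep & (y == 1)].mean(axis=0)
    pred = 0 if np.linalg.norm(X[i] - c_A) <= np.linalg.norm(X[i] - c_B) else 1
    correct += int(pred == y[i])

print(f"decoding accuracy: {correct / len(y):.2f}")   # ~1.0 when sites evoke distinct patterns
```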

Schematic of hippocampal dentate gyrus showing semilunar granule cells (SGCs) in the inner molecular layer (IML) and their projections to hilar neurons (excitatory mossy cells and inhibitory interneurons). Red indicates active neurons, dashed lines indicate inactive pathways. Open circles indicate inhibitory synapses and closed circles indicate excitatory synapses. Hilar neurons can show different patterns of activity, depending on the stimuli, modulating and refining the pattern of granule cell firing (GC), as illustrated by the scheme. EC: entorhinal cortex; IML: inner molecular layer; GR/ML: granule cell/molecular layer. Figure slightly modified from Walker et al. 2010.

Therefore, Dr. Strowbridge and colleagues have demonstrated that specific cells in the hippocampal dentate gyrus, the SGCs, together with their downstream neurons in the hilus, have the potential to store short-term memories in their patterns of persistent firing.

In spite of all this short-term memory work, use a bit of your long-term memory and don’t forget to join us on Tuesday, April 7, 2014, at 4:00 PM in the CNCB Large Conference Room to hear more about this story from Dr. Ben Strowbridge in his talk entitled “Cellular mechanisms of short-term mnemonic representations in the dentate gyrus in vitro”.

Leonardo M. Cardozo is a first year student in the UCSD Neurosciences Graduate Program. He is currently rotating in Dr. Massimo Scanziani’s lab, investigating whether long-range projections can also originate from inhibitory neurons, which would be able to control cortical excitability not only locally but also at distant sites, coordinating activity across the brain.

Primary reference:

Larimer P. & Strowbridge B.W. (2010). Representing information in cell assemblies: persistent activity mediated by semilunar granule cells. Nature Neuroscience, 13(2): 213-222.
Other references:

Cajal S.R.Y. (1995). Histology of the Nervous System of Man and Vertebrates. Oxford University Press.

Crick F.H.C. (1995). The Astonishing Hypothesis: The Scientific Search For The Soul. Touchstone.

Hebb D. (1949). The Organization of Behavior. John Wiley & Sons.

Hyde R.A. & Strowbridge B.W. (2012). Mnemonic representations of transient stimuli and temporal sequences in the rodent hippocampus in vitro.  Nature Neuroscience 15 (10) 1430-1438. DOI: 10.1038/nn.3208

Walker M.C., Pavlov I., Kullmann D.M. (2010). A ‘sustain pedal’ in the hippocampus? Nature Neuroscience 13 (2) 146-148. DOI: 10.1038/nn0210-146