Bench, Bytes, & Beyond: An investigation into the complexities of neural network analysis

Over the past several years, the term “big data” has frequently been mentioned, often in passing, as a vague and generalized concept representing the accumulation of inconceivable amounts of information demanding storage, management, and analysis, often ranging from a few dozen terabytes to several petabytes (10^15 bytes) in a single data set [1]. In essence, the “big data” movement seeks to capture overarching patterns and trends within data sets so large that current software tools and strategies cannot handle them. Big Data is particularly relevant in the neurosciences, where vast amounts of high-dimensional neural data arising from incredibly complex networks are being continuously acquired.

As with Big Data as a whole, one of the central challenges of modern neuroscience is to integrate and model data from a variety of sources to determine similarities and recurring themes between them [2]. That is, the vast array of techniques used in neuroscience, from GWAS to viral tracing to electrophysiology to fMRI to artificial network simulations, generates data sets with varying characteristics, dimensions, and formats [3,4]. To meaningfully combine data from each of these sources, we need a comprehensive and universal strategy for integrating findings of all types and from all scales into a simple and cohesive story.

Drs. Peiran Gao and Surya Ganguli of Stanford University highlight the difficulty of extracting meaningful trends from big neural data sets in their review titled “On simplicity and complexity in the brave new world of large-scale neuroscience”. They emphasize the particular hurdle of extracting a coherent conceptual understanding of neural behavior and emergent cognitive functions from circuit connectivity. Moreover, they offer insight into what it might mean to truly “understand” the brain at every level of its hierarchical complexity.

One of the central ideas highlighted in Gao and Ganguli’s review is the notion of Neuronal Task Complexity (NTC), which uses neuronal population autocorrelation across various task parameters to place bounds on the dimensionality of neural data [5]. Thus, NTC seeks to parse meaningful differences in neuronal firing patterns from random fluctuations in signaling: in a broader sense, it attempts to increase the signal-to-noise ratio (SNR) to extract relevant information about circuit dynamics. In characterizing NTC, the authors demonstrate that it can be used to derive a general understanding of neuronal circuit behavior with relatively little information. That is, recording from more neurons in the brain does not necessarily result in a better encapsulation of the phenomena we seek to explain (more data is not always better), and NTC enables us to better quantify the experimental data needed to draw these broad conclusions.
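To make the idea of bounded dimensionality concrete, here is a toy sketch (an illustration, not Gao and Ganguli’s actual method): it estimates the linear dimensionality of simulated population activity using the participation ratio of the covariance eigenvalue spectrum, a common proxy for dimensionality in this literature. All numbers below are made up for illustration.

```python
import numpy as np

def participation_ratio(X):
    """Estimate the linear dimensionality of population data X
    (samples x neurons) via the participation ratio of the
    covariance eigenvalue spectrum."""
    cov = np.cov(X, rowvar=False)             # neurons x neurons covariance
    eig = np.clip(np.linalg.eigvalsh(cov), 0, None)
    return eig.sum() ** 2 / (eig ** 2).sum()

# 200 simulated "neurons" driven by only 3 shared latent signals:
# recording more neurons does not add more dimensions.
rng = np.random.default_rng(0)
latents = rng.standard_normal((1000, 3))      # 3 latent task variables
weights = rng.standard_normal((3, 200))       # mixing into 200 neurons
X = latents @ weights
print(f"dimensionality ~ {participation_ratio(X):.1f} (not 200)")
```

Even though 200 units are “recorded”, the estimated dimensionality stays near 3, echoing the review’s point that task complexity, not neuron count, limits what the data can reveal.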

Using NTC, we can better clarify the distinction between effective and excessive data collection, and home in on the intrinsic principles that govern our cognition without gathering data that needlessly cloud our ability to distinguish these complexities. In the figure below from Gao and Ganguli’s review, the essential components of NTC are explained using neuronal modulation of behavioral states as an experimental example:


In addition to describing the essential components and uses of NTC as a means to measure the complexity of neural data given particular task parameters and assumptions, the authors also explain the broader meaning of what NTC tells us: rather than simply focusing on the number of neurons we record from, we need to develop more intricate and clearly defined behavioral experiments that will cause predictable and observable alterations in neural activity. This will help ensure that any significant patterns we see can be meaningfully interpreted in context and effectively incorporated into a broader perspective of how neuronal firing patterns give rise to behavior and cognition.

By underscoring the need for interaction between experimental work, data analysis, and theory behind the operation and dynamics of neural circuitry, Gao and Ganguli argue that gaining a comprehensive understanding of the brain will require a communal effort and inputs from all areas of neuroscience. By extension, the ability to test the validity and interaction between several models at once will be indispensable in determining which ones best align with acquired data and with one another. By comparing multiple models from all sub-fields of neuroscience, a more complete and accurate understanding of the brain can be derived. Thus, in catalyzing the generation of broader and more accurate conceptual frameworks, both artificial simulations of neural activity and adoption of a wider range of experimental techniques will enable us to gain a more complete understanding of how the brain makes us who we are.


Marley Rossa is a first-year graduate student currently rotating in Jeff Isaacson’s lab. She is, as of now, content with studying the electrophysiology of individual neurons and will leave the hardcore petabyte-level analysis to the more computationally-inclined.

[1] Hashem IAT, Yaqoob I, Anuar NB, Mokhtar S, Gani A, Khan SU: The rise of “big data” on cloud computing: Review and open research issues. Information Systems 2015, 47:98–115.
[2] Stevenson IH, Kording KP: How advances in neural recording affect data analysis. Nat Neurosci 2011, 14:139–142.
[3] Chung K, Deisseroth K: CLARITY for mapping the nervous system. Nat Methods 2013, 10:508–513.
[4] Prevedel R, Yoon Y, Hoffmann M, Pak N: Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nat Methods 2014, 11:727–730.
[5] Gao P, Ganguli S: On simplicity and complexity in the brave new world of large-scale neuroscience. Curr Opin Neurobiol 2015, 32:148–155.
[6] Gao P, Trautmann E, Yu B, Santhanam G, Ryu S, Shenoy KV, Ganguli S: A theory of neural dimensionality and measurement. Computational and Systems Neuroscience Conference (COSYNE), 2014.
[7] Gao P, Ganguli S: Dimensionality, coding and dynamics of single-trial neural data. Computational and Systems Neuroscience Conference (COSYNE), 2015.









The (Hippocampal) Social Network

The hippocampus is a brain region that plays a central role in learning and memory. Due to its importance, most of the subregions of the hippocampus have been studied exhaustively. However, one of these subregions, the CA2, has avoided the limelight allotted to its neighbors. What are the functional properties of the CA2? What does it wire together with? What role does it play in behavior? Until recently, little has been known.

Hitti and Siegelbaum attack these questions head on in their paper “The hippocampal CA2 region is essential for social memory”. This comprehensive paper walks us through the story of the CA2, from mouse model creation, to circuit tracing, to a series of well-controlled behavioral experiments.

The authors start by demonstrating the validity of their Amigo2-Cre mouse line, which predominantly targets CA2 pyramidal neurons, using both imaging and electrophysiology. Next, they explore the inputs and outputs of the CA2 with molecular tracing. Unsurprisingly, the region is densely interconnected with the other hippocampal structures. Broadly, the CA2 coordinates a strong disynaptic circuit that links the entorhinal cortex to the CA1.


Next, the authors delve into their method for creating a CA2 knockout. Using an AAV vector that expresses tetanus neurotoxin, they demonstrate the silencing of postsynaptic potentials (and preservation of fiber currents) from the CA2.

With these tools validated, Hitti and Siegelbaum begin the most interesting portion of the paper – the behavior! A slew of social interaction tests are used, the expected result being that normal mice will habituate to previously encountered rodents. In comparison to controls, the paper shows that CA2 knockout mice fail to remember previously encountered mice! They fail to habituate, treating every encounter with a familiar mouse as if it were its first.


What’s truly remarkable is that the authors control for a multitude of confounding factors: social interest, spatial memory, novel object recognition, locomotion, and even olfaction. They find that none of these other factors are significantly different across groups. Quite selectively, the CA2 mediates social memory.

Taken together, this paper offers a comprehensive demonstration of the necessity of the CA2, a previously poorly defined hippocampal region, for the functioning of social memory in rodents.

Please come join us on Tuesday, October 7th, at 4pm in the CNCB Marilyn Farquhar Seminar Room to hear more about this story from Dr. Steven Siegelbaum!

Debha Amatya is a first-year neurosciences graduate student working in the Gage Lab to understand the relationship between common and rare variants in autism genomics.

More than the MTL: Parietal Activity in Episodic Memory

The medial temporal lobe (MTL) has traditionally received credit as supporting episodic memory, a type of declarative memory that enables access to one’s past experiences. However, converging evidence suggests that parietal areas may also contribute to episodic retrieval–in particular, the retrosplenial (RSC) and posterior cingulate cortices (PCC) in the left medial parietal cortex (MPC) and the angular gyrus (AG) in the left lateral parietal cortex (LPC) (Cabeza et al., 2008; Wagner et al., 2005). As nodes of the default mode network (Greicius et al., 2003; Raichle et al., 2001)–a cluster of interacting areas implicated in self-referential thinking–the RSC/PCC and AG would seem prime candidates for promoting retrieval of autobiographical episodes. Indeed, functional imaging has corroborated these intuitions (Cabeza et al., 2008; Wagner et al., 2005). Nevertheless, fMRI has its limitations, and a technique with both high spatial and temporal resolution would potentially provide further insight into the dynamics of the parietal lobe in episodic memory. Fortunately, Josef Parvizi’s Laboratory of Behavioral and Cognitive Neuroscience at Stanford University recognizes the shortcomings of functional imaging, frequently incorporating intracranial electrocorticography and/or intracranial electrical brain stimulation to investigate the neurological underpinnings of human behavior and cognition. In a recent study, Foster et al. (2015) exploited the spatiotemporal precision of ECoG in three human subjects during conditions of task performance, rest, and sleep. Electrode coverage of the MPC and LPC in these epileptic patients (Figure 1b) offered the valuable opportunity to obtain electrophysiological recordings in these areas and to observe parietal activity and connectivity.

Foster et al. (2015) employed a task that required participants to judge a visually presented statement as true or false, with some statements involving episodic or semantic memory, self- or other-judgments, or arithmetic (Figure 1c). Analysis of the electrophysiological data recorded under these different task conditions demonstrated greater high-frequency broadband (HFB, 70-180 Hz) amplitude in both the RSC/PCC and AG for the episodic condition relative to the other task conditions. HFB has been suggested as an index of neuronal population response and thus presumably reflects increased activity in these regions during episodic retrieval. In addition, the HFB response profiles of the RSC/PCC and AG across task conditions were remarkably similar; activity in other parietal regions failed to so strongly resemble that of the RSC/PCC and AG (see Figure 1d/e).
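For readers curious what “HFB amplitude” means computationally, a common approach is to band-pass the raw voltage trace and take the Hilbert-transform envelope. The sketch below illustrates this on simulated data; the filter choice and simulated signal are assumptions for illustration, and the authors’ actual pipeline may differ in its filtering and normalization details.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def hfb_amplitude(trace, fs, band=(70.0, 180.0)):
    """Band-pass a voltage trace and return its instantaneous
    amplitude envelope via the Hilbert transform."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, trace)          # zero-phase band-pass
    return np.abs(hilbert(filtered))          # analytic-signal amplitude

# Simulated 1 s trace at 1 kHz: a slow rhythm plus a 100 Hz burst
fs = 1000
t = np.arange(0, 1, 1 / fs)
burst = (t > 0.4) & (t < 0.6)
trace = np.sin(2 * np.pi * 10 * t) + burst * np.sin(2 * np.pi * 100 * t)
env = hfb_amplitude(trace, fs)
# The envelope rises during the burst and stays near zero elsewhere
print(env[(t > 0.45) & (t < 0.55)].mean() > env[t < 0.3].mean())  # True
```

Correlating such envelopes across electrodes, averaged per trial, is what yields the RSC/PCC-AG similarity described next.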


Figure 1. Parietal Subregions, Electrode Locations, Experimental Task, and Task Responses

Other analyses confirmed the notable similarity in responses between the RSC/PCC and AG: a strong positive correlation was found between the trial-level mean HFB responses in these areas, with the strongest correlations arising in the episodic and semantic conditions. On a temporal scale, these two sites showed essentially identical response onset latencies for the episodic condition (Figure 2). Cumulatively, these findings suggest that the RSC/PCC and AG may receive simultaneous inputs–perhaps from the MTL–and work in parallel to process different aspects of these inputs and therefore different components of episodic memory.


Figure 2. Correlated HFB Trial Responses between PCC and AG

The study conducted by Foster et al. (2015) also examined parietal activity profiles and connectivity patterns during rest and sleep to delve deeper into the dynamics of MPC and LPC. Resting-state ECoG analysis extracting slow (<1 Hz) ongoing fluctuations in HFB amplitude displayed strong correlations between RSC/PCC and AG; a similar correlation pattern held for the low beta range as well. Resting-state fMRI yielded comparable correlations. Additionally, these correlation patterns persisted in ECoG data collected during stage-2 and stage-3 sleep, reinforcing the similarity of parietal connectivity and activity across three quite behaviorally distinct states (Figure 3). The notion that resting-state activity recapitulates task-driven activity is not novel, and some assert that such spontaneous activity reflects the strength and organization of new connections shaped by recent experience and neuronal activation (Harmelech and Malach, 2013).


Figure 3. Similarity of ECoG Correlation Patterns across Task, Rest, and Sleep States

Overall, the study by Foster et al. (2015) clearly demonstrates the utility of ECoG recordings in probing the spatiotemporal dynamics of the parietal lobe–not only during episodic retrieval per se, but also during rest and sleep. The current findings support and expand upon those of past fMRI experiments (Cabeza et al., 2008; Wagner et al., 2005), revealing highly similar neuronal responses in the RSC/PCC and AG, as well as simultaneity in these responses. Nonetheless, while Foster et al. (2015) have pioneered electrophysiological investigation of parietal activity related to memory, their study is far from exhaustive and demands future research to establish a more comprehensive framework for understanding the precise roles of the MPC and LPC in episodic retrieval.

To hear Dr. Josef Parvizi elaborate on this research, attend his talk on Tuesday, October 11, 2016, at 4:00 pm in the CNCB’s Marilyn G. Farquhar Seminar Room.

Cabeza, R., Ciaramelli, E., Olson, I.R., and Moscovitch, M. (2008). The parietal cortex and episodic memory: an attentional account. Nat. Rev. Neurosci. 9, 613–625.

Foster, B. L., Rangarajan, V., Shirer, W. R., & Parvizi, J. (2015). Intrinsic and Task-Dependent Coupling of Neuronal Population Activity in Human Parietal Cortex. Neuron, 86(2), 578–590.

Greicius, M.D., Krasnow, B., Reiss, A.L., and Menon, V. (2003). Functional connectivity in the resting brain: a network analysis of the default mode hypothesis. Proc. Natl. Acad. Sci. USA 100, 253–258.

Harmelech, T., and Malach, R. (2013). Neurocognitive biases and the patterns of spontaneous correlations in the human cortex. Trends Cogn. Sci. 17, 606–615.

Raichle, M.E., MacLeod, A.M., Snyder, A.Z., Powers, W.J., Gusnard, D.A., and Shulman, G.L. (2001). A default mode of brain function. Proc. Natl. Acad. Sci. USA 98, 676–682.

Wagner, A.D., Shannon, B.J., Kahn, I., and Buckner, R.L. (2005). Parietal lobe contributions to episodic memory retrieval. Trends Cogn. Sci. 9, 445–453.

Gina D’Andrea-Penna is a first-year student in the neurosciences graduate program rotating in Dr. Bradley Voytek’s lab. Her strongest interests lie in the field of cognitive neuroscience, and she aspires to investigate and, one day, comprehend consciousness.

How to make a schizophrenic mouse

Dopamine is perhaps the best known neurotransmitter, almost certainly due to its association with the idea of reward. It’s often brought up to explain why we like the things we do, and how people can develop addictions to different types of rewards. However, dopamine isn’t just a reward chemical; it’s very important for a wide variety of brain processes, including voluntary movement, attention, sensory gating, evaluating the salience of a stimulus, decision making, and motivation. Given all this chemical does, it’s not too surprising that changes in dopamine signaling have been implicated in mental disorders, like schizophrenia, ADHD, and depression. But how, then, does a healthy brain regulate dopamine? And how does this system go wrong?

Larry Zweifel’s lab at the University of Washington studies these questions. Soden et al. examined the effect of a mutation in a gene called KCNN3 that was discovered in a schizophrenia patient (Bowen et al., 2001). This gene codes for an ion channel called SK3 that is activated when calcium levels rise inside the cell and then lets potassium out of the neuron, reducing its excitability. The mutated form, however, has an early stop codon due to a frame shift, and therefore produces only a small fragment of the original protein. Interestingly, this mutation was found to be dominant in cell culture, needing only one copy to exert its full effect and suppress SK3 currents in neurons, likely because the protein fragments bind to and inactivate SK3 channels (Miller et al., 2001).

Since SK3 is expressed in dopamine neurons and was mutated in a schizophrenia patient, it seems a promising candidate for a gene regulating dopamine function. Soden et al. tested the effect of this mutation in a mouse model by adding the mutated gene into the genome of dopamine neurons using a viral vector and the Cre-lox system. Indeed, they found that dopamine neurons in the mice with the mutant gene were more excitable and fired less regularly than usual, making them more prone to firing bursts of action potentials.


These bursts are thought to be a functionally different form of dopamine signaling than the neurons’ regular spiking, causing different effects in dopamine-responsive brain regions, so this altered neuronal function should correspond to altered behavior in tasks where dopamine is important. Since dopamine is involved in sensory gating, meaning the brain’s filtering of irrelevant stimuli, the researchers tested this ability in the mutant mice. In their task, the mice were presented with two sounds, one of which was always followed by a reward (a sugary pellet), and one of which was rarely followed by a reward. The mice learned to look for the pellet quickly after the more predictive sound, but not after the other. Once the mice had learned to distinguish the sounds, the researchers flashed a light at the same time the reward-predictive sound was played. The normal mice became distracted, but the mutant mice paid no attention to the novel stimulus and still proceeded quickly to the pellet, indicating that their sensory gating was altered.


The researchers also tested their mice on prepulse inhibition (PPI), a neurological process by which the startle response of an animal to a sudden, high amplitude stimulus, such as a loud sound, is reduced if the strong stimulus is preceded by a weaker one. This phenomenon occurs in both mice and humans, is affected by dopamine-modulating drugs, and is reduced in people with schizophrenia. Indeed, the control mice showed prepulse inhibition, while the mutant mice did not.
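PPI is conventionally quantified as the percent reduction in startle magnitude when the prepulse precedes the strong stimulus. The sketch below uses hypothetical startle values (not data from the paper) to show how control and mutant phenotypes look numerically:

```python
def percent_ppi(startle_alone, startle_with_prepulse):
    """Prepulse inhibition expressed as percent reduction in startle."""
    return 100.0 * (1.0 - startle_with_prepulse / startle_alone)

# Hypothetical startle magnitudes (arbitrary units), for illustration only
control = percent_ppi(startle_alone=10.0, startle_with_prepulse=4.0)
mutant = percent_ppi(startle_alone=10.0, startle_with_prepulse=9.5)
print(round(control, 1))  # 60.0 -> robust prepulse inhibition
print(round(mutant, 1))   # 5.0  -> inhibition largely absent
```

A mouse whose startle barely shrinks after a prepulse, like the hypothetical mutant here, is said to show reduced PPI, which is the pattern observed in the mutant mice and in people with schizophrenia.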


This paper is significant in that the authors were able to demonstrate a link across multiple levels of biology, from disrupted gene function to neuronal function to behavior. As the KCNN3 gene lies in a chromosomal region (1q21) that is associated with schizophrenia, it’s possible that this gene, and pathological processes similar to those shown here by the authors, are at play in more cases of schizophrenia. The ability to understand how the brain is disrupted across different scales in psychiatric illness is crucial to developing better, targeted treatments for these conditions.

Bowen, T. et al. Mutation screening of the KCNN3 gene reveals a rare frameshift mutation. Mol. Psychiatry 6, 259–260 (2001).
Miller, M. J. Nuclear Localization and Dominant-negative Suppression by a Mutant SKCa3 N-terminal Channel Fragment Identified in a Patient with Schizophrenia. Journal of Biological Chemistry 276, 27753–27756 (2001).
Soden, M. E. et al. Disruption of Dopamine Neuron Activity Pattern Regulation through Selective Expression of a Human KCNN3 Mutation. Neuron 80, 997–1009 (2013).
Jacob Garrett is a first-year PhD student in the neurosciences program. He has not yet narrowed his interests enough to provide any sort of useful description here.

Discovering the Origins of Bullying

Many of us have witnessed bullying or even been victims ourselves. We may remember seeing a cornered victim angrily fighting back against the bully. We can empathize with this person because his or her aggression is “reactive” to threats. In contrast, “proactive aggression” – as often observed in bullies – is intentional, independent of external cues, and sometimes motivated by aggression itself as a reward. This makes us wonder: what drives a person to hurt another person, unprovoked? While most past research addresses reactive aggression, little is known about the neural mechanisms of self-initiated aggression. Dr. Dayu Lin, an assistant professor at NYU Langone Medical Center and an expert on aggression circuits, sought to unravel the mechanisms that drive aggression-seeking behaviors in a recent study published in Nature Neuroscience.

The hypothalamus is a brain area that governs our basic survival needs, such as hunger, thirst, body temperature, sleep, sex, and aggression. Dr. Lin’s research focuses on the ventrolateral part of the ventromedial hypothalamus (VMHvl), a region previously found to be critical for aggression in male mice. Silencing the VMHvl reduces inter-male aggression, while stimulating it promotes attack toward other males and even females. An interesting question then arises: does stimulating the VMHvl trigger attack itself, or does it build up the motivation to attack?

Figure 1

The resident-intruder test, the most commonly used aggression assay, introduces an intruder into the subject’s home cage as an external cue; it therefore mainly tests reactive aggression and makes it difficult to address the motivation to attack. Using a modified self-initiated aggression (SIA) task (Fig. 1b), researchers in Dr. Lin’s lab separated the aggression-seeking phase from the attack phase and could thus explore the motivational aspect of aggression. In each session, a male mouse was allowed to freely choose between two nosepoke ports: the mouse would receive free access to a submissive mouse (following a brief waiting period) if it poked the “social port” instead of the “null port”. After training, more than half (56.6%) of the subjects quickly learned to prefer the social port and became highly aggressive, attacking the submissive male on most trials; this recapitulates some core features of proactive aggression.

Figure 2

To understand the role of the VMHvl in each phase of aggression, the authors implanted electrodes to record neuronal activity. They found that different neurons show different activation patterns across the distinct phases of aggression. For instance, the neuron in Figure 2a fires only when the submissive male is present, while other neurons (Fig. 2b-e) are also activated during the aggression-seeking phase. Figure 2g demonstrates that the activities of subpopulations of neurons peak at different phases (poke/wait/interaction), indicating that different populations of VMHvl neurons are relevant to various components of aggression.

To track how VMHvl population activity changes as animals learn the behavior, the authors used fiber photometry to visualize and measure changes in intracellular calcium. They found that early in training, the population activity increased only in the interaction phase, whereas late in training a robust increase in population activity was already observed in the aggression-seeking phase, even before the mouse poked the port. This result suggests that VMHvl neurons undergo a learning-dependent change in plasticity as the mice learn to become proactively aggressive.
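Fiber photometry signals like these are usually summarized as ΔF/F, the fractional change in fluorescence relative to a baseline period. Here is a minimal sketch on simulated numbers (the trace, baseline window, and values are illustrative assumptions, not the authors’ pipeline):

```python
import numpy as np

def delta_f_over_f(fluorescence, baseline):
    """Normalize a photometry trace as (F - F0) / F0, where F0 is the
    mean fluorescence over a baseline window of samples."""
    f0 = fluorescence[baseline].mean()
    return (fluorescence - f0) / f0

# Simulated trace: baseline of 100 a.u. with a transient up to 120 a.u.
trace = np.full(500, 100.0)
trace[200:300] += 20.0
dff = delta_f_over_f(trace, baseline=slice(0, 100))
print(dff[:100].max())  # 0.0 during the baseline window
print(dff[250])         # 0.2 at the transient peak
```

Comparing when such transients appear relative to the poke/wait/interaction phases is what lets the authors see activity shifting earlier in the trial as learning progresses.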


To directly test the functional significance of VMHvl neurons in aggression-seeking, the authors further silenced VMHvl neurons by expressing inhibitory engineered receptors (DREADDs) or stimulated them using optogenetics. They found that inactivation of the VMHvl reduces aggression-seeking behaviors (Fig. 3f), while stimulation of the VMHvl accelerates aggression-seeking by reducing poking latency (Fig. 3b-d).

Overall, these findings suggest the VMHvl can mediate both aggression-seeking and the act of attack itself. Dr. Lin’s research has brought new insights to our understanding of aggression circuits – not only by identifying the VMHvl as critical for proactive aggression, but also by demonstrating its dual role in both triggering a behavior and modulating the motivational state that generates it.


Please come join us on Tuesday, May 3rd, at 4pm in the CNCB Marilyn Farquhar Seminar Room to hear more about this story from Dr. Dayu Lin!



Sharon Huang is a first-year student in the neurosciences graduate program currently rotating in Dr. Xin Jin’s lab. Her scientific interests center around how neural circuits dictate social behaviors.

“We could smell so much better if we’d just work together.” – anonymous ORN

The study of sensory systems provides an opportunity to observe the stunning complexity of the nervous system. Even Darwin himself once remarked that the organization of the eye was so intricate that the odds of it arising from natural selection seemed almost “absurd in the highest possible degree.” Dr. Chih-Ying Su at UCSD focuses her research on the functional organization of olfactory systems in an effort to determine how olfactory receptor neurons (ORNs) are able to process a staggering range of different odors without requiring an infinite number of ORNs to do so.

Dr. Su carries out her research in fruit flies and mosquitoes. In such insects, ORNs are compartmentalized into units called sensilla. Although ORNs are thought to respond to specific odorants independently, the functional significance of their compartmentalization into these stereotyped groups is unknown. Dr. Su investigated this organization by analyzing the relationship between pairs of ORNs within a given sensillum. To do this, she would expose the ORNs to an odorant that elicits consistent firing of one neuron, and then intersperse bursts of a second odorant to produce firing from the second neuron.


How do two ORNs behave when they are both activated? As it turns out, it appears that the activation of an ORN has an inhibitory effect on neighboring ORNs. Thus, if neuron A is exposed to an odorant consistently and then a second odorant is introduced, the activation of neuron B reduces the firing of neuron A. To rule out the possibility that the second odorant acts directly on neuron A to suppress its firing, the same assay was carried out following ablation of neuron B. Without communication from its neighbor, neuron A failed to demonstrate inhibited firing. The same results were demonstrated when the roles of the neurons were reversed (i.e. neuron A can also inhibit the sustained firing of neuron B), as well as when the neurons were activated by Channelrhodopsin2 rather than by an odorant stimulus.

This phenomenon is known as lateral inhibition. Interestingly, Dr. Su observed that lateral inhibition between ORNs within a sensillum does not rely on classic synaptic transmission. In fact, it doesn’t require synapses at all. She expressed tetanus toxin (TNT) specifically in ORNs to silence synaptic transmission. Even in these genetically-modified flies, lateral inhibition of a similar degree to control flies was observed. Based on Dr. Su’s characterization of these neurons, it appears that the ORNs communicate via ephaptic transmission, which is a special kind of communication that occurs between adjacent neurons via an extracellular electrical field.

The identification of the functional significance of insect sensilla is particularly intriguing because it provides insight into how odorant mixtures are processed. This is a unique approach to the study of olfaction, as many researchers commonly use only single odorants in their experiments. While that is certainly useful, this two-odorant approach allowed Dr. Su to simulate a more life-like scenario inside the laboratory. By virtue of this, her research could give way to novel methods of insect control in the real world. Perhaps it will soon be possible to utilize odorant mixtures to suppress specific ORNs that would normally drive insects towards a specific target via activation of their neighboring ORNs.

To learn more, check out Dr. Chih-Ying Su’s talk in the CNCB Marilyn Farquhar Seminar Room at 4pm on April 26, 2016.

Caroline Sferrazza is a first year student in the UCSD Neurosciences Graduate Program. She is currently rotating in Dr. Rusty Gage’s lab at the Salk Institute where she uses stem cells to study Bipolar Disorder. When she has a chance to ditch the lab coat, she can generally be found anywhere there’s good music or cute animals. Preferably both.

Olfactory Learning: From Molecules to Behavior

“What are you eating? It smells delicious,” you hear someone say from across the room as your coworker walks in with a presumably tasty meal. You breathe in as the odor wafts by you and immediately feel sick to your stomach. That nauseating smell belongs to the same meal you bought last week that made you sick. You try to stay, but the smell is so repugnant that you end up leaving. If this has ever happened to you, you have experienced aversive learning in action. Learning is critical to our ability to navigate the world. We can learn because our brains are plastic, meaning that they can change as a result of experience. How is learning mediated by changes to our underlying neural circuits? This is the type of question that neuroscientist Dr. Yun Zhang has been studying at the Center for Brain Science at Harvard University. In order to identify the neural mechanisms of learning, Dr. Zhang focuses on studying the roundworm C. elegans. A major advantage of studying these organisms is that neuroscientists have been able to map the connectivity of all 302 neurons of the C. elegans nervous system. Coupled with a variety of molecular, cellular and genetic tools, C. elegans has allowed neuroscientists to identify molecules, neurons and circuits involved in learning.

In order to study learning, Dr. Zhang’s lab has focused on olfactory learning, which plays an important role in identifying and locating food sources. Because locating food is essential to the survival of all animal species, it is likely that the basic mechanisms of olfactory learning are highly conserved. Dr. Zhang first showed that through olfactory learning C. elegans can learn to avoid pathogenic bacteria (Zhang et al. 2005). C. elegans eat bacteria, and in the lab these organisms are typically grown on a plate containing a harmless strain of Escherichia coli, OP50. However, some types of bacteria, such as P. aeruginosa PA14, are pathogenic and cause infection. C. elegans show different preferences for the bacterial odors of these two strains depending on their experience. Worms raised on a plate containing only OP50 bacteria showed no difference in preference for OP50 versus PA14 bacterial odors. However, worms raised on a plate containing both OP50 and PA14 showed increased aversion toward PA14. The same was true for worms raised on a plate containing only OP50 but exposed to PA14 bacteria for four hours before testing. These worms avoided the PA14 bacterial odors much like you would avoid food that made you sick.

What neural circuit mechanisms allow C. elegans to learn to avoid pathogenic bacterial odors? One clue came from studying the behavior of mutant worms. The study found that tph-1 mutants, which are serotonin-deficient, were unable to learn to avoid pathogenic bacteria. Because C. elegans have a small number of neurons, the researchers were able to identify the neurons that were not functioning properly in the mutants. They showed that expressing tph-1 in the ADF neuron of tph-1 mutants rescued aversive olfactory learning. They then showed that the MOD-1 serotonin-gated chloride channel was necessary for olfactory learning. Together, these data suggest that serotonin plays a central role in olfactory aversive learning.

Further work in Dr. Zhang’s lab has continued to dissect the neural circuit mechanisms involved in aversive olfactory learning. They showed that two sensory neurons, AWB and AWC, are required for C. elegans to display their innate preference for PA14 bacteria, and that the ADF neurons and their corresponding modulatory circuit are necessary for aversive olfactory learning to occur (Ha et al., 2010). Ha et al. were able to map out a circuit corresponding to naïve and learned preferences. More recently, Dr. Zhang’s lab identified a variety of molecular components involved in aversive olfactory learning circuits, including Gα proteins, guanylate cyclases, and cGMP-gated channels, and showed that neuropeptides mediate food-odor preferences (Harris et al., 2014). While there is still much more to discover, these experiments have begun to demystify how experience-dependent learning occurs at a molecular level.

To find out more, please join Dr. Yun Zhang in the CNCB Marilyn Farquhar Seminar Room at 4 PM on April 19th, or check out some of her awesome publications on the topic below.


Ha, H. I., Hendricks, M., Shen, Y., Gabel, C. V., Fang-Yen, C., Qin, Y., … & Zhang, Y. (2010). Functional organization of a neural network for aversive olfactory learning in Caenorhabditis elegans. Neuron, 68(6), 1173-1186.

Harris, G., Shen, Y., Ha, H., Donato, A., Wallis, S., Zhang, X., & Zhang, Y. (2014). Dissecting the signaling mechanisms underlying recognition and preference of food odors. The Journal of Neuroscience, 34(28), 9389-9403.

Zhang, Y., Lu, H., & Bargmann, C. I. (2005). Pathogenic bacteria induce aversive olfactory learning in Caenorhabditis elegans. Nature, 438(7065), 179-184.

Andre DeSouza is a first year student in the UCSD Neuroscience Program. He is currently rotating in Dr. Martyn Goulding’s lab, where he is working on a project investigating the mechanisms of pain and itch in the spinal cord.

Living on the Edge: how organisms harness dynamical systems to accomplish more with less

Stability is overrated. A cadaver occupies an energetically stable state. Stable systems cannot adapt to changes in their environment. For a living creature, it is more useful to be finely balanced. Like a coin on its edge, the slightest perturbation unleashes enormous changes in its state. So it is with hair-cell bundles.

The hair bundle is a phylogenetically ancient sensory structure. It can be found in the mammalian hearing and vestibular systems, and in the lateral line systems of fish and amphibians. In each case the hair bundles detect mechanical disturbances in an ambient fluid; however, the nature of these disturbances varies from system to system and from organism to organism. The hair bundles of the ears are tuned to oscillatory displacements, those of the vestibular system detect bodily accelerations, and those in the lateral line are sensitive to pulsatile changes in water pressure. These systems simultaneously boast impressive sensitivity and an extreme dynamic range. It is implausible that a linear system, with outputs directly proportional to inputs, could accommodate the millionfold range of disturbance amplitudes that hair bundles routinely process. Thus, these electromechanical structures must be actively applying non-linear amplification to their displacements. The mechanisms of this amplification, however, are poorly understood.

In order to address this mystery, Hudspeth and colleagues first built a dynamical model capable of replicating the sensory performance and variety of states hair bundles are observed to achieve [1]. While I apologize to the reader for including an equation in a blog post, Hudspeth’s relatively simple model consisted of a system of just two differential equations and two variable parameters:


in which x is the bundle’s displacement, f is the force owing to adaptation, a is a negative stiffness owing to gating of the transduction channel, τ is the timescale of adaptation, b is a compliance coupling bundle displacement to adaptation, K is the sum of the bundle’s load stiffness and pivot-spring stiffness, Fc is the sum of the constant force intrinsic to the hair bundle and that owing to the load, and F is any time-dependent force applied to the bundle; ηx and ηf are noise terms. Parameters a, b, and τ were held constant, and the state space of the system was explored by varying K and Fc, the load stiffness and constant force, respectively. The resulting state space of these parameter manipulations is displayed in the figure below.
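To convey how a two-variable model with a load stiffness K and a constant force Fc can sit in either a quiescent or an oscillatory regime, here is a minimal numerical sketch. The cubic gating nonlinearity and every parameter value below are illustrative assumptions, not the published equations:

```python
import numpy as np

def simulate(K, Fc, a=3.5, b=0.9, tau=10.0, F=0.0,
             x0=0.1, f0=0.0, dt=0.01, t_max=400.0):
    """Forward-Euler integration of a two-variable bundle-like model.

    x: bundle displacement; f: adaptation force. The cubic term is a
    stand-in for the gating nonlinearity (an assumption, not the paper's form).
    """
    n = int(t_max / dt)
    xs = np.empty(n)
    x, f = x0, f0
    for i in range(n):
        u = x - f                        # displacement relative to adaptation
        dx = a * u - u**3 - K * x + Fc + F
        df = (b * x - f) / tau
        x += dt * dx
        f += dt * df
        xs[i] = x
    return xs

# Same model, two operating points: discard the transient,
# then measure the peak-to-peak displacement.
osc = simulate(K=2.0, Fc=0.0)[20000:]    # low load stiffness: oscillates
quiet = simulate(K=5.0, Fc=0.0)[20000:]  # high load stiffness: quiescent
print(np.ptp(osc), np.ptp(quiet))
```

Sweeping K and Fc over a grid and recording which runs oscillate reproduces, qualitatively, the kind of state diagram described below.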


In the white quiescent regions, the hair bundles demonstrate no movement at all. In the green bistable regions, the structure is mostly quiescent but will occasionally lurch from one configuration to another. In the orange region, the hair bundles spontaneously oscillate between configurations, and regular gradients of oscillation amplitude and frequency exist within this regime. This is an elegant mathematical tale, but (A) does it exist in biology? And (B) what does this model have to do with amplification? Both questions are addressed in Hudspeth and colleagues’ experimental paper on the matter [2].

To probe their in vitro state space, a novel apparatus was devised to mechanically clamp bullfrog hair bundles in a manner analogous to the venerable patch clamp. Mechanical force was applied to the hair bundle via a flexible glass filament. The mechanical load, once set, was maintained continuously by optically measuring the hair bundles’ displacement and adjusting the force applied with a piezoelectric actuator. Thus, the state-space mapped out theoretically could be experimentally tested by adjusting the load stiffness and constant force parameters.

In general, the bullfrog hair bundles behaved in a manner consistent with the model’s predictions. In small- and medium-sized hair cell bundles there exists a contiguous region of state-space in which spontaneous oscillations occur. In addition, the amplitude and frequency tunings of these oscillations follow the predicted gradient. Thus, the dynamical model appears to be a good representation of the biological system.

As shown in the figure above, the theoretical state space contains a special region along the edge of the oscillatory regime. This narrow space, labeled the line of Hopf bifurcations, is where the dynamical system is least stable: precariously balanced between periodic and quiescent behaviors. This instability allows small perturbations to effect large state changes, i.e., amplification. Indeed, when hair bundles were mechanically clamped to states near the edge of the oscillatory regime and then stimulated with periodic displacements appropriate to their state-position, the degree of amplitude non-linearity in their resonant response was maximized. Furthermore, frequency tuning was narrowest in this border region, perhaps indicative of the fragility of the dynamical system.
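The amplification benefit of sitting at a Hopf bifurcation can be seen in the textbook normal form. Exactly at the bifurcation, the steady amplitude w of the response to a resonant force F obeys dw/dt = −w³ + F, so w settles at F^(1/3): a hundredfold drop in forcing costs only a ~4.6-fold drop in response. This generic normal-form sketch (standard dynamical-systems material, not the hair-bundle model itself) checks the cube-root scaling numerically:

```python
def steady_amplitude(F, dt=0.01, t_max=500.0):
    """Relax dw/dt = -w**3 + F to its fixed point w = F**(1/3)."""
    w = 0.0
    for _ in range(int(t_max / dt)):
        w += dt * (-w**3 + F)
    return w

for F in (1e-3, 1e-2, 1e-1, 1.0):
    print(F, steady_amplitude(F))  # amplitudes ~0.10, 0.22, 0.46, 1.00
```

Small inputs thus enjoy an enormous gain (w/F = 100 at F = 0.001) while large inputs are compressed, which is exactly the combination of sensitivity and dynamic range described above.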

Dynamical systems are common in nature, doubly so in the complex substrates of cellular biology. Force multiplication is not a concept limited to sensory systems. A minuscule concentration of a signaling molecule can trigger macroscale developmental patterning, and the release of just a few hormone molecules can set off complex cascades. The leveraging of unstable dynamical systems to achieve large outputs from small inputs may one day be regarded as a general principle of biology.

Learn more when Dr. Jim Hudspeth discusses “Making an effort to listen: mechanical amplification by myosin molecules and ion channels in hair cells of the inner ear” at 4 pm this Tuesday, March 29th in the CNCB Marilyn Farquhar Seminar Room.


  1. Maoiléidigh, D.Ó., Nicola, E.M. and Hudspeth, A.J., 2012. The diverse effects of mechanical loading on active hair bundles. Proceedings of the National Academy of Sciences, 109(6), pp. 1943-1948.
  2. Salvi, J.D., Maoiléidigh, D.Ó., Fabella, B.A., Tobin, M. and Hudspeth, A.J., 2015. Control of a hair bundle’s mechanosensory function by its mechanical load. Proceedings of the National Academy of Sciences, 112(9), pp. E1000-E1009.


Burke Rosen is a first year student in the UCSD Neurosciences Graduate Program. He is currently between rotations and is interested in using clever signals analyses to make somewhat educated guesses about unfathomably complex phenomena. 




Targeting Pain at its Source

Anyone who’s accidentally touched a hot stove can understand that pain protects us. It’s a warning signal to remove our bodies from noxious stimuli (don’t keep your hand on a stove) and a powerful reminder to avoid them in the future (order take-out). While many acute forms of pain are clearly giving us helpful information about the safety of our environment, chronic pain can be uselessly debilitating.

Chronic pain affects millions of Americans every year. Currently, opioids are prescribed to treat pain in a short-term setting. These drugs work by binding to opioid receptors in the brain to reduce the perception of pain. Opioids also act on the reward centers of the brain, which can lead to dependence and abuse. According to the CDC, opioids were involved in 28,648 deaths in 2014. In addition to supporting addiction treatment, a major priority in curbing opioid addiction should be finding alternative drugs for pain management.

So what are some possible alternatives to opioids? One major area of research is to target the primary sensory neurons that detect painful stimuli in the first place. These neurons, called nociceptors, transduce chemical, thermal, and mechanical pain into an afferent signal that lets you know you’re hurt. One group of ion channels that transduces pain signals is the Transient Receptor Potential (TRP) ion channel family. Two particular channels, TRPA1 and TRPV1, have been studied as potential targets for painkillers. However, antagonists that block their function also impair many normal sensory functions, like thermal sensation and protective, acute pain. Because of these side effects, clinical trials of these antagonists failed.

Luckily, the activity of TRPA1 and TRPV1 is complex, and there are more potential ways to reduce their pain signaling than simply turning them off. Dr. Xinzhong Dong and his lab, who also study itch, targeted the interaction of TRPA1 and TRPV1 to see if they could alter pain signaling in primary sensory neurons (Weng et al., 2015).

TRPA1 and TRPV1 can form a heteromer in sensory neurons, and this interaction is thought to be involved in the nociceptive pathway. Weng et al. identified a transmembrane protein, Tmem100, that modulates the TRPA1-TRPV1 complex in a way that increases the pain-related signaling from TRPA1.

To determine whether Tmem100 actually affects the perception of pain, Weng et al. performed a barrage of behavioral assays to see if pain responses differed between regular mice and mice in which Tmem100 had been genetically deleted in primary sensory neurons. Unlike the failed TRPA1 antagonist clinical trials, the Tmem100 knockout produced a decrease in mechanical hyperalgesia (an undesired pain) while preserving important TRP functions like thermal and mechanical sensitivity.

To understand how Tmem100 affects the TRPA1-TRPV1 complex, Weng et al. electrically recorded from patches of membrane where TRPA1-TRPV1 and Tmem100 were located. They subjected the cells to chemicals that normally activate either TRPA1 or TRPV1, and found that TRPA1 was much more likely to respond to the stimulus when Tmem100 was also present.

Next, after showing that Tmem100 binds to both TRP channels, Weng et al. set out to structurally alter the protein. They found a sequence of the protein on the intracellular domain of Tmem100 that was positively charged and likely to be a binding site for TRPA1 and/or TRPV1. When the putative binding site was changed to an uncharged sequence, binding between the mutant protein, Tmem100-3Q, and TRPA1 was abolished, but binding with TRPV1 was unchanged.

Then, when Weng et al. recorded from membrane patches with the TRPA1-TRPV1 complex and the mutant Tmem100-3Q present, they found a surprising result: Instead of increasing the probability of TRPA1 activity like the wild type Tmem100, Tmem100-3Q actually decreased the probability of TRPA1 responding to noxious chemicals. Moreover, they didn’t even need the full transmembrane protein to achieve this reduction in pain signaling. Weng et al. created a cell permeable peptide (CPP) that mimics the new binding site of the Tmem100-3Q mutant and injected it into wild-type mice. In addition to recapitulating the electrophysiological effects of the Tmem100-3Q mutant, this injectable peptide reduced mechanical hyperalgesia in normal animals, while leaving mechanical and thermal sensitivity intact.

Taken together, these experiments identify Tmem100 as a modulator of TRPV1-dependent TRPA1 activity and introduce a mutant CPP as a viable therapeutic agent for local pain relief.



Don’t miss Dr. Xinzhong Dong discuss “Mechanisms of Itch and Pain” at 4 pm this Tuesday, March 8th in the CNCB Marilyn Farquhar Seminar Room.



Weng, H. J., Patel, K. N., Jeske, N. A., Bierbower, S. M., Zou, W., Tiwari, V., … & Geng, Y. (2015). Tmem100 is a regulator of TRPA1-TRPV1 complex and contributes to persistent pain. Neuron, 85(4), 833-846.

Weyer, A. D., & Stucky, C. L. (2015). Loosening Pain’s Grip by Tightening TRPV1-TRPA1 Interactions. Neuron, 85(4), 661-663.

Rachel Cassidy is a first year student in the UCSD Neurosciences Graduate Program. She is currently rotating in Jeff Isaacson’s lab studying the circuitry of the auditory cortex.


From Stimulus to Movement, How You Ate That Pie

At any moment, we are confronted with a seemingly endless array of stimuli, or sensory events, and a correspondingly large number of choices. Based on incoming traffic, for instance, you might (wisely) decide not to attempt to cross the street. In this case, the stimuli impinging upon your senses include auditory (e.g. cars honking, or slightly angry folks cursing you out), visual (e.g. cars flashing their lights, or slightly angry folks making obscene gestures), somatosensory (e.g. if you feel a car, you may have made a poor decision), and those of any of your other favorite senses.

We know that, at some point, our brains use this sensory information to inform our decisions to commit, or not commit, an action – such as walking across the street, or staying put; in short, whether to initiate, or suppress, movement. During this almost unconscious process, neurons in primary motor cortex (M1), a region of the brain intimately involved in the learning and execution of movements, demonstrate different activity patterns. Although others have suggested that these different patterns may simply reflect different task or movement parameters, David McCormick and his lab propose that these neurons are interacting in order to control the initiation of movement. Specifically, control over movement initiation or suppression may depend on inhibition, although it is unknown how the brain implements this form of behavioral control.

To get at this mechanism, Zagha et al. (2015) first trained mice until they became veritable experts in a simple task: water-restricted, head-fixed mice had their whiskers deflected (the ‘target’), and then had to lick for the trial to be considered ‘correct’. In 80% of trials, a tone was also presented before the whiskers were deflected; in this case the mice had to withhold licking until their whiskers were deflected, at which point licking would earn them their reward (i.e. water; Fig1A, C). These trials were particularly important because they discouraged impulsivity, as evidenced by the higher hit rate and lower false alarm rate in expert mice compared to novices (Fig1E, F). Since expert mice were not impulsive, the authors could look at M1 neurons as the mouse was making a conscious decision not to lick (i.e. a suppression of movement).


Fig 1 – Mice become experts! A) The setup B) With more obvious stimuli (i.e. faster speed), the mouse performs better C) Task structure D) Successful trial, with licking after target onset E, F) Mice do better! G) … Unless you inhibit M1

Curiously, by silencing M1 via injection of muscimol, a GABA-A receptor agonist, or via optogenetic activation of PV-containing GABAergic interneurons, Zagha et al. noticed that expert mice made many more false alarms (Fig1G). This implied that M1 is involved in the suppression of movements.

To figure out what was going on in M1, the authors then used loose-patch recordings or multi-electrode arrays to record from neurons in layer 5 of M1. They noticed that some neurons showed an increase in spike rate after stimulus onset but before whisking or licking (Fig2A, C), while another population showed a decrease in spike rate over the same period (Fig2B, D); simultaneous recordings showed that these two populations co-occurred (Fig2E). By looking at the change in firing rates across the target stimulus on ‘hit’ (correct) trials, they found two populations of neurons, fitted by two Gaussians: one with spike-rate enhancement and another with suppression (Fig2I), further supporting their claim of two distinct populations of neurons in M1.


Fig 2 – A,C) Some neurons become more active (i.e. enhanced population) B,D) Some neurons become less active (i.e. suppressed population) E) Both populations co-occur during a task F-I) By fitting to Gaussians, we see two distinct populations of neurons (blue and red Gaussians)

Because these neurons could also be active during other epochs of the task, they might not actually represent the anticipation of movement, or the formation of a decision to (not) move. The authors addressed this by looking at the average neural activity of target-modulated neurons across different conditions and epochs of the task. Their activity was stable across tone presentation, which implied that they aren’t representing a sensory response (Fig3A, F). However, after presentation of the target, one population had a sustained increase in activity, and the other a decrease (Fig3B, G). On ‘miss’ trials, these changes in activity were absent or less pronounced (Fig3C, H), and both of these populations of neurons ramped up, or down, their activity in anticipation of licking, even in off-target responses (i.e. false alarms and spontaneous licking; Fig3D, E, I, J). All of this suggested that these two populations of neurons represent an upcoming motor choice.


Fig 3 – A,F) Our two populations of target-modulated neurons aren’t modulated by the tone B,G) …But they are modulated by the target C,H) …And you see less modulation when the mouse was incorrect on that trial D,I) They can also be modulated in anticipation of an action (i.e. licking) E,J)…Even when the movement is incorrect

Since it seemed that these neurons were involved in the anticipation of movement, and their activity was anti-correlated (Fig4F), the authors next investigated the possibility that they inhibited one another, as that is one possible mechanism that could produce anti-correlated activity. Simulations verified that lateral inhibition could produce anti-correlation (Fig4A-D), and the authors, by using Granger causality, found that the enhanced firing (Enh) and suppressed firing (Supp) populations inhibited one another, and that this could be behind their anti-correlated activity (Fig4D, F). Furthermore, the authors simulated their competitive ensemble model (Fig5A), and found that a transient stimulus could be turned into a sustained response due to its intrinsic dynamics (Fig5B). In fact, their simulated data was qualitatively similar to the neural data they observed (Fig5C-F).


Fig 4 – A-D) Four different two-ensemble circuit models. You can get anti-correlated activity by either anti-correlated inputs (B), or lateral inhibition (C,D), or both. E) The Enh and Supp populations are anti-correlated F)…And it seems that mutual inhibition is to blame
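The lateral-inhibition mechanism can be illustrated with a toy rate model: two ensembles with self-excitation and mutual inhibition form a bistable switch, so a brief target-like pulse to the enhanced ensemble flips the circuit out of its movement-suppressing state, and the two activity traces come out anti-correlated. This is a generic sketch of the mechanism, not the authors’ simulation; all parameter values are arbitrary choices:

```python
import numpy as np

def rate(I, r_max=10.0):
    """Piecewise-linear firing-rate nonlinearity."""
    return float(np.clip(I, 0.0, r_max))

def run(w_self=2.0, w_inh=1.5, I0=1.0, tau=0.05,
        dt=0.001, t_max=2.0, pulse=(0.5, 0.8), amp=20.0):
    n = int(t_max / dt)
    rE, rS = 0.0, 10.0   # start in the movement-suppressing state (Supp wins)
    E, S = np.empty(n), np.empty(n)
    for i in range(n):
        t = i * dt
        stim = amp if pulse[0] <= t < pulse[1] else 0.0  # transient "target"
        dE = (-rE + rate(w_self * rE - w_inh * rS + I0 + stim)) / tau
        dS = (-rS + rate(w_self * rS - w_inh * rE + I0)) / tau
        rE += dt * dE
        rS += dt * dS
        E[i], S[i] = rE, rS
    return E, S

E, S = run()
print(E[-1], S[-1])             # the pulse leaves Enh active, Supp silent
print(np.corrcoef(E, S)[0, 1])  # the two traces are anti-correlated
```

Note that the new state persists long after the pulse ends, which is the toy analogue of a transient stimulus producing a sustained response.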

The authors also found that some neurons were modulated by the target only, by the motor command only, or by both the target and the motor command. This motor-specific population of neurons displayed ramping activity, which could be used to trigger movement if a movement command is issued only when the firing activity of the population reaches a bound (this process is called accumulation-to-bound; Fig6A-F). Since these neurons modulated their activity late in the decision process, they were likely not involved in the aforementioned state transitions (Fig5B).



Fig 5 – A) Competitive ensemble model B) Phase plan shows how a transient stimulus can lead to a miss or hit C,D) Two simulations, which depict how both ensembles’ firing activity change when the stimulus does (red or blue) or does not (black) lead to a stable transition E,F) It’s cool that the neural data seems to follow along with the simulations



Fig 6 – A-C) Some neurons have enhanced activity, where the peak activity is aligned to the target, and likely act as sensory representations. Note how the timing of their peak activity is unrelated to the mouse’s reaction time (RT) D-F) Other neurons show ramping activity, where the peak is aligned to movement, not the target. Here, RT does matter, with earlier peaks for quicker RTs
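The accumulation-to-bound idea itself is easy to sketch: integrate a noisy drive until it crosses a threshold, issue the movement command at the crossing, and stronger drive yields earlier crossings, i.e. shorter reaction times. This is a generic drift-to-bound toy, not the authors’ model; the numbers are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def reaction_time(drift, bound=1.0, noise=0.1, dt=0.001, t_max=5.0):
    """Integrate noisy input until the bound is crossed; return the crossing time."""
    a, t = 0.0, 0.0
    while a < bound and t < t_max:
        a += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

# Average reaction time over many trials for strong vs. weak drive.
fast = np.mean([reaction_time(drift=2.0) for _ in range(200)])
slow = np.mean([reaction_time(drift=0.5) for _ in range(200)])
print(fast, slow)  # the stronger drive reaches the bound sooner
```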

Therefore, it seems that M1 neurons with different activities in anticipation of movement inhibit one another, and can use a transient stimulus to transition from one circuit state to another. The authors suggest that these ‘sensory’ neurons can then drive ramping activity in motor-enhanced neurons to trigger a movement, possibly via the sequential activation of overlapping populations of neurons. The authors carried out further analyses and found that slow, oscillatory dynamics were correlated with poor performance in the task, seemingly by disrupting the anti-correlated activity between the two populations of target-modulated neurons. The mechanisms by which oscillations disrupt anti-correlation, however, remain to be addressed.


In short, it should be well worth coming to the CNCB Marilyn Farquhar Seminar Room on Tuesday, 3/1/16, at 4:00 pm to hear Dr. David McCormick give his talk titled “Neural mechanisms of optimal state”.

The article on which this post was based is here. Enjoy!


Javier How is a first year student currently rotating in Dr. Takaki Komiyama’s lab, where he learns about learning. Due to blistering feedback, he no longer writes haikus about complex topics, as evidenced by this little gem:

My name is Javier,

I no longer write haikus

I was told to stop