Rhythmic Sampling within and between Objects despite Sustained Attention at a Cued Location

Sabine Kastner studies the neural basis of visual perception, attention, and awareness using a translational approach that combines neuroimaging in humans and monkeys, monkey electrophysiology, and studies in patients with brain lesions. The goal of her research is to better understand how large-scale networks operate during cognition, using the visual attention network as a model. Her work aims to answer how large-scale networks set up efficient communication and which neural codes different network nodes use to drive behavior. Dr. Kastner has served as the Scientific Director of Princeton’s neuroimaging facility since 2005. Her contributions to cognitive neuroscience were recognized with the Young Investigator Award from the Cognitive Neuroscience Society in 2005.

In 2013, Dr. Kastner published a paper in Current Biology titled “Rhythmic Sampling within and between Objects despite Sustained Attention at a Cued Location”. The paper investigated the relationship between space- and object-based selection, two attentional mechanisms believed to direct the brain’s limited processing resources. Previous evidence had demonstrated that the preferential processing resulting from a spatial cue (i.e., space-based selection) spreads to uncued locations if those locations are part of the same object (i.e., object-based selection). At the time of publication, however, it was unclear whether a single set of neuronal processes is engaged in both selection mechanisms or whether separable neural processes are involved.

To resolve this question, the paper examined the temporal dynamics of visual-target detection under conditions of space-based and object-based selection. This approach rests on two prior findings: the nature of attentional deployment causes specific changes in the synchronization of local field potentials within and between neural ensembles, and the pre-stimulus phase of theta oscillations (at 7 Hz) had previously been shown to predict the likelihood of visual-target detection under conditions of space-based selection.

The experimental design is shown in Figure 1. Participants (n = 14) maintained central fixation and reported the occurrence of a near-threshold change in contrast (i.e., a visual target) at the end of one of two bar-shaped objects. A spatial cue indicated the location where the visual target was most likely to occur (75% cue validity). Following the cue, a valid or invalid target was presented after a randomly sampled 300–1100 ms cue-to-target interval. The spatial cue served both to guide the deployment of space-based selection and to reset the phase of ongoing neural oscillations, aligning the timing of high- and low-excitability states across trials.

Figure 1. The Experimental Design

Figure 2A shows the time course of visual-target detection, which is detrended in Figure 2B to more clearly reveal the periodic dependence of detection rates on the timing of target onset (the cue-to-target interval). Figure 2C shows the power spectrum of Figure 2B, computed using the fast Fourier transform (FFT). Statistical tests revealed significant peaks at approximately 8 Hz under conditions of both space-based selection (i.e., at the cued location) and object-based selection (i.e., at the same-object location). This indicates that the relationship between the pre-target phase of theta-band oscillations and the likelihood of visual-target detection is not limited to the cued location (as previously shown) but spreads to uncued locations that are part of the same object (i.e., that share visual boundaries with the cued location). The result also suggests that attention-dependent increases in synchronization at the cued and same-object locations arise from common underlying neural processes operating at approximately 8 Hz. The FFT results further showed a consistent phase offset of approximately 90 degrees between the detection traces at the cued and same-object locations across subjects (Figure 2D), suggesting that brain regions representing different locations within the attended object are synchronized at a common frequency but with location-specific phases.

Figure 2. Visual-Target Detection under Conditions of Both Space- and Object-Based Selection Reflects Increased Theta-Band Synchronization. Color coding: cued location (black line), same object (orange line), and different object (blue line).
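To make this detrend-then-FFT analysis concrete, here is a minimal sketch using an invented detection-rate time course with a built-in 8 Hz modulation (the paper’s real data and exact preprocessing are not reproduced here):

```python
import numpy as np

# Hypothetical hit-rate time course across cue-to-target intervals
# (300-1100 ms in 20 ms steps); the 8 Hz modulation is invented.
fs = 50.0                                  # sampling rate in Hz (20 ms steps)
t = np.arange(0.3, 1.1, 1 / fs)            # cue-to-target intervals (s)
hit_rate = 0.6 + 0.1 * np.sin(2 * np.pi * 8 * t)

# Detrend (remove slow drift) before spectral analysis, as in Figure 2B.
detrended = hit_rate - np.polyval(np.polyfit(t, hit_rate, 2), t)

# Power spectrum via FFT, as in Figure 2C.
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(len(detrended), d=1 / fs)
peak_freq = freqs[np.argmax(power[1:]) + 1]  # skip the DC bin
```

With this toy trace, `peak_freq` lands at the spectral bin nearest 8 Hz, mirroring the theta-band peak the authors report.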

Many insights could not even be alluded to in this short post due to the word limit. To learn more, please join us on Tuesday, 3/20/2018, at 4 pm in the Marilyn G. Farquhar Seminar Room at CNCB.

Huanqiu Zhang is a first-year neuroscience Ph.D. student currently rotating in Dr. Maxim Bazhenov’s laboratory.


The origin of the human brain as told by divergent spatiotemporal gene expression patterns

The evolution of complex cognitive, emotional, and motor abilities can be attributed to the arrival and growth of the mammalian neocortex. Among mammals, we humans like to believe our mental capacities are second to none, and that our large brains make us the obvious “dominant species.” Despite popular belief and innuendo… size isn’t everything. Neither brain size nor total neuron number can account for the functional differences between humans and other species. Furthermore, various genetic conditions in humans lead to macrocephaly, in which patients have larger brains but significantly reduced cognitive function. Ultimately, it’s not the size of the brain that matters but the connections among distinct neurons. Neuronal connections are genetically determined and refined during development. Therefore, to understand what makes humans unique, and what endows us with sentience, we need to elucidate the genetic programs that determine neuronal connections in the neocortex and how they differ from those of our evolutionary relatives.

Dr. Sestan is a professor with appointments in neuroscience, genetics, and psychiatry, and is the executive director of the Genome Editing Center at Yale School of Medicine. In general, Dr. Sestan’s lab is interested in understanding the genetic programs responsible for dictating neocortical neural connections during development. To aid this understanding, Dr. Sestan uses the genetic tools available in mouse models and also performs genetic analysis of non-human primates. Ultimately, the lab wants to understand how the genes in specific genetic programs contribute to neuron identity and neocortical layer formation, and the regulatory mechanisms by which these programs may have evolved.

Recently, in a Science paper titled “Molecular and cellular reorganization of neural circuits in the human lineage” (André M. M. Sousa et al., Science 2017;358:1027-1032), the Sestan lab performed transcriptome sequencing of 16 regions of adult human, chimpanzee, and macaque brains to better understand the molecular and cellular differences in brain organization between human and nonhuman primates. Additionally, they used single-cell transcriptomic analysis, which revealed global, regional, and cell-type-specific species differences in the expression of genes representing distinct functional categories. Finally, these methods let them show species-specific expression patterns of genes involved in dopamine biosynthesis and signaling, revealing a subpopulation of interneurons that is enriched in humans but absent in some non-human primates.

The figures below draw on 247 tissue samples from six humans, five chimpanzees, and five macaque monkeys, which yielded an annotation set of 26,514 mRNAs and 1,485 miRNAs. The details of each figure can be found in its legend, but the major takeaways are discussed here.

Figure 1a shows that about 12% (3,154) of the mRNA genes show human-specific differential expression in distinct brain regions when compared with macaque and chimpanzee expression. Figure 1b shows that 13% (202) of the miRNAs exhibit differential expression specific to humans. Lastly, Figure 1c is an example of how the authors broke down the 3,154 mRNAs with human-specific expression patterns, analyzing what the genes code for and which genes were up- or down-regulated in which brain regions.

Figure 1

Figure 2 (figure 3 in the paper) was generated by integrating the transcriptome data from Figure 1 with single-cell RNA sequencing (RNA-seq) data from the human neocortex, allowing the researchers to evaluate gene expression at the cellular level. Overall, they found that genes displaying species- and/or region-specific expression patterns also exhibited cell-type-specific expression patterns. Figures 2a and 2b show radar plots in which each column corresponds to a specific cell type, with only the genes expressed in that cell type plotted. This allowed the researchers to see how genes were expressed across cells and whether they i) were implicated in neuropsychiatric disease, ii) were involved in neurotransmitter processing or trafficking, or iii) encoded ion channels. They used this information for follow-up experiments investigating the molecular profile, developmental origin, and in-vitro characteristics of TH+ interneurons, which are found throughout the cortex in humans and are involved in the biosynthesis of dopamine.

Figure 2

In the end, these researchers were able to show that by analyzing gene expression patterns in various brain regions of the neocortex, they could elucidate evolutionary modifications in genetic programs and neuron distribution associated with neuromodulatory systems that may underlie cognitive and behavioral differences between species.

Elischa Sanders is a first-year Neuroscience Ph.D. student, currently working in Eiman Azim’s lab.

Expanding on our imaging capabilities: Wide field-of-view calcium imaging between brain areas

The brain is made up of billions of neurons, cells exquisitely specialized for communication both with local neighbors and with cells at quite a distance, in other brain areas. Calcium imaging, in which a fluorescent protein or dye reports the calcium influx that accompanies neuronal activity, has given neuroscientists the ability to visualize the activity of individual neurons in mice and other species. However, our ability to visualize the fluorescent responses of individual neurons under a microscope has long been limited to small fields of view, restricting the number of neurons and brain areas that can be imaged simultaneously, and with them our ability to measure correlations between brain areas.

Dr. Spencer L. Smith is an Associate Professor in the Department of Cell Biology and Physiology and an Investigator at the Carolina Institute for Developmental Disabilities at the University of North Carolina, Chapel Hill. His research focuses primarily on population dynamics and circuit architecture in the primary and higher visual cortical areas of mice. To address the limitations of calcium imaging discussed above and explore activity correlations between primary visual cortex (V1) and higher visual areas during visually evoked activity in mice, Dr. Smith’s lab has developed a novel multiphoton imaging technique for wide-field calcium imaging across cortical brain regions.

In “Wide field-of-view, multi-region, two-photon imaging of neuronal activity in the mammalian brain” (2016, Nature Biotechnology), Dr. Smith and colleagues describe the development of their Trepan2p microscope for ultra-wide field-of-view imaging and dual-region imaging with offsets in the XY and/or Z planes, without the need to reposition the imaged specimen. To accomplish this, light from an 80 MHz laser is split into two beams, one of which is sent through a delay path that delays its pulses by 6.25 ns (half of the pulse period). Using custom steering mirrors and electrically tunable lenses (ETLs), each light path can be independently positioned in the X, Y, and Z planes.
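As a quick sanity check on the numbers above (not code from the paper), the half-period delay follows directly from the laser’s repetition rate:

```python
# At an 80 MHz pulse repetition rate, pulses arrive every 12.5 ns, so a
# half-period delay of 6.25 ns interleaves the two beams' pulses in time,
# letting signals from the two paths be disambiguated by arrival time.
rep_rate_hz = 80e6
period_ns = 1e9 / rep_rate_hz   # 12.5 ns between successive pulses
delay_ns = period_ns / 2        # 6.25 ns, matching the delay path
print(period_ns, delay_ns)      # → 12.5 6.25
```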

Scope schematic

Schematic of Trepan2p system. λ/2: half-wave plate, PBS: polarizing beam splitting cube, M: mirror, BB: beam block, BE: beam expander, ETL: electrically tunable lens, SM1/2: steering mirror, GS: galvanometer scanners, SL: scan lens, TL: tube lens, Obj: objective, DM: dichroic mirror, CL1/2: collection lens, PMT: photomultiplier tube


Dr. Smith and colleagues used the Trepan2p microscope to image cortical areas of interest for higher-order visual processing in mice. Using a wide, 3.5 mm field of view, the authors imaged GCaMP6s fluorescent calcium transients from individual neurons in primary visual cortex (V1) and up to six higher-order visual areas simultaneously during presentation of a naturalistic movie stimulus.


(a) The Trepan2p system allows for wide field viewing of multiple visual cortical areas. (b) Intrinsic signal imaging was used to map out visual cortical areas (yellow). (c) 5,361 individual neurons expressing genetically encoded GCaMP6s can be individually resolved in this field of view. (d) Visually evoked calcium transients defined as increases in fluorescent signal (deltaF/F) were recorded over time during a naturalistic movie stimulus for all 5,361 neurons. (e) Calcium transient responses for the first 50 neurons.
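The ΔF/F quantity in panel (d) is a normalized change in fluorescence. A minimal sketch of one common way to compute it, using an invented trace and a percentile baseline (the paper’s exact baseline convention may differ):

```python
import numpy as np

def delta_f_over_f(raw, baseline_pct=10):
    """Compute dF/F using a low percentile of the trace as baseline F0.
    The percentile baseline is one common convention, not necessarily
    the one used in the paper."""
    f0 = np.percentile(raw, baseline_pct)
    return (raw - f0) / f0

# Toy fluorescence trace: a transient riding on a baseline near 100.
trace = np.array([100.0, 101.0, 99.0, 150.0, 130.0, 110.0, 100.0])
dff = delta_f_over_f(trace)
```

The transient at sample 3 shows up as a ~50% fluorescence increase over baseline, the kind of event counted as a visually evoked calcium transient.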


Finally, Dr. Smith and his team used the independently controllable light paths of the Trepan2p system to image calcium transients with high temporal resolution in V1 and the higher-order visual areas AM and PM, in response to visual grating stimuli and naturalistic movie presentation. Interestingly, using the imaged calcium activity to infer action potential timing revealed an increase in correlated activity between V1 neurons and AM/PM neurons during presentation of the naturalistic movie stimulus, but not the visual grating stimulus. Overall, Dr. Smith and colleagues have engineered a new two-photon imaging system that alleviates previous field-of-view restrictions and allows high temporal and cellular resolution for imaging correlated activity in multiple brain regions.
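The pairwise-correlation analysis can be illustrated with toy data: assume, purely for illustration, that the naturalistic movie drives shared fluctuations in V1 and AM “inferred spiking” while gratings do not, and compute a Pearson correlation for each condition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented inferred-spiking traces (1000 time bins). Under the movie, a
# shared drive makes V1 and AM co-fluctuate; under gratings it is absent.
shared = rng.poisson(2.0, 1000)
v1_movie = shared + rng.poisson(1.0, 1000)
am_movie = shared + rng.poisson(1.0, 1000)
v1_grating = rng.poisson(3.0, 1000)
am_grating = rng.poisson(3.0, 1000)

r_movie = np.corrcoef(v1_movie, am_movie)[0, 1]
r_grating = np.corrcoef(v1_grating, am_grating)[0, 1]
# Expect r_movie >> r_grating, mirroring the reported result.
```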


Genetically encoded GCaMP6s calcium transients were recorded from V1 and higher visual areas AM and PM in response to naturalistic video and visual grating stimuli. Calcium transients were used to infer action potential timing. Activity correlations between pairs of cells (1 V1 neuron and 1 AM or PM neuron) increased in response to the naturalistic movie stimulus.


Please join us at 4:00 pm on Tuesday, February 20th, for Dr. Smith’s seminar, entitled “Next generation multiphoton imaging reveals visual cortical areas acting in concert”.


Susan Lubejko is a first-year Neuroscience Ph.D. student, currently rotating in Dr. Jeff Isaacson’s lab.

Neuronal Diversity in the Basal Ganglia Output Region

Each animal must use information about past experience, its sensory environment, and its internal motivation state to decide what action to carry out. Specialized brain circuits in the basal ganglia are fundamental to this process of “action selection” and reinforcement learning. Disruptions in the basal ganglia directly cause severe human neuropsychiatric disorders such as Parkinson’s and Huntington’s disease, and contribute to obsessive-compulsive disorder, Tourette’s syndrome, schizophrenia, as well as drug addiction. The circuitry of the basal ganglia is complex and, despite many box and arrow diagrams in textbooks, poorly understood (Figure 1).

Figure 1: Connectivity of the Basal Ganglia (from S. Ikemoto et al., Behavioural Brain Research, 2015).

Dr. Bernardo Sabatini is a Professor of Neurobiology at Harvard Medical School and a Howard Hughes Medical Institute Investigator whose lab studies the synapses and circuits of the basal ganglia. Using a combination of viral tracing, optogenetics, imaging, single-cell sequencing, and electrophysiology, his lab asks how the circuitry of the basal ganglia is established, and what dynamic interactions among basal ganglia nuclei, and with other brain structures, mediate the selection and triggering of a motor action.

In a recent Neuron paper titled “Genetically Distinct Parallel Pathways in the Entopeduncular Nucleus for Limbic and Sensorimotor Output of the Basal Ganglia”, his lab defined three neuron subtypes in the entopeduncular nucleus (EP) that release different neurotransmitters and have distinct targets.

The EP is a major basal ganglia output nucleus. Most circuit-level schemes of basal ganglia organization describe the EP as a homogeneous group of neurons (Figure 1), although some studies have indicated cellular heterogeneity. Anatomically, the EP lies posterior to the globus pallidus externus (GPe) and anterior to the subthalamic nucleus (STN). Immunostaining of this region indicates at least two distinct neuron types in the EP, with somatostatin (Sst)-expressing neurons in the anterior EP and parvalbumin (Pvalb)-expressing neurons in the posterior region.

To further characterize their transcriptional differences, the authors performed single-cell mRNA sequencing (“Drop-seq”) on cell suspensions from the EP and surrounding areas, and defined two neuronal populations intrinsic to the EP (clusters 5 and 6 in Figure 2D). Differential expression analysis showed that cluster 5 was enriched for Kcnc3, Lypd1, Pvalb, and Scn4b, whereas cluster 6 was enriched for Sst, Slc17a6 (encoding VGluT2), Tbr1, Meis2, and Nrn1. Both clusters expressed high levels of Slc32a1 (encoding VGAT), Gad1, and Gad2 (encoding the GABA-synthesizing enzymes GAD67 and GAD65). A third, smaller EP population, identified by RNA fluorescence in situ hybridization (FISH), expressed Pvalb and Slc17a6 but not Gad. Based on the imaging, Drop-seq, and FISH results, EP neurons were categorized into three classes: (1) Sst+/Tbr1+/Gad+/Slc17a6+ GABA/glutamate dual-transmitter neurons, (2) Pvalb+/Gad+/Lypd1+ purely GABAergic neurons, and (3) Pvalb+/Slc17a6+ purely glutamatergic neurons.
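A toy sketch of the kind of differential-expression comparison described above: invented Poisson counts for a few marker genes, ranked by pseudocounted log2 fold change between the two clusters (gene names and counts are illustrative only, not the paper’s data or its exact statistics).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-cell counts: 50 cells per cluster x 4 genes. The marker
# pattern (Pvalb high in cluster 5, Sst high in cluster 6, Gad1 and the
# housekeeping gene Actb shared) is invented for illustration.
genes = ["Pvalb", "Sst", "Gad1", "Actb"]
cluster5 = rng.poisson([5.0, 0.2, 4.0, 10.0], size=(50, 4))
cluster6 = rng.poisson([0.2, 5.0, 4.0, 10.0], size=(50, 4))

# Pseudocounted log2 fold change, cluster 5 vs cluster 6, per gene.
log2fc = np.log2((cluster5.mean(axis=0) + 1) / (cluster6.mean(axis=0) + 1))
markers5 = [g for g, fc in zip(genes, log2fc) if fc > 1]   # cluster 5-enriched
markers6 = [g for g, fc in zip(genes, log2fc) if fc < -1]  # cluster 6-enriched
```

Genes expressed comparably in both clusters (here Gad1 and Actb) fall below the fold-change threshold and are not called as markers for either population.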

Figure 2: Single-Cell RNA Sequencing Defines Two Neuronal Populations in the EP.

Next, they genetically targeted these EP subpopulations for electrophysiological characterization, anatomical tracing and functional mapping of outputs, and retrograde viral tracing of inputs, to show their involvement in different microcircuits.

In the electrophysiological characterization, they found that Sst+ neurons had smaller capacitance, wider action potentials, and smaller after-hyperpolarizations (AHPs) than Pvalb+ neurons, indicating that Sst+ neurons are more excitable (Figure 3).

Figure 3: Differential Electrophysiological Properties of Sst- and Pvalb-Expressing EP Neurons

Then, to map the outputs of these neuron subtypes, they performed anatomical tracing by injecting a Cre-dependent AAV encoding synaptophysin-mCherry into the EP, and showed that Sst-Cre+ axons projected mainly to the lateral portion of the lateral habenula (LHb), while Pvalb-Cre+ axons projected to the oval nucleus of the LHb and to other regions including the ventro-anterior lateral thalamus (VAL), ventro-medial thalamus (VM), anterodorsal thalamus (AD), PF, and brainstem (Figure 4). Beyond their distinct outputs, these EP neuron subtypes also have different inputs and participate in distinct microcircuits within the basal ganglia. Indeed, monosynaptic retrograde tracing placed the three neuron types in one of two basal ganglia circuits: Sst+ and Pvalb+ neurons projecting to the LHb receive input biased toward limbic-associated regions of striatum, while Pvalb+ neurons projecting to motor thalamus receive input mainly from sensorimotor regions of striatum.

Figure 4: Sst+ EP Neurons Target the LHb, and Pvalb+ Neurons Target the LHb and Motor Thalamus.

To summarize, the three genetically defined neuron subtypes in the EP have distinct electrophysiological properties, and the map of their input-output relationships points to distinct microcircuits within the general basal ganglia framework. The basal ganglia have complex connectivity, with more circuits awaiting exploration.

To learn more about the most recent research from Dr. Bernardo Sabatini’s lab, please join the seminar at 4:00 pm on Tuesday in the Marilyn G. Farquhar Seminar Room.

Xiaochun Cai is a first-year neuroscience Ph.D. student, currently rotating in the laboratory of Dr. Rusty Gage.

Vision in Action: Beyond Drifting Gratings

If you give a C57BL/6J lab mouse a live cricket, will it hunt and eat it? While mice are often considered “prey” creatures, in this case the mouse becomes the “predator,” making a quick meal of the cricket. For a mouse to successfully perform this prey-capture behavior, it turns out vision is critical. While visual circuits have been studied for decades, they are often studied in non-behavioral contexts. A typical study presents drifting gratings of black and white bars that move at different orientations, spatial frequencies, and temporal frequencies, and records the responses of neurons in regions such as primary visual cortex to determine which visual features make each cell fire the most. While this type of stimulus is great for isolating the specific visual features that neurons respond to, it doesn’t quite tell us how visual information is integrated during behavior.

Cristopher Niell is an Associate Professor at the University of Oregon, where he studies the circuits that underlie complex visual behaviors and how those circuits develop. Dr. Niell has long studied mouse vision: as a postdoc with Michael Stryker, he characterized the receptive field properties of mouse V1 neurons, opening the door to the use of mice, and all their genetic tools, as a model for studying visual circuits. He is now trying to understand how vision is used in ethologically relevant behaviors, such as locomotion or hunting down dinner.

While other labs have begun to look at how mouse vision is involved in behaviors such as navigation or fear, no one had yet examined its role in prey capture. In the lab’s 2016 paper in Current Biology, titled “Vision Drives Accurate Approach Behavior during Prey Capture in Laboratory Mice,” his group establishes a robust assay for studying this behavior.

To do this, they first had to demonstrate that lab-bred C57 mice, raised on chow their entire lives, would even attempt to hunt and capture live crickets placed in their home cages. It turns out lab mice really like eating crickets: 96.5% of mice would capture and consume them. The researchers next moved the task into an open-field arena so they could use a camera to track the movements of the mouse and the cricket, as shown in the video below. Mice learned the task quickly and accurately: after 3 days of acclimating to their handler and being fed crickets in their home cage, followed by food deprivation and another 3 days of acclimation to the open field (an inherently aversive environment for mice), mice reached peak performance by day 3 in the open field, with successful capture rates near 100% and under 30 seconds needed to chase down and catch the cricket.


By annotating videos of the prey-capture behavior, they quantified several parameters, including how long it took the mouse to capture the cricket, the distance between mouse and cricket (range), and the angle from which the mouse approached (azimuth). They then asked what sensory information the mice were using to mediate prey capture by manipulating two senses, sight and hearing, across four conditions: Light + Hearing, Dark + Hearing, Light + Ear Plugs, and Dark + Ear Plugs.

It turned out that both auditory and visual cues help the mouse perform the prey-capture behavior, but visual cues more so than auditory, as summarized in the figure above. Mice took longer to capture crickets in the dark, but not when their ears were plugged; with ears plugged and in darkness, however, mice performed even worse (B in the figure). Under normal light and auditory conditions, mice often closed to within 5 cm of the cricket and usually approached at a 0-degree azimuth (C and D, in blue). When the mice had ear plugs but could still see, they spent a slightly greater proportion of time at ranges >5 cm but otherwise behaved similarly to the Light + Hearing condition. In the Dark and Dark + Ear Plug conditions (black and red lines, respectively), however, mice performed essentially at chance.

After demonstrating that prey capture is driven in large part by vision, they created a simplified version of the assay to facilitate studies of the neural circuits underlying this visual task. In the simplified task (shown above in A), the cricket (green star) is isolated from the mouse behind a clear acrylic barrier. The barrier served two purposes: 1) to attenuate non-visual (e.g., auditory) cues, and 2) to limit the cricket’s motion to a single dimension. For this task, mice were trained as usual in the open-field arena, then placed in the barrier-modified task on the 5th day. As depicted in A, mice in the light quickly and accurately made contact with the barrier at the exact location of the cricket, typically beginning to modify their trajectory into a beeline toward the cricket at about 15 cm from the barrier. This is consistent with the approximate receptive field diameter of most mouse retinal ganglion cells (a 2 cm cricket at 15 cm from a mouse’s eye subtends ~6-8 degrees of visual space). In the dark, however, mice often missed by a few centimeters, quantified as lateral error, and this lateral error was distributed essentially at chance, suggesting that only vision allowed the mice to complete the task in this setup.
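The visual-angle figure quoted above is easy to verify with basic trigonometry (the 2 cm and 15 cm values are taken from the text; this is a back-of-the-envelope check, not an analysis from the paper):

```python
import math

# Angular size of a ~2 cm cricket viewed from 15 cm, the distance at
# which mice began their beeline toward the prey.
size_cm, dist_cm = 2.0, 15.0
angle_deg = 2 * math.degrees(math.atan((size_cm / 2) / dist_cm))
print(round(angle_deg, 1))  # → 7.6
```

The result, ~7.6 degrees, sits inside the 6-8 degree receptive field range cited for mouse retinal ganglion cells.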

Having established a robust assay for visually guided prey-capture behavior in mice, Dr. Niell and his team are now ready to integrate electrophysiology and imaging methods to dissect the circuitry that may mediate this behavior. Previous studies have demonstrated a role for the superior colliculus in orienting and alerting behaviors, as well as predator avoidance, making it a likely target for future circuit work. While freely moving recordings are technically challenging, the team could create a virtual reality version of the task for a head-fixed mouse on a running treadmill, or use head-mounted imaging systems. With the many genetic tools, high-density recording probes, and high-resolution imaging techniques that have emerged in recent years (including the “Crystal Skull” from Cristopher Niell and Mark Schnitzer), there is certainly no shortage of techniques for understanding this ethologically relevant task.

Helen Wang is a first-year Ph.D. student working in Dr. Edward Callaway’s laboratory.

How to paint a detailed portrait of the visual cortex

Neuroscience textbooks present regions of the brain as relatively homogeneous units, each composed of a few discrete cell types: the cortex has pyramidal cells, the cerebellum has Purkinje cells, the dentate gyrus has granule cells, to name a few, and all are sparsely dotted with interneurons. In reality, most brain regions are much messier, containing a tangled mix of dozens of cell types varying along numerous spectra, such as morphology and gene expression. If the traits we observe in neurons are continuous rather than discrete, and neurons must be organized along many continua, two obvious questions arise: How do we define a cell type, and are current classifications adequate to describe the full range of neurons in the nervous system?


Dr. Hongkui Zeng, the Executive Director of Structured Science at the Allen Institute for Brain Science, has devoted her career to answering these questions. She is primarily interested in how a neuron’s genes determine its physiology and its connections with other types of neurons, and she has created many public datasets in pursuit of these answers. In the process, she has developed many high-throughput pipelines and tools relied upon by neuroscientists all over the world, and she has led numerous large-scale projects, including the Human Cortex Gene Survey, the Allen Mouse Brain Connectivity Atlas, and the Mouse Cell Types and Connectivity Program.


At the beginning of 2016, she took an important step toward answering these questions with her paper “Adult mouse cortical cell taxonomy revealed by single cell transcriptomics,” published in Nature Neuroscience. In this paper, her team at the Allen Institute set out to characterize all of the cell types in the primary visual cortex, one of the most well-defined and well-studied regions of the brain. To do this, they ran single-cell RNA sequencing on cells from many genetically modified mouse lines in which cells expressing certain genes glow, allowing the team to isolate small numbers of labeled cells and find underlying patterns of gene expression. By identifying patterns exclusive to certain cells, they characterized 49 cell types within the primary visual cortex. Many of these cell types had not previously been described, here or anywhere else in the brain, and in most cases the genes defining these groups had not previously been used as molecular markers. Interestingly, these cell types were not fully discrete but instead fell into two tiers: a core tier central to each group, and an intermediate tier displaying gene expression partially characteristic of multiple cell types.



Figure 1: Initial determination of cell types in primary visual cortex.


Using these new markers, her team performed in-situ hybridization to locate the cell types within the cortex. Placing each cell type in its home layer helps constrain its likely function and connectivity. The core groups were also linked by their intermediate cells, which resemble both of the groups they sit between; this allowed relationships among groups to be traced and the core groups to be assembled into larger meta-groups based on close clustering.



Figure 2: Locations and relationships of cell types in primary visual cortex.

Finally, she and her team injected viral retrograde tracers into the contralateral primary visual cortex and the ipsilateral thalamus, two regions to which primary visual cortex neurons project. Cells projected to the contralateral visual cortex or to the ipsilateral thalamus, but not to both. This allowed the team to run RNA sequencing on the neurons projecting to each region and classify each group by its projection target, adding another dimension to these groupings.


Dr. Zeng’s findings provide a great new resource and framework for other investigators. In this study, she identified cell types in the primary visual cortex with the greatest granularity recorded thus far, characterizing the groups by differential gene expression, degree of relatedness, location in the cortex, and projection targets. These findings lay the groundwork for similar studies in other regions of the brain, and could one day provide insight into what exactly gives these diverse groups of neurons their unique properties.


James Howe is a first-year neuroscience Ph.D. student currently rotating in Dr. Rusty Gage’s laboratory.

A Trick of the Light: Probing Neural Circuits with Advanced Optical Imaging

Imagine standing on Earth and peering across the universe toward a star that is trillions of miles away. Along the way to this star, your telescope’s line of sight would pass through atmosphere, debris, and numerous other obstacles that distort your image and create aberrations. Yet, somehow, we can get surprisingly high-resolution images of celestial objects that are unimaginably far away. This is possible through the power of a technology called adaptive optics. Now come back down to Earth and think instead about using a microscope to visualize a tiny spine on a dendrite of a neuron deep inside the brain. Using your microscope, you would again encounter distortions, this time from proteins, lipids, and nucleic acids. Is it possible to use the same technology that lets us visualize stars in far-away galaxies to examine neurons inside the brain? Thanks to the work of Professor Na Ji and others, this and other groundbreaking methods of visualizing the brain are now at our fingertips.

Dr. Na Ji is an associate professor in the Physics and Molecular & Cell Biology departments at UC Berkeley. Her laboratory aims to develop and apply novel imaging methods to understand the brain. Before she joined the UC Berkeley faculty in 2016, she was at the Janelia Research Campus of the Howard Hughes Medical Institute. Among other advances while at Janelia, she pioneered the use of the aforementioned adaptive optics for in vivo fluorescence microscopy, obtaining high-resolution images of neurons at depths previously inaccessible.

Let’s take a step back and examine how adaptive optics works. In a recent perspective in Nature Methods, Dr. Ji outlines the physical principles behind the technology. In short, whether looking through a telescope or a microscope, one encounters image aberrations produced by the medium one is peering through. If the character of those aberrations is known, however, they can be corrected by introducing a compensatory distortion, yielding a more faithful image. To characterize the distortion seen by a telescope examining the distant cosmos, astronomers use a reference star, which can be either a real star or an “artificial star”: a laser beam of known wavelength projected into the sky. In a biological sample, distortions of microscopic images can be measured using an object such as a fluorescent bead placed in or below the sample. Dr. Ji’s group demonstrated the power of this technique in a 2015 study published in Nature Communications, presenting adaptive optical fluorescence microscopy able to obtain clear two-photon images of neurons up to 700 µm deep in the brain.
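The correction step itself can be sketched numerically. Below is a toy illustration (not Dr. Ji’s implementation; the grid size and the aberration are invented): once the phase aberration across the pupil has been measured, multiplying the distorted field by the conjugate phase flattens the wavefront again.

```python
import numpy as np

# Toy sketch of aberration correction by phase conjugation. In a real
# adaptive-optics system the wavefront is measured with a sensor or a
# guide "star" and reshaped by a deformable mirror or spatial light
# modulator; here we just simulate the math on a 32x32 pupil grid.
rng = np.random.default_rng(0)

# Unknown phase aberration introduced by the medium (made-up values).
aberration = rng.normal(scale=1.0, size=(32, 32))

# An ideally flat wavefront picks up the aberration on its way through.
field = np.exp(1j * aberration)

# Adaptive optics: apply the conjugate (negative) of the measured phase.
corrected = field * np.exp(-1j * aberration)

# The corrected wavefront is flat again: its residual phase is ~zero.
residual = np.angle(corrected)
```

The same conjugation idea works whether the "aberration" comes from turbulent atmosphere above a telescope or from scattering brain tissue above a neuron.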


An example of the effect of using adaptive optics on visualizing the dendrites of neurons in the brain. Image from the lab website of Dr. Na Ji https://www.jilab.net/research/.

Dr. Ji’s research has tackled not only the problem of imaging depth, but also how to visualize the activity of many neurons at once in a live animal. For this, the imaging method needs to be not only clear but also fast. In one of her most recent publications, “Video-rate volumetric functional imaging of the brain at synaptic resolution,” published in 2017 in Nature Neuroscience, Dr. Ji and co-authors present a system that performs volumetric imaging of the brain at sub-second temporal resolution. To achieve this, they developed a module that can be integrated into a standard two-photon laser-scanning microscope (2PLSM). It uses an axially elongated beam called a Bessel beam, which allows for better lateral imaging and greater depth of field. Because neurons remain relatively stationary during in vivo imaging, instead of constantly tracking their positions in 3D, the team used fast 2D imaging to reconstruct a 3D image at a 30 Hz rate. They then showed various applications of this imaging strategy, including studying the responses of different inhibitory interneuron subtypes in the mouse visual cortex to various stimuli. With this technique, they could image the activity of up to 71 different interneurons at once! It will be exciting to see the future applications of this and other innovative imaging technologies developed by Dr. Na Ji.


Concept, design, and performance of the Bessel module for in vivo volumetric imaging. Adapted from Figure 1 of Lu et al., Nature Neuroscience (2017).

Shannan McClain is a first-year Neuroscience Ph.D. student in the laboratory of Dr. Matthew Banghart.



Independent Component Analysis of EEG: What is an ERP anyway?

Dr. Scott Makeig is currently the director of the Swartz Center for Computational Neuroscience (SCCN) of the Institute for Neural Computation (INC) at the University of California San Diego (UCSD). With a distinguished scientific career in the analysis and modeling of human cognitive event-related brain dynamics, he leads the Swartz Center towards its goal of observing and modeling how functional activities in multiple brain areas interact dynamically to support human awareness, interaction and creativity.

Dr. Makeig and his colleagues have pioneered brain imaging analysis approaches including time-frequency analysis, independent component analysis, and neural network and machine learning methods in relation to EEG, MEG, and other imaging modalities. Additionally, Dr. Makeig originated, and remains the PI and co-developer (with Arnaud Delorme) of, the widely used EEGLAB signal processing environment for MATLAB.

One example of his groundbreaking work was the publication entitled “Dynamic Brain Sources of Visual Evoked Responses,” published in the journal Science. In this paper, Dr. Makeig and colleagues describe a series of experiments that demonstrate a novel interpretation of signals collected via EEG.

In the experiment, electrical signals were recorded from the scalps of 15 adult subjects as they performed a simple task, such as focusing their attention on a particular image on a screen. Such EEG recordings, averaged over many repetitions and known as event-related potentials (ERPs), demonstrate a stereotyped response pattern each time the subject performs the task. But the underlying mechanism of this response had been the subject of extensive debate. The traditional model held that the ERP reflects discrete neural activity within functionally defined brain regions, elicited anew with each “event.”

Groundbreakingly, the authors demonstrated that this was not the case. Rather, using a technique that was relatively novel at the time – Independent Component Analysis – they showed that the ERP is actually a combination of many different independent signals whose phases are reset by the event. This “phase resetting” was able to account for many of the characteristics of ERPs that had puzzled scientists, and it provided a whole new approach to the analysis of EEG data.
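The phase-resetting idea can be made concrete with a toy simulation (the oscillation frequency, trial count, and noise level below are arbitrary choices, not values from the paper): if an ongoing oscillation has a random phase on each trial but is reset by the event, averaging across trials cancels the pre-event activity while the post-event oscillation survives, producing an "evoked" waveform even though no new signal was added.

```python
import numpy as np

# Toy demonstration of the phase-resetting account of ERPs.
rng = np.random.default_rng(0)
fs, f = 500, 10                              # sample rate and oscillation (Hz)
t = np.arange(-0.5, 0.5, 1 / fs)             # time around the event at t = 0
trials = []
for _ in range(200):
    phase = rng.uniform(0, 2 * np.pi)
    ongoing = np.sin(2 * np.pi * f * t + phase)  # random phase before event
    reset = np.sin(2 * np.pi * f * t)            # phase reset to zero at t = 0
    trial = np.where(t < 0, ongoing, reset)
    trials.append(trial + 0.5 * rng.normal(size=t.size))  # sensor noise
erp = np.mean(trials, axis=0)                # the trial average = "ERP"

# Before the event, random phases average out; after it, the reset
# oscillation survives averaging and looks like an evoked response.
pre_amp = np.abs(erp[t < 0]).max()
post_amp = np.abs(erp[t >= 0]).max()
```

The post-event portion of the average is an order of magnitude larger than the pre-event portion, even though each single trial contains the same ongoing oscillation throughout.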


The above figure shows the characteristics of the four major contributing component clusters of the ERP. As can be seen in the middle row, each component accounts for part of the total signal observed in an ERP, together accounting for 77% of the variance.
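For intuition about what ICA is doing, here is a self-contained sketch of its core idea: whiten the channel data, then rotate toward the most non-Gaussian directions, which correspond to the independent sources. The two "sources" and the mixing matrix below are synthetic, and Makeig and colleagues applied a more sophisticated algorithm (Infomax ICA) to real multi-channel EEG rather than this kurtosis grid search.

```python
import numpy as np

# Minimal two-channel ICA sketch on synthetic data.
t = np.linspace(0, 1, 2000, endpoint=False)
s1 = np.sin(2 * np.pi * 10 * t)              # sinusoidal source
s2 = np.sign(np.sin(2 * np.pi * 3 * t))      # square-wave source
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5], [0.4, 1.0]])       # hypothetical mixing matrix
X = S @ A.T                                  # what the "electrodes" record

# Step 1: whiten (zero mean, identity covariance). After whitening,
# only an unknown rotation separates us from the sources.
Xc = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(Xc.T))
Z = Xc @ E @ np.diag(1 / np.sqrt(d)) @ E.T

# Step 2: find the rotation whose projection is maximally non-Gaussian
# (extreme excess kurtosis); the orthogonal direction is the other source.
def excess_kurtosis(y):
    return np.mean(y ** 4) / np.mean(y ** 2) ** 2 - 3

angles = np.linspace(0, np.pi, 720)
best = max(angles, key=lambda a: abs(
    excess_kurtosis(Z @ np.array([np.cos(a), np.sin(a)]))))
R = np.array([[np.cos(best), -np.sin(best)],
              [np.sin(best),  np.cos(best)]])
recovered = Z @ R   # columns match the sources up to sign and scale
```

Each recovered column correlates almost perfectly with one of the original sources, even though neither "electrode" saw either source in isolation.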

More recently, Dr. Makeig continues to focus on analysis of EEG data, and has begun collaborating with clinical researchers to apply these advances in functional EEG-based imaging to medical research and clinical practice.

Additionally, Dr. Makeig directs a new multi-campus initiative of the University of California – The “UC Musical Experience Research Community Initiative” – to bring together and promote research on music, mind, brain, and body.

Denis Smirnov is a Graduate Student at UCSD working at the Alzheimer’s Disease Research Center with Dr. James Brewer.

Our 3D-SHOT at optically stimulating circuits with cellular resolution

Dr. Hillel Adesnik is an Assistant Professor of Neurobiology at the University of California, Berkeley, whose lab aims to understand how cortical circuits process sensory information and drive behavior. A lot of this work centers on the mouse ‘barrel’ cortex, where cortical columns represent individual whiskers on the face, making it a useful circuit for studying sensation and perception. Their approach often combines in vivo behavioral experiments with detailed in vitro analyses of synaptic connectivity and network dynamics.

In this type of research, it is crucial to have precise tools for probing the circuits of interest. The goal of many experiments is to identify neuronal ensembles whose activity is necessary and sufficient to produce specific computations. To do this, many neuroscientists use optogenetics, where photosensitive channels called opsins are inserted into cells and used to excite or inhibit them with light stimulation. It is an increasingly popular technique, but suffers from several limitations. Dr. Adesnik’s team set out to address some of them with a new tool they developed, described in a recent publication in Nature Communications entitled, “Three-dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT).” Let’s break that down, shall we?

Optogenetics relies on genetically defined cell types to target some neurons and not others. However, many neural computations involve distinct cells that are genetically similar but spatially intermixed. Traditional one-photon optogenetics uses visible light to activate the opsins in these cells, but this light tends to scatter throughout the brain tissue, costing you some control over which neurons you’re stimulating. Two-photon methods instead use infrared light and significantly improve spatial resolution and depth penetration. The light can scan across individual cell bodies in raster or spiral patterns of precise stimulation. The issue here, though, is that the channels themselves deactivate very quickly, so stimulating them with a point-by-point scan makes it hard to get enough current flowing at once to generate reliable action potentials. Opsins with slower deactivation kinetics can help, but they make it difficult to trigger action potentials with precise timing, reducing the temporal resolution of the experiments.
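Why point-by-point scanning struggles with fast-deactivating opsins can be seen with a toy leaky-integrator model (every parameter below is invented for illustration, not measured): if illumination visits small membrane patches one at a time, the photocurrent from earlier patches decays before later ones fire, so the summed depolarization stays low; a simultaneous flash over the whole soma drives all patches at once and reaches a much higher peak.

```python
import numpy as np

# Toy comparison of scanned vs. simultaneous (holographic) stimulation.
dt, tau = 0.1, 3.0            # time step and current decay constant (ms)
n_patches = 20                # membrane patches expressing opsin
t = np.arange(0.0, 40.0, dt)

def depolarization(drive):
    """Integrate dv/dt = -v/tau + drive with forward Euler."""
    v = np.zeros_like(t)
    for i in range(1, len(t)):
        v[i] = v[i - 1] + dt * (-v[i - 1] / tau + drive[i - 1])
    return v

# Point scanning: patches lit one after another, 1 ms each, so at most
# one patch's worth of drive is active at any moment.
scan_drive = np.where(t < n_patches, 1.0, 0.0)

# Holographic flash: all patches lit simultaneously for 1 ms.
holo_drive = np.where(t < 1.0, float(n_patches), 0.0)

v_scan = depolarization(scan_drive)
v_holo = depolarization(holo_drive)
# The simultaneous flash reaches a far higher peak depolarization,
# making a threshold-crossing spike much more likely.
```

The scanned trace plateaus near the single-patch steady state, while the holographic flash transiently sums all patches, which is exactly the advantage the holographic approach described next exploits.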

Computer generated holography (CGH) removes the scanning issue, instead using holograms matched to the size of each neuron’s cell body to deliver flashes that simultaneously activate many opsins. This produces currents with fast kinetics and reliably activates the neurons. However, CGH also has its limits. Some technical aspects often lead it to produce unwanted stimulation above and below the target. This is where temporal focusing (TF) comes in. Here, the light pulse is decomposed into separate components which constructively interfere with each other at a specific location, restricting the two-photon response to a thin layer of tissue. What this allows for is precise activation of neurons in a single 2D plane, but cell bodies are of course distributed in three dimensions. 3D-SHOT overcomes this final challenge as well, enabling the simultaneous activation of many neurons with cellular resolution at multiple depths. This suggests that we can target a custom neuronal ensemble and precisely interrogate only those cells, which would be a major advance in untangling neural circuits and their functions.

The authors test and optimize their tool in various ways, showing that a power correction algorithm allows them to deposit spatially precise and equivalent two-photon excitation in at least 50 axial planes (Figure 1a, b). They go on to demonstrate the reliable activation of up to 300-600 neurons in a large volume with minimal off-target activation (Figure 1c). Finally, they use whole-cell recordings to show precise optogenetic activation of cells in mouse brain slices and in vivo. They suggest that combining 3D-SHOT with imaging of neural activity will “enable real-time manipulation of functionally defined neural ensembles with high specificity in both space and time, paving the way for a new class of experiments aimed at understanding the neural code.”

Dr. Adesnik will be presenting mostly unpublished data in his upcoming Seminar Series lecture, which may be starting to do exactly that. To see how 3D-SHOT can be used to interrogate sensory circuits, please join us at his talk on Tuesday, December 12th at 4pm in the CNCB Marilyn G. Farquhar Seminar Room.

Nicole Mlynaryk is a first year Ph.D. student currently rotating in Dr. Stefan Leutgeb’s lab.

Dynamic Modeling of Dendritic Spines

Take a moment to watch this clip:

Against a dark background, a tangle of bulbs vibrates and pulses with light. If you’re anything like me, you played the movie over and over, mesmerized by the dancing nodes. The film is a recording of dendritic spines, the small bumps that line dendrites, the receptive portions of neurons. (Here they’re tagged with a fluorescent protein, so that they can be watched under a microscope.) Dendritic spines form the sites of synapses, where the post-synaptic neuron receives a message from its pre-synaptic partner. Since these protuberances move and remodel themselves in response to experience, they are thought to play an important role in learning and memory.

What are the parts that make up dendritic spines? And how do they allow for growth and movement? These turn out to be questions with hybrid answers, requiring analysis rooted in both biology and engineering. That’s the approach taken by Padmini Rangamani, an Assistant Professor in UCSD’s department of Mechanical and Aerospace Engineering. Her research seeks to understand “the design principles of biological systems.” In correspondence, Rangamani describes her method: she “work(s) with experimentalists of multiple flavors to build models and seek(s) to answer questions that cannot be directly answered by measurements.”

In 2016, Professor Rangamani published a paper in Proceedings of the National Academy of Sciences titled “Paradoxical signaling regulates structural plasticity in dendritic spines.” Spine remodeling can take place even over a short interval (3-5 minutes), and it is in this context that Rangamani and her collaborators sought to characterize the logic underlying spine dynamics. The paper builds a mathematical model on a foundation of observations about the molecular components involved in changes in dendritic spine volume.

Several of the functional elements of dendritic spines are well known (see Figure 1 below). Notable among these are: Ca2+/calmodulin-dependent protein kinase II, or CaMKII, a kinase activated by Ca2+ influx into the cell when NMDA receptors are activated (as takes place during learning); actin and myosin, structural proteins; and Rho-family proteins, which help regulate the dynamics of those structural proteins. Rangamani and her group noted that previous experimental work had established that, within the 3-5 minute interval in question, the expression or activation of different spine-related proteins waxed and waned in different patterns for different types of molecules.


Figure 1: Interaction of spine constituents

These data could then be described with biexponential functions: one exponent accounted for the time course of activation, the other for deactivation. Nesting the equations for the different molecules together allowed Rangamani to produce a model consistent with the overall transient (again, 3-5 minute) dendritic spine volume dynamics (see Figure 2 below). The model revealed an interesting underlying pattern: paradoxical signaling, in which the same molecule (here CaMKII) drives “both the expansion and contraction of the spine volume by regulating actin dynamics through intermediaries, such as Rho, Cdc42, and other actin-related proteins.” In short, Ca2+ influx into the neuron drives expression of both inhibitors and activators of spine growth. It is the balance of these two that determines when, and to what extent, the spine expands and contracts.
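The biexponential building block is simple to write down. Here is a toy version (the amplitude and time constants are placeholders, not the fitted values from the paper): one exponent governs the rise, the other the decay, producing a transient that peaks and then relaxes within the few-minute window the paper studies.

```python
import numpy as np

# Toy biexponential transient of the kind used to describe each
# molecule's activity; parameter values are hypothetical placeholders.
def biexp(t, amp, tau_act, tau_deact):
    """Rises with tau_act and decays with tau_deact (tau_deact > tau_act)."""
    return amp * (np.exp(-t / tau_deact) - np.exp(-t / tau_act))

t = np.linspace(0.0, 300.0, 3001)            # seconds, spanning ~5 minutes
volume = biexp(t, amp=1.0, tau_act=20.0, tau_deact=60.0)

peak_time = t[np.argmax(volume)]             # transient peaks early on...
final_fraction = volume[-1] / volume.max()   # ...and has mostly relaxed by 5 min
```

In the full model, several such terms (for CaMKII, Rho, Cdc42, and the rest) are nested together, and the net spine volume change emerges from the competition between the growth-promoting and growth-inhibiting transients.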


Figure 2: Nested biexponential equation describing spine dynamics

This model bridges empirical and conceptual work well. Not only is it grounded in experimental data, but it can also generate testable hypotheses about spine dynamics (for example, using silencing RNAs, among other tools, to modulate the components of spines and then testing for effects on protein expression and spine shape). This is a case study in how biology is enriched by interaction with other fields. What can engineering tell us about our minds? Plenty, it turns out.

Ben T. is a first-year Ph.D. student at UCSD