A Trick of the Light: Probing Neural Circuits with Advanced Optical Imaging

Imagine standing on Earth and peering across the universe toward a star that is trillions of miles away. Along the way, the light reaching your telescope would pass through atmosphere, debris, and numerous other obstacles that distort the image and create aberrations. Yet, somehow, we can get surprisingly high-resolution images of celestial objects that are unimaginably far away. This is through the power of a technology called adaptive optics. Now come back down to Earth and think instead about using a microscope to visualize a tiny bouton on a dendrite of a neuron deep inside the brain. Using your microscope, you would again encounter distortions, this time from proteins, lipids, and nucleic acids. Is it possible to use the same technology that lets us visualize stars in far-away galaxies to examine neurons inside the brain? Thanks to the work of Professor Na Ji and others, this and other groundbreaking methods of visualizing the brain are now at our fingertips.

Dr. Na Ji is an associate professor in the Physics and Molecular & Cell Biology departments at UC Berkeley. Her laboratory aims to develop and apply novel imaging methods to understand the brain. Before she joined the UC Berkeley faculty in 2016, she was at the Janelia Research Campus of the Howard Hughes Medical Institute. Among other advances while at Janelia, she pioneered the use of the aforementioned adaptive optics for in vivo fluorescence microscopy, obtaining high-resolution images of neurons at depths previously inaccessible.

Let’s take a step back and examine how adaptive optics works. In a recent perspective in Nature Methods, Dr. Ji outlines the physical principles behind the technology. In summary, whether looking through a telescope or a microscope, one encounters image aberrations resulting from the medium one is peering through. However, if the character of the aberrations is known, they can be corrected by producing a compensatory distortion, yielding a more faithful image. To characterize the degree of distortion seen by a telescope examining the distant cosmos, astronomers use something called a reference star, which can be either a real star or an “artificial star” created by projecting a laser beam of known wavelength into the sky. In a biological sample, distortions of microscopic images can be measured using an object such as a fluorescent bead placed in or below the sample. Dr. Ji’s group demonstrated the power of this technique in a 2015 study published in Nature Communications, presenting adaptive optical fluorescence microscopy that obtained clear 2-photon images of neurons up to 700 µm deep in the brain.
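To build intuition for the compensatory-distortion idea, here is a toy one-dimensional Fourier-optics simulation (my own illustration with made-up numbers, not a method from Dr. Ji's papers): a known phase aberration in the pupil spreads out the focal spot, and multiplying by the conjugate phase cancels it and restores the focus.

```python
# Toy 1-D adaptive optics sketch: the focal-plane field is (up to scaling)
# the Fourier transform of the pupil field. A phase aberration blurs the
# focus; applying the conjugate ("compensatory") phase restores it exactly.
import numpy as np

n = 512
x = np.linspace(-1, 1, n)
pupil = (np.abs(x) < 0.5).astype(float)              # clear aperture

# Hypothetical aberration: a sinusoidal phase ripple across the pupil.
aberration = np.exp(1j * 8 * np.cos(6 * np.pi * x))

def psf(field):
    """Focal-plane intensity pattern for a given pupil field."""
    return np.abs(np.fft.fftshift(np.fft.fft(field))) ** 2

ideal = psf(pupil)
aberrated = psf(pupil * aberration)
corrected = psf(pupil * aberration * np.conj(aberration))  # conjugate cancels the phase

print(aberrated.max() / ideal.max())   # peak brightness badly degraded
print(corrected.max() / ideal.max())   # restored to 1.0
```

Because the aberration is pure phase, multiplying by its complex conjugate recovers the unaberrated pupil exactly; in a real microscope the hard part, of course, is measuring that phase in the first place, which is what the reference object (guide star or fluorescent bead) is for.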


An example of the effect of using adaptive optics on visualizing the dendrites of neurons in the brain. Image from the lab website of Dr. Na Ji https://www.jilab.net/research/.

Dr. Ji’s research has tackled not only the depth of brain imaging but also how to visualize the activity of many neurons at once in a live animal. To solve this, the imaging method needs to be not only clear but also fast. In one of her most recent publications, “Video-rate volumetric functional imaging of the brain at synaptic resolution,” published in 2017 in Nature Neuroscience, Dr. Ji and colleagues present a system that can perform volumetric imaging of the brain at sub-second temporal resolution. To achieve this, they developed a module that can be integrated into a standard 2-photon laser-scanning microscope (2PLSM). This module uses an axially elongated beam called a Bessel beam, which maintains high lateral resolution while greatly extending the depth of field. Because neurons remain relatively stationary during in vivo imaging, instead of constantly tracking the position of neurons in 3D, they could use fast 2D scans of the Bessel beam to capture volumetric activity at a 30 Hz rate. They then showed various applications for this imaging strategy, including studying the responses of different inhibitory interneuron subtypes in the mouse visual cortex to various stimuli. With this technique, they could image the activity of up to 71 different interneurons at once! It will be exciting for the future of neuroscience to see the applications of this and other innovative imaging technologies developed by Dr. Na Ji.


Concept, design and performance of the Bessel module for in vivo volumetric imaging. Adapted from Figure 1 of Lu et al., Nature Neuroscience (2017).

Shannan McClain is a 1st year Neuroscience PhD student in the laboratory of Dr. Matthew Banghart.




Independent Component Analysis of EEG: What is an ERP anyway?

Dr. Scott Makeig is currently the director of the Swartz Center for Computational Neuroscience (SCCN) of the Institute for Neural Computation (INC) at the University of California San Diego (UCSD). With a distinguished scientific career in the analysis and modeling of human cognitive event-related brain dynamics, he leads the Swartz Center towards its goal of observing and modeling how functional activities in multiple brain areas interact dynamically to support human awareness, interaction and creativity.

Dr. Makeig and his colleagues have pioneered brain imaging analysis approaches including time-frequency analysis, independent component analysis, and neural network and machine learning methods applied to EEG, MEG, and other imaging modalities. Additionally, Dr. Makeig was the originator, and remains the PI and co-developer with Arnaud Delorme, of the widely used EEGLAB signal processing environment for MATLAB.

One example of his groundbreaking work is the publication entitled “Dynamic Brain Sources of Visual Evoked Responses,” published in the journal Science. In this paper, Dr. Makeig and colleagues describe a series of experiments that demonstrate a novel interpretation of signals collected via EEG.

In the experiment, electrical signals were recorded from the scalp of 15 adult subjects, who were asked to perform a simple task, such as focusing their attention on a particular image on a screen. Such EEG recordings, known as event-related potentials (ERPs), demonstrate a stereotyped response pattern each time the subject performs the task. But the underlying mechanism for the response had been the subject of extensive debate. The traditional model held that the ERP reflects discrete neural activity, elicited with each “event,” within functionally defined brain regions.

Groundbreakingly, the authors demonstrated that this was not the case. Rather, using a technique that was relatively novel at the time – Independent Component Analysis – they showed that the ERP was actually a combination of many different independent signals being reset in phase by the event. This “phase resetting” was able to account for many of the characteristics of ERPs that had puzzled scientists, and it provided a whole new approach to the analysis of EEG data.
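The phase-resetting idea is easy to demonstrate with a toy simulation (my own sketch, not the paper's analysis): ongoing oscillations with random phase average out to nearly zero across trials, but if the event resets every trial to the same phase, an ERP-like waveform emerges from the average, with no added "evoked" signal at all.

```python
# Toy phase-resetting demo: 200 trials of a 10 Hz "alpha" oscillation,
# with either random phase per trial or phase reset to zero at the event.
import numpy as np

rng = np.random.default_rng(0)
fs, f = 500, 10.0                       # sampling rate (Hz), oscillation frequency (Hz)
t = np.arange(0, 1.0, 1.0 / fs)         # 1 s of post-event time
n_trials = 200

random_phase = np.array([np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                         for _ in range(n_trials)])
reset_phase = np.array([np.sin(2 * np.pi * f * t)   # every trial reset to phase 0
                        for _ in range(n_trials)])

erp_random = random_phase.mean(axis=0)   # random phases cancel in the average
erp_reset = reset_phase.mean(axis=0)     # reset phases survive averaging

print(np.abs(erp_random).max())  # small: no visible "ERP"
print(np.abs(erp_reset).max())   # near 1: an ERP-like waveform appears
```

The point is that an averaged ERP can emerge purely from reorganizing ongoing activity, which is exactly the alternative to the "discrete evoked response" model that the paper argued for.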


The above figure shows the characteristics of the four major contributing component clusters of the ERP. As can be seen in the middle row, each component accounts for part of the total signal observed in an ERP, together accounting for 77% of the variance.

More recently, Dr. Makeig continues to focus on analysis of EEG data, and has begun collaborating with clinical researchers to apply these advances in functional EEG-based imaging to medical research and clinical practice.

Additionally, Dr. Makeig directs a new multi-campus initiative of the University of California – The “UC Musical Experience Research Community Initiative” – to bring together and promote research on music, mind, brain, and body.

Denis Smirnov is a Graduate Student at UCSD working at the Alzheimer’s Disease Research Center with Dr. James Brewer.

Our 3D-SHOT at optically stimulating circuits with cellular resolution

Dr. Hillel Adesnik is an Assistant Professor of Neurobiology at the University of California, Berkeley, whose lab aims to understand how cortical circuits process sensory information and drive behavior. A lot of this work centers on the mouse ‘barrel’ cortex, where cortical columns represent individual whiskers on the face, making it a useful circuit for studying sensation and perception. Their approach often combines in vivo behavioral experiments with detailed in vitro analyses of synaptic connectivity and network dynamics.

In this type of research, it is crucial to have precise tools for probing the circuits of interest. The goal of many experiments is to identify neuronal ensembles whose activity is necessary and sufficient to produce specific computations. To do this, many neuroscientists use optogenetics, where photosensitive channels called opsins are inserted into cells and used to excite or inhibit them with light stimulation. It is an increasingly popular technique, but it suffers from several limitations. Dr. Adesnik’s team set out to address some of them with a new tool they developed, described in a recent publication in Nature Communications entitled, “Three-dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT).” Let’s break that down, shall we?

Optogenetics relies on genetically defined cell types to target some neurons and not others. However, many neural computations involve distinct cells that are genetically similar but spatially intermixed. Traditional one-photon optogenetics uses visible light to activate the opsins in these cells, but this light tends to scatter throughout the brain tissue, making you lose some control over which neurons you’re stimulating. Two-photon methods instead use infrared light, significantly improving spatial resolution and depth penetration. The light can actually scan across individual cell bodies in raster or spiral patterns of precise stimulation. The issue here, though, is that the channels themselves deactivate very quickly, so stimulating them with a point-by-point scan makes it hard to build up enough current in the neuron at once to generate reliable action potentials. Opsins with slower deactivation kinetics can help, but they make it difficult to trigger action potentials with precise timing, reducing the temporal resolution of the experiments.
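A back-of-the-envelope model shows why scanning loses the race against opsin deactivation (the numbers here are illustrative, not measured values from the paper): if each stimulation spot adds a burst of conductance that decays exponentially, sequential visits let early contributions fade before the last spot is reached, whereas simultaneous illumination sums them all.

```python
# Toy opsin-kinetics model: 20 spots on a soma, each contributing a unit
# conductance that decays with an assumed tau = 5 ms after its light onset.
# Scanning visits spots one at a time; holography hits them all at t = 0.
import numpy as np

tau = 5.0                      # opsin deactivation time constant (ms), assumed
n_spots, dwell = 20, 1.0       # spots per soma, dwell time per spot (ms), assumed
t = np.arange(0.0, 40.0, 0.01)

def conductance(onsets):
    """Summed conductance over time for a set of light-onset times (ms)."""
    g = np.zeros_like(t)
    for t0 in onsets:
        g += np.where(t >= t0, np.exp(-(t - t0) / tau), 0.0)
    return g

scan = conductance(np.arange(n_spots) * dwell)   # sequential, point-by-point
holo = conductance(np.zeros(n_spots))            # simultaneous, hologram-style

print(scan.max() / holo.max())   # well below 1: scanning never sums fully
```

With these made-up numbers the scanned peak reaches only a fraction of the holographic peak, which is the intuition behind abandoning scanning in favor of sculpted, simultaneous illumination.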

Computer-generated holography (CGH) removes the scanning issue, instead using holograms matched to the size of each neuron’s cell body to deliver flashes that simultaneously activate many opsins. This produces currents with fast kinetics and reliably activates the neurons. However, CGH also has its limits: holograms tend to produce unwanted, out-of-focus stimulation above and below the target. This is where temporal focusing (TF) comes in. Here, the light pulse is decomposed into separate components that constructively interfere only at a specific location, restricting the two-photon response to a thin layer of tissue. This allows for precise activation of neurons in a single 2D plane, but cell bodies are of course distributed in three dimensions. 3D-SHOT overcomes this final challenge as well, enabling the simultaneous activation of many neurons with cellular resolution at multiple depths. This suggests that we can target a custom neuronal ensemble and precisely interrogate only those cells, which would be a major advance in untangling neural circuits and their functions.

The authors test and optimize their tool in various ways, showing that a power correction algorithm allows them to deposit spatially precise and equivalent two-photon excitation in at least 50 axial planes (Figure 1a, b). They go on to demonstrate reliable activation of up to 300-600 neurons in a large volume with minimal off-target activation (Figure 1c). Finally, they use whole-cell recordings to show precise optogenetic activation of cells in mouse brain slices and in vivo. They suggest that combining 3D-SHOT with imaging of neural activity will “enable real-time manipulation of functionally defined neural ensembles with high specificity in both space and time, paving the way for a new class of experiments aimed at understanding the neural code.”

Dr. Adesnik will be presenting mostly unpublished data in his upcoming Seminar Series lecture, which may be starting to do exactly that. To see how 3D-SHOT can be used to interrogate sensory circuits, please join us at his talk on Tuesday, December 12th at 4pm in the CNCB Marilyn G. Farquhar Seminar Room.

Nicole Mlynaryk is a first year Ph.D. student currently rotating in Dr. Stefan Leutgeb’s lab.

Dynamic Modeling of Dendritic Spines

Take a moment to watch this clip:

Against a dark background, a tangle of bulbs vibrates and pulses with light. If you’re anything like me, you played the movie over and over, mesmerized by the dancing nodes. The film is a recording of dendritic spines, the small bumps that line dendrites, the receptive portions of neurons. (Here they’re tagged with a fluorescent protein, so that they can be watched under a microscope.) Dendritic spines form the sites of synapses, where the post-synaptic neuron receives a message from its pre-synaptic partner. Since these protuberances move and remodel themselves in response to experience, they are thought to play an important role in learning and memory.

What are the parts that make up dendritic spines? And how do they allow for growth and movement? These turn out to be questions with hybrid answers, requiring analysis rooted in both biology and engineering. That’s the approach taken by Padmini Rangamani, an Assistant Professor in UCSD’s department of Mechanical and Aerospace Engineering. Her research seeks to understand “the design principles of biological systems.” In correspondence, Rangamani describes her method: she “work(s) with experimentalists of multiple flavors to build models and seek(s) to answer questions that cannot be directly answered by measurements.”

In 2016, Professor Rangamani published a paper in Proceedings of the National Academy of Sciences titled “Paradoxical signaling regulates structural plasticity in dendritic spines.” Spine remodeling can take place even over a short interval (3-5 minutes), and it is in this context that Rangamani and her collaborators sought to characterize the logic underlying spine dynamics. The paper builds a mathematical model on a foundation of observations about the molecular components involved in changes in dendritic volume.

Several of the functional elements of dendritic spines are well known (see Figure 1 below). Notable amongst these are: Ca2+/calmodulin-dependent protein kinase II (CaMKII), a kinase activated by Ca2+ influx into the cell when NMDA receptors are activated (as takes place during learning); the structural proteins actin and myosin; and Rho-family proteins, which regulate structural protein dynamics. Rangamani and her group noted that previous experimental work had established that, within the 3-5 minute interval in question, the expression or activation of different spine-related proteins waxes and wanes in different patterns for different types of molecules.


Figure 1: Interaction of spine constituents

These data could then be described with biexponential functions. One exponent accounted for the time course of activation, while the other accounted for deactivation. Nesting the equations for the molecules together allowed Rangamani to produce a model consistent with the overall transient (again, 3-5 minute) dendritic spine volume dynamics (see Figure 2 below). The model revealed an interesting underlying pattern: paradoxical signaling, in which the same molecule (here CaMKII) drives “both the expansion and contraction of the spine volume by regulating actin dynamics through intermediaries, such as Rho, Cdc42, and other actin-related proteins.” In short, Ca2+ influx into the neuron drives expression of both inhibitors and activators of spine growth. It’s the balance of these two that determines when and to what extent the spine expands and contracts.


Figure 2: Nested biexponential equation describing spine dynamics
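The general shape of such biexponential terms can be sketched as follows (a generic illustration with made-up amplitudes and time constants, not the paper's fitted parameters): each species rises with one time constant and decays with another, so its activity is transient within the 3-5 minute window.

```python
# Generic biexponential time course: activation with tau_a, decay with tau_d.
import numpy as np

def biexponential(t, amp, tau_a, tau_d):
    """amp * (exp(-t/tau_d) - exp(-t/tau_a)); rises then decays when tau_d > tau_a."""
    return amp * (np.exp(-t / tau_d) - np.exp(-t / tau_a))

t = np.linspace(0, 300, 1000)                            # seconds (~5 minutes)
camkii = biexponential(t, 1.0, tau_a=10.0, tau_d=60.0)   # hypothetical: fast, transient
rho = biexponential(t, 0.8, tau_a=30.0, tau_d=150.0)     # hypothetical: slower intermediary

peak_time = t[np.argmax(camkii)]
print(peak_time)   # activity peaks partway through the window, then decays
```

Summing and nesting terms like these for each molecule, with opposing signs for activators and inhibitors of spine growth, is the kind of structure that lets the model capture a volume transient driven by a single upstream signal.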

This model bridges empirical and conceptual work well. Not only is it grounded in experimental data, but it also has the virtue of being able to generate testable hypotheses about spine dynamics (by using silencing RNAs, among other tools, to modulate the components of spines and then test for effects in protein expression and spine shape). This is a case study in how biology is enriched by interaction with other fields. What can engineering tell us about our minds? Plenty, it turns out.

Ben T. is a first-year Ph.D. student at UCSD


Making and breaking habits: the role of endocannabinoid modulation of orbito-striatal activity on habitual action control

Christina Gremel is an Assistant Professor of Psychology at the University of California, San Diego. Her lab is interested in the neural bases of decision-making processes, and how these processes are altered in people with neuropathologies like addiction and obsessive-compulsive disorder (OCD). She is especially interested in the role of cortico-basal ganglia circuits in habitual and goal-directed actions, and how an inability to switch between the two can lead to disordered behavior.

Habitual behavior is necessary for us to be able to perform routine actions quickly and efficiently. However, we also need to be able to shift to more goal-directed behavior as circumstances change. An inability to break a habit and to shift our behavior based on updated information can have devastating consequences, and this inability has been shown to underlie neuropsychiatric conditions involved with disordered decision-making, such as addiction and OCD. Thus, a balance between habitual and goal-oriented behavior is critical for healthy action selection. The Gremel lab is studying the molecular mechanisms underlying this balance (or lack thereof), with the ultimate goal of improving treatments for people with these disorders.

In “Endocannabinoid Modulation of Orbitostriatal Circuits Gates Habit Formation” (Gremel et al., 2016), the authors examine the role of the endocannabinoid system on a specific pathway between the orbitofrontal cortex (OFC) and dorsal striatum (DS), both areas involved in the control of goal-directed behavior. More specifically, they examine the role of cannabinoid type 1 (CB1) receptors within the OFC-DS pathway on the ability to shift from goal-directed to habitual action control.

They accomplish this with an instrumental lever-press task in mice that were trained to press a lever for the same reward (either a pellet or sucrose solution) under two different reinforcement schedules: random ratio (RR), which induces goal-directed behavior, and random interval (RI), which induces habitual behavior. Whichever food reward a mouse doesn’t receive during training is used as a control provided in the home cage. To determine if actions are controlled through habitual or goal-directed processes, they perform a two-day “outcome devaluation procedure”. On the valued day, mice prefeed on the home cage food, which is not associated with lever-pressing. On the devalued day, mice prefeed on the food given for lever-pressing, thereby decreasing their motivation for that reward. After prefeeding each day, non-rewarded lever-pressing is measured. A reduction in lever pressing in the devalued condition (rather than valued condition) indicates greater goal-directed control, whereas no reduction indicates habitual control.

To study the role of the endocannabinoid system in the control of goal-directed behavior, the authors examined the effects of deleting CB1 receptors in the OFC-DS pathway. They accomplished this using a combinatorial viral approach in transgenic mice. CB1flox mice and their wild-type littermates were injected in the DS with the retrograde herpes simplex virus 1 carrying flippase, hEF1a-eYFP-IRES-flp (HSV-1 fp), and in the OFC with AAV8-Ef1a-FD-mCherry-p2A-Cre (AAV fp-Cre), with Cre recombinase expression dependent on the presence of flippase (Figure 5A). This resulted in CB1 deletion in OFC-DS neurons of the CB1flox mice, but not in the controls.


During the outcome devaluation procedure, the control mice had reduced lever-pressing in the RR, but not RI, context, whereas the CB1flox mice had reduced lever-pressing in both RR and RI contexts (Figure 5G). Additionally, while the CB1flox mice had higher lever pressing in the valued state compared to the devalued state in both RR and RI contexts, the control mice only showed higher valued-state lever-pressing in the RR context (Figure 5H). Finally, calculation of devaluation indices showed that control mice increased their devaluation index between the RI and RR contexts, indicating a shift toward more goal-directed control, whereas the CB1flox mice did not show this shift (Figure 5I).
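To make the devaluation index concrete, here is one common way such an index can be computed (a hypothetical sketch with made-up numbers; the exact formula used by Gremel et al. may differ): normalize lever presses in the valued state by total presses, so 0.5 means equal pressing regardless of value (habitual) and values near 1 mean strongly goal-directed responding.

```python
# Sketch of a devaluation index: fraction of lever presses made in the
# valued state. 0.5 = insensitive to devaluation (habitual); near 1 =
# sensitive to devaluation (goal-directed). Formula and counts are
# illustrative assumptions, not taken from the paper.
def devaluation_index(valued_presses, devalued_presses):
    total = valued_presses + devalued_presses
    return valued_presses / total if total else 0.5

# Hypothetical counts for a control mouse under the two schedules:
print(devaluation_index(30, 10))  # RR context: presses collapse when devalued
print(devaluation_index(20, 19))  # RI context: pressing barely changes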

These results suggest that CB1 receptor-mediated inhibition of OFC-DS activity is critical for habitual action control. In other words, when the OFC-DS pathway is silenced, habit takes over. This is important, because it suggests that therapeutic targeting of the endocannabinoid system may be beneficial in treating people suffering from neuropsychiatric disorders involved with decision-making.

Seraphina Solders is a first-year Ph.D. student currently rotating in Dr. John Ravits’ lab.

Algorithms in Nature: How Biology Can Help Computer Science

Dr. Saket Navlakha is an Assistant Professor in the Integrative Biology Laboratory at the Salk Institute for Biological Studies. His research sits at the intersection of computer science, machine learning, and biology, with the aims of both building models of complex biological systems and studying “algorithms in nature” – observing how biological systems solve interesting computational problems.

In his article, “Algorithms in nature: the convergence of systems biology and computational thinking”, he argues that “adopting a ‘computational thinking’ approach to studying biological processes” can both improve our understanding of such processes and improve the design of computational algorithms. Biologists have increasingly made use of sophisticated computational techniques to analyze data and model systems. Likewise, computer scientists have looked to biological systems to inspire the creation of novel algorithms. From image segmentation to graph search, computer scientists have found much success by looking to biological systems – the human brain inspiring artificial neural networks, and ant colonies inspiring graph search algorithms.

Dr. Navlakha notes that there are many shared principles between biological and computational systems which suggest that combining the two may advance research in both directions. First, both types of systems are often distributed – they consist of constitutive parts that interact and make decisions with little central control. Second, both systems are robust to different and noisy environments. Third, both systems are often modular – they reuse certain components in multiple applications. These shared principles suggest that thinking about computer science in terms of biology or vice versa may lead to an increase in understanding of both fields.

Dr. Navlakha has utilized this framework in multiple biological and computational domains. His most recent project looks at the fly olfactory system as a model of a class of algorithms known as locality-sensitive hashing. Locality-sensitive hashing (LSH) is a dimensionality reduction algorithm that preserves the shape of the input space. If two pieces of data are close together in the input space, then a LSH algorithm will hash them to lower-dimensional representations that maintain their proximity. LSH is useful in search algorithms, where you want to reduce the dimensionality of your data, say an image, so that the search takes a shorter time, but you also want to maintain high accuracy of the search results.
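A minimal sketch of conventional LSH by random projection (my own SimHash-style illustration, not code from the paper): each random hyperplane contributes one bit of the hash, and nearby inputs agree on most bits with high probability.

```python
# Random-projection LSH: hash a 50-dimensional vector to 16 bits, one bit
# per random hyperplane (the sign of the projection onto it).
import numpy as np

rng = np.random.default_rng(1)
d, k = 50, 16                                # input dimension, hash length (assumed)
planes = rng.standard_normal((k, d))         # k random hyperplanes

def lsh_hash(x):
    return tuple((planes @ x > 0).astype(int))

x = rng.standard_normal(d)
near = x + 0.01 * rng.standard_normal(d)     # tiny perturbation of x
far = rng.standard_normal(d)                 # unrelated vector

def same_bits(a, b):
    return sum(i == j for i, j in zip(lsh_hash(a), lsh_hash(b)))

print(same_bits(x, near))   # near neighbor: most or all bits agree
print(same_bits(x, far))    # unrelated vector: roughly half the bits agree
```

Because a tiny perturbation rarely flips the sign of a projection, near neighbors land in the same (or nearly the same) hash bucket, which is what makes fast approximate similarity search possible.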

In the above figure, you can see how the fly olfactory system implements LSH. In part A, the fly’s olfactory system has three layers of odor information processing. The first layer has around 50 types of odorant receptor neurons. These neurons project linearly to the projection neuron layer; as a result, each odor is represented by an exponential distribution of firing rates, with the same mean across odors. The information is then transferred to the Kenyon cell layer, which expands the dimensionality to around 2,000 cells. Feedback inhibition in this layer turns off the 95% of cells with the lowest firing rates; the maximally firing 5% of cells constitute the hash for a given odor, represented in part B. Part C shows the differences between conventional LSH algorithms and the fly’s algorithm: while most LSH algorithms reduce the dimensionality of the inputs, the fly’s algorithm expands the dimensionality before reducing it.
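The expand-then-sparsify scheme described above can be sketched as follows (a simplified illustration of the idea, not the paper's implementation): project ~50 inputs to ~2,000 "Kenyon cells" with a sparse random binary matrix, then keep only the top 5% of activations as the hash.

```python
# Fly-style hashing sketch: sparse random expansion followed by a
# winner-take-all step that keeps the top 5% of "Kenyon cells".
import numpy as np

rng = np.random.default_rng(2)
n_orn, n_kc = 50, 2000                                   # layer sizes from the description
proj = (rng.random((n_kc, n_orn)) < 0.1).astype(float)   # sparse 0/1 weights (density assumed)

def fly_hash(odor):
    activity = proj @ odor
    k = int(0.05 * n_kc)                   # the 5% of cells not silenced by feedback
    winners = np.argsort(activity)[-k:]
    tag = np.zeros(n_kc, dtype=bool)
    tag[winners] = True
    return tag

odor = rng.random(n_orn)
similar = odor + 0.05 * rng.random(n_orn)  # a slightly perturbed "similar" odor

def overlap(a, b):
    return (fly_hash(a) & fly_hash(b)).sum()

print(fly_hash(odor).sum())     # always exactly 100 active cells (5% of 2000)
print(overlap(odor, similar))   # similar odors share most of their active cells
```

The hash here is a large but very sparse binary tag rather than a short dense code, and similar inputs end up with heavily overlapping tags, which is the property a similarity search needs.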

When applied to a similarity-search problem, Dr. Navlakha found that the fly’s algorithm actually performs much better than the conventional way of doing LSH. Not only is this useful for computer science, but by looking at the fly olfactory system in terms of a computational approach, we gain insights into how the fly olfactory system is set up to perform sensory processing.

Tim Tadros is a Ph.D. student currently rotating in Dr. Navlakha’s lab.


Could a neuroscientist understand a microprocessor?

With campaigns like the BRAIN Initiative in full force, we are already producing more data than current analytical approaches can manage. So how do we go about analyzing ‘big data’? It is with this goal in mind that this week’s Neuroscience Seminar speaker, Dr. Konrad Kording, seeks to understand the brain.

Dr. Kording is a Penn Integrated Knowledge Professor at the University of Pennsylvania and a Deputy Editor for PLOS Computational Biology. He received his PhD in Physics from the Federal Institute of Technology in Zurich. His lab is a self-described group of ‘data scientists with an interest in understanding the brain’. He focuses on analyzing big data sets and maintaining a healthy skepticism towards the interpretation of results.

A brilliant example of his approach can be found in Could a neuroscientist understand a microprocessor? (Jonas and Kording 2017). This witty paper seeks to analyze the viability and usefulness of current analytical methods in neuroscience. The authors seek to glean insight into how to understand a biological system by examining a technical system with a known ‘ground truth’, a simple microprocessor (Fig 1).


Figure 1. Reconstruction of a simple microprocessor (MOS 6502).


But what does it mean to ‘understand’ a biological system? Is it the ability to fix the system? Or the ability to accurately describe its inputs, transformations, and outputs? Or maybe the ability to describe its characteristics/processes at all levels: a) computationally, b) algorithmically, and c) physically? Kording argues that a true understanding is only achieved when a system can be explained at all levels. So how do we get there?

Innovations in computational approaches are clearly required to make further progress, but it is also necessary to verify that these methods work. Jonas and Kording suggest the use of a known technical system as a test, an idea that stemmed from a critique of modeling in molecular biology, Can a Biologist Fix a Radio? (Lazebnik 2002). Kording used a reconstructed and simulated microprocessor like those used in Atari as a model system. The behavioral inputs were three games – Donkey Kong, Space Invaders, and Pitfall. The behavioral output was the boot-up of the game. The ‘recorded’ data (Fig 2) were then sent through a battery of analysis methods used on real brain data, ranging from connectomics to dimensionality reduction. Here, I outline a few of these methods.