How to paint a detailed portrait of the visual cortex

Neuroscience textbooks present regions of the brain as relatively homogeneous units, each composed of a few discrete cell types: the cortex has pyramidal cells, the cerebellum has Purkinje cells, the dentate gyrus has granule cells, to name a few, and all are sparsely dotted with interneurons. In reality, most regions of the brain are far messier, containing a tangled mix of dozens of cell types varying along numerous spectra, such as morphology and gene expression. If the traits we observe in neurons are continuous rather than discrete, and neurons must be organized along many continua at once, two obvious questions arise: How do we define a cell type, and are current classifications adequate to describe the full range of neurons found in the nervous system?

 

Dr. Hongkui Zeng, the Executive Director of Structured Science at the Allen Institute for Brain Science, has devoted her career to answering these questions. She is primarily interested in how a neuron's genes determine its physiology and its connections with other types of neurons, and she has created many public datasets in pursuit of these answers. In the process, she has also developed high-throughput pipelines and tools relied upon by neuroscientists all over the world, and she has led numerous large-scale projects including the Human Cortex Gene Survey, the Allen Mouse Brain Connectivity Atlas, and the Mouse Cell Types and Connectivity Program.

 

At the beginning of 2016, she took an important step toward answering these questions with her paper "Adult mouse cortical cell taxonomy revealed by single cell transcriptomics," published in Nature Neuroscience. In this paper, her team at the Allen Institute set out to characterize all of the cell types in the primary visual cortex, one of the most well-defined and well-studied regions of the brain. To do this, they ran single-cell RNA sequencing on cells from genetically modified mice in which cells expressing certain genes fluoresce, allowing the team to isolate small numbers of these labeled cells and find underlying patterns of gene expression. By identifying expression patterns exclusive to certain cells, they were able to characterize 49 cell types within the primary visual cortex. Many of these cell types had not previously been described, here or anywhere else in the brain, and in most cases the genes defining these groups had not previously been used as molecular markers. Interestingly, these cell types were not discrete but instead fell into two tiers: a core tier central to each group, and an intermediate tier displaying gene expression partially characteristic of multiple cell types.

 


Figure 1: Initial determination of cell types in primary visual cortex.
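The clustering at the heart of this approach is conceptually simple, even though the actual pipeline in the paper is considerably more elaborate (iterative clustering with validation steps). A minimal sketch of the general idea, using standard Python tooling and toy data rather than the authors' code, might look like the following; every variable name and parameter here is illustrative, not taken from the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# Toy cells x genes count matrix; the real data are single-cell RNA-seq profiles.
expression = rng.poisson(2.0, size=(1600, 5000)).astype(float)

# Log-transform and keep the most variable genes (a common preprocessing step).
log_expr = np.log1p(expression)
variable_genes = np.argsort(log_expr.var(axis=0))[-2000:]
reduced = PCA(n_components=50).fit_transform(log_expr[:, variable_genes])

# Group cells by transcriptomic similarity; 49 clusters mirrors the paper's result,
# but in practice the number of clusters is something the analysis has to discover.
labels = AgglomerativeClustering(n_clusters=49).fit_predict(reduced)
print(np.bincount(labels))  # number of cells assigned to each putative type
```

Roughly speaking, it is a per-cell measure of how cleanly a cell falls into a single cluster that separates "core" cells from "intermediate" cells with partial membership in several clusters.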

 

Using these newly identified markers, her team then used in situ hybridization to locate the cell types within the cortex. Placing each type in its home layer helps constrain its likely function and connectivity. The groups were also linked through their intermediate cells, whose expression resembles more than one core type; this allowed relationships between core types to be traced and the core groups to be assembled into larger meta-groups based on how closely they cluster.

 


Figure 2: Locations and relationships of cell types in primary visual cortex.

Finally, she and her team injected a viral retrograde tracer into either the contralateral primary visual cortex or the ipsilateral thalamus, two regions that primary visual cortex neurons project to. Individual cells projected to one of these targets, but not both. This allowed them to run RNA sequencing on the neurons projecting to each region and classify each group by its projection target, adding another dimension to these groupings.

 

Dr. Zeng's findings provide a valuable new resource and framework for other investigators. In this study, she identified cell types in the primary visual cortex with the greatest granularity recorded thus far, and characterized these groups by differential gene expression, degree of relatedness, location in the cortex, and projection targets. These findings lay the groundwork for similar studies in other regions of the brain, and could one day provide insight into what exactly gives these diverse groups of neurons their unique properties.

 

James Howe is a first-year neuroscience Ph.D. student currently rotating in Dr. Rusty Gage’s laboratory.


A Trick of the Light: Probing Neural Circuits with Advanced Optical Imaging

Imagine standing on Earth and peering across the universe toward a star that is many light-years away. Along the way, your telescope's line of sight passes through the atmosphere, debris, and numerous other obstacles that distort your image and create aberrations. Yet, somehow, we can get surprisingly high-resolution images of celestial objects that are unimaginably far away. This is possible through the power of a technology called adaptive optics. Now come back down to Earth and think instead about using a microscope to visualize a tiny synaptic bouton on a dendrite of a neuron deep inside the brain. Using your microscope, you would again encounter distortions, this time from proteins, lipids, and nucleic acids. Is it possible to use the same technology that we use to visualize stars in far-away galaxies to examine neurons inside the brain? Thanks to the work of Professor Na Ji and others, this and other groundbreaking methods of visualizing the brain are now at our fingertips.

Dr. Na Ji is an associate professor in the Physics and Molecular & Cell Biology departments at UC Berkeley. Her laboratory aims to develop and apply novel imaging methods to understand the brain. Before she joined the UC Berkeley faculty in 2016, she was at the Janelia Research Campus of the Howard Hughes Medical Institute. Among other advances while at Janelia, she pioneered the use of adaptive optics for in vivo fluorescence microscopy, obtaining high-resolution images of neurons at depths previously inaccessible.

Let's take a step back and examine how adaptive optics works. In a recent perspective in Nature Methods, Dr. Ji outlines the physical principles behind the technology. In short, whether looking through a telescope or a microscope, one encounters image aberrations introduced by the medium one is peering through. However, if those aberrations can be characterized, they can be corrected by introducing a compensatory distortion, producing a more faithful image. To characterize the distortion seen by a telescope examining the distant cosmos, astronomers use a reference star, which can be either a real star or an "artificial star" created by projecting a laser beam of known wavelength into the sky. In a biological sample, distortions of microscopic images can be measured using an object such as a fluorescent bead placed in or below the sample. Dr. Ji's group demonstrated the power of this technique in a 2015 study published in Nature Communications, presenting adaptive optical fluorescence microscopy that obtained clear two-photon images of neurons up to 700 µm deep in the brain.


An example of the effect of using adaptive optics on visualizing the dendrites of neurons in the brain. Image from the lab website of Dr. Na Ji https://www.jilab.net/research/.
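The correction itself is conceptually simple: measure the wavefront error introduced by the sample, then program the opposite (conjugate) shape onto a corrective element such as a deformable mirror or spatial light modulator so the two cancel. Below is a toy numerical illustration of that cancellation, not Dr. Ji's actual correction algorithm.

```python
import numpy as np

# Toy illustration of wavefront correction: the corrective element is programmed
# with the negative (conjugate) of the measured phase error, so the two cancel.
x = np.linspace(-1, 1, 256)
xx, yy = np.meshgrid(x, x)

# Pretend this is the phase aberration (in radians) measured with a guide star or bead.
measured_aberration = 1.5 * (xx**2 - yy**2) + 0.8 * xx * yy  # astigmatism-like shape

correction = -measured_aberration        # shape applied to the deformable mirror / SLM
residual = measured_aberration + correction

print("peak residual error (rad):", float(np.abs(residual).max()))  # ~0 for a perfect measurement
```

In practice the measurement is imperfect and sample-dependent, so the correction has to be estimated carefully (and sometimes iteratively), which is where the real engineering effort lies.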

Dr. Ji's research has tackled not only the depth of brain imaging but also the challenge of visualizing the activity of many neurons at once in a live animal. To solve this, the imaging method needs to be not only clear but also fast. In one of her most recent publications, "Video-rate volumetric functional imaging of the brain at synaptic resolution," published in 2017 in Nature Neuroscience, Dr. Ji and coauthors present a system that performs volumetric imaging of the brain at sub-second temporal resolution. To achieve this, they developed a module that can be integrated into a standard two-photon laser-scanning microscope (2PLSM). The module uses an axially elongated beam called a Bessel beam, which preserves lateral resolution while greatly extending the depth of field. Because neurons remain relatively stationary during in vivo imaging, rather than constantly tracking the position of neurons in 3D, they used fast 2D scanning with this extended depth of field to capture activity throughout a volume at a 30 Hz rate. They then showed various applications of this imaging strategy, including studying the responses of different inhibitory interneuron subtypes in the mouse visual cortex to various stimuli. With this technique, they could image the activity of up to 71 different interneurons at once! It will be exciting for the future of neuroscience to see the applications of this and other innovative imaging technologies developed by Dr. Na Ji.


Concept, design and performance of the Bessel module for in vivo volumetric imaging. Adapted from Figure 1 of Lu et al., Nature Neuroscience (2017).

Shannan McClain is a first-year Neuroscience Ph.D. student in the laboratory of Dr. Matthew Banghart.

 

 

Independent Component Analysis of EEG: What is an ERP anyway?

Dr. Scott Makeig is currently the director of the Swartz Center for Computational Neuroscience (SCCN) of the Institute for Neural Computation (INC) at the University of California San Diego (UCSD). With a distinguished scientific career in the analysis and modeling of human cognitive event-related brain dynamics, he leads the Swartz Center towards its goal of observing and modeling how functional activities in multiple brain areas interact dynamically to support human awareness, interaction and creativity.

Dr. Makeig and his colleagues have pioneered brain imaging analysis approaches including time-frequency analysis, independent component analysis, and neural network and machine learning methods applied to EEG, MEG, and other imaging modalities. Additionally, Dr. Makeig was the originator, and remains the PI and co-developer with Arnaud Delorme, of the widely used EEGLAB signal processing environment for MATLAB.

One example of his groundbreaking work is the publication entitled "Dynamic Brain Sources of Visual Evoked Responses," published in the journal Science. In this paper, Dr. Makeig and colleagues describe a series of experiments that demonstrate a novel interpretation of signals collected via EEG.

In the experiment, electrical signals were recorded from the scalps of 15 adult subjects who were asked to perform a simple task, such as focusing their attention on a particular image on a screen. The averaged responses time-locked to such events, known as event-related potentials (ERPs), show a stereotyped pattern each time the subject performs the task, but the mechanism underlying that response had been the subject of extensive debate. The traditional model held that ERPs reflect discrete neural activity within functionally defined brain regions, elicited anew with each "event."

The authors demonstrated that this was not the case. Rather, using a technique that was relatively novel at the time, independent component analysis, they showed that the ERP is actually a combination of many independent ongoing signals whose phase is reset by the event. This "phase resetting" was able to account for many of the characteristics of ERPs that had puzzled scientists, and it provided a whole new approach to the analysis of EEG data.


The above figure shows the characteristics of the four major contributing component clusters of the ERP. As can be seen in the middle row, each component accounts for part of the total signal observed in an ERP, together accounting for 77% of the variance.
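For readers who want a feel for what ICA does mechanically: it treats the multichannel scalp recording as a linear mixture of statistically independent sources and estimates an unmixing matrix. Here is a minimal sketch using scikit-learn's FastICA on toy data; this is a stand-in for illustration only (the paper and EEGLAB use an Infomax-style ICA, and real pipelines add filtering, epoching, and artifact rejection).

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_samples = 5000
t = np.arange(n_samples) / 250.0                      # toy 250 Hz sampling rate

# Two toy "brain sources": a 10 Hz and a 4 Hz oscillation.
sources = np.vstack([np.sin(2 * np.pi * 10 * t),
                     np.sin(2 * np.pi * 4 * t)]).T
mixing = rng.normal(size=(2, 8))                      # project 2 sources onto 8 scalp channels
eeg = sources @ mixing + 0.1 * rng.normal(size=(n_samples, 8))

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(eeg)                   # estimated source time courses
print(components.shape, ica.mixing_.shape)            # scalp projections live in the mixing matrix
```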

More recently, Dr. Makeig continues to focus on analysis of EEG data, and has begun collaborating with clinical researchers to apply these advances in functional EEG-based imaging to medical research and clinical practice.

Additionally, Dr. Makeig directs a new multi-campus initiative of the University of California – The “UC Musical Experience Research Community Initiative” – to bring together and promote research on music, mind, brain, and body.

Denis Smirnov is a Graduate Student at UCSD working at the Alzheimer’s Disease Research Center with Dr. James Brewer.

Our 3D-SHOT at optically stimulating circuits with cellular resolution

Dr. Hillel Adesnik is an Assistant Professor of Neurobiology at the University of California, Berkeley, whose lab aims to understand how cortical circuits process sensory information and drive behavior. A lot of this work centers on the mouse ‘barrel’ cortex, where cortical columns represent individual whiskers on the face, making it a useful circuit for studying sensation and perception. Their approach often combines in vivo behavioral experiments with detailed in vitro analyses of synaptic connectivity and network dynamics.

In this type of research, it is crucial to have precise tools for probing the circuits of interest. The goal of many experiments is to identify neuronal ensembles whose activity is necessary and sufficient to produce specific computations. To do this, many neuroscientists use optogenetics, in which photosensitive channels called opsins are expressed in cells and used to excite or inhibit them with light. It is an increasingly popular technique, but it suffers from several limitations. Dr. Adesnik's team set out to address some of them with a new tool they developed, described in a recent Nature Communications publication entitled "Three-dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT)." Let's break that down, shall we?

Optogenetics relies on genetically defined cell types to target some neurons and not others. However, many neural computations involve distinct cells that are genetically similar but spatially intermixed. Traditional one-photon optogenetics uses visible light to activate the opsins in these cells, but this light scatters through brain tissue, reducing control over which neurons are actually being stimulated. Two-photon methods instead use infrared light and significantly improve spatial resolution and depth penetration; the light can be scanned across individual cell bodies in raster or spiral patterns for precise stimulation. The problem is that the channels themselves deactivate very quickly, so stimulating them with a point-by-point scan makes it hard to build up enough simultaneous current in a neuron to generate reliable action potentials. Opsins with slower deactivation kinetics can help, but they make it difficult to trigger action potentials with precise timing, reducing the temporal resolution of the experiments.

Computer-generated holography (CGH) removes the scanning issue: holograms matched to the size of each neuron's cell body deliver flashes that activate many opsins simultaneously, producing currents with fast kinetics and reliably driving the neurons. However, CGH has its own limits; in practice it tends to produce unwanted stimulation above and below the target plane. This is where temporal focusing (TF) comes in. Here, the light pulse is decomposed into separate components that constructively interfere only at a specific location, restricting the two-photon response to a thin layer of tissue. This allows precise activation of neurons in a single 2D plane, but cell bodies are of course distributed in three dimensions. 3D-SHOT overcomes this final challenge as well, enabling the simultaneous activation of many neurons with cellular resolution at multiple depths. In principle, this means we can target a custom neuronal ensemble and precisely interrogate only those cells, which would be a major advance in untangling neural circuits and their functions.

The authors test and optimize their tool in various ways, showing that a power correction algorithm allows them to deposit spatially precise and equivalent two-photon excitation in at least 50 axial planes (Figure 1a, b). They go on to demonstrate reliable activation of 300-600 neurons in a large volume with minimal off-target activation (Figure 1c). Finally, they use whole-cell recordings to show precise optogenetic activation of cells in mouse brain slices and in vivo. They suggest that combining 3D-SHOT with imaging of neural activity will "enable real-time manipulation of functionally defined neural ensembles with high specificity in both space and time, paving the way for a new class of experiments aimed at understanding the neural code."

Dr. Adesnik will be presenting mostly unpublished data in his upcoming Seminar Series lecture, which may be starting to do exactly that. To see how 3D-SHOT can be used to interrogate sensory circuits, please join us at his talk on Tuesday, December 12th at 4pm in the CNCB Marilyn G. Farquhar Seminar Room.

Nicole Mlynaryk is a first year Ph.D. student currently rotating in Dr. Stefan Leutgeb’s lab.

Dynamic Modeling of Dendritic Spines

Take a moment to watch this clip:

Against a dark background, a tangle of bulbs vibrates and pulses with light. If you’re anything like me, you played the movie over and over, mesmerized by the dancing nodes. The film is a recording of dendritic spines, the small bumps that line dendrites, the receptive portions of neurons. (Here they’re tagged with a fluorescent protein, so that they can be watched under a microscope.) Dendritic spines form the sites of synapses, where the post-synaptic neuron receives a message from its pre-synaptic partner. Since these protuberances move and remodel themselves in response to experience, they are thought to play an important role in learning and memory.

What are the parts that make up dendritic spines? And how do they allow for growth and movement? These turn out to be questions with hybrid answers, requiring analysis rooted in both biology and engineering. That’s the approach taken by Padmini Rangamani, an Assistant Professor in UCSD’s department of Mechanical and Aerospace Engineering. Her research seeks to understand “the design principles of biological systems.” In correspondence, Rangamani describes her method: she “work(s) with experimentalists of multiple flavors to build models and seek(s) to answer questions that cannot be directly answered by measurements.”

In 2016, Professor Rangamani published a paper in Proceedings of the National Academy of Sciences titled "Paradoxical signaling regulates structural plasticity in dendritic spines." Spine remodeling can take place even over a short interval (3-5 minutes), and it is in this context that Rangamani and her collaborators sought to characterize the logic underlying spine dynamics. The paper builds a mathematical model on a foundation of observations about the molecular components involved in changes in spine volume.

Several of the functional elements of dendritic spines are well known (see Figure 1 below). Notable among these are calmodulin-dependent protein kinase II (CaMKII), a kinase activated by Ca2+ influx into the cell when NMDA receptors are activated (as takes place during learning); actin and myosin, structural proteins; and Rho, a family of proteins involved in regulating the dynamics of those structural proteins. Rangamani and her group noted that previous experimental work had established that, within the 3-5 minute interval in question, the expression or activation of different spine-related proteins waxes and wanes in different patterns for different types of molecules.


Figure 1: Interaction of spine constituents

These data could then be described with biexponential functions, with one exponential term accounting for activation and the other for deactivation. Nesting the equations for the molecules together allowed Rangamani to produce a model consistent with the overall transient (again, 3-5 minute) dendritic spine volume dynamics (see Figure 2 below). The model revealed an interesting underlying pattern: paradoxical signaling, in which the same molecule (here CaMKII) drives "both the expansion and contraction of the spine volume by regulating actin dynamics through intermediaries, such as Rho, Cdc42, and other actin-related proteins." In short, Ca2+ influx into the neuron drives the expression of both inhibitors and activators of spine growth, and it is the balance of the two that determines when and to what extent the spine expands and contracts.


Figure 2: Nested biexponential equation describing spine dynamics
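In outline, each signaling species is described by a difference of two exponentials, one term governing activation and the other deactivation, and spine volume follows the running balance between an activator and an inhibitor driven by the same upstream Ca2+/CaMKII signal. The sketch below illustrates that structure with made-up parameters; it is not the paper's fitted model.

```python
import numpy as np

def biexponential(t, amplitude, tau_on, tau_off):
    """Rises with time constant tau_on and decays with time constant tau_off."""
    return amplitude * (np.exp(-t / tau_off) - np.exp(-t / tau_on))

t = np.linspace(0, 300, 1000)                         # seconds: roughly the 3-5 minute window
dt = t[1] - t[0]

# The same upstream Ca2+/CaMKII signal drives both an activator and an inhibitor of
# spine growth, with different (made-up) amplitudes and time constants.
activator = biexponential(t, amplitude=1.0, tau_on=10.0, tau_off=60.0)
inhibitor = biexponential(t, amplitude=0.7, tau_on=30.0, tau_off=120.0)

# Spine volume change follows the running balance of the two opposing influences:
# it grows while the activator dominates and relaxes as the inhibitor catches up.
volume_change = np.cumsum(activator - inhibitor) * dt
print("peak relative volume change:", float(volume_change.max()))
```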

This model bridges empirical and conceptual work well. Not only is it grounded in experimental data, but it can also generate testable hypotheses about spine dynamics (by using silencing RNAs, among other tools, to modulate the components of spines and then testing for effects on protein expression and spine shape). This is a case study in how biology is enriched by interaction with other fields. What can engineering tell us about our minds? Plenty, it turns out.

Ben T. is a first-year Ph.D. student at UCSD

 

Making and breaking habits: the role of endocannabinoid modulation of orbito-striatal activity in habitual action control

Christina Gremel is an Assistant Professor of Psychology at the University of California, San Diego. Her lab is interested in the neural bases of decision-making processes, and in how these processes are altered in people with neuropsychiatric disorders such as addiction and obsessive-compulsive disorder (OCD). She is especially interested in the role of cortico-basal ganglia circuits in habitual and goal-directed actions, and in how an inability to switch between the two can lead to disordered behavior.

Habitual behavior is necessary for us to perform routine actions quickly and efficiently. However, we also need to be able to shift to more goal-directed behavior as circumstances change. An inability to break a habit and shift our behavior based on updated information can have devastating consequences, and this inability has been shown to underlie neuropsychiatric conditions involving disordered decision-making, such as addiction and OCD. Thus, a balance between habitual and goal-directed behavior is critical for healthy action selection. The Gremel lab is studying the molecular mechanisms underlying this balance (or lack thereof), with the ultimate goal of improving treatments for people with these disorders.

In "Endocannabinoid Modulation of Orbitostriatal Circuits Gates Habit Formation" (Gremel et al., 2016), the authors examine the role of the endocannabinoid system in a specific pathway between the orbitofrontal cortex (OFC) and dorsal striatum (DS), both areas involved in the control of goal-directed behavior. More specifically, they examine the role of cannabinoid type 1 (CB1) receptors within the OFC-DS pathway in the ability to shift from goal-directed to habitual action control.

They accomplish this with an instrumental lever-press task in which mice are trained to press a lever for the same reward (either a food pellet or sucrose solution) under two different reinforcement schedules: random ratio (RR), which induces goal-directed behavior, and random interval (RI), which induces habitual behavior. Whichever food reward a mouse does not receive during training is provided in the home cage as a control. To determine whether actions are controlled by habitual or goal-directed processes, the authors perform a two-day "outcome devaluation procedure." On the valued day, mice are prefed the home-cage food, which is not associated with lever pressing. On the devalued day, mice are prefed the food earned by lever pressing, thereby decreasing their motivation for that reward. After prefeeding each day, non-rewarded lever pressing is measured. A reduction in lever pressing in the devalued condition relative to the valued condition indicates greater goal-directed control, whereas no reduction indicates habitual control.
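The logic of this behavioral readout can be stated in a few lines of code. The sketch below uses hypothetical press counts and an arbitrary cutoff purely for illustration; the paper's actual analysis relies on proper statistics rather than a threshold.

```python
# Toy classification of outcome-devaluation data.
# Goal-directed control: lever pressing drops on the devalued day relative to the valued day.
# Habitual control: pressing is insensitive to devaluation.

def classify_control(valued_presses: float, devalued_presses: float,
                     threshold: float = 0.2) -> str:
    """Call a session 'goal-directed' if devaluation cuts pressing by more than
    `threshold` (a fractional reduction). The 20% cutoff is arbitrary and for
    illustration only -- the paper uses statistics, not a fixed threshold."""
    if valued_presses == 0:
        return "habitual"
    reduction = (valued_presses - devalued_presses) / valued_presses
    return "goal-directed" if reduction > threshold else "habitual"

# Hypothetical press counts in the spirit of the RR vs. RI comparison:
print(classify_control(valued_presses=40, devalued_presses=15))   # -> goal-directed
print(classify_control(valued_presses=38, devalued_presses=35))   # -> habitual
```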

To study the role of the endocannabinoid system in the control of goal-directed behavior, the authors examined the effects of deleting CB1 receptors in the OFC-DS pathway. They accomplished this using a combinatorial viral approach in transgenic mice. CB1flox mice and their wild-type littermates were injected in the DS with the retrograde herpes simplex virus 1 carrying flippase, hEF1a-eYFP-IRES-flp (HSV-1 fp), and in the OFC with AAV8-Ef1a-FD-mCherry-p2A-Cre (AAV fp-Cre), with Cre recombinase expression dependent on the presence of flippase (Figure 5A). This resulted in CB1 deletion in OFC-DS neurons of the CB1flox mice, but not in the controls.


During the outcome devaluation procedure, the control mice had reduced lever-pressing in the RR, but not RI, context, whereas the CB1flox mice had reduced lever-pressing in both RR and RI contexts (Figure 5G). Additionally, while the CB1flox mice had higher lever pressing in the valued state compared to the devalued state in both RR and RI contexts, the control mice only showed higher valued-state lever-pressing in the RR context (Figure 5H). Finally, calculation of devaluation indices showed that control mice increased their devaluation index between the RI and RR contexts, indicating a shift toward more goal-directed control, whereas the CB1flox mice did not show this shift (Figure 5I).

These results suggest that CB1 receptor-mediated inhibition of OFC-DS activity is critical for habitual action control. In other words, when the OFC-DS pathway is silenced, habit takes over. This is important because it suggests that therapeutic targeting of the endocannabinoid system may be beneficial in treating people with neuropsychiatric disorders involving disordered decision-making.

Seraphina Solders is a first-year Ph.D. student currently rotating in Dr. John Ravits’ lab.

Algorithms in Nature: How Biology Can Help Computer Science

Dr. Saket Navlakha is an Assistant Professor in the Integrative Biology Laboratory at the Salk Institute for Biological Studies. His research sits at the intersection of computer science, machine learning, and biology, with the aims of both building models of complex biological systems and studying "algorithms in nature," that is, observing how biological systems solve interesting computational problems.

In his article "Algorithms in nature: the convergence of systems biology and computational thinking," he argues that "adopting a 'computational thinking' approach to studying biological processes" can both improve our understanding of such processes and improve the design of computational algorithms. Biologists have increasingly made use of sophisticated computational techniques to analyze data and model systems. Likewise, computer scientists have looked to biological systems to inspire novel algorithms, and have found much success doing so in domains from image segmentation to graph search: artificial neural networks draw on the human brain, and some graph search algorithms draw on the behavior of ant colonies.

Dr. Navlakha notes that there are many shared principles between biological and computational systems, which suggests that combining the two may advance research in both directions. First, both types of systems are often distributed: they consist of constituent parts that interact and make decisions with little central control. Second, both must be robust to varying and noisy environments. Third, both are often modular, reusing certain components in multiple applications. These shared principles suggest that thinking about computer science in terms of biology, or vice versa, may deepen understanding of both fields.

Dr. Navlakha has applied this framework in multiple biological and computational domains. His most recent project looks at the fly olfactory system as a model of a class of algorithms known as locality-sensitive hashing. Locality-sensitive hashing (LSH) is a dimensionality reduction technique that preserves the structure of the input space: if two pieces of data are close together in the input space, an LSH algorithm will hash them to lower-dimensional representations that remain close together. LSH is useful in similarity search, where you want to reduce the dimensionality of your data, say an image, so that the search runs faster while still maintaining high accuracy of the search results.

In the figure above, you can see how the fly olfactory system implements LSH. In part A, the fly has three layers of odor information processing. The first layer has around 50 types of odorant receptor neurons, whose activity is relayed to the projection neuron layer; as a result of this step, each odor is represented by an exponential distribution of firing rates with the same mean across odor types. The information is then transferred to the Kenyon cell layer, which expands the dimensionality to around 2,000 cells. Feedback inhibition in this layer silences the 95% of cells with the lowest firing rates; the remaining, maximally firing 5% of cells constitute the hash, or "tag," for a given odor, represented in part B. Part C shows the differences between conventional LSH algorithms and the fly's algorithm: while most LSH algorithms reduce the dimensionality of the inputs, the fly's algorithm expands the dimensionality before reducing it.
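Put into code, the fly's "hash function" amounts to a sparse random expansion followed by a winner-take-all step. Here is a minimal sketch based on the description above; the normalization step in the projection-neuron layer is omitted, and none of this is the authors' released code.

```python
import numpy as np

rng = np.random.default_rng(42)
n_orn, n_kenyon, top_fraction = 50, 2000, 0.05        # numbers taken from the description above

# Sparse random expansion: each Kenyon cell samples a random handful of projection neurons.
projection = (rng.random((n_orn, n_kenyon)) < 0.1).astype(float)

def fly_hash(odor_vector: np.ndarray) -> np.ndarray:
    """Return a sparse binary 'tag': the top 5% most active Kenyon cells."""
    activity = odor_vector @ projection                # expand from 50 to 2000 dimensions
    k = int(top_fraction * n_kenyon)                   # winner-take-all: keep the top 5%
    tag = np.zeros(n_kenyon)
    tag[np.argsort(activity)[-k:]] = 1.0
    return tag

# Similar odors should receive overlapping tags; unrelated odors should overlap less.
odor_a = rng.random(n_orn)
odor_b = odor_a + 0.05 * rng.random(n_orn)             # small perturbation of odor_a
odor_c = rng.random(n_orn)                             # unrelated odor
overlap = lambda u, v: int(u @ v)
print(overlap(fly_hash(odor_a), fly_hash(odor_b)),     # expected: large overlap
      overlap(fly_hash(odor_a), fly_hash(odor_c)))     # expected: smaller overlap
```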

When applied to a similarity-search problem, Dr. Navlakha found that the fly's algorithm actually performs much better than conventional LSH. Not only is this useful for computer science, but looking at the fly olfactory system through a computational lens also gives us insight into how that system is set up to perform sensory processing.

Tim Tadros is a Ph.D. student currently rotating in Dr. Navlakha's lab.

 

Could a neuroscientist understand a microprocessor?

With campaigns like the BRAIN Initiative in full force, we are already producing more data than current analytical approaches can manage. So how do we go about analyzing ‘big data’? It is with this goal in mind that this week’s Neuroscience Seminar speaker, Dr. Konrad Kording, seeks to understand the brain.

Dr. Kording is a Penn Integrated Knowledge Professor at the University of Pennsylvania and a Deputy Editor for PLOS Computational Biology. He received his PhD in Physics from the Federal Institute of Technology in Zurich. His lab is a self-described group of ‘data scientists with an interest in understanding the brain’. He focuses on analyzing big data sets and maintaining a healthy skepticism towards the interpretation of results.

A brilliant example of his approach can be found in Could a neuroscientist understand a microprocessor? (Jonas and Kording 2017). This witty paper analyzes the viability and usefulness of current analytical methods in neuroscience. The authors seek to glean insight into how to understand a biological system by examining a technical system with a known 'ground truth': a simple microprocessor (Fig 1).


Figure 1. Reconstruction of a simple microprocessor (MOS 6502).

 

But what does it mean to ‘understand’ a biological system? Is it the ability to fix the system? Or the ability to accurately describe its inputs, transformations, and outputs? Or maybe the ability to describe its characteristics/processes at all levels: a) computationally, b) algorithmically, and c) physically? Kording argues that a true understanding is only achieved when a system can be explained at all levels. So how do we get there?

Innovations in computational approaches are clearly required to make further progress, but it is also necessary to verify that these methods work. Jonas and Kording suggest using a known technical system as a test, an idea that stemmed from a critique of modeling in molecular biology, Can a Biologist Fix a Radio? (Lazebnik 2002). Kording used a reconstructed and simulated microprocessor, like those used in early Atari consoles, as a model system. The behavioral inputs were three games: Donkey Kong, Space Invaders, and Pitfall. The behavioral output was whether each game boots up. The 'recorded' data (Fig 2) were then sent through a battery of analysis methods used on real brain data, ranging from connectomics to dimensionality reduction. Here, I outline a few of these methods.


Figure 2. a) 10 identified transistors and b) their spiking activity.

 

Analysis Method 1: Lesion Studies

If each transistor is removed, one at a time, will the processor still boot the game? The authors find that there are indeed subsets of transistors that make exactly one of the behaviors (games) impossible (Fig 3). A logical conclusion would be that these transistors are responsible for that particular game. What's wrong with this? Transistors are not specific to a behavior; rather, they implement simple logical functions. Moreover, these findings are unlikely to generalize to other behaviors.


Figure 3. Lesioning every single transistor. a) Colored transistors were found to impact only one behavior. b) Breakdown of the impact of transistor lesion by behavioral state.
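In code, the lesion analysis is just an exhaustive knockout loop. The sketch below is hypothetical: the real study re-runs a transistor-level simulation of the processor for each lesion, whereas here a toy stand-in function lets the loop run end to end.

```python
import random

GAMES = ["Donkey Kong", "Space Invaders", "Pitfall"]
N_TRANSISTORS = 3510   # approximate transistor count of the MOS 6502

random.seed(0)
# Toy "ground truth": a random subset of transistors is essential for each game.
essential = {g: set(random.sample(range(N_TRANSISTORS), 400)) for g in GAMES}

def game_boots(lesioned_transistor: int, game: str) -> bool:
    """Toy stand-in for re-running the simulation with one transistor removed."""
    return lesioned_transistor not in essential[game]

def lesion_map() -> dict:
    """For each transistor, record which games fail to boot when it is removed."""
    results = {}
    for t in range(N_TRANSISTORS):
        broken = [g for g in GAMES if not game_boots(t, g)]
        if broken:
            results[t] = broken
    return results

# Transistors that break exactly one game are the ones a naive analysis would label
# "the Donkey Kong transistors" (etc.) -- the over-interpretation the paper warns against.
single_game = [t for t, broken in lesion_map().items() if len(broken) == 1]
print(len(single_game), "transistors break exactly one game in this toy example")
```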

 

Analysis 2: Tuning Curves

How is the activity of each transistor (its spike rate) tuned to behavior (the luminance of the last pixel upon booting)? The authors find that some transistors show strong apparent tuning curves (Fig 4). What's wrong with this? Transistors relate to screen luminance in a highly nonlinear, indirect way; thus, the apparent tuning is not insightful.


Figure 4. Examples of tuning curves for 5 transistors.
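The tuning-curve analysis is equally easy to express: bin the "behavioral" variable (screen luminance) and average each transistor's activity within each bin. The sketch below uses synthetic data, not the paper's recordings, to show how a smooth-looking curve can emerge regardless.

```python
import numpy as np

rng = np.random.default_rng(3)
n_frames = 2000
luminance = rng.random(n_frames)                       # stand-in "behavioral" variable in [0, 1)
# One fake transistor whose activity happens to covary (nonlinearly) with luminance.
spike_rate = np.sin(3 * luminance) ** 2 + 0.2 * rng.normal(size=n_frames)

bins = np.linspace(0.0, 1.0, 11)                       # ten luminance bins
bin_index = np.digitize(luminance, bins) - 1
tuning_curve = np.array([spike_rate[bin_index == b].mean() for b in range(10)])
print(np.round(tuning_curve, 2))
# A smooth-looking "tuning curve" pops out even though the underlying relationship is
# an arbitrary nonlinearity -- which is why apparent tuning proves so little here.
```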

 

Analysis 3: Local Field Potentials

Is the processor modular? Will analyzing average activity of localized regions yield insight about functional organization? Indeed, region-specific rhythmicity is found (Fig 5). What’s wrong with this? It is hard to attribute self-organized criticality to a processor. Moreover, the temporal organization of the LFPs does not contain meaningful information about the processor’s behavior.


Figure 5. Local field potentials in 5 different regions.

 

This is only a subset of the methods the paper uses to test the naïve application of approaches common in neuroscience. The authors find that these standard data analysis techniques produced results surprisingly similar to those found in real brains. However, because the processor's function and structure are fully known, it is clear that the results did not lead to a satisfying understanding.

Kording suggests that using a microprocessor or artificial neural network to test analysis methods prior to using them on the brain could be extremely useful. In my correspondence with him, Kording gave the following additional suggestions for doing 'good' science:

  1. “When doing experiments, evaluate how far findings generalize. How much can you change the experiment until your findings go away?”
  2. “When doing data analysis, be careful about causal interpretations. There are an infinite set of models that predict the same correlations.”
  3. “When developing theories, develop theories that actually solve the problems. I.e. when your theories are linear, they are not very exciting unless they are truly fundamental.”
  4. “Be clear about the exact thing you are studying. Be clear about the question. Be clear about the implicit assumptions you are making. These are usually shared across the field and usually are wrong.”
  5. “Just because lots of people work on something does not necessarily make it more probable that it has a solid logical underpinning.”
  6. “You always believe that somewhere there are the advanced people who really understand what they are doing. Certainly I did. They don’t.”
  7. “Share. Share code. Share data. Share ideas. Share preprints. Share an appreciation for great science. If it is not worth sharing it is not worth doing.”

To hear more about Dr. Kording’s ideas on rethinking underlying assumptions and to learn about why machine learning is a useful skill that should be in every scientists’ toolbox, please attend his talk on Tuesday, October 24, 2017 at 4pm in the CNCB Marilyn G. Farquhar Seminar Room.

Jess Haley is a first-year neuroscience Ph.D. student currently rotating in the Laboratory of Dr. Sreekanth Chalasani.

Unraveling the Circuits of Sleep-Promoting Neurons

Dr. Yang Dan is a Howard Hughes Medical Institute Investigator and Professor of Neurobiology at the University of California, Berkeley, whose lab studies the neural circuits controlling sleep as well as the function of the prefrontal cortex. In May 2017 she published a paper in Nature titled "Identification of preoptic sleep neurons using retrograde labeling and gene profiling."

It was previously known that inhibitory neurons in the sleep-active preoptic area project to the wake-promoting tuberomammillary nucleus in the posterior hypothalamus. However, these neurons are difficult to isolate because they are intertwined with wake-promoting neurons. To solve this problem, the Yang Dan group used a lentivirus expressing channelrhodopsin-2 (for activation) or a light-activated chloride channel (for inhibition) to retrogradely label, and subsequently control, the activity of these neurons. They then showed that these cells are both sleep-active and sleep-promoting, and that they can be identified by four distinct peptide markers that promoted sleep following administration.

They also showed that a separate population of inhibitory neurons in the preoptic area that does not project to the posterior hypothalamus was wake-active rather than sleep-active. This finding underscores the importance of identifying and labeling cells based on both cell type and projection target.


The figure above shows that optogenetic stimulation (left column) of inhibitory GABAergic neurons projecting from the sleep-active preoptic area to the tuberomammillary nucleus of the posterior hypothalamus causes an increase in sleep, whereas inhibition of these neurons causes a decrease in sleep. In a and d, the schematics of optogenetic activation and inhibition are illustrated. In b and e, the electroencephalography (EEG) spectrogram and trace, as well as the electromyography (EMG) trace, are shown across non-REM, REM, and wake states; the area shaded in blue represents the time of stimulation. In c, we see an increase in the percentage of non-REM and REM sleep and a decrease in waking during stimulation of these neurons, whereas in f we see a decrease in the percentage of non-REM sleep and an increase in waking during inhibition of these neurons.

Overall this paper provides insight into the circuits underlying sleep regulation and demonstrates a powerful set of strategies to target spatially intertwined neurons in the brain in order to understand their structure and function.

 

Charlie Dickey is a first-year Neurosciences Ph.D. student working in the lab of Dr. Eric Halgren.

What happens at the encoding of a memory?

Dr. Mark Mayford and his lab have done amazing work illuminating the molecular mechanisms behind learning and memory. Using transgenic mice, they specifically and acutely label cells involved in memory encoding and synaptic plasticity. In this study, the lab begins to explore the role of the neocortex in the encoding of a memory, using a c-fos genetic tagging system, optogenetics, and fear conditioning to observe the specific mechanisms behind memory encoding in the cortex. They show that stimulation of the cortex can induce context-dependent fear-conditioned behavior: when the neural ensembles that were active during fear conditioning are restimulated, freezing behavior can be induced even in a neutral environment. This stimulation appears to activate downstream cells in the amygdala as well.

To begin, the researchers used a fos/tTA-tetO/ChEF-tdTomato bitransgenic system. This allowed them, using a controlled doxycycline-enriched diet, to acutely label cells activated in the retrosplenial cortex (RSC) during fear conditioning and to study those specific ensembles of cells. They confirmed, using electrophysiology and channelrhodopsin expression levels, that activation of these RSC neurons was indeed driven by the optogenetic stimulation. Next, they observed that transgenic mice showed an increase in freezing under optogenetic stimulation in a neutral arena compared to wild-type mice that had received fear conditioning and to other control groups. This begins to show that stimulating the ensemble of cells active during fear conditioning can induce the same behavior even in a different context. Their next experiments try to parse out the different ways these memories may be encoded in the RSC. Mice were tagged in either Box A or Box B and then shocked in Box A. Transgenic mice tagged in Box A and shocked in Box A showed increased freezing relative to the other groups, as shown in the figure below. This indicates that the representation of Box A is stable enough to be linked to the shock during subsequent conditioning in Box A. In the final experiment, they go on to show that this optogenetic stimulation activates ensembles not only in the RSC but also in downstream cells in the amygdala.

 

To learn more about Dr. Mayford's research, please attend his talk on Tuesday, June 5, 2017 at 4pm in the CNCB Marilyn G. Farquhar Seminar Room.

Kevin White is a Ph.D. student currently rotating in Dr. Axel Nimmerjahn's lab in the Biophotonics Center at the Salk Institute.