Making and breaking habits: the role of endocannabinoid modulation of orbito-striatal activity on habitual action control

Christina Gremel is an Assistant Professor of Psychology at the University of California, San Diego. Her lab is interested in the neural bases of decision-making processes, and how these processes are altered in people with neuropathologies like addiction and obsessive-compulsive disorder (OCD). She is especially interested in the role of cortico-basal ganglia circuits in habitual and goal-directed actions, and how an inability to switch between the two can lead to disordered behavior.

Habitual behavior lets us perform routine actions quickly and efficiently. However, we also need to be able to shift to more goal-directed behavior as circumstances change. An inability to break a habit and update our behavior based on new information can have devastating consequences, and is thought to underlie neuropsychiatric conditions characterized by disordered decision-making, such as addiction and OCD. Thus, a balance between habitual and goal-directed behavior is critical for healthy action selection. The Gremel lab is studying the molecular mechanisms underlying this balance (or lack thereof), with the ultimate goal of improving treatments for people with these disorders.

In “Endocannabinoid Modulation of Orbitostriatal Circuits Gates Habit Formation” (Gremel et al., 2016), the authors examine the role of the endocannabinoid system in a specific pathway between the orbitofrontal cortex (OFC) and dorsal striatum (DS), two areas involved in the control of goal-directed behavior. More specifically, they examine the role of cannabinoid type 1 (CB1) receptors within the OFC-DS pathway in the ability to shift from goal-directed to habitual action control.

They accomplish this with an instrumental lever-press task in which mice are trained to press a lever for the same reward (either a food pellet or a sucrose solution) under two different reinforcement schedules: random ratio (RR), which induces goal-directed behavior, and random interval (RI), which induces habitual behavior. Whichever food reward a mouse does not earn by lever-pressing is provided in the home cage as a control. To determine whether actions are controlled by habitual or goal-directed processes, the authors use a two-day “outcome devaluation procedure”. On the valued day, mice are prefed the home-cage food, which is not associated with lever-pressing. On the devalued day, mice are prefed the food earned by lever-pressing, thereby decreasing their motivation for that specific reward. After prefeeding on each day, non-rewarded lever-pressing is measured. A reduction in lever-pressing in the devalued condition relative to the valued condition indicates greater goal-directed control, whereas no reduction indicates habitual control.
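To make the difference between the two schedules concrete, here is a minimal Python sketch of how reward delivery might be programmed under each; the probability and interval values are illustrative placeholders, not the parameters used in the paper.

```python
import random

def rr_rewards(press_times, p=0.1):
    """Random ratio: each lever press independently earns a reward with
    probability p, so pressing more yields proportionally more reward."""
    return [t for t in press_times if random.random() < p]

def ri_rewards(press_times, mean_interval=30.0):
    """Random interval: after each earned reward, the next one becomes
    available ("arms") at a random delay; the first press after arming
    collects it, so pressing faster adds little beyond one press per interval."""
    rewards = []
    armed_at = random.expovariate(1.0 / mean_interval)
    for t in sorted(press_times):
        if t >= armed_at:
            rewards.append(t)
            armed_at = t + random.expovariate(1.0 / mean_interval)
    return rewards
```

The point of the contrast is that under RR the reward rate tracks the press rate, preserving a strong action-outcome relationship, whereas under RI the reward rate is largely set by the clock, which weakens that relationship and biases mice toward habitual responding.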

To study the role of the endocannabinoid system in the control of goal-directed behavior, the authors examined the effects of deleting CB1 receptors in the OFC-DS pathway. They accomplished this using a combinatorial viral approach in transgenic mice. CB1flox mice and their wild-type littermates were injected in the DS with a retrograde herpes simplex virus 1 carrying flippase (hEF1a-eYFP-IRES-flp; HSV-1 fp), and in the OFC with AAV8-Ef1a-FD-mCherry-p2A-Cre (AAV fp-Cre), with Cre recombinase expression dependent on the presence of flippase (Figure 5A). This resulted in CB1 deletion in OFC-DS neurons of the CB1flox mice, but not in the controls.

[Figure 5, Gremel et al., 2016]

During the outcome devaluation procedure, the control mice reduced lever-pressing in the RR, but not the RI, context, whereas the CB1flox mice reduced lever-pressing in both the RR and RI contexts (Figure 5G). Additionally, while the CB1flox mice pressed more in the valued state than in the devalued state in both contexts, the control mice showed this valued-state preference only in the RR context (Figure 5H). Finally, the devaluation indices showed that control mice had a higher index in the RR context than in the RI context, indicating a shift toward more goal-directed control, whereas the CB1flox mice showed no such difference between contexts (Figure 5I).
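The devaluation index is a normalized contrast between pressing on the valued and devalued days. As a rough illustration, a common formulation is sketched below in Python; treat the exact definition as an assumption to be checked against the paper's Methods, and the press counts as made-up numbers.

```python
def devaluation_index(valued_presses, devalued_presses):
    """Normalized difference in lever pressing across the two test days.
    Values near 1 indicate goal-directed control (pressing collapses after
    devaluation); values near 0 indicate habitual control (pressing is
    insensitive to devaluation)."""
    total = valued_presses + devalued_presses
    return (valued_presses - devalued_presses) / total if total else 0.0

# Illustrative numbers only, not data from the paper:
print(devaluation_index(40, 12))  # ~0.54: sensitive to devaluation (goal-directed)
print(devaluation_index(25, 23))  # ~0.04: insensitive to devaluation (habitual)
```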

These results suggest that CB1 receptor-mediated inhibition of OFC-DS activity is critical for habitual action control. In other words, when the OFC-DS pathway is silenced, habit takes over. This is important because it suggests that therapeutic targeting of the endocannabinoid system may be beneficial for treating people suffering from neuropsychiatric disorders that involve impaired decision-making.

Seraphina Solders is a first-year Ph.D. student currently rotating in Dr. John Ravits’ lab.


Algorithms in Nature: How Biology Can Help Computer Science

Dr. Saket Navlakha is an Assistant Professor in the Integrative Biology Laboratory at the Salk Institute for Biological Studies. His research sits at the intersection of computer science, machine learning, and biology, with the aim of both building models of complex biological systems and studying “algorithms in nature”, that is, observing how biological systems solve interesting computational problems.

In his article, “Algorithms in nature: the convergence of systems biology and computational thinking”, he argues that “adopting a ‘computational thinking’ approach to studying biological processes” can both improve our understanding of those processes and improve the design of computational algorithms. Biologists have increasingly made use of sophisticated computational techniques to analyze data and model systems. Likewise, computer scientists have looked to biological systems to inspire novel algorithms, drawing on the human brain for artificial neural networks and on ant colonies for graph-search algorithms, and have found much success in domains ranging from image segmentation to graph search.

Dr. Navlakha notes that biological and computational systems share several principles, which suggests that combining the two may advance research in both directions. First, both types of systems are often distributed: they consist of constituent parts that interact and make decisions with little central control. Second, both are robust to variable and noisy environments. Third, both are often modular, reusing the same components across multiple applications. These shared principles suggest that thinking about computer science in terms of biology, or vice versa, may deepen understanding in both fields.

Dr. Navlakha has applied this framework in multiple biological and computational domains. His most recent project looks at the fly olfactory system as a model of a class of algorithms known as locality-sensitive hashing (LSH). LSH is a dimensionality-reduction technique that preserves the neighborhood structure of the input space: if two pieces of data are close together in the input space, an LSH algorithm will hash them to lower-dimensional representations that remain close together. LSH is useful in similarity search, where you want to reduce the dimensionality of your data, say an image, so that the search runs quickly while still returning accurate results.

[Figure: the fly olfactory circuit as a locality-sensitive hash (panels A–C)]

In the figure above, you can see how the fly olfactory system implements LSH. Part A shows the fly's three layers of odor information processing. The first layer contains around 50 odorant receptor neurons, which project linearly to the projection neuron layer. As a result, each odor is represented by an exponential distribution of firing rates with the same mean across odor types. The information is then transferred to the Kenyon cell layer, which expands the dimensionality to around 2,000 cells. Feedback in this layer turns off the 95% of cells with the lowest firing rates; the maximally firing 5% of cells constitute the hash for a given odor, represented in part B. Part C shows the differences between conventional LSH algorithms and the fly's algorithm: while most LSH algorithms simply reduce the dimensionality of the inputs, the fly's algorithm expands the dimensionality before reducing it.
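These three steps translate fairly directly into code. Below is a minimal Python sketch following the description above: roughly normalize the ~50-dimensional input, expand it to ~2,000 Kenyon cells through a sparse random projection, then keep only the top 5% most active cells. The mean-subtraction step and the exact sparsity of the projection are simplifying assumptions, not the paper's precise model.

```python
import numpy as np

def fly_hash(odor, projection, top_frac=0.05):
    """Fly-inspired hash: center the odor vector (projection-neuron step),
    expand it to the Kenyon-cell layer via a sparse random projection,
    then silence all but the top 5% most active cells."""
    x = odor - odor.mean()                  # crude stand-in for PN normalization
    kc = projection @ x                     # expansion to ~2,000 Kenyon cells
    k = max(1, int(top_frac * kc.size))
    tag = np.zeros(kc.size, dtype=np.uint8)
    tag[np.argsort(kc)[-k:]] = 1            # winner-take-all: keep top 5%
    return tag

rng = np.random.default_rng(1)
n_orn, n_kc = 50, 2000
# Each Kenyon cell samples a small random subset of projection neurons
projection = (rng.random((n_kc, n_orn)) < 0.1).astype(float)

odor = rng.exponential(size=n_orn)
similar = odor + 0.05 * rng.standard_normal(n_orn)
different = rng.exponential(size=n_orn)
print((fly_hash(odor, projection) & fly_hash(similar, projection)).sum())    # high overlap
print((fly_hash(odor, projection) & fly_hash(different, projection)).sum())  # lower overlap
```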

When applied to a similarity-search problem, Dr. Navlakha and his collaborators found that the fly's algorithm performs much better than conventional LSH. Not only is this useful for computer science; looking at the fly olfactory system through a computational lens also gives us insight into how it is organized to perform sensory processing.
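For a concrete sense of what the “conventional” side of that comparison looks like, here is a minimal random-projection LSH (SimHash) sketch; this is a generic textbook variant, assumed here for illustration rather than the exact baseline used in the study.

```python
import numpy as np

def simhash(x, planes):
    """Classic random-projection LSH: project onto k random hyperplanes and
    keep only the signs, yielding a k-bit code. Inputs separated by a small
    angle tend to receive codes that differ in few bits."""
    return (planes @ x > 0).astype(np.uint8)

rng = np.random.default_rng(0)
d, k = 50, 16                          # input dimension, number of hash bits
planes = rng.standard_normal((k, d))   # k random hyperplanes

a = rng.standard_normal(d)
b = a + 0.1 * rng.standard_normal(d)   # a near neighbor of a
c = rng.standard_normal(d)             # an unrelated point

print((simhash(a, planes) != simhash(b, planes)).sum())  # few differing bits
print((simhash(a, planes) != simhash(c, planes)).sum())  # roughly k/2 bits
```

The contrast with the fly sketch above is the direction of the projection: SimHash collapses 50 dimensions down to a handful of bits, whereas the fly expands to thousands of cells and then sparsifies.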

Tim Tadros is a Ph.D. student currently rotating in Dr. Navlakha’s lab.

 

Could a neuroscientist understand a microprocessor?

With campaigns like the BRAIN Initiative in full force, we are already producing more data than current analytical approaches can manage. So how do we go about analyzing ‘big data’? It is with this goal in mind that this week’s Neuroscience Seminar speaker, Dr. Konrad Kording, seeks to understand the brain.

Dr. Kording is a Penn Integrated Knowledge Professor at the University of Pennsylvania and a Deputy Editor for PLOS Computational Biology. He received his PhD in Physics from the Federal Institute of Technology in Zurich. His lab is a self-described group of ‘data scientists with an interest in understanding the brain’. He focuses on analyzing big data sets and maintaining a healthy skepticism towards the interpretation of results.

A brilliant example of his approach can be found in “Could a neuroscientist understand a microprocessor?” (Jonas and Kording, 2017). This witty paper examines the viability and usefulness of current analytical methods in neuroscience. The authors aim to glean insight into how we might understand a biological system by applying those methods to a technical system with a known ‘ground truth’: a simple microprocessor (Fig 1).


Figure 1. Reconstruction of a simple microprocessor (MOS 6502).

 

But what does it mean to ‘understand’ a biological system? Is it the ability to fix the system? Or the ability to accurately describe its inputs, transformations, and outputs? Or maybe the ability to describe its characteristics/processes at all levels: a) computationally, b) algorithmically, and c) physically? Kording argues that a true understanding is only achieved when a system can be explained at all levels. So how do we get there?

Innovations in computational approaches are clearly required to make further progress, but it is also necessary to verify that these methods work. Jonas and Kording suggest using a known technical system as a test bed, an idea that stemmed from a critique of modeling in molecular biology, “Can a Biologist Fix a Radio?” (Lazebnik, 2002). As a model system, they used a reconstructed and simulated microprocessor of the kind found in early Atari consoles. The behavioral inputs were three games: Donkey Kong, Space Invaders, and Pitfall; the behavioral output was the boot-up of each game. The ‘recorded’ data (Fig 2) were then sent through a battery of analysis methods used on real brain data, ranging from connectomics to dimensionality reduction. Here, I outline a few of these methods.
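To give a flavor of what running such recordings through one of these analyses looks like, here is a toy sketch of one of the simpler methods in that battery, dimensionality reduction with PCA, applied to a placeholder matrix standing in for the simulated transistor traces; the data, sizes, and variable names are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Placeholder "recordings": rows are time points, columns are voltages of
# individual transistors in the simulated processor during a game's boot-up.
rng = np.random.default_rng(42)
n_timepoints, n_transistors = 5000, 200
latent = rng.standard_normal((n_timepoints, 5))          # a few hidden drivers
mixing = rng.standard_normal((5, n_transistors))
recordings = latent @ mixing + 0.1 * rng.standard_normal((n_timepoints, n_transistors))

# PCA via the SVD, as commonly done for neural population recordings
centered = recordings - recordings.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
variance_explained = s**2 / np.sum(s**2)
print(np.round(variance_explained[:5], 3))   # a handful of components dominate
```

Such an analysis can produce tidy-looking low-dimensional structure whether or not it says much about how the underlying system actually computes, which speaks to the healthy skepticism about interpretation that motivates the paper.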