<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="ppub">1662-4548</issn>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2012.00170</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Complexity and Competition in Appetitive and Aversive Neural Circuits</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Barberini</surname> <given-names>Crista L.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Morrison</surname> <given-names>Sara E.</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Saez</surname> <given-names>Alex</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Lau</surname> <given-names>Brian</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Salzman</surname> <given-names>C. Daniel</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<xref ref-type="aff" rid="aff5"><sup>5</sup></xref>
<xref ref-type="aff" rid="aff6"><sup>6</sup></xref>
<xref ref-type="aff" rid="aff7"><sup>7</sup></xref>
<xref ref-type="author-notes" rid="fn001">&#x0002A;</xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Neuroscience, Columbia University</institution> <country>New York, NY, USA</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Psychiatry and Behavioral Science, Albert Einstein College of Medicine</institution> <country>Bronx, NY, USA</country></aff>
<aff id="aff3"><sup>3</sup><institution>Centre de Recherche de l&#x02019;Institut du Cerveau et de la Moelle &#x000C9;pini&#x000E8;re</institution> <country>Paris, France</country></aff>
<aff id="aff4"><sup>4</sup><institution>Department of Psychiatry, Columbia University</institution> <country>New York, NY, USA</country></aff>
<aff id="aff5"><sup>5</sup><institution>Kavli Institute for Brain Sciences, Columbia University</institution> <country>New York, NY, USA</country></aff>
<aff id="aff6"><sup>6</sup><institution>W. M. Keck Center on Brain Plasticity and Cognition, Columbia University</institution> <country>New York, NY, USA</country></aff>
<aff id="aff7"><sup>7</sup><institution>New York State Psychiatric Institute</institution> <country>New York, NY, USA</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Philippe N. Tobler, University of Zurich, Switzerland</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Kae Nakamura, Kansai Medical University, Japan; Anton Ilango, National Institutes of Health, USA</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: C. Daniel Salzman, Department of Neuroscience, Columbia University, 1051 Riverside Drive, Unit 87, New York, NY 10032, USA. e-mail: <email>cds2005&#x00040;columbia.edu</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Frontiers in Decision Neuroscience, a specialty of Frontiers in Neuroscience.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>26</day>
<month>11</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<volume>6</volume>
<elocation-id>170</elocation-id>
<history>
<date date-type="received">
<day>30</day>
<month>10</month>
<year>2012</year>
</date>
<date date-type="accepted">
<day>04</day>
<month>11</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2012 Barberini, Morrison, Saez, Lau and Salzman.</copyright-statement>
<copyright-year>2012</copyright-year>
<license license-type="open-access" xlink:href="http://www.frontiersin.org/licenseagreement"><p>This is an open-access article distributed under the terms of the <uri xlink:href="http://creativecommons.org/licenses/by/3.0/">Creative Commons Attribution License</uri>, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.</p></license>
</permissions>
<abstract>
<p>Decision-making often involves using sensory cues to predict possible rewarding or punishing reinforcement outcomes before selecting a course of action. Recent work has revealed complexity in how the brain learns to predict rewards and punishments. Analysis of neural signaling during and after learning in the amygdala and orbitofrontal cortex, two brain areas that process appetitive and aversive stimuli, reveals a dynamic relationship between appetitive and aversive circuits. Specifically, the relationship between signaling in appetitive and aversive circuits in these areas shifts as a function of learning. Furthermore, although appetitive and aversive circuits may often drive opposite behaviors &#x02013; approaching or avoiding reinforcement depending upon its valence &#x02013; these circuits can also drive similar behaviors, such as enhanced arousal or attention; these processes also may influence choice behavior. These data highlight the formidable challenges ahead in dissecting how appetitive and aversive neural circuits interact to produce a complex and nuanced range of behaviors.</p>
</abstract>
<kwd-group>
<kwd>amygdala</kwd>
<kwd>orbitofrontal cortex</kwd>
<kwd>value processing</kwd>
<kwd>reward</kwd>
<kwd>punishment</kwd>
</kwd-group>
<counts>
<fig-count count="9"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="94"/>
<page-count count="13"/>
<word-count count="10229"/>
</counts>
</article-meta>
</front>
<body>
<sec>
<title>The Importance of Learning to Predict Reinforcement for Punishment-Based Decision-Making</title>
<p>The decision-making process &#x02013; arguably one of the most important &#x0201C;executive&#x0201D; functions of the brain &#x02013; can be influenced by many types of information and motivators. Punishment-based decisions constitute an important subcategory that is common to a wide phylogenetic range, from nematodes to rodents to humans. Studies old and new have shown that punishment engages brain systems specialized for processing aversive information (Seymour et al., <xref ref-type="bibr" rid="B84">2007</xref>). Historically, these systems have been studied most frequently in rodents, and this work has revealed many aspects of the neural mechanisms driving behavior elicited by the threat of aversive stimuli (Davis, <xref ref-type="bibr" rid="B19">1992</xref>; LeDoux, <xref ref-type="bibr" rid="B44">2000</xref>). In everyday life, however, decisions typically require integrating information about potential punishments <italic>and</italic> rewards, as well as myriad factors such as external environment and internal drives. This is especially true in primates, as they exhibit particularly complex behavioral repertoires.</p>
<p>Rewards and punishments are reinforcers with opposite valence (positive versus negative), and they often drive behavior in opposite directions &#x02013; e.g., approaching a rewarding stimulus or avoiding a threat. Moreover, punishment-based decisions are often made in a context in which rewards and punishments are both possible consequences of an action; therefore, brain systems processing aversive information must interact with brain systems processing rewards &#x02013; interactions that presumably underlie how punishments and rewards compete to drive behavior and decision-making. Scientists have long appreciated these facts and have often posited that appetitive and aversive systems operate in an &#x0201C;opponent&#x0201D; manner (Konorski, <xref ref-type="bibr" rid="B42">1967</xref>; Solomon and Corbit, <xref ref-type="bibr" rid="B85">1974</xref>; Dickinson and Dearing, <xref ref-type="bibr" rid="B22">1979</xref>; Grossberg, <xref ref-type="bibr" rid="B30">1984</xref>; Daw et al., <xref ref-type="bibr" rid="B20">2002</xref>). However, appetitive and aversive stimuli also have certain common attributes &#x02013; e.g., they are both usually more salient than non-reinforcing stimuli &#x02013; and thus appetitive and aversive systems need not always act in opposition to each other. Rather, stimuli of both valences may mediate a number of processes, such as enhanced arousal or enhanced attention to stimuli predictive of reinforcement (Armony and Dolan, <xref ref-type="bibr" rid="B5">2002</xref>; Anderson, <xref ref-type="bibr" rid="B3">2005</xref>; Lang and Davis, <xref ref-type="bibr" rid="B43">2006</xref>; Phelps et al., <xref ref-type="bibr" rid="B69">2006</xref>; Brosch et al., <xref ref-type="bibr" rid="B12">2008</xref>; Ilango et al., <xref ref-type="bibr" rid="B35">2010</xref>; Pinkham et al., <xref ref-type="bibr" rid="B70">2010</xref>; Anderson et al., <xref ref-type="bibr" rid="B4">2011</xref>).</p>
<p>Punishment-based decisions are generally choices that are based on one or more prior experiences with an aversive outcome. Typically, an organism learns that a sensory cue predicts a possible negative outcome &#x02013; e.g., the taste of spoiled food precedes illness &#x02013; and later must decide what to do to avoid or defend against that outcome. Thus, learning to anticipate negative outcomes is an essential skill for subsequently being able to make optimal decisions in the face of possible punishment. This is also true for rewards: the adaptive response is to acquire the reward, rather than avoid it, but anticipation is critical in both cases.</p>
<p>Because accurately predicting reinforcement &#x02013; whether punishment or reward &#x02013; plays such a vital role in decision-making, our work has focused on understanding the neurophysiological processes whereby the brain comes to predict reinforcement as a result of learning. We have sought to understand where and how signals in the brain represent anticipated positive or negative outcomes, and whether those signals occur at a time and in a manner such that they could be used as input to decision-making processes. We have often referred to these signals as <italic>value</italic> signals. Although our published studies have not characterized these signals during an explicit decision-making task, the tasks we employed do provide measures that appear to co-vary with the amount and type of the reinforcement associated with a stimulus (Paton et al., <xref ref-type="bibr" rid="B67">2006</xref>; Belova et al., <xref ref-type="bibr" rid="B8">2007</xref>, <xref ref-type="bibr" rid="B9">2008</xref>; Salzman et al., <xref ref-type="bibr" rid="B77">2007</xref>; Morrison and Salzman, <xref ref-type="bibr" rid="B57">2009</xref>, <xref ref-type="bibr" rid="B59">2011</xref>; Morrison et al., <xref ref-type="bibr" rid="B56">2011</xref>). We believe that the <italic>value</italic> of anticipated possible outcomes often drives behavior, and the estimation of value may be computed on-line during decision-making by taking into account expected potential reinforcement as well as a variety of internal variables (e.g., hunger or thirst) and external variables (e.g., how difficult a reward would be to acquire; Padoa-Schioppa, <xref ref-type="bibr" rid="B65">2011</xref>). We refer to the circuits that process and generate appetitive and aversive reinforcement predictions as value processing circuits, although in some cases work remains to be done to understand how different internal and external variables impact representations of reinforcement predictions.</p>
<p>Where in the brain does processing about reinforcement predictions occur? Early work indicated that the amygdala, a key structure in the limbic system, plays a central role in processing one of the primary negative emotions, the fear elicited by a stimulus predicting aversive consequences. Seminal fear conditioning studies in rats found that both learning and memory of fearful events required an intact, functional amygdala (Davis, <xref ref-type="bibr" rid="B19">1992</xref>; LeDoux, <xref ref-type="bibr" rid="B44">2000</xref>; Maren and Quirk, <xref ref-type="bibr" rid="B46">2004</xref>). Since then, it has become clear that the purview of the amygdala extends beyond fear to include other emotions, including positive ones (Holland and Gallagher, <xref ref-type="bibr" rid="B33">1999</xref>; Baxter and Murray, <xref ref-type="bibr" rid="B6">2002</xref>; Everitt et al., <xref ref-type="bibr" rid="B23">2003</xref>; Paton et al., <xref ref-type="bibr" rid="B67">2006</xref>; Belova et al., <xref ref-type="bibr" rid="B9">2008</xref>; Morrison and Salzman, <xref ref-type="bibr" rid="B58">2010</xref>; Salzman and Fusi, <xref ref-type="bibr" rid="B76">2010</xref>). These results suggest that the amygdala may carry signals related to the computation of both positive and negative value.</p>
<p>How do amygdala signals come to impact behavior? The amygdala is heavily interconnected with many other areas of the brain, providing an array of anatomical pathways by which it can participate in learning and decision-making. It receives input from multiple sensory modalities (McDonald, <xref ref-type="bibr" rid="B53">1998</xref>; Amaral et al., <xref ref-type="bibr" rid="B1">2003</xref>; Freese and Amaral, <xref ref-type="bibr" rid="B25">2005</xref>), which accords with the amygdala&#x02019;s established role in associative learning; information from predictive sensory cues converges with input about reinforcing outcomes at the single cell level (e.g., Romanski et al., <xref ref-type="bibr" rid="B74">1993</xref>). Furthermore, lesions of the amygdala impair reinforcer devaluation (Baxter and Murray, <xref ref-type="bibr" rid="B6">2002</xref>; Izquierdo and Murray, <xref ref-type="bibr" rid="B37">2007</xref>), indicating that the amygdala plays a role not only in learning reinforcement contingencies, but also in adjusting these representations as the value of associated reinforcement outcomes changes.</p>
<p>Although the amygdala participates in learning stimulus-reinforcement associations that in turn may be utilized and adjusted during decision-making, it does not act alone in these processes. The amygdala has reciprocal connections with orbitofrontal cortex (OFC; McDonald, <xref ref-type="bibr" rid="B52">1991</xref>; Carmichael and Price, <xref ref-type="bibr" rid="B14">1995</xref>; Stefanacci and Amaral, <xref ref-type="bibr" rid="B86">2000</xref>, <xref ref-type="bibr" rid="B87">2002</xref>; Ghashghaei et al., <xref ref-type="bibr" rid="B27">2007</xref>), a cortical area thought to play a central role in value-based decisions (Padoa-Schioppa and Assad, <xref ref-type="bibr" rid="B66">2006</xref>; Wallis, <xref ref-type="bibr" rid="B91">2007</xref>; Padoa-Schioppa, <xref ref-type="bibr" rid="B65">2011</xref>). OFC may be important for implementing executive or cognitive control over behavior, and endowing subjects with the ability to rationally analyze their options, as well as to tune their behavior to what is socially acceptable in the face of emotionally driven impulses (Damasio, <xref ref-type="bibr" rid="B17">1994</xref>; Rolls, <xref ref-type="bibr" rid="B73">1996</xref>; Bechara et al., <xref ref-type="bibr" rid="B7">2000</xref>; Berlin et al., <xref ref-type="bibr" rid="B10">2005</xref>; Ochsner and Gross, <xref ref-type="bibr" rid="B62">2005</xref>). Part of this may be due to the fact that OFC seems to play a role in the simple ability to anticipate aversive stimuli or negative outcomes, as well as positive outcomes (Tremblay and Schultz, <xref ref-type="bibr" rid="B90">2000</xref>; Roberts et al., <xref ref-type="bibr" rid="B71">2004</xref>; Young et al., <xref ref-type="bibr" rid="B94">2010</xref>).</p>
<p>In this paper, we review our efforts to understand the roles of the amygdala and OFC in acquiring representations of reinforcement contingencies. As we reviewed above, these representations may be critical substrates for reward-based and punishment-based decision-making. One of the striking findings in these investigations concerns the differential dynamics of processing that takes place in appetitive and aversive systems in amygdala and OFC. The amygdala appears to have evolved an aversive system that learns changes in reinforcement contingencies more rapidly than its counterpart in OFC; but, for appetitive networks, the time courses of learning in the two brain areas are reversed. Moreover, both single unit and local field potential (LFP) data point to complex interactions between amygdala and OFC that change as a function of learning. Although appetitive and aversive systems have been posited to act in an opponent manner, this complex pattern of interactions suggests that a more nuanced framework may be required to understand the relative contribution of these networks during learning and decision-making. Moreover, behavioral evidence indicates that appetitive and aversive stimuli can have a variety of effects on cognitive processes, some of which may be induced by stimuli of either valence. Altogether, these data suggest that appetitive and aversive systems may act in congruent <italic>and</italic> opponent fashions &#x02013; even at the same time &#x02013; and do not merely compete to determine the most valuable behavioral option during decision-making.</p>
<sec>
<title>Positive and negative cells in the brain</title>
<p>We have focused on trying to understand neural circuits involved in punishment and aversive learning, and how these circuits may differ from and interact with circuits involved in rewards and appetitive learning. When we began our experiments several years ago, only a few studies had examined the neurophysiology of the amygdala in primates (Sanghera et al., <xref ref-type="bibr" rid="B78">1979</xref>; Nishijo et al., <xref ref-type="bibr" rid="B61">1988</xref>, <xref ref-type="bibr" rid="B60">2008</xref>; Rolls, <xref ref-type="bibr" rid="B72">2000</xref>; Sugase-Miyamoto and Richmond, <xref ref-type="bibr" rid="B88">2005</xref>; Wilson and Rolls, <xref ref-type="bibr" rid="B92">2005</xref>). Furthermore, no primate lab had undertaken simultaneous recordings in amygdala and OFC to understand dynamic interactions between the brain structures during learning.</p>
<p>Our experimental approach strove to disambiguate neural responses that might be related to the sensory properties of visual conditioned stimuli (CSs) from responses related to the reinforcement contingencies. To accomplish this, we used a mixed appetitive/aversive reversal learning paradigm. This paradigm combined a conditioning procedure with standard extracellular physiology in rhesus monkeys; we measured the physiological responses of individual neurons to CSs that signaled an impending positive or negative unconditioned stimulus (US). CSs were small fractal patterns, positive outcomes were small aliquots of water, and negative outcomes were brief airpuffs directed at the face (Paton et al., <xref ref-type="bibr" rid="B67">2006</xref>; Belova et al., <xref ref-type="bibr" rid="B8">2007</xref>, <xref ref-type="bibr" rid="B9">2008</xref>; Morrison and Salzman, <xref ref-type="bibr" rid="B57">2009</xref>, <xref ref-type="bibr" rid="B59">2011</xref>; Morrison et al., <xref ref-type="bibr" rid="B56">2011</xref>). In these experiments, one CS was initially paired with reward and another with an aversive US; then, without warning, we reversed the reinforcement contingencies of the CSs. We recorded single neuron responses while monkeys learned the initial CS-US associations and their reversal. One major advantage of this approach was that reinforcements &#x02013; particularly aversive &#x0201C;punishments&#x0201D; &#x02013; were unavoidable, so we were able to unequivocally identify neural activity related to the anticipation of appetitive and aversive reinforcement.</p>
<p>In both the amygdala and OFC, we observed two populations of neurons that fired more for positive or negative outcomes, respectively, which we refer to as positive and negative value-coding cells. The response profiles for these two populations are shown in Figures <xref ref-type="fig" rid="F1">1</xref>A&#x02013;D for OFC and in Figures <xref ref-type="fig" rid="F1">1</xref>E&#x02013;H for the amygdala. Shortly after CS onset, both cell populations systematically fire differentially for CSs paired with positive or negative reinforcement. Reversing the reinforcement contingencies (Figures <xref ref-type="fig" rid="F1">1</xref>C,D,G,H for positive and negative cells, respectively) demonstrates that the differential firing is specifically related to the reinforcement contingencies and not other aspects of the CS, such as specific visual features. Note that after reversal, an image formerly associated with a reward now leads to a punishment, and vice-versa; after only a few trials of exposure to these new contingencies (Paton et al., <xref ref-type="bibr" rid="B67">2006</xref>; Belova et al., <xref ref-type="bibr" rid="B8">2007</xref>; Morrison et al., <xref ref-type="bibr" rid="B56">2011</xref>), the neural response pattern shifts to reflect these changes, such that the response profiles look quite similar before and after reversal.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Value-coding cells in the amygdala and OFC</bold>. The average normalized neural activity (&#x000B1;SEM) as a function of time since CS onset is shown for the population of positive value-coding cells <bold>(A,C,E,G)</bold> and negative value-coding cells <bold>(B,D,F,H)</bold>, in OFC <bold>(A&#x02013;D)</bold> and the amygdala <bold>(E&#x02013;H)</bold>. Responses are shown before <bold>(A,B,E,F)</bold> and after <bold>(C,D,G,H)</bold> reversal of the outcome contingencies associated with each CS. Peristimulus time histograms (PSTHs) were built by counting spikes in 10&#x02009;ms non-overlapping bins, <italic>Z</italic>-scoring each cell&#x02019;s response, averaging across cells, and lastly smoothing with a 10-bin moving average. Blue lines, positive CS trials; red lines, negative CS trials. Vertical dotted line, CS onset. Adapted from Morrison et al. (<xref ref-type="bibr" rid="B56">2011</xref>), Figure 3, with permission.</p></caption>
<graphic xlink:href="fnins-06-00170-g001.tif"/>
</fig>
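The PSTH recipe described in the Figure 1 caption (10&#x02009;ms non-overlapping bins, per-cell Z-scoring, averaging across cells, then a 10-bin moving average) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' analysis code; the function name, analysis window, and input format are our own assumptions.

```python
import numpy as np

def population_psth(spike_times_per_cell, t_start=-0.5, t_stop=1.0,
                    bin_ms=10, smooth_bins=10):
    """Population PSTH: 10-ms bins, per-cell Z-score, cell average,
    10-bin moving-average smoothing (window limits are illustrative).

    spike_times_per_cell: list of 1-D arrays; each array holds spike
    times (seconds, relative to CS onset) pooled over a cell's trials.
    """
    edges = np.arange(t_start, t_stop + 1e-9, bin_ms / 1000.0)
    zscored = []
    for spikes in spike_times_per_cell:
        counts, _ = np.histogram(spikes, bins=edges)
        mu, sd = counts.mean(), counts.std()
        # Z-score each cell's binned response before pooling
        zscored.append((counts - mu) / sd if sd > 0 else counts - mu)
    pop = np.mean(zscored, axis=0)            # average cells together
    kernel = np.ones(smooth_bins) / smooth_bins
    return np.convolve(pop, kernel, mode="same")  # moving-average smooth
```

Z-scoring each cell before averaging keeps a few high-firing neurons from dominating the population trace.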
<p>The encoding of reinforcement contingencies seems to reflect the overall motivational significance, or <italic>value</italic>, of a US associated with a CS, and not other types of information learned during conditioning. Several lines of evidence support this conclusion. First, neither amygdala nor OFC neurons encode motor responses elicited by USs on our task, indicating that neurons do not appear to represent the relationship between a CS and the motor response elicited by USs (Paton et al., <xref ref-type="bibr" rid="B67">2006</xref>; Morrison and Salzman, <xref ref-type="bibr" rid="B57">2009</xref>). Second, both OFC and amygdala neurons generally do not simply represent the relationship between a CS and the sensory qualities of a preferred US. Rather, we found that OFC and amygdala neurons respond in a graded manner to CSs predicting large rewards (LRs), small rewards (SRs), and negative outcomes; this means that a cell that prefers a CS associated with an aversive airpuff also responds differentially to CSs associated with water rewards, and thus encodes information about two types of outcomes. Moreover, since the outcomes include two modalities (taste and touch), it is unlikely that the neural response is primarily driven by a physical quality of one type of outcome, such as the strength or duration of the airpuff (Belova et al., <xref ref-type="bibr" rid="B9">2008</xref>; Morrison and Salzman, <xref ref-type="bibr" rid="B57">2009</xref>).</p>
<p>Third, positive and negative neurons often appear to track value in a consistent manner across the different sensory events in a trial &#x02013; including the fixation point, CS, and US presentations &#x02013; even though those stimuli differ in sensory modality. This has led us to suggest that amygdala and OFC neurons represent the overall value of the animals&#x02019; &#x0201C;state,&#x0201D; or situation (Belova et al., <xref ref-type="bibr" rid="B9">2008</xref>; Morrison and Salzman, <xref ref-type="bibr" rid="B57">2009</xref>, <xref ref-type="bibr" rid="B59">2011</xref>). Finally, in an additional series of experiments that examined the representation of &#x0201C;relative&#x0201D; value in different contexts, amygdala neurons changed their firing rate in accordance with changes in the relative value of a CS, even when the absolute value (i.e., reward size) of the associated US did not change (Schoer et al., <xref ref-type="bibr" rid="B83">2011</xref>). This phenomenon has also been observed in the OFC (Padoa-Schioppa, <xref ref-type="bibr" rid="B64">2009</xref>; Schoer et al., <xref ref-type="bibr" rid="B82">2009</xref>).</p>
<p>In contrast to the signals just described, there are doubtless other signals in the brain that encode the magnitude of single stimulus dimensions &#x02013; e.g., the size or taste of specific rewards. However, these signals would not, in and of themselves, be sufficient to inform choices made between outcomes that were in different modalities.</p>
</sec>
<sec>
<title>Dynamics during learning</title>
<p>The neurons we describe provide a dynamic representation that changes rapidly during learning. Overall, during reversal learning, the change in the neural responses in both amygdala and OFC was on a timescale similar to changes in the monkey&#x02019;s behavior. Behavioral metrics of the monkey&#x02019;s expectation &#x02013; anticipatory licking of the water tube preceding rewards and anticipatory &#x0201C;blinking&#x0201D; before aversive airpuffs &#x02013; reversed within a few trials, indicating that monkeys learned the new associations quite rapidly (Paton et al., <xref ref-type="bibr" rid="B67">2006</xref>; Morrison et al., <xref ref-type="bibr" rid="B56">2011</xref>). Amygdala and OFC neurons likewise began to change their responses to CSs within a few trials of a reversal in reinforcement contingencies (Paton et al., <xref ref-type="bibr" rid="B67">2006</xref>; Belova et al., <xref ref-type="bibr" rid="B8">2007</xref>; Morrison et al., <xref ref-type="bibr" rid="B56">2011</xref>). This sequence of neural and behavioral changes indicates that the amygdala and OFC could be involved in the monkeys&#x02019; learning of new reinforcement contingencies.</p>
<p>Neuroscientists have long believed that the prefrontal cortex, and OFC in particular, drives reversal learning (Iversen and Mishkin, <xref ref-type="bibr" rid="B36">1970</xref>; Thorpe et al., <xref ref-type="bibr" rid="B89">1983</xref>; O&#x02019;Doherty et al., <xref ref-type="bibr" rid="B63">2001</xref>; Schoenbaum et al., <xref ref-type="bibr" rid="B80">2002</xref>; Chudasama and Robbins, <xref ref-type="bibr" rid="B16">2003</xref>; Fellows and Farah, <xref ref-type="bibr" rid="B24">2003</xref>; Hornak et al., <xref ref-type="bibr" rid="B34">2004</xref>; Izquierdo et al., <xref ref-type="bibr" rid="B38">2004</xref>; Chamberlain et al., <xref ref-type="bibr" rid="B15">2008</xref>; Hampshire et al., <xref ref-type="bibr" rid="B31">2008</xref>; Ghahremani et al., <xref ref-type="bibr" rid="B26">2010</xref>); but some have recently proposed that in fact representations in OFC may update more slowly upon reversal than those elsewhere (Schoenbaum et al., <xref ref-type="bibr" rid="B79">1998</xref>, <xref ref-type="bibr" rid="B81">2003</xref>; Saddoris et al., <xref ref-type="bibr" rid="B75">2005</xref>). Because we recorded amygdala and OFC activity simultaneously, we were able to examine the dynamics of learning in positive and negative value-coding neurons in both amygdala and OFC in order to characterize their relative timing. We found that appetitive and aversive networks in OFC and amygdala exhibited different learning rates, and &#x02013; surprisingly &#x02013; that the direction of the difference depended on the valence preference of the cell populations in question. For positive cells, changes in OFC neural activity after reversal were largely complete many trials earlier than in the amygdala; for negative cells, the opposite was true (Figure <xref ref-type="fig" rid="F2">2</xref>). 
In each case, the faster-changing area completed its transition around the time that behavioral changes began; the other, more slowly changing area did not complete its shift in firing pattern until many trials after the behavioral responses began to change. Thus, signals appropriate for driving behavioral learning are present in both brain structures, with the putative aversive system in the amygdala and appetitive system in OFC being particularly sensitive to changes in reinforcement contingencies. This finding may reflect the preservation across evolution of an aversive system in the amygdala that learns very quickly in order to avoid threats to life and limb.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Comparison of the time courses of learning-related activity in positive and negative value-coding neurons in the amygdala and OFC</bold>. Normalized average contribution of image value to neural activity, derived from ANOVA, plotted as a function of trial number after reversal for positive value-coding neurons <bold>(A)</bold> and negative value-coding neurons <bold>(B)</bold>. Blue lines, OFC; green lines, amygdala; red and cyan arrowheads, mean licking and blinking change points, respectively. Adapted from Morrison et al. (<xref ref-type="bibr" rid="B56">2011</xref>), Figures 5C,D, with permission.</p></caption>
<graphic xlink:href="fnins-06-00170-g002.tif"/>
</fig>
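One common way to quantify the &#x0201C;contribution of image value&#x0201D; that Figure 2 plots is eta-squared from a one-way ANOVA: the fraction of across-trial firing-rate variance explained by the value of the CS. The sketch below is one plausible reading of that analysis, not the published implementation; the function name and the +1/&#x02212;1 label convention are our own.

```python
import numpy as np

def value_contribution(firing_rates, image_value):
    """Eta-squared from a one-way ANOVA: SS_between / SS_total, i.e.,
    the proportion of firing-rate variance across trials accounted for
    by the CS value label (e.g., +1 for positive CS, -1 for negative).

    firing_rates: 1-D array, one firing rate per trial.
    image_value: 1-D array of value labels, one per trial.
    """
    rates = np.asarray(firing_rates, float)
    labels = np.asarray(image_value)
    grand = rates.mean()
    ss_total = ((rates - grand) ** 2).sum()
    # Between-group sum of squares: group sizes times squared deviation
    # of each group mean from the grand mean
    ss_between = sum(
        rates[labels == v].size * (rates[labels == v].mean() - grand) ** 2
        for v in np.unique(labels)
    )
    return ss_between / ss_total if ss_total > 0 else 0.0
```

Computed in a sliding window of trials after reversal, a measure like this rises as a neuron's firing comes to reflect the new contingencies.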
</sec>
<sec>
<title>During versus after learning</title>
<p>Despite the complex pattern of dynamics we observed during learning, once the new CS-US contingencies had been established, we found that <italic>both</italic> populations of OFC cells &#x02013; positive value-coding and negative value-coding &#x02013; predict reinforcement earlier in the trial than their counterparts in the amygdala (Figure <xref ref-type="fig" rid="F3">3</xref>). To demonstrate this, we examined trials after learning had taken place and determined the earliest point in the trial at which each area began to differentiate significantly between images that predicted reward and images that predicted airpuff. For both positive and negative cell populations, OFC predicted reinforcement more rapidly after image onset. Thus, it appears that the relationship between single unit firing in the appetitive and aversive networks in the two brain areas evolves as a function of learning, with the OFC perhaps assuming a more primary role after learning.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>Encoding of image value in OFC and the amygdala</bold>. The contribution of image value as a function of time for positive value-coding cells <bold>(A)</bold> and negative value-coding cells <bold>(B)</bold>. Asterisks, time points at which the average contribution of value is significant (Fisher <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001) for OFC (blue lines) and the amygdala (green lines). Vertical dotted line, CS onset. Adapted from Morrison et al. (<xref ref-type="bibr" rid="B56">2011</xref>), Figures 8E,F, with permission.</p></caption>
<graphic xlink:href="fnins-06-00170-g003.tif"/>
</fig>
<p>We found further evidence of the evolving dynamic relationship between the amygdala and OFC during learning by examining LFP data recorded during the reversal learning task. To do so, we applied Granger causality analysis, which measures the degree to which the past values of one neural signal predict the current values of another (Granger, <xref ref-type="bibr" rid="B28">1969</xref>; Brovelli et al., <xref ref-type="bibr" rid="B13">2004</xref>), to the simultaneously recorded LFPs in the amygdala and OFC. Remarkably, we found significant Granger causality in <italic>both</italic> directions that increased upon CS onset (Wilcoxon, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.01; Figure <xref ref-type="fig" rid="F4">4</xref>A). Notably, during learning, Granger causality was stronger in the amygdala-to-OFC direction, but after learning, it was stronger in the OFC-to-amygdala direction (Figures <xref ref-type="fig" rid="F4">4</xref>B,C). This result is consistent with single unit data showing that, after reversal learning has occurred, the OFC predicts reinforcement with a shorter latency after CS onset. This positions the OFC to drive or modulate amygdala responses to value-laden CSs after learning. (Note, however, that the amygdala continues to influence processing in the OFC, just not as strongly as the reverse.)</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Granger causality between the amygdala and OFC</bold>. <bold>(A)</bold> Average normalized Granger causality (&#x000B1;SEM) for the OFC-to-amygdala direction (blue) and the amygdala-to-OFC direction (green). For each pair of OFC-amygdala LFP recordings, Granger causality was computed for all trials after reversal, then averaged across pairs. Only pairs with significant Granger causality at some point during the trial were included in the average, which combines frequencies from 5 to 100&#x02009;Hz. Asterisks, bins with significantly different causality for the two directions (permutation test, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.05). <bold>(B,C)</bold> Granger causality changes with learning. The difference between the mean Granger causality in the two directions (subtracting amygdala-to-OFC from OFC-to-amygdala) was separately calculated for early (during learning, red line) and late (post-learning, black line) trials after reversal. This comparison is shown for all frequencies 5&#x02013;100&#x02009;Hz as a function of time within the trial <bold>(B)</bold> and for the CS and trace intervals combined as a function of frequency <bold>(C)</bold>. Asterisks, bins where the difference between during-learning and post-learning values was significant (permutation test, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.05). Adapted from Morrison et al. (<xref ref-type="bibr" rid="B56">2011</xref>), Figure 9, with permission.</p></caption>
<graphic xlink:href="fnins-06-00170-g004.tif"/>
</fig>
</sec>
<sec>
<title>Conflict within appetitive and aversive circuits</title>
<p>There is an additional level of complexity within appetitive and aversive circuits that has not received much attention at the physiological level, namely competition and conflict within these circuits. Our learning data suggest that the signals carried by different neural circuits may be updated at different rates in different brain areas, raising the possibility that these systems at times conflict with each other. Another possible example of competition is that between executive areas &#x02013; which allow us to evaluate potential outcomes on a practical and rational level &#x02013; and limbic areas, which are more involved in emotional processing and which might provide a value signal based more heavily on immediate sensory experience and emotion-laden associations. For example, the amygdala and OFC themselves may at times &#x0201C;recommend&#x0201D; different responses, the former mediating more emotionally driven responses and the latter more executive or cognitive behaviors (De Martino et al., <xref ref-type="bibr" rid="B21">2006</xref>).</p>
<p>This phenomenon has been given some attention on the behavioral level (McNeil et al., <xref ref-type="bibr" rid="B54">1982</xref>; Damasio et al., <xref ref-type="bibr" rid="B18">1994</xref>; Kahneman and Tversky, <xref ref-type="bibr" rid="B40">2000</xref>; Loewenstein et al., <xref ref-type="bibr" rid="B45">2001</xref>; Greene and Haidt, <xref ref-type="bibr" rid="B29">2002</xref>), and has also been examined using fMRI in humans (McClure et al., <xref ref-type="bibr" rid="B51">2004</xref>, <xref ref-type="bibr" rid="B50">2007</xref>; De Martino et al., <xref ref-type="bibr" rid="B21">2006</xref>; Kable and Glimcher, <xref ref-type="bibr" rid="B39">2007</xref>). However, few studies have examined appetitive and aversive circuits at the level of single cells during a decision-making task involving rewards and punishments. To best investigate the interactions between appetitive and aversive neural circuits, such a decision-making task should include conditions in which rewards and aversive stimuli must be weighed against each other in order to guide behavior. As a first step, we trained monkeys to perform a simple two-choice task involving rewards and aversive stimuli (described below). We discovered that, even on this simple task, behavioral choices appear to be influenced not only by the value of the reinforcement associated with cues, but also by the salience of cues.</p>
<p>We used a two-choice task in which monkeys selected visual targets by making a saccade to the target of their choice. Monkeys viewed two visual targets on each trial, each of which was a CS associated with a particular outcome. After maintaining fixation during a 900&#x02013;1200&#x02009;ms delay period, monkeys chose one of the two targets by foveating it, followed by delivery of the associated outcome (Figure <xref ref-type="fig" rid="F5">5</xref>A). There were four possible outcomes: a large reward (LR), a small reward (SR), no reinforcement (N), or a punishment (P), where rewards were small amounts of water and punishments were brief airpuffs directed at the face. The four CSs (one for each outcome; Figure <xref ref-type="fig" rid="F5">5</xref>B) were offered in all possible combinations, with the exception of pairs of the same kind. Trial conditions were pseudo-randomly interleaved and counter-balanced for spatial configuration. The list of trial types is shown in Figure <xref ref-type="fig" rid="F5">5</xref>C. New sets of CSs were used in each session. Two independent stimulus sets were used, and trials drawing from the two sets were interleaved in pseudo-random order. In each session, a pair of locations on the monitor was chosen and used for the duration of the session. The locations varied, but each pair always straddled the fixation point. While monkeys were free to choose either target, they had to make a choice: incomplete trials were repeated until one or the other target was chosen.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>Schematic illustration of the design of the two-choice task</bold>. <bold>(A)</bold> Sequence of events in each trial. The monkey begins each trial by foveating a central fixation point (FP, black square), then two visual targets appear, straddling the FP, a delay ensues, the FP goes out, and the monkey makes an eye movement (black dashed line) to one of the two targets to select it. Targets are extinguished, and, after another short delay, the associated outcome (US) is delivered. <bold>(B)</bold> Visual targets (CSs) and associated outcomes (USs). Four targets are used as CSs, each one associated with one of the four possible USs. CSs are random grayscale stick figures (not shown); USs: LR, large reward; SR, small reward; N, neutral; P, punishment. <bold>(C)</bold> Trial types, determined by the outcome of the two CSs offered. CSs were counter-balanced for location.</p></caption>
<graphic xlink:href="fnins-06-00170-g005.tif"/>
</fig>
<p>If monkeys always chose the higher-value target, then the percent of trials on which each CS was chosen, out of all trials on which it was offered, would form a straight line: LR is the higher-value target whenever it is presented, SR on 2/3 of trials, N on 1/3 of trials, and P on no trials, as can be seen in the list of trial conditions (see Figure <xref ref-type="fig" rid="F5">5</xref>C). We will refer to this as &#x0201C;optimal&#x0201D; behavior. Figure <xref ref-type="fig" rid="F6">6</xref> shows two example sessions. In the first, the monkey chose the higher-value target most of the time, such that the plot of the percent of times each target was chosen follows the optimal behavior line quite closely (Figure <xref ref-type="fig" rid="F6">6</xref>A). In the second, however, the same monkey chose the punished target many times &#x02013; about as often as he chose the neutral (non-reinforced) target (Figure <xref ref-type="fig" rid="F6">6</xref>B).</p>
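The &#x0201C;optimal&#x0201D; line follows from simple enumeration of the six trial types. A minimal sketch, assuming only an ordinal value ranking LR &#x0003E; SR &#x0003E; N &#x0003E; P (the numeric values below are arbitrary placeholders):

```python
from itertools import combinations

values = {"LR": 3, "SR": 2, "N": 1, "P": 0}  # arbitrary ordinal values
pairs = list(combinations(values, 2))        # six trial types; no same-kind pairs

offered = dict.fromkeys(values, 0)
chosen = dict.fromkeys(values, 0)
for a, b in pairs:
    offered[a] += 1
    offered[b] += 1
    # An optimal chooser always takes the higher-value target.
    chosen[a if values[a] > values[b] else b] += 1

optimal = {cs: chosen[cs] / offered[cs] for cs in values}
# LR wins all 3 of its pairings, SR 2 of 3, N 1 of 3, P none:
# the straight line from 100% down to 0%.
```

This reproduces the fractions cited in the text: 100%, 2/3, 1/3, and 0 for LR, SR, N, and P, respectively.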
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p><bold>Choice behavior in the two-choice task</bold>. The percent of trials that a CS was chosen when it was offered is shown for each CS. Blue line, monkey&#x02019;s choices; dashed black line, optimal behavior. Choice behavior is shown for two sessions, one where the monkey rarely chose the P target <bold>(A)</bold>, and one where he chose it frequently <bold>(B)</bold>. The two stimulus sets have been combined in this figure.</p></caption>
<graphic xlink:href="fnins-06-00170-g006.tif"/>
</fig>
<p>The deviation from optimal behavior seen in Figure <xref ref-type="fig" rid="F6">6</xref>B is not due to an overall drop in performance, but to a change in behavior on a single trial type: the N-P stimulus pair. In Figure <xref ref-type="fig" rid="F7">7</xref>, a running local average of the proportion of trials on which the monkey chose the higher-value target is shown, broken down by trial type, for the same two sessions shown in Figure <xref ref-type="fig" rid="F6">6</xref>. When offered a choice between a reward and a punishment, the monkey reliably chose the reward (LR-P and SR-P trial types in Figures <xref ref-type="fig" rid="F7">7</xref>A,B). However, when offered a choice between no reinforcement and a punishment, in some sessions, the monkey chose punishment quite often (N-P trial type in Figure <xref ref-type="fig" rid="F7">7</xref>B). These two sessions are representative of the type of choice behavior we observed.</p>
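The running local average underlying Figure 7 is a standard boxcar filter over the per-trial binary choice sequence. A minimal sketch (the 6-trial default matches the figure caption; the function name is hypothetical):

```python
import numpy as np

def running_choice_average(choices, width=6):
    """Boxcar running average of binary choices
    (1 = higher-value target chosen, 0 = lower-value target chosen).
    The output begins at the width-th trial of the given trial type."""
    kernel = np.ones(width) / width
    return np.convolve(np.asarray(choices, dtype=float), kernel, mode="valid")
```

For example, `running_choice_average([1, 1, 1, 0, 0, 0], width=3)` steps down from 1 to 0 in thirds as the window slides over the transition.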
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p><bold>Choice behavior as a function of trial number</bold>. <bold>(A,B)</bold> A running average (6-trial boxcar) is calculated for each trial type (the two stimulus sets are again folded together) as a function of trial number within the session, for the same sessions shown in Figure <xref ref-type="fig" rid="F6">6</xref>. Choice behavior on each trial is scored as 1 (higher-value target chosen) or 0 (lower-value target chosen), and the running average gives the local proportion of higher-value choices. Individual black dots show which outcome was chosen on each trial; dots along the bottom of the figure indicate lower-value choices. Dots are offset slightly from 0 and 1 for clarity. Running average lines start at different trial numbers because they begin on the <italic>n</italic>th trial for that trial type, where <italic>n</italic> is the width of the running average, but are plotted against actual trial number in the session.</p></caption>
<graphic xlink:href="fnins-06-00170-g007.tif"/>
</fig>
<p>This choice pattern was perplexing to us at first. We noticed that in some sessions a monkey avoided the punished target, while in others he chose it over the neutral target a substantial fraction of the time. We checked and manipulated a number of parameters: did monkeys find the punishment aversive? Was it aversive <italic>enough</italic>? Did monkeys understand the CS-US contingencies? What we found, in two monkeys, was abundant evidence that subjects <italic>did</italic> understand the task contingencies, <italic>did</italic> find the airpuff aversive, and yet chose the punished target despite the aversive outcome they knew would follow. Evidence that the airpuff was indeed aversive included visible frustration and displeasure upon airpuff delivery, defensive blinking in anticipation of the airpuff, a significantly greater likelihood of breaking fixation on N-P trials, and a willingness to work that depended clearly on the strength and frequency of airpuff delivery, with increases in either variable quickly leading to the monkey&#x02019;s refusing to work for the rest of the day. None of these behaviors was observed in relation to rewarding outcomes.</p>
<p>Over a period of training lasting several months, these patterns persisted. Figure <xref ref-type="fig" rid="F8">8</xref> shows performance across a series of sessions spanning a few weeks in one monkey. The two example sessions shown in the previous figures are marked with asterisks. In Figure <xref ref-type="fig" rid="F8">8</xref>A, the percent of trials completed is displayed for N-P versus other trial types. On average, the monkey broke fixation before completing the trial more often on N-P trials than on other trial types &#x02013; resulting in a lower percent of trials completed &#x02013; which is indicative of that trial type being aversive, difficult, or both. (Note that the two sessions shown in Figures <xref ref-type="fig" rid="F6">6</xref> and <xref ref-type="fig" rid="F7">7</xref> are not representative of this overall pattern, having a lower-than-average percentage of break-fixation trials.) Figure <xref ref-type="fig" rid="F8">8</xref>B shows the percent of trials on which the monkey chose the N target on N-P trials (dark gray bars, %N of NP) as compared to choosing the P target (light gray bars). It is apparent that %N of NP varied day to day and did not appear to plateau at a stable level, nor was there a trend in either direction as training progressed. Note that selection of the punished target on N-P trials occurred during blocks in which, on all other interleaved trial types, the monkey chose the higher-value target nearly all of the time (Figure <xref ref-type="fig" rid="F8">8</xref>C). The same pattern was seen in other training periods for this monkey, as well as across all training periods in the second monkey.</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p><bold>Choice behavior in the two-choice task over time</bold>. Performance over a training period of weeks for one monkey. <bold>(A)</bold> The percent of trials completed is shown, for each session, for N-P trials and all other trials separately (dark and light gray bars, respectively). <bold>(B)</bold> The percent of N-P trials, for each session, on which the monkey chose N (higher-value CS, dark gray bars) or P (lower-value CS, light gray bars). <bold>(C)</bold> The percent of other trial types, for each session, on which the monkey chose the higher-value target (dark gray bars) or the lower-value target (light gray bars). Asterisks mark the two sessions shown in Figures <xref ref-type="fig" rid="F6">6</xref> and <xref ref-type="fig" rid="F7">7</xref>.</p></caption>
<graphic xlink:href="fnins-06-00170-g008.tif"/>
</fig>
<p>On average, one monkey chose neutral CSs over punished CSs only slightly more than half the time. Figure <xref ref-type="fig" rid="F9">9</xref>A shows the distribution of %N of NP across all training sessions, including the subset shown in Figure <xref ref-type="fig" rid="F8">8</xref>. The mean was 62.2%, significantly greater than 50% (<italic>t</italic>-test, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001). This was over a training period of 5&#x02009;months, and after trying a host of manipulations to ensure that the monkey understood the task and the CS-US contingencies involved. Also, note that on interleaved trials, the monkey chose the higher-value target virtually all the time (Figure <xref ref-type="fig" rid="F9">9</xref>B). In the second monkey, the average %N of NP was very close to 50% and did not differ significantly from it (mean, 50.4%; <italic>t</italic>-test, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.4409), even though that monkey was also trained extensively and exposed to the same set of task manipulations as the first monkey. However, his performance on other trial types was similarly very high (mean, 89.1% higher-value target chosen; mean&#x02009;&#x0003E;&#x02009;50%, <italic>t</italic>-test, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001).</p>
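The session-level statistics quoted here are one-sample t-tests of per-session percentages against the 50% chance level. A minimal sketch, assuming SciPy; the session values below are made-up illustrations, not the recorded data:

```python
from scipy import stats

# Hypothetical per-session %N-of-NP values, for illustration only.
session_pcts = [62.0, 58.5, 65.2, 60.1, 63.4, 61.0, 59.2, 64.8]

# One-sided test: is the mean percentage greater than chance (50%)?
t, p = stats.ttest_1samp(session_pcts, popmean=50.0, alternative="greater")
```

Each session contributes one data point, so the test asks whether the across-session mean reliably exceeds chance rather than pooling trials.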
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p><bold>Distribution of higher-value target choices in two versions of the two-choice task</bold>. Performance of one monkey in the original two-choice task <bold>(A,B)</bold> and the modified two-choice task <bold>(C,D)</bold>. <bold>(A)</bold> Distribution of the percent of N-target choices on N-P trials across all sessions in a 5&#x02009;month training period. Mean, 62.2% (mean&#x02009;&#x0003E;&#x02009;50%, <italic>t</italic>-test, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001). <bold>(B)</bold> Distribution of the percent higher-value choices on non-N-P trial types across the same set of sessions as in <bold>(A)</bold>. Mean, 97.1% (mean&#x02009;&#x0003E;&#x02009;50%, <italic>t</italic>-test, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001). <bold>(C)</bold> Distribution of the percent of SR-target choices on SR&#x02212;[P&#x02009;&#x0002B;&#x02009;SR] trials across 32 sessions. Mean, 84.9%, (mean&#x02009;&#x0003E;&#x02009;50%, <italic>t</italic>-test, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001). <bold>(D)</bold> Distribution of the percent higher-value choices on non-SR&#x02212;[P&#x02009;&#x0002B;&#x02009;SR] trial types across the same set of sessions as in <bold>(C)</bold>. Mean, 98.7%, (mean&#x02009;&#x0003E;&#x02009;50%, <italic>t</italic>-test, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001).</p></caption>
<graphic xlink:href="fnins-06-00170-g009.tif"/>
</fig>
<p>While there are several possible explanations of this counter-intuitive behavior, we favor one that fits with other examples of neural systems in competition. In particular, we believe that on the N-P trial type, the salience and value of the cues were in conflict, and this conflict pulled monkeys toward different choices. This was not true on any of the other trial types, in which the most salient CS on the screen &#x02013; the one predicting the largest available reward &#x02013; was also the most valuable. On N-P trials, however, the N target is more valuable than the P target (presumed zero value versus negative value), but the P target, by virtue of its association with an aversive airpuff, is very likely to be more salient. Thus the P target is chosen some of the time, even though it is not necessarily what monkeys prefer, due to a strong impulse to foveate &#x02013; i.e., look at or orient toward &#x02013; this highly salient, behaviorally relevant stimulus. Further evidence for this idea is that monkeys were much more indecisive on N-P trials than on other trials: this was apparent in the percentage of break-fixation trials (Figure <xref ref-type="fig" rid="F8">8</xref>A), and in the observation that monkeys often looked quickly back and forth between the targets, even though this behavior led to a greater number of incomplete trials. The monkeys did not do this on other trial types. As might be expected for trial types that are more difficult or less certain, monkeys also exhibited greater spatial bias on N-P trials than on other trial types. The differences were modest: 10.0% versus 1.6% bias in the first monkey, and 8.3% versus 2.4% bias in the second, for N-P and other trials, respectively, measured across all sessions. (Bias is the percentage over 50% that a preferred spatial location is chosen; a 10% bias is equivalent to a location being chosen 60% of the time.) While both differences were statistically significant (<italic>t</italic>-test, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001 in both cases), their small magnitude indicates that other factors had a stronger impact on the monkeys&#x02019; choices.</p>
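The bias measure defined in parentheses above can be written as a small helper. This is a hypothetical implementation of the stated definition, not the authors' analysis code:

```python
def spatial_bias(location_choices):
    """Spatial bias: the percentage above 50% with which the preferred
    of two locations is chosen. A 60/40 split is a 10% bias."""
    n = len(location_choices)
    frac_first = location_choices.count(location_choices[0]) / n
    # The preferred location is whichever was chosen more often.
    preferred_frac = max(frac_first, 1.0 - frac_first)
    return 100.0 * preferred_frac - 50.0
```

By construction the measure is 0% for a perfectly balanced chooser and 50% for one that always picks the same side.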
<p>We suspected that the absence of a possible reward on N-P trials was having a major impact on the choice behavior of our monkeys. Therefore, we redesigned the task for the first monkey so that all outcomes included some level of reward, using as our set of possible outcomes LR, SR, and a compound outcome of airpuff and SR (P&#x02009;&#x0002B;&#x02009;SR). For the compound outcome, the punishment was delivered first, followed by a short delay and then the SR. This change resulted in a substantial shift in the monkey&#x02019;s behavior. Within a few training sessions, the monkey learned the new task and began choosing the higher-value target most of the time on all trial types. At the beginning of each session, new CSs were introduced; the monkey learned them within a small number of repetitions and then chose the higher-value target virtually all of the time for the rest of the session. The monkey performed at this level consistently day after day: the average choice %SR on the trial type SR&#x02212;[P&#x02009;&#x0002B;&#x02009;SR] was 84.9% (Figure <xref ref-type="fig" rid="F9">9</xref>C), which was significantly greater than 50% (<italic>t</italic>-test, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001), and variations around this mean were much smaller than they had been in the first version of the task. As before, on all other trial types, which were interleaved, the monkey chose the higher-value target virtually all of the time (Figure <xref ref-type="fig" rid="F9">9</xref>D).</p>
<p>We have here, then, an example of counter-intuitive choice behavior that is robust and occurs when no reward is possible. As we mention above, we suspect that this is due to competition between the neural circuits processing value and salience; we would also speculate that the salience of negative outcomes only grows large enough to compete with the value signals driving behavior when the value of the alternative outcome is small or zero (e.g., when a cue predicts no reinforcement). Clearly, this results in sub-optimal choice behavior. This is consistent with other studies that have noted sub-optimal performance in tasks where monkeys are forced to make a choice between outcomes and the greatest possible reward is very small or zero. For example, Peck et al. (<xref ref-type="bibr" rid="B68">2009</xref>) observed more incorrect choices on &#x0201C;neutral&#x0201D; as opposed to rewarded trials, and Amemori and Graybiel (<xref ref-type="bibr" rid="B2">2012</xref>) observed longer reaction times and more omission errors on a &#x0201C;reward&#x02013;reward&#x0201D; control task when reward size was very low. Moreover, Amemori and Graybiel (<xref ref-type="bibr" rid="B2">2012</xref>) designed their main experimental task to include a SR for any choice because they found it necessary to &#x0201C;maintain motivation to perform the task.&#x0201D; The paradigm employed by Amemori and Graybiel differed from ours in a number of ways, including the use of a joystick movement as the operant response, limiting our ability to make a direct comparison of the behavior observed in the two tasks. On the other hand, our use of an eye movement operant response may have increased the efficacy with which representations of salience modulated behavior.
There is good reason to believe that salience has privileged access to the oculomotor system (Bisley et al., <xref ref-type="bibr" rid="B11">2004</xref>; Hasegawa et al., <xref ref-type="bibr" rid="B32">2004</xref>), especially in the highly visually oriented primate, to promote rapid foveation of salient stimuli.</p>
<p>We suggest that our behavioral results may be an example of competition between limbic and cortical circuits dedicated to emotional versus cognitive processing, respectively. This paradigm, in the macaque, may test the limits of the cognitive control monkeys can exert over reflexive behaviors. While the monkey succeeds in overriding the impulse to look at the punished target some of the time, he does not do so all of the time. Humans, with their greater capacity for cognitive processing and control, would presumably have much less difficulty avoiding the punished target.</p>
</sec>
</sec>
<sec>
<title>Summary and Challenges</title>
<p>To make a decision, we often must predict how particular stimuli or courses of action lead to rewards or punishments. The ability to make these predictions relies on our ability to learn through experience the relationships between stimuli, actions, and positive and negative reinforcement. It is therefore important to understand the representation of aversive and appetitive outcomes in the brain, both during and after learning, in order to understand how these signals generate behavior. At the same time, it is important to recognize that the impact of appetitive and aversive circuits is not limited to behavior specific to the valence of the impending reinforcement. Activation of appetitive and aversive circuits can also elicit valence non-specific responses, such as enhanced arousal or attention.</p>
<p>A number of the studies in our lab have been directed at understanding the nature of appetitive and aversive circuits in the brain. Although there had not been a great deal of prior work examining aversive processing at the physiological level in non-human primates, some older studies suggested that our approach would be fruitful (e.g., Nishijo et al., <xref ref-type="bibr" rid="B61">1988</xref>; Mirenowicz and Schultz, <xref ref-type="bibr" rid="B55">1996</xref>; Rolls, <xref ref-type="bibr" rid="B72">2000</xref>; Yamada et al., <xref ref-type="bibr" rid="B93">2004</xref>). Our neurophysiological studies have expanded on these initial findings to create a more detailed picture of appetitive and aversive circuits. Both the amygdala and OFC contain neurons that belong to each network: positive and negative value-coding neurons are present in both areas, and appear to encode the value of cues that signal imminent appetitive and aversive reinforcers, responding in a graded fashion to the value of CSs as well as USs. The dynamics of learning exhibited by appetitive and aversive networks in the amygdala and OFC are surprisingly complex, with aversive value signals updating faster during reversal learning in the amygdala than in the OFC, and vice versa for appetitive networks (Morrison et al., <xref ref-type="bibr" rid="B56">2011</xref>). This suggests that reversal learning is not driven by one brain area or the other alone. The complexity of these dynamics is also illustrated by the fact that the degree to which each area may influence the other is not fixed, but instead evolves during the learning process (Morrison et al., <xref ref-type="bibr" rid="B56">2011</xref>), and perhaps in other circumstances as well.</p>
<p>In addition to our neurophysiological findings, behavioral data indicate that the interactions between appetitive and aversive systems are complicated. In a paradigm that required monkeys to make decisions based on the value of stimuli, behavior was sub-optimal when monkeys had to choose between a cue associated with nothing and a cue associated with an airpuff. These results indicate that eye movement choice behavior may be influenced not just by the value of stimuli but also by their salience. They demonstrate that competition between appetitive and aversive networks may involve not only the values encoded by the two systems but also the extent to which the systems influence brain structures representing salience, thereby perhaps generating enhanced attention and eye movements to salient targets.</p>
<p>The complexity of interactions between appetitive and aversive circuits is likely to remain an enduring problem for neuroscientists, but headway is being made. Notably, in our studies of the amygdala and OFC, we have failed to find evidence of anatomical segregation of appetitive and aversive networks (Morrison et al., <xref ref-type="bibr" rid="B56">2011</xref>). Rather, appetitive and aversive networks appear to be anatomically intermingled. Anatomical segregation of these systems would make it easier to develop experimental approaches that target manipulations of one system or the other to test their causal role in behavior. Fortunately, some recent studies have begun to identify areas where anatomical segregation exists. Two examples of segregation in aversive systems may be found in the work of Hikosaka and colleagues on the habenula (Matsumoto and Hikosaka, <xref ref-type="bibr" rid="B47">2007</xref>, <xref ref-type="bibr" rid="B48">2008</xref>, <xref ref-type="bibr" rid="B49">2009</xref>) and in that of Graybiel&#x02019;s team on the ACC (Amemori and Graybiel, <xref ref-type="bibr" rid="B2">2012</xref>). The habenula appears to encode negatively valenced stimuli in relation to expectation. The ACC contains neurons belonging to both appetitive and aversive networks, though there appears to be some anatomical segregation of the aversive network. Both areas are likely to be involved in value-driven decision-making and/or learning. In addition, in contrast to our findings in the monkey, anatomical segregation of appetitive and aversive processing has been observed in the OFC in human fMRI studies (Kim et al., <xref ref-type="bibr" rid="B41">2006</xref>). Our recordings focused only on a restricted part of the OFC, largely area 13, and it remains possible that recordings from a more extensive part of the OFC will reveal anatomical segregation of appetitive and aversive systems in the macaque. In general, anatomical segregation may provide more experimentally tractable opportunities for future studies to elucidate how each network operates.</p>
<p>Despite the anatomical segregation of some aspects of these networks, the challenges ahead are formidable. The amygdala and OFC are two structures intimately related to emotional processing, and these structures, among others, likely mediate the executive control of emotion. Moreover, the amygdala, through its extensive connections to sensory cortex, the basal forebrain, and the prefrontal cortex, is poised to influence cognitive processing. The neurophysiological data we have presented illustrate the complexity of interactions between appetitive and aversive networks. Further, the behavioral data presented suggest that conflict between appetitive and aversive networks extends beyond conflicts about value to conflicts between value and salience. Future studies must clarify how these conflicts are resolved in the brain.</p>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>This research was supported by grants from NIMH R01 MH082017, NIMH RC1 MH088458, NIDA R01 DA020656, NEI P30 EY019007, and the James S. McDonnell and Gatsby foundations. Sara E. Morrison received support from an NSF graduate fellowship and from an individual NIMH NRSA F31 MH081620. Brian Lau received support from NIMH institutional training grant T32 MH015144 and the Helen Hay Whitney Foundation. Alex Saez was supported by the Kavli Foundation.</p>
</ack>
<sec>
<title>Authorization for Use of Experimental Animals</title>
<p>All experimental procedures were in accordance with the National Institutes of Health guidelines and were approved by the Institutional Animal Care and Use Committees at New York State Psychiatric Institute and Columbia University.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amaral</surname> <given-names>D. G.</given-names></name> <name><surname>Behniea</surname> <given-names>H.</given-names></name> <name><surname>Kelly</surname> <given-names>J. L.</given-names></name></person-group> (<year>2003</year>). <article-title>Topographic organization of projections from the amygdala to the visual cortex in the macaque monkey</article-title>. <source>Neuroscience</source> <volume>118</volume>, <fpage>1099</fpage>&#x02013;<lpage>1120</lpage>.<pub-id pub-id-type="doi">10.1016/S0306-4522(02)01001-1</pub-id><pub-id pub-id-type="pmid">12732254</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amemori</surname> <given-names>K.</given-names></name> <name><surname>Graybiel</surname> <given-names>A. M.</given-names></name></person-group> (<year>2012</year>). <article-title>Localized microstimulation of primate pregenual cingulate cortex induces negative decision-making</article-title>. <source>Nat. Neurosci.</source> <volume>15</volume>, <fpage>776</fpage>&#x02013;<lpage>785</lpage>.<pub-id pub-id-type="doi">10.1038/nn.3088</pub-id><pub-id pub-id-type="pmid">22484571</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Anderson</surname> <given-names>A. K.</given-names></name></person-group> (<year>2005</year>). <article-title>Affective influences on the attentional dynamics supporting awareness</article-title>. <source>J. Exp. Psychol. Gen.</source> <volume>134</volume>, <fpage>258</fpage>&#x02013;<lpage>281</lpage>.<pub-id pub-id-type="doi">10.1037/0096-3445.134.2.258</pub-id><pub-id pub-id-type="pmid">15869349</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Anderson</surname> <given-names>B. A.</given-names></name> <name><surname>Laurent</surname> <given-names>P. A.</given-names></name> <name><surname>Yantis</surname> <given-names>S.</given-names></name></person-group> (<year>2011</year>). <article-title>Value-driven attentional capture</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A.</source> <volume>108</volume>, <fpage>10367</fpage>&#x02013;<lpage>10371</lpage>.<pub-id pub-id-type="doi">10.1073/pnas.1014885108</pub-id><pub-id pub-id-type="pmid">21646524</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Armony</surname> <given-names>J. L.</given-names></name> <name><surname>Dolan</surname> <given-names>R. J.</given-names></name></person-group> (<year>2002</year>). <article-title>Modulation of spatial attention by fear-conditioned stimuli: an event-related fMRI study</article-title>. <source>Neuropsychologia</source> <volume>40</volume>, <fpage>814</fpage>&#x02013;<lpage>826</lpage>.<pub-id pub-id-type="doi">10.1016/S0028-3932(01)00178-6</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baxter</surname> <given-names>M.</given-names></name> <name><surname>Murray</surname> <given-names>E. A.</given-names></name></person-group> (<year>2002</year>). <article-title>The amygdala and reward</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>3</volume>, <fpage>563</fpage>&#x02013;<lpage>573</lpage>.<pub-id pub-id-type="doi">10.1038/nrn875</pub-id><pub-id pub-id-type="pmid">12094212</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bechara</surname> <given-names>A.</given-names></name> <name><surname>Damasio</surname> <given-names>H.</given-names></name> <name><surname>Damasio</surname> <given-names>A. R.</given-names></name></person-group> (<year>2000</year>). <article-title>Emotion, decision making and the orbitofrontal cortex</article-title>. <source>Cereb. Cortex</source> <volume>10</volume>, <fpage>295</fpage>&#x02013;<lpage>307</lpage>.<pub-id pub-id-type="doi">10.1093/cercor/10.3.295</pub-id><pub-id pub-id-type="pmid">10731224</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Belova</surname> <given-names>M. A.</given-names></name> <name><surname>Paton</surname> <given-names>J. J.</given-names></name> <name><surname>Morrison</surname> <given-names>S. E.</given-names></name> <name><surname>Salzman</surname> <given-names>C. D.</given-names></name></person-group> (<year>2007</year>). <article-title>Expectation modulates neural responses to pleasant and aversive stimuli in primate amygdala</article-title>. <source>Neuron</source> <volume>55</volume>, <fpage>970</fpage>&#x02013;<lpage>984</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuron.2007.08.004</pub-id><pub-id pub-id-type="pmid">17880899</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Belova</surname> <given-names>M. A.</given-names></name> <name><surname>Paton</surname> <given-names>J. J.</given-names></name> <name><surname>Salzman</surname> <given-names>C. D.</given-names></name></person-group> (<year>2008</year>). <article-title>Moment-to-moment tracking of state value in the amygdala</article-title>. <source>J. Neurosci.</source> <volume>28</volume>, <fpage>10023</fpage>&#x02013;<lpage>10030</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.1400-08.2008</pub-id><pub-id pub-id-type="pmid">18829960</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Berlin</surname> <given-names>H. A.</given-names></name> <name><surname>Rolls</surname> <given-names>E. T.</given-names></name> <name><surname>Iversen</surname> <given-names>S. D.</given-names></name></person-group> (<year>2005</year>). <article-title>Borderline personality disorder, impulsivity, and the orbitofrontal cortex</article-title>. <source>Am. J. Psychiatry</source> <volume>162</volume>, <fpage>2360</fpage>&#x02013;<lpage>2373</lpage>.<pub-id pub-id-type="doi">10.1176/appi.ajp.162.12.2360</pub-id><pub-id pub-id-type="pmid">16330602</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bisley</surname> <given-names>J. W.</given-names></name> <name><surname>Krishna</surname> <given-names>B. S.</given-names></name> <name><surname>Goldberg</surname> <given-names>M. E.</given-names></name></person-group> (<year>2004</year>). <article-title>A rapid and precise on-response in posterior parietal cortex</article-title>. <source>J. Neurosci.</source> <volume>24</volume>, <fpage>1833</fpage>&#x02013;<lpage>1838</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.5007-03.2004</pub-id><pub-id pub-id-type="pmid">14985423</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brosch</surname> <given-names>T.</given-names></name> <name><surname>Sander</surname> <given-names>D.</given-names></name> <name><surname>Pourtois</surname> <given-names>G.</given-names></name> <name><surname>Scherer</surname> <given-names>K. R.</given-names></name></person-group> (<year>2008</year>). <article-title>Beyond fear: rapid spatial orienting toward positive emotional stimuli</article-title>. <source>Psychol. Sci.</source> <volume>19</volume>, <fpage>362</fpage>&#x02013;<lpage>370</lpage>.<pub-id pub-id-type="doi">10.1111/j.1467-9280.2008.02094.x</pub-id><pub-id pub-id-type="pmid">18399889</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brovelli</surname> <given-names>A.</given-names></name> <name><surname>Ding</surname> <given-names>M.</given-names></name> <name><surname>Ledberg</surname> <given-names>A.</given-names></name> <name><surname>Chen</surname> <given-names>Y.</given-names></name> <name><surname>Nakamura</surname> <given-names>R.</given-names></name> <name><surname>Bressler</surname> <given-names>S. L.</given-names></name></person-group> (<year>2004</year>). <article-title>Beta oscillations in a large-scale sensorimotor cortical network: directional influences revealed by Granger causality</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A.</source> <volume>101</volume>, <fpage>9849</fpage>&#x02013;<lpage>9854</lpage>.<pub-id pub-id-type="doi">10.1073/pnas.0308538101</pub-id><pub-id pub-id-type="pmid">15210971</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Carmichael</surname> <given-names>S. T.</given-names></name> <name><surname>Price</surname> <given-names>J. L.</given-names></name></person-group> (<year>1995</year>). <article-title>Limbic connections of the orbital and medial prefrontal cortex in macaque monkeys</article-title>. <source>J. Comp. Neurol.</source> <volume>363</volume>, <fpage>615</fpage>&#x02013;<lpage>641</lpage>.<pub-id pub-id-type="doi">10.1002/cne.903630409</pub-id><pub-id pub-id-type="pmid">8847421</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chamberlain</surname> <given-names>S. R.</given-names></name> <name><surname>Menzies</surname> <given-names>L.</given-names></name> <name><surname>Hampshire</surname> <given-names>A.</given-names></name> <name><surname>Suckling</surname> <given-names>J.</given-names></name> <name><surname>Fineberg</surname> <given-names>N. A.</given-names></name> <name><surname>Del Campo</surname> <given-names>N.</given-names></name> <etal/></person-group> (<year>2008</year>). <article-title>Orbitofrontal dysfunction in patients with obsessive-compulsive disorder and their unaffected relatives</article-title>. <source>Science</source> <volume>321</volume>, <fpage>421</fpage>&#x02013;<lpage>422</lpage>.<pub-id pub-id-type="doi">10.1126/science.1154433</pub-id><pub-id pub-id-type="pmid">18635808</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chudasama</surname> <given-names>Y.</given-names></name> <name><surname>Robbins</surname> <given-names>T. W.</given-names></name></person-group> (<year>2003</year>). <article-title>Dissociable contributions of the orbitofrontal and infralimbic cortex to pavlovian autoshaping and discrimination reversal learning: further evidence for the functional heterogeneity of the rodent frontal cortex</article-title>. <source>J. Neurosci.</source> <volume>23</volume>, <fpage>8771</fpage>&#x02013;<lpage>8780</lpage>.<pub-id pub-id-type="pmid">14507977</pub-id></citation></ref>
<ref id="B17"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Damasio</surname> <given-names>A. R.</given-names></name></person-group> (<year>1994</year>). <source>Descartes Error</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>G.P. Putnam</publisher-name>.</citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Damasio</surname> <given-names>H.</given-names></name> <name><surname>Grabowski</surname> <given-names>T.</given-names></name> <name><surname>Frank</surname> <given-names>R.</given-names></name> <name><surname>Galaburda</surname> <given-names>A. M.</given-names></name> <name><surname>Damasio</surname> <given-names>A. R.</given-names></name></person-group> (<year>1994</year>). <article-title>The return of Phineas Gage: clues about the brain from the skull of a famous patient</article-title>. <source>Science</source> <volume>264</volume>, <fpage>1102</fpage>&#x02013;<lpage>1105</lpage>.<pub-id pub-id-type="doi">10.1126/science.8178168</pub-id><pub-id pub-id-type="pmid">8178168</pub-id></citation></ref>
<ref id="B19"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Davis</surname> <given-names>M.</given-names></name></person-group> (<year>1992</year>). <article-title>&#x0201C;The role of the amygdala in conditioned fear,&#x0201D;</article-title> in <source>The Amygdala: Neurological Aspects of Emotion, Memory, and Mental Dysfunction</source>, ed. <person-group person-group-type="editor"><name><surname>Aggleton</surname> <given-names>J.</given-names></name></person-group> (<publisher-loc>Hoboken, NJ</publisher-loc>: <publisher-name>John Wiley &#x00026; Sons</publisher-name>), <fpage>255</fpage>&#x02013;<lpage>306</lpage>.</citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Daw</surname> <given-names>N. D.</given-names></name> <name><surname>Kakade</surname> <given-names>S.</given-names></name> <name><surname>Dayan</surname> <given-names>P.</given-names></name></person-group> (<year>2002</year>). <article-title>Opponent interactions between serotonin and dopamine</article-title>. <source>Neural Netw.</source> <volume>15</volume>, <fpage>603</fpage>&#x02013;<lpage>616</lpage>.<pub-id pub-id-type="doi">10.1016/S0893-6080(02)00052-7</pub-id><pub-id pub-id-type="pmid">12371515</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Martino</surname> <given-names>B.</given-names></name> <name><surname>Kumaran</surname> <given-names>D.</given-names></name> <name><surname>Seymour</surname> <given-names>B.</given-names></name> <name><surname>Dolan</surname> <given-names>R. J.</given-names></name></person-group> (<year>2006</year>). <article-title>Frames, biases, and rational decision-making in the human brain</article-title>. <source>Science</source> <volume>313</volume>, <fpage>684</fpage>&#x02013;<lpage>687</lpage>.<pub-id pub-id-type="doi">10.1126/science.1128356</pub-id><pub-id pub-id-type="pmid">16888142</pub-id></citation></ref>
<ref id="B22"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Dickinson</surname> <given-names>A.</given-names></name> <name><surname>Dearing</surname> <given-names>M. F.</given-names></name></person-group> (<year>1979</year>). <article-title>&#x0201C;Appetitive-aversive interactions and inhibitory processes,&#x0201D;</article-title> in <source>Mechanisms of Learning and Motivation</source>, eds <person-group person-group-type="editor"><name><surname>Dickinson</surname> <given-names>A.</given-names></name> <name><surname>Boakes</surname> <given-names>R. A.</given-names></name></person-group> (<publisher-loc>Hillsdale, NJ</publisher-loc>: <publisher-name>Erlbaum</publisher-name>), <fpage>203</fpage>&#x02013;<lpage>231</lpage>.</citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Everitt</surname> <given-names>B. J.</given-names></name> <name><surname>Cardinal</surname> <given-names>R. N.</given-names></name> <name><surname>Parkinson</surname> <given-names>J. A.</given-names></name> <name><surname>Robbins</surname> <given-names>T. W.</given-names></name></person-group> (<year>2003</year>). <article-title>Appetitive behaviour: impact of amygdala-dependent mechanisms of emotional learning</article-title>. <source>Ann. N. Y. Acad. Sci.</source> <volume>985</volume>, <fpage>233</fpage>&#x02013;<lpage>250</lpage>.<pub-id pub-id-type="doi">10.1111/j.1749-6632.2003.tb07085.x</pub-id><pub-id pub-id-type="pmid">12724162</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fellows</surname> <given-names>L. K.</given-names></name> <name><surname>Farah</surname> <given-names>M. J.</given-names></name></person-group> (<year>2003</year>). <article-title>Ventromedial frontal cortex mediates affective shifting in humans: evidence from a reversal learning paradigm</article-title>. <source>Brain</source> <volume>126</volume>, <fpage>1830</fpage>&#x02013;<lpage>1837</lpage>.<pub-id pub-id-type="doi">10.1093/brain/awg180</pub-id><pub-id pub-id-type="pmid">12821528</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Freese</surname> <given-names>J. L.</given-names></name> <name><surname>Amaral</surname> <given-names>D. G.</given-names></name></person-group> (<year>2005</year>). <article-title>The organization of projections from the amygdala to visual cortical areas TE and V1 in the macaque monkey</article-title>. <source>J. Comp. Neurol.</source> <volume>486</volume>, <fpage>295</fpage>&#x02013;<lpage>317</lpage>.<pub-id pub-id-type="doi">10.1002/cne.20520</pub-id><pub-id pub-id-type="pmid">15846786</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ghahremani</surname> <given-names>D. G.</given-names></name> <name><surname>Monterosso</surname> <given-names>J.</given-names></name> <name><surname>Jentsch</surname> <given-names>J. D.</given-names></name> <name><surname>Bilder</surname> <given-names>R. M.</given-names></name> <name><surname>Poldrack</surname> <given-names>R. A.</given-names></name></person-group> (<year>2010</year>). <article-title>Neural components underlying behavioral flexibility in human reversal learning</article-title>. <source>Cereb. Cortex</source> <volume>20</volume>, <fpage>1843</fpage>&#x02013;<lpage>1852</lpage>.<pub-id pub-id-type="doi">10.1093/cercor/bhp247</pub-id><pub-id pub-id-type="pmid">19915091</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ghashghaei</surname> <given-names>H. T.</given-names></name> <name><surname>Hilgetag</surname> <given-names>C. C.</given-names></name> <name><surname>Barbas</surname> <given-names>H.</given-names></name></person-group> (<year>2007</year>). <article-title>Sequence of information processing for emotions based on the anatomic dialogue between prefrontal cortex and amygdala</article-title>. <source>Neuroimage</source> <volume>34</volume>, <fpage>905</fpage>&#x02013;<lpage>923</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuroimage.2006.09.046</pub-id><pub-id pub-id-type="pmid">17126037</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Granger</surname> <given-names>C. W. J.</given-names></name></person-group> (<year>1969</year>). <article-title>Investigating causal relationships by econometric models and cross-spectral methods</article-title>. <source>Econometrica</source> <volume>37</volume>, <fpage>424</fpage>&#x02013;<lpage>438</lpage>.<pub-id pub-id-type="doi">10.2307/1913549</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Greene</surname> <given-names>J.</given-names></name> <name><surname>Haidt</surname> <given-names>J.</given-names></name></person-group> (<year>2002</year>). <article-title>How (and where) does moral judgment work?</article-title> <source>Trends Cogn. Sci. (Regul. Ed.)</source> <volume>6</volume>, <fpage>517</fpage>&#x02013;<lpage>523</lpage>.<pub-id pub-id-type="doi">10.1016/S1364-6613(02)02011-9</pub-id><pub-id pub-id-type="pmid">12475712</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grossberg</surname> <given-names>A.</given-names></name></person-group> (<year>1984</year>). <article-title>Some normal and abnormal behavioral syndromes due to transmitter gating of opponent processes</article-title>. <source>Biol. Psychiatry</source> <volume>19</volume>, <fpage>1075</fpage>&#x02013;<lpage>1118</lpage>.<pub-id pub-id-type="pmid">6148110</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hampshire</surname> <given-names>A.</given-names></name> <name><surname>Gruszka</surname> <given-names>A.</given-names></name> <name><surname>Fallon</surname> <given-names>S. J.</given-names></name> <name><surname>Owen</surname> <given-names>A. M.</given-names></name></person-group> (<year>2008</year>). <article-title>Inefficiency in self-organized attentional switching in the normal aging population is associated with decreased activity in the ventrolateral prefrontal cortex</article-title>. <source>J. Cogn. Neurosci.</source> <volume>20</volume>, <fpage>1670</fpage>&#x02013;<lpage>1686</lpage>.<pub-id pub-id-type="doi">10.1162/jocn.2008.20115</pub-id><pub-id pub-id-type="pmid">18345987</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hasegawa</surname> <given-names>R. P.</given-names></name> <name><surname>Peterson</surname> <given-names>B. W.</given-names></name> <name><surname>Goldberg</surname> <given-names>M. E.</given-names></name></person-group> (<year>2004</year>). <article-title>Prefrontal neurons coding suppression of specific saccades</article-title>. <source>Neuron</source> <volume>43</volume>, <fpage>415</fpage>&#x02013;<lpage>425</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuron.2004.07.013</pub-id><pub-id pub-id-type="pmid">15294148</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Holland</surname> <given-names>P. C.</given-names></name> <name><surname>Gallagher</surname> <given-names>M.</given-names></name></person-group> (<year>1999</year>). <article-title>Amygdala circuitry in attentional and representational processes</article-title>. <source>Trends Cogn. Sci. (Regul. Ed.)</source> <volume>3</volume>, <fpage>65</fpage>&#x02013;<lpage>73</lpage>.<pub-id pub-id-type="doi">10.1016/S1364-6613(98)01271-6</pub-id><pub-id pub-id-type="pmid">10234229</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hornak</surname> <given-names>J.</given-names></name> <name><surname>O&#x02019;Doherty</surname> <given-names>J.</given-names></name> <name><surname>Bramham</surname> <given-names>J.</given-names></name> <name><surname>Rolls</surname> <given-names>E. T.</given-names></name> <name><surname>Morris</surname> <given-names>R. G.</given-names></name> <name><surname>Bullock</surname> <given-names>P. R.</given-names></name> <etal/></person-group> (<year>2004</year>). <article-title>Reward-related reversal learning after surgical excisions in orbito-frontal or dorsolateral prefrontal cortex in humans</article-title>. <source>J. Cogn. Neurosci.</source> <volume>16</volume>, <fpage>463</fpage>&#x02013;<lpage>478</lpage>.<pub-id pub-id-type="doi">10.1162/089892904322926791</pub-id><pub-id pub-id-type="pmid">15072681</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ilango</surname> <given-names>A.</given-names></name> <name><surname>Wetzel</surname> <given-names>W.</given-names></name> <name><surname>Scheich</surname> <given-names>H.</given-names></name> <name><surname>Ohl</surname> <given-names>F. W.</given-names></name></person-group> (<year>2010</year>). <article-title>The combination of appetitive and aversive reinforcers and the nature of their interaction during auditory learning</article-title>. <source>Neuroscience</source> <volume>166</volume>, <fpage>752</fpage>&#x02013;<lpage>762</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuroscience.2010.01.010</pub-id><pub-id pub-id-type="pmid">20080152</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Iversen</surname> <given-names>S. D.</given-names></name> <name><surname>Mishkin</surname> <given-names>M.</given-names></name></person-group> (<year>1970</year>). <article-title>Perseverative interference in monkeys following selective lesions of inferior prefrontal convexity</article-title>. <source>Exp. Brain Res.</source> <volume>11</volume>, <fpage>376</fpage>&#x02013;<lpage>386</lpage>.<pub-id pub-id-type="doi">10.1007/BF00237911</pub-id><pub-id pub-id-type="pmid">4993199</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Izquierdo</surname> <given-names>A.</given-names></name> <name><surname>Murray</surname> <given-names>E. A.</given-names></name></person-group> (<year>2007</year>). <article-title>Selective bilateral amygdala lesions in rhesus monkeys fail to disrupt object reversal learning</article-title>. <source>J. Neurosci.</source> <volume>27</volume>, <fpage>1054</fpage>&#x02013;<lpage>1062</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.3616-06.2007</pub-id><pub-id pub-id-type="pmid">17267559</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Izquierdo</surname> <given-names>A.</given-names></name> <name><surname>Suda</surname> <given-names>R. K.</given-names></name> <name><surname>Murray</surname> <given-names>E. A.</given-names></name></person-group> (<year>2004</year>). <article-title>Bilateral orbital prefrontal cortex lesions in rhesus monkeys disrupt choices guided by both reward value and reward contingency</article-title>. <source>J. Neurosci.</source> <volume>24</volume>, <fpage>7540</fpage>&#x02013;<lpage>7548</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.1921-04.2004</pub-id><pub-id pub-id-type="pmid">15329401</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kable</surname> <given-names>J. W.</given-names></name> <name><surname>Glimcher</surname> <given-names>P. W.</given-names></name></person-group> (<year>2007</year>). <article-title>The neural correlates of subjective value during intertemporal choice</article-title>. <source>Nat. Neurosci.</source> <volume>10</volume>, <fpage>1625</fpage>&#x02013;<lpage>1633</lpage>.<pub-id pub-id-type="doi">10.1038/nn2007</pub-id><pub-id pub-id-type="pmid">17982449</pub-id></citation></ref>
<ref id="B40"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Kahneman</surname> <given-names>D.</given-names></name> <name><surname>Tversky</surname> <given-names>A.</given-names></name></person-group> (<year>2000</year>). <source>Choices, Values and Frames</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>H.</given-names></name> <name><surname>Shimogo</surname> <given-names>S.</given-names></name> <name><surname>O&#x02019;Doherty</surname> <given-names>J. P.</given-names></name></person-group> (<year>2006</year>). <article-title>Is avoiding an aversive outcome rewarding? Neural substrates of avoidance learning in the human brain</article-title>. <source>PLoS Biol.</source> <volume>4</volume>, <fpage>e233</fpage>.<pub-id pub-id-type="doi">10.1371/journal.pbio.0040233</pub-id></citation></ref>
<ref id="B42"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Konorski</surname> <given-names>J.</given-names></name></person-group> (<year>1967</year>). <source>Integrative Activity of the Brain: An Interdisciplinary Approach</source>. <publisher-loc>Chicago, IL</publisher-loc>: <publisher-name>University of Chicago Press</publisher-name>.</citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lang</surname> <given-names>P. J.</given-names></name> <name><surname>Davis</surname> <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>Emotion, motivation, and the brain: reflex foundations in animal and human research</article-title>. <source>Prog. Brain Res.</source> <volume>156</volume>, <fpage>3</fpage>&#x02013;<lpage>29</lpage>.<pub-id pub-id-type="doi">10.1016/S0079-6123(06)56001-7</pub-id><pub-id pub-id-type="pmid">17015072</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>LeDoux</surname> <given-names>J. E.</given-names></name></person-group> (<year>2000</year>). <article-title>Emotion circuits in the brain</article-title>. <source>Annu. Rev. Neurosci.</source> <volume>23</volume>, <fpage>155</fpage>&#x02013;<lpage>184</lpage>.<pub-id pub-id-type="doi">10.1146/annurev.neuro.23.1.155</pub-id><pub-id pub-id-type="pmid">10845062</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Loewenstein</surname> <given-names>G. F.</given-names></name> <name><surname>Weber</surname> <given-names>E. U.</given-names></name> <name><surname>Hsee</surname> <given-names>C. K.</given-names></name> <name><surname>Welch</surname> <given-names>N.</given-names></name></person-group> (<year>2001</year>). <article-title>Risk as feelings</article-title>. <source>Psychol. Bull.</source> <volume>127</volume>, <fpage>267</fpage>&#x02013;<lpage>286</lpage>.<pub-id pub-id-type="doi">10.1037/0033-2909.127.2.267</pub-id><pub-id pub-id-type="pmid">11316014</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maren</surname> <given-names>S.</given-names></name> <name><surname>Quirk</surname> <given-names>G. J.</given-names></name></person-group> (<year>2004</year>). <article-title>Neuronal signalling of fear memory</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>5</volume>, <fpage>844</fpage>&#x02013;<lpage>852</lpage>.<pub-id pub-id-type="doi">10.1038/nrn1535</pub-id><pub-id pub-id-type="pmid">15496862</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Matsumoto</surname> <given-names>M.</given-names></name> <name><surname>Hikosaka</surname> <given-names>O.</given-names></name></person-group> (<year>2007</year>). <article-title>Lateral habenula as a source of negative reward signals in dopamine neurons</article-title>. <source>Nature</source> <volume>447</volume>, <fpage>1111</fpage>&#x02013;<lpage>1115</lpage>.<pub-id pub-id-type="doi">10.1038/nature05860</pub-id><pub-id pub-id-type="pmid">17522629</pub-id></citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Matsumoto</surname> <given-names>M.</given-names></name> <name><surname>Hikosaka</surname> <given-names>O.</given-names></name></person-group> (<year>2008</year>). <article-title>Negative motivational control of saccadic eye movement by the lateral habenula</article-title>. <source>Prog. Brain Res.</source> <volume>171</volume>, <fpage>399</fpage>&#x02013;<lpage>402</lpage>.<pub-id pub-id-type="doi">10.1016/S0079-6123(08)00658-4</pub-id><pub-id pub-id-type="pmid">18718332</pub-id></citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Matsumoto</surname> <given-names>M.</given-names></name> <name><surname>Hikosaka</surname> <given-names>O.</given-names></name></person-group> (<year>2009</year>). <article-title>Representation of negative motivational value in the primate lateral habenula</article-title>. <source>Nat. Neurosci.</source> <volume>12</volume>, <fpage>77</fpage>&#x02013;<lpage>84</lpage>.<pub-id pub-id-type="doi">10.1038/nn.2233</pub-id><pub-id pub-id-type="pmid">19043410</pub-id></citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McClure</surname> <given-names>S. M.</given-names></name> <name><surname>Ericson</surname> <given-names>K. M.</given-names></name> <name><surname>Laibson</surname> <given-names>D. I.</given-names></name> <name><surname>Loewenstein</surname> <given-names>G.</given-names></name> <name><surname>Cohen</surname> <given-names>J. D.</given-names></name></person-group> (<year>2007</year>). <article-title>Time discounting for primary rewards</article-title>. <source>J. Neurosci.</source> <volume>27</volume>, <fpage>5796</fpage>&#x02013;<lpage>5804</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4246-06.2007</pub-id><pub-id pub-id-type="pmid">17522323</pub-id></citation></ref>
<ref id="B51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McClure</surname> <given-names>S. M.</given-names></name> <name><surname>Laibson</surname> <given-names>D. I.</given-names></name> <name><surname>Loewenstein</surname> <given-names>G.</given-names></name> <name><surname>Cohen</surname> <given-names>J. D.</given-names></name></person-group> (<year>2004</year>). <article-title>Separate neural systems value immediate and delayed monetary rewards</article-title>. <source>Science</source> <volume>306</volume>, <fpage>503</fpage>&#x02013;<lpage>507</lpage>.<pub-id pub-id-type="doi">10.1126/science.1100907</pub-id><pub-id pub-id-type="pmid">15486304</pub-id></citation></ref>
<ref id="B52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McDonald</surname> <given-names>A. J.</given-names></name></person-group> (<year>1991</year>). <article-title>Organization of amygdaloid projections to the prefrontal cortex and associated striatum in the rat</article-title>. <source>Neuroscience</source> <volume>44</volume>, <fpage>1</fpage>&#x02013;<lpage>44</lpage>.<pub-id pub-id-type="doi">10.1016/0306-4522(91)90248-M</pub-id><pub-id pub-id-type="pmid">1722886</pub-id></citation></ref>
<ref id="B53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McDonald</surname> <given-names>A. J.</given-names></name></person-group> (<year>1998</year>). <article-title>Cortical pathways to the mammalian amygdala</article-title>. <source>Prog. Neurobiol.</source> <volume>55</volume>, <fpage>257</fpage>&#x02013;<lpage>332</lpage>.<pub-id pub-id-type="doi">10.1016/S0301-0082(98)00003-3</pub-id><pub-id pub-id-type="pmid">9643556</pub-id></citation></ref>
<ref id="B54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McNeil</surname> <given-names>B. J.</given-names></name> <name><surname>Pauker</surname> <given-names>S. G.</given-names></name> <name><surname>Sox</surname> <given-names>H. C.</given-names> <suffix>Jr.</suffix></name> <name><surname>Tversky</surname> <given-names>A.</given-names></name></person-group> (<year>1982</year>). <article-title>On the elicitation of preferences for alternative therapies</article-title>. <source>N. Engl. J. Med.</source> <volume>306</volume>, <fpage>1259</fpage>&#x02013;<lpage>1262</lpage>.<pub-id pub-id-type="doi">10.1056/NEJM198205273062103</pub-id><pub-id pub-id-type="pmid">7070445</pub-id></citation></ref>
<ref id="B55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mirenowicz</surname> <given-names>J.</given-names></name> <name><surname>Schultz</surname> <given-names>W.</given-names></name></person-group> (<year>1996</year>). <article-title>Preferential activation of midbrain dopamine neurons by appetitive rather than aversive stimuli</article-title>. <source>Nature</source> <volume>379</volume>, <fpage>449</fpage>&#x02013;<lpage>451</lpage>.<pub-id pub-id-type="doi">10.1038/379449a0</pub-id><pub-id pub-id-type="pmid">8559249</pub-id></citation></ref>
<ref id="B56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Morrison</surname> <given-names>S. E.</given-names></name> <name><surname>Saez</surname> <given-names>A.</given-names></name> <name><surname>Lau</surname> <given-names>B.</given-names></name> <name><surname>Salzman</surname> <given-names>C. D.</given-names></name></person-group> (<year>2011</year>). <article-title>Different time courses for learning-related changes in amygdala and orbitofrontal cortex</article-title>. <source>Neuron</source> <volume>71</volume>, <fpage>1127</fpage>&#x02013;<lpage>1140</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuron.2011.07.016</pub-id><pub-id pub-id-type="pmid">21943608</pub-id></citation></ref>
<ref id="B57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Morrison</surname> <given-names>S. E.</given-names></name> <name><surname>Salzman</surname> <given-names>C. D.</given-names></name></person-group> (<year>2009</year>). <article-title>The convergence of information about rewarding and aversive stimuli in single neurons</article-title>. <source>J. Neurosci.</source> <volume>29</volume>, <fpage>11471</fpage>&#x02013;<lpage>11483</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.1815-09.2009</pub-id><pub-id pub-id-type="pmid">19759296</pub-id></citation></ref>
<ref id="B58"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Morrison</surname> <given-names>S. E.</given-names></name> <name><surname>Salzman</surname> <given-names>C. D.</given-names></name></person-group> (<year>2010</year>). <article-title>Re-valuing the amygdala</article-title>. <source>Curr. Opin. Neurobiol.</source> <volume>20</volume>, <fpage>221</fpage>&#x02013;<lpage>230</lpage>.<pub-id pub-id-type="doi">10.1016/j.conb.2010.02.007</pub-id><pub-id pub-id-type="pmid">20299204</pub-id></citation></ref>
<ref id="B59"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Morrison</surname> <given-names>S. E.</given-names></name> <name><surname>Salzman</surname> <given-names>C. D.</given-names></name></person-group> (<year>2011</year>). <article-title>Representations of appetitive and aversive information in the primate orbitofrontal cortex</article-title>. <source>Ann. N. Y. Acad. Sci.</source> <volume>1239</volume>, <fpage>59</fpage>&#x02013;<lpage>70</lpage>.<pub-id pub-id-type="doi">10.1111/j.1749-6632.2011.06255.x</pub-id><pub-id pub-id-type="pmid">22145876</pub-id></citation></ref>
<ref id="B60"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nishijo</surname> <given-names>H.</given-names></name> <name><surname>Hori</surname> <given-names>E.</given-names></name> <name><surname>Tazumi</surname> <given-names>T.</given-names></name> <name><surname>Ono</surname> <given-names>T.</given-names></name></person-group> (<year>2008</year>). <article-title>Neural correlates to both emotion and cognitive functions in the monkey amygdala</article-title>. <source>Behav. Brain Res.</source> <volume>188</volume>, <fpage>14</fpage>&#x02013;<lpage>23</lpage>.<pub-id pub-id-type="doi">10.1016/j.bbr.2007.10.013</pub-id><pub-id pub-id-type="pmid">18035429</pub-id></citation></ref>
<ref id="B61"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nishijo</surname> <given-names>H.</given-names></name> <name><surname>Ono</surname> <given-names>T.</given-names></name> <name><surname>Nishino</surname> <given-names>H.</given-names></name></person-group> (<year>1988</year>). <article-title>Single neuron responses in amygdala of alert monkey during complex sensory stimulation with affective significance</article-title>. <source>J. Neurosci.</source> <volume>8</volume>, <fpage>3570</fpage>&#x02013;<lpage>3583</lpage>.<pub-id pub-id-type="pmid">3193171</pub-id></citation></ref>
<ref id="B62"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ochsner</surname> <given-names>K. N.</given-names></name> <name><surname>Gross</surname> <given-names>J. J.</given-names></name></person-group> (<year>2005</year>). <article-title>The cognitive control of emotion</article-title>. <source>Trends Cogn. Sci. (Regul. Ed.)</source> <volume>9</volume>, <fpage>242</fpage>&#x02013;<lpage>249</lpage>.<pub-id pub-id-type="doi">10.1016/j.tics.2005.03.010</pub-id></citation></ref>
<ref id="B63"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>O&#x02019;Doherty</surname> <given-names>J.</given-names></name> <name><surname>Kringelbach</surname> <given-names>M. L.</given-names></name> <name><surname>Rolls</surname> <given-names>E. T.</given-names></name> <name><surname>Hornak</surname> <given-names>J.</given-names></name> <name><surname>Andrews</surname> <given-names>C.</given-names></name></person-group> (<year>2001</year>). <article-title>Abstract reward and punishment representations in the human orbitofrontal cortex</article-title>. <source>Nat. Neurosci.</source> <volume>4</volume>, <fpage>95</fpage>&#x02013;<lpage>102</lpage>.<pub-id pub-id-type="doi">10.1038/82959</pub-id><pub-id pub-id-type="pmid">11135651</pub-id></citation></ref>
<ref id="B64"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Padoa-Schioppa</surname> <given-names>C.</given-names></name></person-group> (<year>2009</year>). <article-title>Range-adapting representation of economic value in the orbitofrontal cortex</article-title>. <source>J. Neurosci.</source> <volume>29</volume>, <fpage>14004</fpage>&#x02013;<lpage>14014</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.3751-09.2009</pub-id><pub-id pub-id-type="pmid">19890010</pub-id></citation></ref>
<ref id="B65"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Padoa-Schioppa</surname> <given-names>C.</given-names></name></person-group> (<year>2011</year>). <article-title>Neurobiology of economic choice: a good-based model</article-title>. <source>Annu. Rev. Neurosci.</source> <volume>34</volume>, <fpage>333</fpage>&#x02013;<lpage>359</lpage>.<pub-id pub-id-type="doi">10.1146/annurev-neuro-061010-113648</pub-id><pub-id pub-id-type="pmid">21456961</pub-id></citation></ref>
<ref id="B66"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Padoa-Schioppa</surname> <given-names>C.</given-names></name> <name><surname>Assad</surname> <given-names>J. A.</given-names></name></person-group> (<year>2006</year>). <article-title>Neurons in the orbitofrontal cortex encode economic value</article-title>. <source>Nature</source> <volume>441</volume>, <fpage>223</fpage>&#x02013;<lpage>226</lpage>.<pub-id pub-id-type="doi">10.1038/nature04676</pub-id><pub-id pub-id-type="pmid">16633341</pub-id></citation></ref>
<ref id="B67"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Paton</surname> <given-names>J. J.</given-names></name> <name><surname>Belova</surname> <given-names>M. A.</given-names></name> <name><surname>Morrison</surname> <given-names>S. E.</given-names></name> <name><surname>Salzman</surname> <given-names>C. D.</given-names></name></person-group> (<year>2006</year>). <article-title>The primate amygdala represents the positive and negative value of visual stimuli during learning</article-title>. <source>Nature</source> <volume>439</volume>, <fpage>865</fpage>&#x02013;<lpage>870</lpage>.<pub-id pub-id-type="doi">10.1038/nature04490</pub-id><pub-id pub-id-type="pmid">16482160</pub-id></citation></ref>
<ref id="B68"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Peck</surname> <given-names>C. J.</given-names></name> <name><surname>Jangraw</surname> <given-names>D. C.</given-names></name> <name><surname>Suzuki</surname> <given-names>M.</given-names></name> <name><surname>Efem</surname> <given-names>R.</given-names></name> <name><surname>Gottlieb</surname> <given-names>J.</given-names></name></person-group> (<year>2009</year>). <article-title>Reward modulates attention independently of action value in posterior parietal cortex</article-title>. <source>J. Neurosci.</source> <volume>29</volume>, <fpage>11182</fpage>&#x02013;<lpage>11191</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.1929-09.2009</pub-id><pub-id pub-id-type="pmid">19741125</pub-id></citation></ref>
<ref id="B69"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Phelps</surname> <given-names>E. A.</given-names></name> <name><surname>Ling</surname> <given-names>S.</given-names></name> <name><surname>Carrasco</surname> <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>Emotion facilitates perception and potentiates the perceptual benefits of attention</article-title>. <source>Psychol. Sci.</source> <volume>17</volume>, <fpage>292</fpage>&#x02013;<lpage>299</lpage>.<pub-id pub-id-type="doi">10.1111/j.1467-9280.2006.01701.x</pub-id><pub-id pub-id-type="pmid">16623685</pub-id></citation></ref>
<ref id="B70"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pinkham</surname> <given-names>A. E.</given-names></name> <name><surname>Griffin</surname> <given-names>M.</given-names></name> <name><surname>Baron</surname> <given-names>R.</given-names></name> <name><surname>Sasson</surname> <given-names>N. J.</given-names></name> <name><surname>Gur</surname> <given-names>R. C.</given-names></name></person-group> (<year>2010</year>). <article-title>The face in the crowd effect: anger superiority when using real faces and multiple identities</article-title>. <source>Emotion</source> <volume>10</volume>, <fpage>141</fpage>&#x02013;<lpage>146</lpage>.<pub-id pub-id-type="doi">10.1037/a0017387</pub-id><pub-id pub-id-type="pmid">20141311</pub-id></citation></ref>
<ref id="B71"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Roberts</surname> <given-names>N. A.</given-names></name> <name><surname>Beer</surname> <given-names>J. S.</given-names></name> <name><surname>Werner</surname> <given-names>K. H.</given-names></name> <name><surname>Scabini</surname> <given-names>D.</given-names></name> <name><surname>Levens</surname> <given-names>S. M.</given-names></name> <name><surname>Knight</surname> <given-names>R. T.</given-names></name> <etal/></person-group> (<year>2004</year>). <article-title>The impact of orbitofrontal prefrontal cortex damage on emotional activation to unanticipated and anticipated acoustic startle stimuli</article-title>. <source>Cogn. Affect. Behav. Neurosci.</source> <volume>4</volume>, <fpage>307</fpage>&#x02013;<lpage>316</lpage>.<pub-id pub-id-type="doi">10.3758/CABN.4.3.307</pub-id><pub-id pub-id-type="pmid">15535166</pub-id></citation></ref>
<ref id="B72"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Rolls</surname> <given-names>E.</given-names></name></person-group> (<year>2000</year>). <article-title>&#x0201C;Neurophysiology and functions of the primate amygdala, and the neural basis of emotion,&#x0201D;</article-title> in <source>The Amygdala: A Functional Analysis</source>, ed. <person-group person-group-type="editor"><name><surname>Aggleton</surname> <given-names>J.</given-names></name></person-group> (<publisher-loc>New York</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>), <fpage>447</fpage>&#x02013;<lpage>478</lpage>.</citation></ref>
<ref id="B73"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rolls</surname> <given-names>E. T.</given-names></name></person-group> (<year>1996</year>). <article-title>The orbitofrontal cortex</article-title>. <source>Philos. Trans. R. Soc. Lond. B Biol. Sci.</source> <volume>351</volume>, <fpage>1433</fpage>&#x02013;<lpage>1443</lpage>.<pub-id pub-id-type="doi">10.1098/rstb.1996.0128</pub-id><pub-id pub-id-type="pmid">8941955</pub-id></citation></ref>
<ref id="B74"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Romanski</surname> <given-names>L. M.</given-names></name> <name><surname>Clugnet</surname> <given-names>M. C.</given-names></name> <name><surname>Bordi</surname> <given-names>F.</given-names></name> <name><surname>LeDoux</surname> <given-names>J. E.</given-names></name></person-group> (<year>1993</year>). <article-title>Somatosensory and auditory convergence in the lateral nucleus of the amygdala</article-title>. <source>Behav. Neurosci.</source> <volume>107</volume>, <fpage>444</fpage>&#x02013;<lpage>450</lpage>.<pub-id pub-id-type="doi">10.1037/0735-7044.107.3.444</pub-id><pub-id pub-id-type="pmid">8329134</pub-id></citation></ref>
<ref id="B75"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Saddoris</surname> <given-names>M. P.</given-names></name> <name><surname>Gallagher</surname> <given-names>M.</given-names></name> <name><surname>Schoenbaum</surname> <given-names>G.</given-names></name></person-group> (<year>2005</year>). <article-title>Rapid associative encoding in basolateral amygdala depends on connections with orbitofrontal cortex</article-title>. <source>Neuron</source> <volume>46</volume>, <fpage>321</fpage>&#x02013;<lpage>331</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuron.2005.02.018</pub-id><pub-id pub-id-type="pmid">15848809</pub-id></citation></ref>
<ref id="B76"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Salzman</surname> <given-names>C. D.</given-names></name> <name><surname>Fusi</surname> <given-names>S.</given-names></name></person-group> (<year>2010</year>). <article-title>Emotion, cognition, and mental state representation in amygdala and prefrontal cortex</article-title>. <source>Annu. Rev. Neurosci.</source> <volume>33</volume>, <fpage>173</fpage>&#x02013;<lpage>202</lpage>.<pub-id pub-id-type="doi">10.1146/annurev.neuro.051508.135256</pub-id><pub-id pub-id-type="pmid">20331363</pub-id></citation></ref>
<ref id="B77"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Salzman</surname> <given-names>C. D.</given-names></name> <name><surname>Paton</surname> <given-names>J. J.</given-names></name> <name><surname>Belova</surname> <given-names>M. A.</given-names></name> <name><surname>Morrison</surname> <given-names>S. E.</given-names></name></person-group> (<year>2007</year>). <article-title>Flexible neural representations of value in the primate brain</article-title>. <source>Ann. N. Y. Acad. Sci.</source> <volume>1121</volume>, <fpage>336</fpage>&#x02013;<lpage>354</lpage>.<pub-id pub-id-type="doi">10.1196/annals.1401.034</pub-id><pub-id pub-id-type="pmid">17872400</pub-id></citation></ref>
<ref id="B78"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sanghera</surname> <given-names>M. K.</given-names></name> <name><surname>Rolls</surname> <given-names>E. T.</given-names></name> <name><surname>Roper-Hall</surname> <given-names>A.</given-names></name></person-group> (<year>1979</year>). <article-title>Visual responses of neurons in the dorsolateral amygdala of the alert monkey</article-title>. <source>Exp. Neurol.</source> <volume>63</volume>, <fpage>610</fpage>&#x02013;<lpage>626</lpage>.<pub-id pub-id-type="doi">10.1016/0014-4886(79)90175-4</pub-id><pub-id pub-id-type="pmid">428486</pub-id></citation></ref>
<ref id="B79"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schoenbaum</surname> <given-names>G.</given-names></name> <name><surname>Chiba</surname> <given-names>A.</given-names></name> <name><surname>Gallagher</surname> <given-names>M.</given-names></name></person-group> (<year>1998</year>). <article-title>Orbitofrontal cortex and basolateral amygdala encode expected outcomes during learning</article-title>. <source>Nat. Neurosci.</source> <volume>1</volume>, <fpage>155</fpage>&#x02013;<lpage>159</lpage>.<pub-id pub-id-type="doi">10.1038/407</pub-id><pub-id pub-id-type="pmid">10195132</pub-id></citation></ref>
<ref id="B80"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schoenbaum</surname> <given-names>G.</given-names></name> <name><surname>Nugent</surname> <given-names>S. L.</given-names></name> <name><surname>Saddoris</surname> <given-names>M. P.</given-names></name> <name><surname>Setlow</surname> <given-names>B.</given-names></name></person-group> (<year>2002</year>). <article-title>Orbitofrontal lesions in rats impair reversal but not acquisition of go, no-go odor discriminations</article-title>. <source>Neuroreport</source> <volume>13</volume>, <fpage>885</fpage>&#x02013;<lpage>890</lpage>.<pub-id pub-id-type="doi">10.1097/00001756-200205070-00030</pub-id><pub-id pub-id-type="pmid">11997707</pub-id></citation></ref>
<ref id="B81"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schoenbaum</surname> <given-names>G.</given-names></name> <name><surname>Setlow</surname> <given-names>B.</given-names></name> <name><surname>Saddoris</surname> <given-names>M. P.</given-names></name> <name><surname>Gallagher</surname> <given-names>M.</given-names></name></person-group> (<year>2003</year>). <article-title>Encoding predicted outcome and acquired value in orbitofrontal cortex during cue sampling depends upon input from basolateral amygdala</article-title>. <source>Neuron</source> <volume>39</volume>, <fpage>855</fpage>&#x02013;<lpage>867</lpage>.<pub-id pub-id-type="doi">10.1016/S0896-6273(03)00474-4</pub-id><pub-id pub-id-type="pmid">12948451</pub-id></citation></ref>
<ref id="B82"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Schoer</surname> <given-names>R.</given-names></name> <name><surname>Paton</surname> <given-names>J. J.</given-names></name> <name><surname>Salzman</surname> <given-names>C. D.</given-names></name></person-group> (<year>2009</year>). <article-title>Activity of amygdala and orbitofrontal cortical neurons during contrast revaluation of reward predicting stimuli</article-title>. Program No. 784.17. <source>2009 Neuroscience Meeting Planner</source>. <publisher-loc>Chicago, IL</publisher-loc>: <publisher-name>Society for Neuroscience Abstracts Online</publisher-name>.</citation></ref>
<ref id="B83"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Schoer</surname> <given-names>R.</given-names></name> <name><surname>Saez</surname> <given-names>A.</given-names></name> <name><surname>Salzman</surname> <given-names>C. D.</given-names></name></person-group> (<year>2011</year>). <article-title>Amygdala neurons adaptively encode the motivational significance of conditioned stimuli in a relative manner</article-title>. Program No. 515.16. <source>2011 Neuroscience Meeting Planner</source>. <publisher-loc>Washington, DC</publisher-loc>: <publisher-name>Society for Neuroscience Abstracts Online</publisher-name>.</citation></ref>
<ref id="B84"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Seymour</surname> <given-names>B.</given-names></name> <name><surname>Singer</surname> <given-names>T.</given-names></name> <name><surname>Dolan</surname> <given-names>R.</given-names></name></person-group> (<year>2007</year>). <article-title>The neurobiology of punishment</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>8</volume>, <fpage>300</fpage>&#x02013;<lpage>311</lpage>.<pub-id pub-id-type="doi">10.1038/nrn2119</pub-id><pub-id pub-id-type="pmid">17375042</pub-id></citation></ref>
<ref id="B85"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Solomon</surname> <given-names>R. L.</given-names></name> <name><surname>Corbit</surname> <given-names>J. D.</given-names></name></person-group> (<year>1974</year>). <article-title>An opponent-process theory of motivation. I. Temporal dynamics of affect</article-title>. <source>Psychol. Rev.</source> <volume>81</volume>, <fpage>119</fpage>&#x02013;<lpage>145</lpage>.<pub-id pub-id-type="doi">10.1037/h0036128</pub-id><pub-id pub-id-type="pmid">4817611</pub-id></citation></ref>
<ref id="B86"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stefanacci</surname> <given-names>L.</given-names></name> <name><surname>Amaral</surname> <given-names>D. G.</given-names></name></person-group> (<year>2000</year>). <article-title>Topographic organization of cortical inputs to the lateral nucleus of the macaque monkey amygdala: a retrograde tracing study</article-title>. <source>J. Comp. Neurol.</source> <volume>421</volume>, <fpage>52</fpage>&#x02013;<lpage>79</lpage>.<pub-id pub-id-type="doi">10.1002/(SICI)1096-9861(20000522)421:1&#x0003C;52::AID-CNE4&#x0003E;3.0.CO;2-O</pub-id><pub-id pub-id-type="pmid">10813772</pub-id></citation></ref>
<ref id="B87"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stefanacci</surname> <given-names>L.</given-names></name> <name><surname>Amaral</surname> <given-names>D. G.</given-names></name></person-group> (<year>2002</year>). <article-title>Some observations on cortical inputs to the macaque monkey amygdala: an anterograde tracing study</article-title>. <source>J. Comp. Neurol.</source> <volume>451</volume>, <fpage>301</fpage>&#x02013;<lpage>323</lpage>.<pub-id pub-id-type="doi">10.1002/cne.10339</pub-id><pub-id pub-id-type="pmid">12210126</pub-id></citation></ref>
<ref id="B88"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sugase-Miyamoto</surname> <given-names>Y.</given-names></name> <name><surname>Richmond</surname> <given-names>B. J.</given-names></name></person-group> (<year>2005</year>). <article-title>Neuronal signals in the monkey basolateral amygdala during reward schedules</article-title>. <source>J. Neurosci.</source> <volume>25</volume>, <fpage>11071</fpage>&#x02013;<lpage>11083</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.1796-05.2005</pub-id><pub-id pub-id-type="pmid">16319307</pub-id></citation></ref>
<ref id="B89"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Thorpe</surname> <given-names>S. J.</given-names></name> <name><surname>Rolls</surname> <given-names>E. T.</given-names></name> <name><surname>Maddison</surname> <given-names>S.</given-names></name></person-group> (<year>1983</year>). <article-title>The orbitofrontal cortex: neuronal activity in the behaving monkey</article-title>. <source>Exp. Brain Res.</source> <volume>49</volume>, <fpage>93</fpage>&#x02013;<lpage>115</lpage>.<pub-id pub-id-type="doi">10.1007/BF00235545</pub-id><pub-id pub-id-type="pmid">6861938</pub-id></citation></ref>
<ref id="B90"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tremblay</surname> <given-names>L.</given-names></name> <name><surname>Schultz</surname> <given-names>W.</given-names></name></person-group> (<year>2000</year>). <article-title>Reward-related neuronal activity during go-nogo task performance in primate orbitofrontal cortex</article-title>. <source>J. Neurophysiol.</source> <volume>83</volume>, <fpage>1864</fpage>&#x02013;<lpage>1876</lpage>.<pub-id pub-id-type="pmid">10758098</pub-id></citation></ref>
<ref id="B91"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wallis</surname> <given-names>J. D.</given-names></name></person-group> (<year>2007</year>). <article-title>Orbitofrontal cortex and its contribution to decision-making</article-title>. <source>Annu. Rev. Neurosci.</source> <volume>30</volume>, <fpage>31</fpage>&#x02013;<lpage>56</lpage>.<pub-id pub-id-type="doi">10.1146/annurev.neuro.30.051606.094334</pub-id><pub-id pub-id-type="pmid">17417936</pub-id></citation></ref>
<ref id="B92"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wilson</surname> <given-names>F. A.</given-names></name> <name><surname>Rolls</surname> <given-names>E. T.</given-names></name></person-group> (<year>2005</year>). <article-title>The primate amygdala and reinforcement: a dissociation between rule-based and associatively-mediated memory revealed in neuronal activity</article-title>. <source>Neuroscience</source> <volume>133</volume>, <fpage>1061</fpage>&#x02013;<lpage>1072</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuroscience.2005.03.022</pub-id><pub-id pub-id-type="pmid">15964491</pub-id></citation></ref>
<ref id="B93"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yamada</surname> <given-names>H.</given-names></name> <name><surname>Matsumoto</surname> <given-names>N.</given-names></name> <name><surname>Kimura</surname> <given-names>M.</given-names></name></person-group> (<year>2004</year>). <article-title>Tonically active neurons in the primate caudate nucleus and putamen differentially encode instructed motivational outcomes of action</article-title>. <source>J. Neurosci.</source> <volume>24</volume>, <fpage>3500</fpage>&#x02013;<lpage>3510</lpage>.<pub-id pub-id-type="doi">10.1523/JNEUROSCI.0068-04.2004</pub-id><pub-id pub-id-type="pmid">15071097</pub-id></citation></ref>
<ref id="B94"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Young</surname> <given-names>L.</given-names></name> <name><surname>Bechara</surname> <given-names>A.</given-names></name> <name><surname>Tranel</surname> <given-names>D.</given-names></name> <name><surname>Damasio</surname> <given-names>H.</given-names></name> <name><surname>Hauser</surname> <given-names>M.</given-names></name> <name><surname>Damasio</surname> <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>Damage to ventromedial prefrontal cortex impairs judgment of harmful intent</article-title>. <source>Neuron</source> <volume>65</volume>, <fpage>845</fpage>&#x02013;<lpage>851</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuron.2010.03.003</pub-id><pub-id pub-id-type="pmid">20346759</pub-id></citation></ref>
</ref-list>
</back>
</article>