<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2017.00570</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The Positivity Bias Phenomenon in Face Perception Given Different Information on Ability</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Zhao</surname> <given-names>Sasa</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x2020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/382016/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Xiang</surname> <given-names>Yanhui</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x2020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/430239/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Xie</surname> <given-names>Jiushu</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/175862/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Ye</surname> <given-names>Yanyan</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/262320/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Li</surname> <given-names>Tianfeng</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/429391/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Mo</surname> <given-names>Lei</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/199594/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Center for the Study of Applied Psychology, School of Psychology, South China Normal University</institution> <country>Guangzhou, China</country></aff>
<aff id="aff2"><sup>2</sup><institution>Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University</institution> <country>Guangzhou, China</country></aff>
<aff id="aff3"><sup>3</sup><institution>Department of Psychology, Hunan Normal University</institution> <country>Changsha, China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: <italic>Davood Gozli, University of Macau, China</italic></p></fn>
<fn fn-type="edited-by"><p>Reviewed by: <italic>Izelle Labuschagne, Australian Catholic University, Australia; Shelbie Sutherland, University of Toronto, Canada</italic></p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x002A;Correspondence: <italic>Lei Mo, <email>molei@scnu.edu.cn</email></italic></p></fn>
<fn fn-type="other" id="fn002"><p><italic><sup>&#x2020;</sup>These authors have contributed equally to this work.</italic></p></fn>
<fn fn-type="other" id="fn003"><p>This article was submitted to Cognition, a section of the journal Frontiers in Psychology</p></fn></author-notes>
<pub-date pub-type="epub">
<day>27</day>
<month>04</month>
<year>2017</year>
</pub-date>
<pub-date pub-type="collection">
<year>2017</year>
</pub-date>
<volume>8</volume>
<elocation-id>570</elocation-id>
<history>
<date date-type="received">
<day>19</day>
<month>10</month>
<year>2016</year>
</date>
<date date-type="accepted">
<day>28</day>
<month>03</month>
<year>2017</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2017 Zhao, Xiang, Xie, Ye, Li and Mo.</copyright-statement>
<copyright-year>2017</copyright-year>
<copyright-holder>Zhao, Xiang, Xie, Ye, Li and Mo</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>The negativity bias has been demonstrated in many domains, including face processing. We assume that, in previous studies, this bias stemmed from the potential threat embedded in the stimuli (e.g., negative moral behaviors). In the present study, we conducted one behavioral and one event-related potential (ERP) experiment to test whether a positivity bias, rather than a negativity bias, arises when participants process information whose negative aspect involves no threat, namely, ability information. In both experiments, participants first completed a valence rating (negative to positive) of neutral facial expressions. Then, in the learning phase, participants associated the neutral faces with high-ability, low-ability, or control sentences. Finally, participants rated these facial expressions again. Results of the behavioral experiment showed that, compared with pre-learning, faces associated with high-ability sentences were rated as more positive in the post-learning expression rating task, and faces associated with low-ability sentences were rated as more negative. Moreover, the change in the high-ability condition was greater than that in the low-ability condition. The ERP data showed that faces associated with high-ability sentences elicited a larger early posterior negativity, an ERP component considered to reflect early sensory processing of emotional stimuli, than faces associated with control sentences; no such effect was found for faces associated with low-ability sentences. In conclusion, high-ability sentences exerted a stronger influence on expression perception than did low-ability ones. Thus, we found a positivity bias in this ability-related facial perception task. Our findings demonstrate an effect of valenced ability information on face perception, thereby adding to the evidence that person-related knowledge can influence face processing. Furthermore, the positivity bias observed in non-threatening contexts broadens the scope of research on processing biases.</p>
</abstract>
<kwd-group>
<kwd>negativity bias</kwd>
<kwd>non-threatening information</kwd>
<kwd>positivity bias</kwd>
<kwd>facial perception</kwd>
<kwd>event-related potentials</kwd>
</kwd-group>
<counts>
<fig-count count="3"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="34"/>
<page-count count="8"/>
<word-count count="0"/>
</counts>
</article-meta>
</front>
<body>
<sec><title>Introduction</title>
<p>Studies have shown that people preferentially process negative information (e.g., negative facial expressions, immoral behaviors) over the corresponding positive information (<xref ref-type="bibr" rid="B4">Baumeister et al., 2001</xref>; <xref ref-type="bibr" rid="B22">Rozin and Royzman, 2001</xref>; <xref ref-type="bibr" rid="B7">Dyck et al., 2009</xref>). This phenomenon is known as the &#x201C;negativity bias&#x201D; and has been investigated within different domains, such as impression formation, decision-making, social interaction, and moral judgment (<xref ref-type="bibr" rid="B4">Baumeister et al., 2001</xref>; <xref ref-type="bibr" rid="B22">Rozin and Royzman, 2001</xref>). Such a bias is also very common in face-related studies: negative faces are preferentially processed, and in terms of contextual influences on face processing, negative information tends to be more influential than other information. However, few studies to date have examined the existence of a positivity bias. Therefore, this study aimed to explore the positivity bias in face processing.</p>
<p>The information processing bias toward negative stimuli may manifest in attention, memory, or perception. Studies show that negative faces draw more attention and are remembered better than positive or neutral faces (<xref ref-type="bibr" rid="B15">Hansen and Hansen, 1988</xref>; <xref ref-type="bibr" rid="B19">&#x00D6;hman et al., 2001</xref>; <xref ref-type="bibr" rid="B2">Anderson et al., 2011</xref>; <xref ref-type="bibr" rid="B30">Tsukiura et al., 2013</xref>). For example, <xref ref-type="bibr" rid="B15">Hansen and Hansen (1988)</xref> reported that an angry face was identified faster in a crowd of smiling faces than vice versa. <xref ref-type="bibr" rid="B30">Tsukiura et al. (2013)</xref> found that faces with an untrustworthy impression were remembered more accurately than those with a neutral or trustworthy impression. Furthermore, studies show that the processing of human faces is affected not only by facial movements but also by contextual information (e.g., person-related information). Within such context effects, the negativity bias is conceptualized as a stronger influence of negative than of positive information on faces (<xref ref-type="bibr" rid="B1">Abdel Rahman, 2011</xref>; <xref ref-type="bibr" rid="B2">Anderson et al., 2011</xref>; <xref ref-type="bibr" rid="B3">Baker et al., 2013</xref>; <xref ref-type="bibr" rid="B29">Suess et al., 2015</xref>; <xref ref-type="bibr" rid="B17">Luo et al., 2016</xref>). For instance, <xref ref-type="bibr" rid="B2">Anderson et al. (2011)</xref> explored the impact of gossip on the processing of neutral faces using a binocular rivalry paradigm. The results indicated that neutral faces paired with negative gossip dominated in visual awareness significantly longer than did faces paired with other gossip, whereas no difference was found between faces paired with positive and neutral gossip. <xref ref-type="bibr" rid="B3">Baker et al. (2013)</xref> investigated the influence of moral behaviors on face recognition. They first presented vignettes describing moral behaviors (immoral, morally neutral, or altruistic) together with neutral faces, and then asked participants to identify the target faces within a set of faces varying in trustworthiness. Results showed that faces paired with immoral vignettes were recognized as less trustworthy than the actual faces, whereas there was no such difference in the altruistic or neutral conditions; that is, only immoral behaviors influenced face recognition memory. Crucially, by means of event-related potentials (ERPs), the biasing effect of negative person knowledge on faces has also been found during early perceptual processing (<xref ref-type="bibr" rid="B1">Abdel Rahman, 2011</xref>; <xref ref-type="bibr" rid="B32">Wieser et al., 2014</xref>; <xref ref-type="bibr" rid="B29">Suess et al., 2015</xref>). <xref ref-type="bibr" rid="B29">Suess et al. (2015)</xref> reported that the neutral expressions of unfamiliar faces paired with negative biographical information were perceived as more negative than those of faces paired with relatively neutral information, as indexed by a larger early posterior negativity (EPN), but the effect was not apparent for positive biographical information. The EPN is regarded as the earliest ERP component reflecting the influence of valenced personal information on face perception (<xref ref-type="bibr" rid="B32">Wieser et al., 2014</xref>; <xref ref-type="bibr" rid="B17">Luo et al., 2016</xref>).</p>
<p>However, it should be noted that the negative information involved in the studies mentioned above mostly carried threat, such as angry faces or immoral behaviors (<xref ref-type="bibr" rid="B19">&#x00D6;hman et al., 2001</xref>; <xref ref-type="bibr" rid="B1">Abdel Rahman, 2011</xref>; <xref ref-type="bibr" rid="B10">Feldmann-W&#x00FC;stefeld et al., 2011</xref>). For the sake of security and survival, people prioritize attention to potential threats in the environment and thus display the &#x201C;negativity bias&#x201D; in information processing. From an evolutionary perspective, this threat-related negativity bias is reasonable and of high adaptive value: the consequences of ignoring or reacting slowly to dangerous stimuli are often much more dramatic than those of ignoring or reacting slowly to neutral or even appetitive stimuli. But what if there is no threat in the surroundings? Specifically, we were interested in whether the negativity bias still exists when negatively valenced information is not threatening or adverse (we define such information as &#x201C;non-threatening information&#x201D;).</p>
<p>The current study assumes that negative non-threatening information (e.g., unattractive faces, low-ability information) poses no threat to our survival and security and thus would not put people on alert. The corresponding positive information (e.g., attractive faces, high-ability information), however, carries desirable, beneficial content. In such cases, positive information may have a more powerful influence than negative information. Some studies have provided indirect evidence for this assumption. In aesthetic processing, attractive faces elicited an enhanced EPN compared with non-attractive faces (<xref ref-type="bibr" rid="B31">Werheid et al., 2007</xref>). The EPN is closely related to selective attention in the early processing phase (<xref ref-type="bibr" rid="B26">Schupp et al., 2007</xref>; <xref ref-type="bibr" rid="B13">Fr&#x00FC;hholz et al., 2011</xref>). A recent study also found that attractive faces dominated in visual awareness significantly longer than average and unattractive ones (<xref ref-type="bibr" rid="B18">Mo et al., 2016</xref>). However, none of these studies has directly explored the positivity bias or characterized the nature of the negative information involved. Accordingly, combining behavioral assessment and ERP recordings, we tested the &#x201C;positivity bias&#x201D; effect that non-threatening information may have on face perception.</p>
<p>We chose ability as a representative of non-threatening information in the present study. Ability is appropriate because low-ability information carries no threat. At the same time, ability (or competence) is one of the universal dimensions of social cognition, playing an important role in person perception and evaluation (<xref ref-type="bibr" rid="B27">Skowronski and Carlston, 1987</xref>; <xref ref-type="bibr" rid="B11">Fiske et al., 2007</xref>; <xref ref-type="bibr" rid="B12">Freddi et al., 2014</xref>). Accordingly, for a direct comparison, we used a paradigm similar to that of the studies demonstrating the negativity bias, namely a minimal affective learning task (<xref ref-type="bibr" rid="B1">Abdel Rahman, 2011</xref>; <xref ref-type="bibr" rid="B29">Suess et al., 2015</xref>; <xref ref-type="bibr" rid="B17">Luo et al., 2016</xref>). In the current experimental tasks, faces with neutral expressions were paired with high-ability, low-ability, or control sentences to test whether valenced ability information could bias the perception of facial expressions. If the expression ratings of the faces paired with high-ability sentences show greater changes between pre- and post-learning than those of other faces (Experiment 1: behavioral assessment), and the EPN elicited by the faces paired with high-ability information is more pronounced (Experiment 2: ERP data), we can conclude that a &#x201C;positivity bias&#x201D; exists in the effect of ability information on face perception.</p>
</sec>
<sec><title>Experiment 1</title>
<sec><title>Method</title>
<sec><title>Participants</title>
<p>Participants were recruited via targeted advertising on social media sites of South China Normal University. Thirty-three participants took part in the experiment for a small monetary compensation. One participant&#x2019;s data were missing, and another participant&#x2019;s data were removed because his post-learning expression ratings in the high-ability condition fell outside &#x00B1;3 SD. The remaining 31 participants (19 female) had a mean age of 20.74 years (<italic>SD</italic> = 2.14). All participants were right-handed and had normal or corrected-to-normal vision. None of them had any neurological impairment, had used psychoactive medication, or had participated in our other experiments. All participants gave their informed consent before the experiment. The current study was conducted under the approval of the Academic Committee of the Department of Psychology at South China Normal University.</p>
</sec>
<sec><title>Design</title>
<p>The current study used a 2 (Learning: pre-learning, post-learning) &#x00D7; 3 (Ability: high, low, control) within-subjects design. The dependent variable was facial expression ratings.</p>
</sec>
<sec><title>Materials</title>
<sec>
<title>Faces</title>
<p>Thirty-six unfamiliar gray-scale photos of male and female faces with neutral expressions were chosen from the Chinese Facial Affective Picture System (CFAPS; <xref ref-type="bibr" rid="B14">Gong et al., 2011</xref>). All photos were frontal headshots. They were then edited for homogeneity of all features using Photoshop CS 6.0 (i.e., the hair, ears, neck, and so on were removed, and the faces were scaled to 2.7 cm &#x00D7; 3.5 cm).</p>
</sec>
<sec>
<title>Ability sentences</title>
<p>We selected 25 behaviors that could distinguish levels of individual ability and adapted each behavior into three kinds of sentences: high ability, low ability, and control. For example, for the behavior related to &#x201C;sales,&#x201D; the high-ability sentence was &#x201C;Ranked first in sales many times,&#x201D; the low-ability sentence was &#x201C;Failed to meet sales targets many times,&#x201D; and the control sentence (which was not related to ability) was &#x201C;Received sales target for this season.&#x201D; A separate group of 30 participants rated these behavioral sentences according to the degree of ability on a 9-point scale (1 = <italic>very low</italic>, 9 = <italic>very high</italic>) and also indicated whether or not each sentence concerned ability (1 = <italic>yes</italic>, 0 = <italic>no</italic>). Based on the rating results, we chose 12 behaviors as the target materials (Mean &#x00B1; SD ability ratings: high ability = 7.58 &#x00B1; 0.36, range 7.16&#x2013;8.44; low ability = 2.70 &#x00B1; 0.29, range 2.12&#x2013;3.18; Supplementary Material). A paired <italic>t</italic>-test indicated that the average ability ratings of the high-ability and low-ability sentences differed significantly, <italic>p</italic> &#x003C; 0.001.</p>
</sec>
<sec>
<title>Formal experimental materials</title>
<p>The 36 target faces were paired with the 36 ability sentences. These &#x201C;face-sentence&#x201D; pairs constituted our formal experimental materials.</p>
</sec>
</sec>
<sec><title>Procedure</title>
<p>The experiment consisted of three phases: pre-learning, learning, and post-learning.</p>
<sec>
<title>Pre-learning phase</title>
<p>Each trial began with a fixation cross displayed in the center of the screen for 500 ms, followed by a blank screen for 300 ms. Then, one of the 36 faces appeared in a random order. Participants were instructed to judge each face by rating its expression on a 9-point scale from very negative (1) to very positive (9), analogous to the Self-Assessment Manikin (<xref ref-type="bibr" rid="B6">Bradley and Lang, 1994</xref>). Each stimulus was displayed until the participant keyed in his or her response.</p>
</sec>
<sec>
<title>Learning phase</title>
<p>In this phase, participants viewed face-sentence pairs. The faces appeared in the center of the screen, and the sentences appeared just below the faces. Participants were told to remember the pairings by imagining each person performing the behavior described in the corresponding sentence. Each of the 36 target faces was paired with a unique descriptive sentence that was high ability, low ability, or control (each kind occurred equally often). The three kinds of sentences were counterbalanced across participants, so different participants saw different face-sentence pairs. Each pair was displayed on the computer screen for 5000 ms with an 800-ms intertrial interval. Each face-sentence pair was repeated five times in a random order, constituting a total of 180 experimental trials.</p>
</sec>
<sec>
<title>Post-learning phase</title>
<p>The procedure was the same as that in the pre-learning. A total of 36 faces appeared one at a time in a random order, and participants were asked to rate the facial expressions on a 9-point scale from very negative (1) to very positive (9). All of the faces were repeated twice.</p>
</sec>
</sec></sec>
<sec><title>Results</title>
<p>A repeated-measures analysis of variance (ANOVA) with the factors Ability Sentence (high, low, control) and Learning (pre-learning, post-learning) was carried out. There was a main effect of Ability Sentence, <italic>F</italic><sub>(2,60)</sub> = 14.18, <italic>p</italic> &#x003C; 0.001, <inline-formula><mml:math id="M1"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.321, and a significant interaction of Ability Sentence &#x00D7; Learning, <italic>F</italic><sub>(2,60)</sub> = 16.64, <italic>p</italic> &#x003C; 0.001, <inline-formula><mml:math id="M2"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.350. Simple effects analysis showed that faces associated with high-ability information were rated as more positive in post-learning compared to pre-learning, <italic>F</italic><sub>(1,30)</sub> = 19.35, <italic>p</italic> &#x003C; 0.001, <inline-formula><mml:math id="M3"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.392. 
Likewise, faces associated with low ability information were rated as more negative, <italic>F</italic><sub>(1,30)</sub> = 4.51, <italic>p</italic> = 0.042, <inline-formula><mml:math id="M4"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.131. Faces associated with control sentences did not differ between pre-learning and post-learning, <italic>F</italic><sub>(1,30)</sub> = 0.41, <italic>p</italic> = 0.530 (see <bold>Figure <xref ref-type="fig" rid="F1">1</xref></bold>).</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p><bold>Mean facial expression ratings (Mean &#x00B1; SD) before and after the presentation of different kinds of ability information (high, low vs. control) for Experiment 1</bold>.</p></caption>
<graphic xlink:href="fpsyg-08-00570-g001.tif"/>
</fig>
<p>We further compared the difference between pre-learning and post-learning ratings in the high-ability condition with that in the low-ability condition. The results showed that the change in the high-ability condition was larger than that in the low-ability condition, <italic>F</italic><sub>(1,30)</sub> = 5.41, <italic>p</italic> = 0.027, <inline-formula><mml:math id="M5"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.153. As expected, these findings suggested a &#x201C;positivity bias&#x201D; in the perceptual processing of faces paired with ability information. As no differences emerged between pre-learning and post-learning for the control condition, we can exclude the mere exposure effect as a cause (<xref ref-type="bibr" rid="B34">Zajonc, 2001</xref>).</p>
</sec>
</sec>
<sec><title>Experiment 2</title>
<sec><title>Method</title>
<sec><title>Participants</title>
<p>Eighteen participants took part in the experiment for a small monetary compensation. One participant was excluded because of abnormally fast reaction times (&#x003C;200 ms). The remaining 17 participants (10 female) had a mean age of 22.59 years (<italic>SD</italic> = 1.62). All participants were right-handed and had normal or corrected-to-normal vision. None of them had any neurological impairment, had used psychoactive medication, or had participated in our other experiments. All participants gave their informed consent before the experiment. The current study was conducted under the approval of the Academic Committee of the Department of Psychology at South China Normal University.</p>
</sec>
<sec><title>Design</title>
<p>The current study used a 2 (Learning: pre-learning, post-learning) &#x00D7; 3 (Ability Sentence: high, low, control) within-subjects design. The dependent variables were the facial expression ratings, N170 and EPN.</p>
</sec>
<sec><title>Materials</title>
<p>Experiment 2 used the same materials as Experiment 1.</p>
</sec>
<sec><title>Procedure</title>
<p>All participants were seated comfortably in a dimly lit, acoustically and electrically shielded room. Stimuli were presented using E-Prime 1.1 at the center of a monitor placed at eye level 90 cm in front of the participants. The background of the screen was white, and the brightness, contrast, and color settings were held constant. Participants were told that they would take part in a memory experiment. As in Experiment 1, the procedure of Experiment 2 included three phases: pre-learning, learning, and post-learning. EEG was recorded only during the post-learning phase.</p>
<sec>
<title>Pre-learning Phase</title>
<p>A total of 36 faces appeared in a random order. When each face was presented on the screen, participants were instructed to rate its expression (to fit the rating scale on the monitor during EEG recording, we reduced the 9-point scale to a 7-point scale; 1 = <italic>very negative</italic>, 7 = <italic>very positive</italic>).</p>
</sec>
<sec>
<title>Learning Phase</title>
<p>The learning phase was the same as in Experiment 1, except for two changes. In Experiment 2, each &#x201C;face-sentence&#x201D; pair was repeated four times, and, to test whether participants had learned the pairings, a memory test was added after learning. In the memory test, a face appeared in the center of the screen with two sentences below it. Participants had to indicate which of the two sentences described the behavior associated with the face by pressing the &#x2018;F&#x2019; or &#x2018;J&#x2019; key on the keyboard with their left or right index finger. The assignment of keys to the correct answer was randomized across trials. All sentences and faces were the same as those in the learning task. Only participants who passed the memory test with higher than 80% accuracy continued to the next phase; the others repeated the learning phase. All participants who repeated the learning phase attained higher than 90% accuracy on the second test.</p>
</sec>
<sec>
<title>Post-learning phase</title>
<p>Each trial started with the presentation of a fixation cross for 500 ms, followed by a blank screen for 300 ms. The target face was then presented for 3000 ms or until the participant made his or her response (a 7-point facial expression rating, 1 = <italic>very negative</italic>, 7 = <italic>very positive</italic>). All faces were the same as those in the learning task. Participants were asked to concentrate on viewing the faces first before making their response. Participants&#x2019; electrical brain activity was collected during this stage. One second after the response, the next trial began (see <bold>Figure <xref ref-type="fig" rid="F2">2</xref></bold>). The 36 faces repeated four times in a random order, and thus there were 144 trials in total.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p><bold>Sequence of events in the post-learning</bold>.</p></caption>
<graphic xlink:href="fpsyg-08-00570-g002.tif"/>
</fig>
</sec>
</sec>
<sec><title>EEG Recording and Data Analysis</title>
<p>The EEG was recorded with Ag/AgCl electrodes from 64 sites, according to the extended 10&#x2013;20 system, at a sampling rate of 1000 Hz. Both the left mastoid and the right mastoid were used as the reference, and data were mathematically re-referenced off-line to an average reference. Both vertical (below the left eye) and horizontal (at the outer canthus of the right eye) electrooculograms were recorded. Electrode impedance was kept below 5 k&#x03A9;.</p>
<p>Off-line EEG analysis was performed with Brain Vision Analyzer Version 2.0 (Brain Products). The EEG was low-pass filtered at 30 Hz and corrected for horizontal and vertical ocular artifacts. Remaining artifacts were eliminated with a semiautomatic artifact rejection procedure (amplitudes over &#x00B1;80 &#x03BC;V, or changes of more than 50 &#x03BC;V between samples). The EEG was segmented into epochs of 1.2 s, starting 200 ms prior to stimulus onset. According to the matched sentence, faces were divided into three groups: the high-ability group, the low-ability group, and the control group.</p>
<p>Research showed that faces elicit a clear negative deflection around 170 ms after stimulus onset; this negative peak is known as the N170 component (<xref ref-type="bibr" rid="B5">Bentin et al., 1996</xref>; <xref ref-type="bibr" rid="B23">Sagiv and Bentin, 2001</xref>). As N170 is particularly sensitive to faces, it is known as an index of an early structural processing of facial features and configurations (e.g., <xref ref-type="bibr" rid="B5">Bentin et al., 1996</xref>; <xref ref-type="bibr" rid="B8">Eimer, 2000</xref>). Numerous studies have shown that the EPN reflects facilitated capture of attentional resources, selective motivated attention, the evaluation of perceptual characteristics, and the selective processing of emotional stimuli (<xref ref-type="bibr" rid="B25">Schupp et al., 2003</xref>, <xref ref-type="bibr" rid="B26">2007</xref>; <xref ref-type="bibr" rid="B20">Olofsson et al., 2008</xref>; <xref ref-type="bibr" rid="B29">Suess et al., 2015</xref>). Starting at around 150 ms, the EPN component is a relative negative deflection usually observed over temporo-parieto-occipital brain regions and is maximally pronounced around 260&#x2013;280 ms after stimulus onset (<xref ref-type="bibr" rid="B25">Schupp et al., 2003</xref>, <xref ref-type="bibr" rid="B26">2007</xref>; <xref ref-type="bibr" rid="B1">Abdel Rahman, 2011</xref>; <xref ref-type="bibr" rid="B32">Wieser et al., 2014</xref>). It is thought to reflect the mainly arousal-driven differential processing of emotional (compared to neutral) visual stimuli areas (<xref ref-type="bibr" rid="B33">Wieser et al., 2010</xref>). Specifically, emotional stimuli in comparison to neutral stimuli elicit larger EPN. According to the grand average, the three conditions began to diverge at nearly 230 ms in the temporo-parieto-occipital regions. 
Based on previous findings of early emotion effects in the EPN (<xref ref-type="bibr" rid="B1">Abdel Rahman, 2011</xref>; <xref ref-type="bibr" rid="B16">Klein et al., 2015</xref>; <xref ref-type="bibr" rid="B17">Luo et al., 2016</xref>), eight electrode sites forming two ROIs were chosen for statistical analysis in the time windows of 130&#x2013;180 ms (N170) and 250&#x2013;300 ms (EPN): CP5, P5, P7, PO7 (left posterior ROI); CP6, P6, P8, PO8 (right posterior ROI). Mean amplitudes of both components were assessed with separate 3 (Ability Sentence: high, low, control) &#x00D7; 2 (Laterality: left, right) repeated-measures analyses of variance (ANOVAs). In all analyses, the Greenhouse&#x2013;Geisser correction for non-sphericity was applied if Mauchly&#x2019;s test of sphericity was significant.</p>
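The Greenhouse&#x2013;Geisser correction mentioned above is computed by standard analysis software; as an illustration of what it estimates, the following minimal sketch computes the Greenhouse&#x2013;Geisser epsilon from a subjects &#x00D7; conditions data matrix (standard textbook formula; the data layout is hypothetical, not the authors' code):

```python
import numpy as np

def gg_epsilon(data):
    """Greenhouse-Geisser epsilon for a subjects x conditions matrix.
    Used to correct repeated-measures ANOVA degrees of freedom when
    sphericity is violated: corrected df = epsilon * original df,
    with 1/(k - 1) <= epsilon <= 1 for k conditions."""
    k = data.shape[1]
    S = np.cov(data, rowvar=False)  # k x k covariance of the conditions
    # double-center the covariance matrix
    Sc = (S - S.mean(axis=0, keepdims=True)
            - S.mean(axis=1, keepdims=True) + S.mean())
    return np.trace(Sc) ** 2 / ((k - 1) * np.sum(Sc ** 2))
```

When the data are perfectly spherical, epsilon equals 1 and no correction is applied; smaller values shrink both the numerator and denominator degrees of freedom.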
</sec>
</sec>
<sec><title>Results</title>
<sec><title>Behavioral Results</title>
<p>A 2 (Learning: pre-learning, post-learning) &#x00D7; 3 (Ability Sentence: high, low, control) repeated-measures ANOVA yielded a main effect of Ability Sentence, <italic>F</italic>(2,32) = 7.08, <italic>p</italic> = 0.007, <inline-formula><mml:math id="M6"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.307, and an interaction between Learning and Ability Sentence, <italic>F</italic>(2,32) = 8.24, <italic>p</italic> = 0.004, <inline-formula><mml:math id="M7"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.340. A separate analysis for each ability condition revealed that faces associated with high ability information were perceived as more positive after learning than before, <italic>F</italic>(1,16) = 6.34, <italic>p</italic> = 0.023, <inline-formula><mml:math id="M8"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.284 (Mean &#x00B1; <italic>SD</italic>: pre-learning 3.96 &#x00B1; 0.33, post-learning 4.33 &#x00B1; 0.40). 
No difference was found for either the low ability condition, <italic>F</italic>(1,16) = 0.38, <italic>p</italic> = 0.549 (Mean &#x00B1; <italic>SD</italic>: pre-learning 3.85 &#x00B1; 0.44, post-learning 3.79 &#x00B1; 0.44), or the control condition, <italic>F</italic>(1,16) = 0.03, <italic>p</italic> = 0.864 (Mean &#x00B1; <italic>SD</italic>: pre-learning 4.07 &#x00B1; 0.43, post-learning 4.05 &#x00B1; 0.40).</p>
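The partial eta-squared values reported with each <italic>F</italic> test follow directly from the <italic>F</italic> statistic and its degrees of freedom via the standard identity &#x03b7;&#x00B2;p = (<italic>F</italic> &#x00D7; df1) / (<italic>F</italic> &#x00D7; df1 + df2); a minimal sketch for checking such values:

```python
def partial_eta_squared(F, df1, df2):
    """Recover partial eta squared from an F statistic and its
    numerator (df1) and denominator (df2) degrees of freedom."""
    return (F * df1) / (F * df1 + df2)

# e.g., the main effect of Ability Sentence reported above:
# F(2,32) = 7.08  ->  partial eta squared of about 0.307
```

This identity reproduces the effect sizes reported here to three decimal places, e.g., <italic>F</italic>(2,32) = 8.24 gives 0.340 and <italic>F</italic>(1,16) = 6.34 gives 0.284.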
</sec>
<sec><title>ERP Results</title>
<sec>
<title>N170</title>
<p>A two-factor (Ability Sentence: high, low, control; Laterality: left, right) repeated-measures ANOVA was conducted on the mean amplitude of the N170. There was a significant main effect of Laterality, <italic>F</italic><sub>(1,16)</sub> = 7.53, <italic>p</italic> = 0.014, <inline-formula><mml:math id="M9"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.320, with a larger negativity over the right posterior sites. However, neither the main effect of Ability Sentence nor the interaction was significant, <italic>F</italic><sub>(2,32)</sub> = 0.73, <italic>p</italic> = 0.442; <italic>F</italic><sub>(2,32)</sub> = 0.30, <italic>p</italic> = 0.742 (see <bold>Figure <xref ref-type="fig" rid="F3">3</xref></bold>).</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p><bold>Grand-averaged event-related potential (ERP) waveforms for high, low, and control ability conditions</bold>. The ROI of left laterality includes CP5, P5, P7, PO7; and the corresponding right laterality includes CP6, P6, P8, PO8.</p></caption>
<graphic xlink:href="fpsyg-08-00570-g003.tif"/>
</fig>
</sec>
<sec>
<title>Early posterior negativity</title>
<p>The same repeated-measures ANOVA was conducted on the mean amplitude of the EPN. There was a significant main effect of Ability Sentence, <italic>F</italic><sub>(2,32)</sub> = 3.64, <italic>p</italic> = 0.045, <inline-formula><mml:math id="M10"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.185, and a significant interaction between Ability Sentence and Laterality, <italic>F</italic><sub>(2,32)</sub> = 5.43, <italic>p</italic> = 0.012, <inline-formula><mml:math id="M11"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.253. Simple effects analyses showed that, over the left hemisphere, the ability conditions did not differ, <italic>p</italic>s > 0.1, whereas over the right hemisphere the three ability conditions differed significantly, <italic>F</italic><sub>(2,32)</sub> = 6.61, <italic>p</italic> = 0.006, <inline-formula><mml:math id="M12"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.292. 
Specifically, the high ability condition elicited a more pronounced negativity compared to the low ability condition, <italic>F</italic><sub>(1,16)</sub> = 16.20, <italic>p</italic> = 0.001, <inline-formula><mml:math id="M13"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.503, and the control condition, <italic>F</italic><sub>(1,16)</sub> = 8.00, <italic>p</italic> = 0.012, <inline-formula><mml:math id="M14"><mml:mrow><mml:msubsup><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>&#x03b7;</mml:mi></mml:mrow><mml:mrow><mml:mi mathcolor='black' mathvariant='normal'>p</mml:mi></mml:mrow><mml:mrow><mml:mn mathcolor='black' mathvariant='normal'>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.333. The difference between the low ability condition and the control condition failed to reach statistical significance, <italic>F</italic><sub>(1,16)</sub> = 0.51, <italic>p</italic> = 0.486 (see <bold>Figure <xref ref-type="fig" rid="F3">3</xref></bold>).</p>
<p>Furthermore, a Pearson correlation between facial expression ratings and EPN amplitudes was significant, Pearson&#x2019;s <italic>r</italic> = -0.376, <italic>p</italic> = 0.006: the higher the rating, the larger the EPN amplitude. Thus, the pattern of the behavioral ratings was consistent with that of the EPN amplitudes.</p>
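The correlation above was presumably obtained with standard statistical software; for illustration only, Pearson&#x2019;s <italic>r</italic> can be sketched as follows (the data here are invented for the example and are not the study&#x2019;s ratings or amplitudes):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.dot(xc, yc) / np.sqrt(np.dot(xc, xc) * np.dot(yc, yc)))

# Hypothetical illustration: higher expression ratings paired with more
# negative (i.e., larger) EPN amplitudes give a negative correlation,
# the direction of the effect reported above.
ratings = [3.2, 3.8, 4.1, 4.5, 4.9]
epn_uv = [-1.0, -1.6, -2.1, -2.4, -3.0]
r = pearson_r(ratings, epn_uv)
```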
</sec>
</sec></sec>
</sec>
<sec><title>Discussion</title>
<p>The present study investigated the influence of non-threatening information (i.e., ability information, whose negative dimension involves no threat) on human face processing. Specifically, neutral faces and valenced ability sentences (high ability, low ability, and control) were paired to identify whether the valenced ability information could bias the perception of neutral facial expressions. The effect was expected to be more evident after exposure to high ability sentences, displaying a positivity bias. Both behavioral and ERP data supported our hypothesis: compared with low ability behaviors, high ability behaviors had a stronger effect on people&#x2019;s positivity/negativity ratings of faces.</p>
<p>In Experiment 1, the behavioral data showed a significant change in expression ratings for faces associated with both high and low ability (the high ability group was rated as more positive, while the low ability group was rated as more negative), but the change in the former was greater than that in the latter. The behavioral results of Experiment 2 were consistent with those of Experiment 1: a greater change occurred in the high ability group than in the low ability group after learning. Together, the behavioral results suggest that high ability information has a stronger effect on facial evaluation than low ability information. The ERP analyses in Experiment 2 showed that all three experimental conditions (high ability, low ability, and control) elicited clear N170 and EPN components in the 130&#x2013;180 ms and 250&#x2013;300 ms time windows, respectively. For the N170, no effects were found, suggesting that this component may be unaffected by affective information (<xref ref-type="bibr" rid="B9">Eimer et al., 2003</xref>; <xref ref-type="bibr" rid="B24">Schacht and Sommer, 2009</xref>; <xref ref-type="bibr" rid="B16">Klein et al., 2015</xref>). For the EPN, the high ability condition elicited larger amplitudes than the other conditions, whereas no difference emerged between the low ability and control conditions. Numerous studies have shown that the EPN is related to enhanced attention and perceptual encoding of emotional stimuli (<xref ref-type="bibr" rid="B25">Schupp et al., 2003</xref>, <xref ref-type="bibr" rid="B26">2007</xref>; <xref ref-type="bibr" rid="B1">Abdel Rahman, 2011</xref>). Moreover, the EPN is considered the earliest component reflecting the effect of affective information on facial perception (<xref ref-type="bibr" rid="B32">Wieser et al., 2014</xref>; <xref ref-type="bibr" rid="B17">Luo et al., 2016</xref>). 
In the present study, the EPN effect was more pronounced for faces paired with high ability sentences than for faces paired with low ability sentences. This suggests that high ability information plays a more important role in expression perception and induces a larger bias in facial perception than low ability information. In other words, consistent with our hypothesis, the ability trait exerted a &#x201C;positivity bias&#x201D; on facial perceptual processing.</p>
<p>The valenced ability information effect found in our results provides new evidence that affective person-related information influences face processing (<xref ref-type="bibr" rid="B32">Wieser et al., 2014</xref>; <xref ref-type="bibr" rid="B16">Klein et al., 2015</xref>; <xref ref-type="bibr" rid="B29">Suess et al., 2015</xref>; <xref ref-type="bibr" rid="B17">Luo et al., 2016</xref>). This might have implications for social communication. Before meeting someone, we may already have some knowledge of them (e.g., ability information). Such information may influence our inferences about their mental states or intentions, which in turn regulates our own behaviors and attitudes toward them. Further, the &#x201C;positivity bias&#x201D; results differ from those of previous studies (<xref ref-type="bibr" rid="B1">Abdel Rahman, 2011</xref>; <xref ref-type="bibr" rid="B29">Suess et al., 2015</xref>; <xref ref-type="bibr" rid="B17">Luo et al., 2016</xref>). For example, <xref ref-type="bibr" rid="B29">Suess et al. (2015)</xref> paired morality-relevant actions with neutral faces in the learning phase. A significant change in expression evaluation occurred when faces were paired with negative moral actions; by contrast, little change occurred when faces were paired with positive moral actions. These results imply that in perceptual impression formation, negative moral information is more influential; that is, there exists a negativity bias for moral information. The negative moral information involved in those studies contains threat; the negative ability information in the current study, however, is not threatening. 
Thus, combining our results with prior research on the positive attention bias (<xref ref-type="bibr" rid="B31">Werheid et al., 2007</xref>; <xref ref-type="bibr" rid="B18">Mo et al., 2016</xref>), we suggest that positive information may take precedence over negative information when people process information whose negative aspect embodies no threat.</p>
<p>Comparing the present study with previous research, we conclude that people tend to exhibit a &#x201C;negativity bias&#x201D; when they process information whose negative aspect carries survival threat; when processing information whose negative aspect carries no threat, people may instead display a &#x201C;positivity bias.&#x201D; Such a processing style may be explained by the human tendency to avoid harmful stimuli and approach beneficial stimuli, and it reflects the flexibility of human cognitive processing (<xref ref-type="bibr" rid="B28">Smith et al., 2006</xref>; <xref ref-type="bibr" rid="B21">Rothermund et al., 2008</xref>). Out of basic survival needs, people focus on potential threats in their surroundings. Vigilance for danger signals can protect people from harm and is of great adaptive value for survival. However, in circumstances where there is no danger, or where the potential threat is smaller than the potential benefit, the pursuit of benefits predominates. Thus, people focus more on favorable or potentially beneficial information, and in this context, sensitivity to positive signals is of greater adaptive value.</p>
<p>To summarize, our results add to the evidence on semantic context effects in face processing and suggest that not only threatening information (like morality information) but also non-threatening information (like ability information) can shape expression perception. Moreover, the &#x201C;positivity bias&#x201D; phenomenon is valuable for further understanding the negativity bias in facial perception, demonstrating that the &#x201C;negativity bias&#x201D; in face processing is not universal but may vary with the type of stimuli. However, our study has some limitations. First, if an experiment verifying the negativity bias for threat-related information had been included for comparison, the results would be more conclusive. Second, we cannot be absolutely certain that low ability information carries no threat in all cases, and further studies should explore this issue. Finally, further systematic studies could strengthen our conclusion and extend the effect, including experiments in other domains (like attention and memory) and with other non-threatening information (like social status information).</p>
</sec>
<sec><title>Ethics Statement</title>
<p>This study was approved by the Human Research Ethics Committee of South China Normal University. Informed written consent was obtained from participants before the experiment.</p>
</sec>
<sec><title>Author Contributions</title>
<p>SZ: study design, data collection, data analysis, paper writing. YX: study design, paper writing. JX: data analysis, paper revising. YY: data collection, paper revising. TL: data collection, paper revising. LM: study design, paper writing.</p>
</sec>
<sec><title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<fn-group>
<fn fn-type="financial-disclosure">
<p><bold>Funding.</bold> This work was supported by the Major Research Plan of the National Social Science Foundation (Grant No. 14ZDB159) and the Project of Key Institute of Humanities and Social Sciences (Grant No. 16JJD190001).</p>
</fn>
</fn-group>
<ack>
<p>We thank Tianyu Zeng and Zicheng Zhuang for their help with data collection.</p>
</ack>
<sec sec-type="supplementary material">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="http://journal.frontiersin.org/article/10.3389/fpsyg.2017.00570/full#supplementary-material">http://journal.frontiersin.org/article/10.3389/fpsyg.2017.00570/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.DOCX" id="SM1" mimetype="application/vnd.openxmlformats-officedocument.wordprocessingml.document" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abdel Rahman</surname> <given-names>R.</given-names></name></person-group> (<year>2011</year>). <article-title>Facing good and evil: early brain signatures of affective biographical knowledge in face recognition.</article-title> <source><italic>Emotion</italic></source> <volume>11</volume> <fpage>1397</fpage>&#x2013;<lpage>1405</lpage>. <pub-id pub-id-type="doi">10.1037/a0024717</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Anderson</surname> <given-names>E.</given-names></name> <name><surname>Siegel</surname> <given-names>E. H.</given-names></name> <name><surname>Bliss-Moreau</surname> <given-names>E.</given-names></name> <name><surname>Barrett</surname> <given-names>L. F.</given-names></name></person-group> (<year>2011</year>). <article-title>The visual impact of gossip.</article-title> <source><italic>Science</italic></source> <volume>332</volume> <fpage>1446</fpage>&#x2013;<lpage>1448</lpage>. <pub-id pub-id-type="doi">10.1126/science.1201574</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baker</surname> <given-names>A.</given-names></name> <name><surname>ten Brinke</surname> <given-names>L.</given-names></name> <name><surname>Porter</surname> <given-names>S.</given-names></name></person-group> (<year>2013</year>). <article-title>The face of an angel: effect of exposure to details of moral behavior on facial recognition memory.</article-title> <source><italic>J. Appl. Res. Mem. Cogn.</italic></source> <volume>2</volume> <fpage>101</fpage>&#x2013;<lpage>106</lpage>. <pub-id pub-id-type="doi">10.1016/j.jarmac.2013.03.004</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baumeister</surname> <given-names>R. F.</given-names></name> <name><surname>Bratslavsky</surname> <given-names>E.</given-names></name> <name><surname>Finkenauer</surname> <given-names>C.</given-names></name> <name><surname>Vohs</surname> <given-names>K. D.</given-names></name></person-group> (<year>2001</year>). <article-title>Bad is stronger than good.</article-title> <source><italic>Rev. Gen. Psychol.</italic></source> <volume>5</volume> <fpage>323</fpage>&#x2013;<lpage>370</lpage>. <pub-id pub-id-type="doi">10.1037/1089-2680.5.4.323</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bentin</surname> <given-names>S.</given-names></name> <name><surname>Allison</surname> <given-names>T.</given-names></name> <name><surname>Puce</surname> <given-names>A.</given-names></name> <name><surname>Perez</surname> <given-names>E.</given-names></name> <name><surname>McCarthy</surname> <given-names>G.</given-names></name></person-group> (<year>1996</year>). <article-title>Electrophysiological studies of face perception in humans.</article-title> <source><italic>J. Cogn. Neurosci.</italic></source> <volume>8</volume> <fpage>551</fpage>&#x2013;<lpage>565</lpage>. <pub-id pub-id-type="doi">10.1162/jocn.1996.8.6.551</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bradley</surname> <given-names>M. M.</given-names></name> <name><surname>Lang</surname> <given-names>P. J.</given-names></name></person-group> (<year>1994</year>). <article-title>Measuring emotion: the self-assessment manikin and the semantic differential.</article-title> <source><italic>J. Behav. Ther. Exp. Psychiatry</italic></source> <volume>25</volume> <fpage>49</fpage>&#x2013;<lpage>59</lpage>. <pub-id pub-id-type="doi">10.1016/0005-7916(94)90063-9</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dyck</surname> <given-names>M.</given-names></name> <name><surname>Habel</surname> <given-names>U.</given-names></name> <name><surname>Slodczyk</surname> <given-names>J.</given-names></name> <name><surname>Schlummer</surname> <given-names>J.</given-names></name> <name><surname>Backes</surname> <given-names>V.</given-names></name> <name><surname>Schneider</surname> <given-names>F.</given-names></name><etal/></person-group> (<year>2009</year>). <article-title>Negative bias in fast emotion discrimination in borderline personality disorder.</article-title> <source><italic>Psychol. Med.</italic></source> <volume>39</volume> <fpage>855</fpage>&#x2013;<lpage>864</lpage>. <pub-id pub-id-type="doi">10.1017/S0033291708004273</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eimer</surname> <given-names>M.</given-names></name></person-group> (<year>2000</year>). <article-title>The face-specific N170 component reflects late stages in the structural encoding of faces.</article-title> <source><italic>Neuroreport</italic></source> <volume>11</volume> <fpage>2319</fpage>&#x2013;<lpage>2324</lpage>. <pub-id pub-id-type="doi">10.1097/00001756-200007140-00050</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eimer</surname> <given-names>M.</given-names></name> <name><surname>Holmes</surname> <given-names>A.</given-names></name> <name><surname>McGlone</surname> <given-names>F. P.</given-names></name></person-group> (<year>2003</year>). <article-title>The role of spatial attention in the processing of facial expression: an ERP study of rapid brain responses to six basic emotions.</article-title> <source><italic>Cogn. Affect. Behav. Neurosci.</italic></source> <volume>3</volume> <fpage>97</fpage>&#x2013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.3758/CABN.3.2.97</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Feldmann-W&#x00FC;stefeld</surname> <given-names>T.</given-names></name> <name><surname>Schmidt-Daffy</surname> <given-names>M.</given-names></name> <name><surname>Schub&#x00F6;</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Neural evidence for the threat detection advantage: differential attention allocation to angry and happy faces.</article-title> <source><italic>Psychophysiology</italic></source> <volume>48</volume> <fpage>697</fpage>&#x2013;<lpage>707</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2010.01130.x</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fiske</surname> <given-names>S. T.</given-names></name> <name><surname>Cuddy</surname> <given-names>A. J.</given-names></name> <name><surname>Glick</surname> <given-names>P.</given-names></name></person-group> (<year>2007</year>). <article-title>Universal dimensions of social cognition: warmth and competence.</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>11</volume> <fpage>77</fpage>&#x2013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2006.11.005</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Freddi</surname> <given-names>S.</given-names></name> <name><surname>Tessier</surname> <given-names>M.</given-names></name> <name><surname>Lacrampe</surname> <given-names>R.</given-names></name> <name><surname>Dru</surname> <given-names>V.</given-names></name></person-group> (<year>2014</year>). <article-title>Affective judgement about information relating to competence and warmth: an embodied perspective.</article-title> <source><italic>Br. J. Soc. Psychol.</italic></source> <volume>53</volume> <fpage>265</fpage>&#x2013;<lpage>280</lpage>. <pub-id pub-id-type="doi">10.1111/bjso.12033</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fr&#x00FC;hholz</surname> <given-names>S.</given-names></name> <name><surname>Jellinghaus</surname> <given-names>A.</given-names></name> <name><surname>Herrmann</surname> <given-names>M.</given-names></name></person-group> (<year>2011</year>). <article-title>Time course of implicit processing and explicit processing of emotional faces and emotional words.</article-title> <source><italic>Biol. Psychol.</italic></source> <volume>87</volume> <fpage>265</fpage>&#x2013;<lpage>274</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsycho.2011.03.008</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gong</surname> <given-names>X.</given-names></name> <name><surname>Huang</surname> <given-names>Y. X.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Luo</surname> <given-names>Y. J.</given-names></name></person-group> (<year>2011</year>). <article-title>Revision of the Chinese facial affective picture system.</article-title> <source><italic>Chin. Ment. Health J.</italic></source> <volume>25</volume> <fpage>40</fpage>&#x2013;<lpage>46</lpage>.</citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hansen</surname> <given-names>C. H.</given-names></name> <name><surname>Hansen</surname> <given-names>R. D.</given-names></name></person-group> (<year>1988</year>). <article-title>Finding the face in the crowd: an anger superiority effect.</article-title> <source><italic>J. Pers. Soc. Psychol.</italic></source> <volume>54</volume> <fpage>917</fpage>&#x2013;<lpage>924</lpage>. <pub-id pub-id-type="doi">10.1037/0022-3514.54.6.917</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Klein</surname> <given-names>F.</given-names></name> <name><surname>Iffland</surname> <given-names>B.</given-names></name> <name><surname>Schindler</surname> <given-names>S.</given-names></name> <name><surname>Wabnitz</surname> <given-names>P.</given-names></name> <name><surname>Neuner</surname> <given-names>F.</given-names></name></person-group> (<year>2015</year>). <article-title>This person is saying bad things about you: the influence of physically and socially threatening context information on the processing of inherently neutral faces.</article-title> <source><italic>Cogn. Affect. Behav. Neurosci.</italic></source> <volume>15</volume> <fpage>736</fpage>&#x2013;<lpage>748</lpage>. <pub-id pub-id-type="doi">10.3758/s13415-015-0361-8</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>Q.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Dzhelyova</surname> <given-names>M.</given-names></name> <name><surname>Mo</surname> <given-names>L.</given-names></name></person-group> (<year>2016</year>). <article-title>Effect of affective personality information on face processing: evidence from ERPs.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>7</volume>:<issue>810</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2016.00810</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mo</surname> <given-names>C.</given-names></name> <name><surname>Xia</surname> <given-names>T.</given-names></name> <name><surname>Qin</surname> <given-names>K.</given-names></name> <name><surname>Mo</surname> <given-names>L.</given-names></name></person-group> (<year>2016</year>). <article-title>Natural tendency towards beauty in humans: evidence from binocular rivalry.</article-title> <source><italic>PLoS ONE</italic></source> <volume>11</volume>:<issue>e0150147</issue>. <pub-id pub-id-type="doi">10.1371/journal.pone.0150147</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>&#x00D6;hman</surname> <given-names>A.</given-names></name> <name><surname>Lundqvist</surname> <given-names>D.</given-names></name> <name><surname>Esteves</surname> <given-names>F.</given-names></name></person-group> (<year>2001</year>). <article-title>The face in the crowd revisited: a threat advantage with schematic stimuli.</article-title> <source><italic>J. Pers. Soc. Psychol.</italic></source> <volume>80</volume> <fpage>381</fpage>&#x2013;<lpage>396</lpage>. <pub-id pub-id-type="doi">10.1037/0022-3514.80.3.381</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Olofsson</surname> <given-names>J. K.</given-names></name> <name><surname>Nordin</surname> <given-names>S.</given-names></name> <name><surname>Sequeira</surname> <given-names>H.</given-names></name> <name><surname>Polich</surname> <given-names>J.</given-names></name></person-group> (<year>2008</year>). <article-title>Affective picture processing: an integrative review of ERP findings.</article-title> <source><italic>Biol. Psychol.</italic></source> <volume>77</volume> <fpage>247</fpage>&#x2013;<lpage>265</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsycho.2007.11.006</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rothermund</surname> <given-names>K.</given-names></name> <name><surname>Voss</surname> <given-names>A.</given-names></name> <name><surname>Wentura</surname> <given-names>D.</given-names></name></person-group> (<year>2008</year>). <article-title>Counter-regulation in affective attentional biases: a basic mechanism that warrants flexibility in emotion and motivation.</article-title> <source><italic>Emotion</italic></source> <volume>8</volume> <fpage>34</fpage>&#x2013;<lpage>36</lpage>. <pub-id pub-id-type="doi">10.1037/1528-3542.8.1.34</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rozin</surname> <given-names>P.</given-names></name> <name><surname>Royzman</surname> <given-names>E. B.</given-names></name></person-group> (<year>2001</year>). <article-title>Negativity bias, negativity dominance, and contagion.</article-title> <source><italic>Pers. Soc. Psychol. Rev.</italic></source> <volume>5</volume> <fpage>296</fpage>&#x2013;<lpage>320</lpage>. <pub-id pub-id-type="doi">10.1207/S15327957PSPR0504-2</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sagiv</surname> <given-names>N.</given-names></name> <name><surname>Bentin</surname> <given-names>S.</given-names></name></person-group> (<year>2001</year>). <article-title>Structural encoding of human and schematic faces: holistic and part-based processes.</article-title> <source><italic>J. Cogn. Neurosci.</italic></source> <volume>13</volume> <fpage>937</fpage>&#x2013;<lpage>951</lpage>. <pub-id pub-id-type="doi">10.1162/089892901753165854</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schacht</surname> <given-names>A.</given-names></name> <name><surname>Sommer</surname> <given-names>W.</given-names></name></person-group> (<year>2009</year>). <article-title>Emotions in word and face processing: early and late cortical responses.</article-title> <source><italic>Brain Cogn.</italic></source> <volume>69</volume> <fpage>538</fpage>&#x2013;<lpage>550</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandc.2008.11.005</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schupp</surname> <given-names>H. T.</given-names></name> <name><surname>Jungh&#x00F6;fer</surname> <given-names>M.</given-names></name> <name><surname>Weike</surname> <given-names>A. I.</given-names></name> <name><surname>Hamm</surname> <given-names>A. O.</given-names></name></person-group> (<year>2003</year>). <article-title>Attention and emotion: an ERP analysis of facilitated emotional stimulus processing.</article-title> <source><italic>Neuroreport</italic></source> <volume>14</volume> <fpage>1107</fpage>&#x2013;<lpage>1110</lpage>. <pub-id pub-id-type="doi">10.1097/00001756-200306110-00002</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schupp</surname> <given-names>H. T.</given-names></name> <name><surname>Stockburger</surname> <given-names>J.</given-names></name> <name><surname>Codispoti</surname> <given-names>M.</given-names></name> <name><surname>Jungh&#x00F6;fer</surname> <given-names>M.</given-names></name> <name><surname>Weike</surname> <given-names>A. I.</given-names></name> <name><surname>Hamm</surname> <given-names>A. O.</given-names></name></person-group> (<year>2007</year>). <article-title>Selective visual attention to emotion.</article-title> <source><italic>J. Neurosci.</italic></source> <volume>27</volume> <fpage>1082</fpage>&#x2013;<lpage>1089</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3223-06.2007</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Skowronski</surname> <given-names>J. J.</given-names></name> <name><surname>Carlston</surname> <given-names>D. E.</given-names></name></person-group> (<year>1987</year>). <article-title>Social judgment and social memory: the role of cue diagnosticity in negativity, positivity, and extremity biases.</article-title> <source><italic>J. Pers. Soc. Psychol.</italic></source> <volume>52</volume> <fpage>689</fpage>&#x2013;<lpage>698</lpage>. <pub-id pub-id-type="doi">10.1037/0022-3514.52.4.689</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Smith</surname> <given-names>N. K.</given-names></name> <name><surname>Larsen</surname> <given-names>J. T.</given-names></name> <name><surname>Chartrand</surname> <given-names>T. L.</given-names></name> <name><surname>Cacioppo</surname> <given-names>J. T.</given-names></name> <name><surname>Katafiasz</surname> <given-names>H. A.</given-names></name> <name><surname>Moran</surname> <given-names>K. E.</given-names></name></person-group> (<year>2006</year>). <article-title>Being bad isn&#x2019;t always good: affective context moderates the attention bias toward negative information.</article-title> <source><italic>J. Pers. Soc. Psychol.</italic></source> <volume>90</volume> <fpage>210</fpage>&#x2013;<lpage>220</lpage>. <pub-id pub-id-type="doi">10.1037/0022-3514.90.2.210</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Suess</surname> <given-names>F.</given-names></name> <name><surname>Rabovsky</surname> <given-names>M.</given-names></name> <name><surname>Rahman</surname> <given-names>R. A.</given-names></name></person-group> (<year>2015</year>). <article-title>Perceiving emotions in neutral faces: expression processing is biased by affective person knowledge.</article-title> <source><italic>Soc. Cogn. Affect. Neurosci.</italic></source> <volume>10</volume> <fpage>531</fpage>&#x2013;<lpage>536</lpage>. <pub-id pub-id-type="doi">10.1093/scan/nsu088</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsukiura</surname> <given-names>T.</given-names></name> <name><surname>Shigemune</surname> <given-names>Y.</given-names></name> <name><surname>Nouchi</surname> <given-names>R.</given-names></name> <name><surname>Kambara</surname> <given-names>T.</given-names></name> <name><surname>Kawashima</surname> <given-names>R.</given-names></name></person-group> (<year>2013</year>). <article-title>Insular and hippocampal contributions to remembering people with an impression of bad personality.</article-title> <source><italic>Soc. Cogn. Affect. Neurosci.</italic></source> <volume>8</volume> <fpage>515</fpage>&#x2013;<lpage>522</lpage>. <pub-id pub-id-type="doi">10.1093/scan/nss025</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Werheid</surname> <given-names>K.</given-names></name> <name><surname>Schacht</surname> <given-names>A.</given-names></name> <name><surname>Sommer</surname> <given-names>W.</given-names></name></person-group> (<year>2007</year>). <article-title>Facial attractiveness modulates early and late event-related brain potentials.</article-title> <source><italic>Biol. Psychol.</italic></source> <volume>76</volume> <fpage>100</fpage>&#x2013;<lpage>108</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsycho.2007.06.008</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wieser</surname> <given-names>M. J.</given-names></name> <name><surname>Gerdes</surname> <given-names>A. B.</given-names></name> <name><surname>B&#x00FC;ngel</surname> <given-names>I.</given-names></name> <name><surname>Schwarz</surname> <given-names>K. A.</given-names></name> <name><surname>M&#x00FC;hlberger</surname> <given-names>A.</given-names></name> <name><surname>Pauli</surname> <given-names>P.</given-names></name></person-group> (<year>2014</year>). <article-title>Not so harmless anymore: how context impacts the perception and electrocortical processing of neutral faces.</article-title> <source><italic>Neuroimage</italic></source> <volume>92</volume> <fpage>74</fpage>&#x2013;<lpage>82</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2014.01.022</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wieser</surname> <given-names>M. J.</given-names></name> <name><surname>Pauli</surname> <given-names>P.</given-names></name> <name><surname>Reicherts</surname> <given-names>P.</given-names></name> <name><surname>M&#x00FC;hlberger</surname> <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>Don&#x2019;t look at me in anger! Enhanced processing of angry faces in anticipation of public speaking.</article-title> <source><italic>Psychophysiology</italic></source> <volume>47</volume> <fpage>271</fpage>&#x2013;<lpage>280</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2009.00938.x</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zajonc</surname> <given-names>R. B.</given-names></name></person-group> (<year>2001</year>). <article-title>Mere exposure: a gateway to the subliminal.</article-title> <source><italic>Curr. Dir. Psychol. Sci.</italic></source> <volume>10</volume> <fpage>224</fpage>&#x2013;<lpage>228</lpage>. <pub-id pub-id-type="doi">10.1111/1467-8721.00154</pub-id></citation></ref>
</ref-list>
</back>
</article>