<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="editorial">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2021.771529</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Editorial</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Editorial: Discrimination of Genuine and Posed Facial Expressions of Emotion</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Zhou</surname> <given-names>Huiyu</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/644449/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Li</surname> <given-names>Ling</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/153906/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Shan</surname> <given-names>Shiguang</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/153076/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Wang</surname> <given-names>Shuo</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/177933/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Liu</surname> <given-names>Jian K.</given-names></name>
<xref ref-type="aff" rid="aff5"><sup>5</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/541394/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>School of Computing and Mathematical Sciences, University of Leicester</institution>, <addr-line>Leicester</addr-line>, <country>United Kingdom</country></aff>
<aff id="aff2"><sup>2</sup><institution>School of Computing, University of Kent</institution>, <addr-line>Canterbury</addr-line>, <country>United Kingdom</country></aff>
<aff id="aff3"><sup>3</sup><institution>Institute of Computing Technology, University of Chinese Academy of Sciences</institution>, <addr-line>Beijing</addr-line>, <country>China</country></aff>
<aff id="aff4"><sup>4</sup><institution>Department of Chemical and Biomedical Engineering and Rockefeller Neuroscience Institute, West Virginia University</institution>, <addr-line>Morgantown, WV</addr-line>, <country>United States</country></aff>
<aff id="aff5"><sup>5</sup><institution>School of Computing, University of Leeds</institution>, <addr-line>Leeds</addr-line>, <country>United Kingdom</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited and reviewed by: Rufin VanRullen, Centre National de la Recherche Scientifique (CNRS), France</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Huiyu Zhou <email>hz143&#x00040;leicester.ac.uk</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Perception Science, a section of the journal Frontiers in Neuroscience</p></fn></author-notes>
<pub-date pub-type="epub">
<day>07</day>
<month>10</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>15</volume>
<elocation-id>771529</elocation-id>
<history>
<date date-type="received">
<day>06</day>
<month>09</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>22</day>
<month>09</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2021 Zhou, Li, Shan, Wang and Liu.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Zhou, Li, Shan, Wang and Liu</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<related-article id="RA1" related-article-type="commentary-article" xlink:href="https://www.frontiersin.org/research-topics/9221/discrimination-of-genuine-and-posed-facial-expressions-of-emotion" ext-link-type="uri">Editorial on the Research Topic <article-title>Discrimination of Genuine and Posed Facial Expressions of Emotion</article-title></related-article>
<kwd-group>
<kwd>discrimination</kwd>
<kwd>genuine</kwd>
<kwd>posed facial expressions</kwd>
<kwd>emotion</kwd>
<kwd>visual presentation</kwd>
</kwd-group>
<counts>
<fig-count count="0"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="0"/>
<page-count count="2"/>
<word-count count="1105"/>
</counts>
</article-meta>
</front>
<body>
<p>Facial expressions convey emotional states in interpersonal situations. Evidence shows that part of a facial display reflects the emotional experience genuinely felt by the expresser. Interestingly, human beings are also capable of producing facial expressions of emotions they do not feel, as a form of intentional deceit used to manage social interactions and to present displays that win the approval of others. Staged or posed facial expressions enact an emotion that the expresser intends to convey, whereas genuine expressions accompany spontaneously experienced emotions. The ability to differentiate genuine displays of emotional experience from posed ones is therefore important for navigating day-to-day social interactions.</p>
<p>Recent work has examined whether people can distinguish between posed and genuine displays of emotion. Although few studies have investigated this ability directly, most prior research suggests that people can judge whether a facial display is genuine or posed. Unfortunately, previous research has suffered from two major shortcomings: (1) conflating staged and genuine displays by failing to account for possible effects of intentional manipulation, and (2) neglecting dynamic aspects when presenting facial stimuli for experimental investigation.</p>
<p>This Research Topic gathers theoretical and experimental perspectives that broaden our understanding of how genuine and posed facial expressions of emotion are discriminated. Some contributions report new theoretical approaches, including ones drawn from areas of psychology not usually applied to the discrimination of genuine and staged emotions; others introduce new theories and experimental designs.</p>
<p>In the article entitled &#x0201C;The role of low-spatial frequency components in the processing of deceptive faces: A study using artificial face models,&#x0201D; <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fpsyg.2019.01468">Kihara and Takeda</ext-link> investigated how spatial frequency information can be used to interpret true emotion. &#x0201C;A call for the empirical investigation of tear stimuli,&#x0201D; authored by <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fpsyg.2020.00052">Krivan and Thomas</ext-link>, argued for empirical investigation of the differences (or similarities) in responses to posed and genuine tearful expressions. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fnins.2020.00329">Zhang et al.</ext-link> conducted research on &#x0201C;Brain activation in contrasts of micro-expression following emotional contexts,&#x0201D; predicting that the effect of emotional contexts depends on neural activity. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fpsyg.2020.01378">Lander and Butcher&#x00027;s</ext-link> article, entitled &#x0201C;Recognizing genuine from posed facial expressions: Exploring the role of dynamic information and face familiarity,&#x0201D; reported the importance of motion for the recognition of face identity before critically outlining the role of dynamic information in determining facial expression and in distinguishing genuine from posed expressions of emotion.</p>
<p><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fpsyg.2020.580287">Jia et al.</ext-link> reviewed the relevant research, including spontaneous vs. posed (SVP) facial expression databases and computer-vision-based detection methods, in their article entitled &#x0201C;Detection of genuine and posed facial expressions of emotion: Databases and methods.&#x0201D; In the article authored by <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fncom.2020.587702">Ron-Angevin et al.</ext-link>, &#x0201C;Performance analysis with different types of visual stimuli in a BCI-based speller under an RSVP paradigm,&#x0201D; three sets of stimuli with different communication features were assessed under rapid serial visual presentation: white letters, famous faces, and neutral pictures. In the article &#x0201C;Identifying emotional expressions: Children&#x00027;s reasoning about pretend emotions of sadness and anger,&#x0201D; <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fpsyg.2020.602385">Serrat et al.</ext-link> examined children&#x00027;s capacity to identify pretend emotions by analyzing the different sources of information children use when interpreting emotions simulated in pretend play contexts. In the research work &#x0201C;Deep neural networks for depression recognition based on 2D and 3D facial expressions under emotional stimulus tasks,&#x0201D; Guo et al. created a large-scale dataset of subjects performing five mood elicitation tasks, and proposed a novel approach to depression recognition using two different deep belief network models. Finally, this Research Topic includes a survey authored by <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fpsyg.2021.653112">Webster et al.</ext-link>, entitled &#x0201C;Review: Posed vs. genuine facial emotion recognition and expression in Autism and implications for intervention,&#x0201D; which comprehensively discusses the literature on deficits in facial emotion recognition in individuals with autism spectrum disorder.</p>
<p>The studies presented above mark a milestone in research on the discrimination of genuine and posed facial expressions of emotion. Moving forward, we anticipate further deep and thorough analyses of emotion using emerging artificial intelligence techniques.</p>
<sec id="s1">
<title>Author Contributions</title>
<p>All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s2">
<title>Publisher&#x00027;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
</article>