<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychology</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychology</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Research Foundation</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2012.00124</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Early Deafness Increases the Face Inversion Effect But Does Not Modulate the Composite Face Effect</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>de Heering</surname> <given-names>Ad&#x000E9;la&#x000EF;de</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn001">&#x0002A;</xref>
<!-- http://www.frontiersin.org/Community/WhosWhoActivity.aspx?sname=AdelaideDe_Heering&UID=6506 -->
</contrib>
<contrib contrib-type="author">
<name><surname>Aljuhanay</surname> <given-names>Abeer</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Rossion</surname> <given-names>Bruno</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<!-- http://www.frontiersin.org/Community/WhosWhoActivity.aspx?sname=BrunoRossion&UID=2838 -->
</contrib>
<contrib contrib-type="author">
<name><surname>Pascalis</surname> <given-names>Olivier</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<!-- http://www.frontiersin.org/Community/WhosWhoActivity.aspx?sname=OlivierPascalis_1&UID=15356 -->
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Face Categorization Lab, Facult&#x000E9; de Psychologie et des Sciences de l&#x02019;Education, Universit&#x000E9; Catholique de Louvain</institution> <country>Louvain-la-Neuve, Belgium</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Psychology, The University of Sheffield</institution> <country>Sheffield, UK</country></aff>
<aff id="aff3"><sup>3</sup><institution>Laboratoire de Psychologie et Neurocognition, Centre national de la recherche scientifique, Universit&#x000E9; Pierre Mendes</institution> <country>Grenoble, France</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Laurence T. Maloney, Stanford University, USA</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Gyula Kov&#x000E1;cs, Budapest University of Technology, Hungary; Corrado Caudek, Universit&#x000E0; di Firenze, Italy</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Ad&#x000E9;la&#x000EF;de de Heering, Face Categorization Lab, Facult&#x000E9; de Psychologie et des Sciences de l&#x02019;Education, Universit&#x000E9; Catholique de Louvain, Place Cardinal Mercier, 10, 1348 Louvain-la-Neuve, Belgium. e-mail: <email>adelaide.deheering&#x00040;uclouvain.be</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Frontiers in Perception Science, a specialty of Frontiers in Psychology.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>25</day>
<month>04</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<volume>3</volume>
<elocation-id>124</elocation-id>
<history>
<date date-type="received">
<day>26</day>
<month>10</month>
<year>2011</year>
</date>
<date date-type="accepted">
<day>08</day>
<month>04</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2012 de Heering, Aljuhanay, Rossion and Pascalis.</copyright-statement>
<copyright-year>2012</copyright-year>
<license license-type="open-access" xlink:href="http://www.frontiersin.org/licenseagreement"><p>This is an open-access article distributed under the terms of the <uri xlink:href="http://creativecommons.org/licenses/by-nc/3.0/">Creative Commons Attribution Non Commercial License</uri>, which permits non-commercial use, distribution, and reproduction in other forums, provided the original authors and source are credited.</p></license>
</permissions>
<abstract>
<p>Early deprivation in audition can have striking effects on the development of visual processing. Here we investigated whether early deafness induces changes in holistic/configural face processing. To this end, we compared the results of a group of early deaf participants to those of a group of hearing participants in an inversion-matching task (Experiment 1) and a composite face task (Experiment 2). We hypothesized that, if their holistic/configural face processing were enhanced, deaf individuals would show a larger inversion effect and/or a larger composite face effect than hearing controls. Conversely, these effects would be reduced if they relied more on facial features than hearing controls. We found that deaf individuals showed a larger inversion effect for faces, but not for non-face objects. They were also significantly slower than hearing controls at matching inverted faces. However, the two populations did not differ in the overall size of their composite face effect. Altogether, these results suggest that early deafness neither enhances nor reduces the amount of holistic/configural processing devoted to faces, but may increase the dependency on this mode of processing.</p>
</abstract>
<kwd-group>
<kwd>faces</kwd>
<kwd>configural</kwd>
<kwd>holistic</kwd>
<kwd>inversion</kwd>
<kwd>composite</kwd>
<kwd>deaf</kwd>
<kwd>hearing</kwd>
</kwd-group>
<counts>
<fig-count count="7"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="47"/>
<page-count count="10"/>
<word-count count="6743"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction">
<title>Introduction</title>
<p>Traditionally, there have been two main views on how humans recognize faces: the analytical view and the holistic/configural view (Ellis, <xref ref-type="bibr" rid="B12">1975</xref>; Sergent, <xref ref-type="bibr" rid="B38">1986</xref>). According to the analytical view (e.g., Haig, <xref ref-type="bibr" rid="B18">1984</xref>; Gosselin and Schyns, <xref ref-type="bibr" rid="B17">2001</xref>; Sadr et al., <xref ref-type="bibr" rid="B36">2003</xref>), observers explore a face by scanning local features in order to extract the most diagnostic information to individualize the face. According to the holistic/configural view, the features are not perceived and represented independently of each other. Instead the face is perceived as an integrated whole (Sergent, <xref ref-type="bibr" rid="B38">1986</xref>; Tanaka and Farah, <xref ref-type="bibr" rid="B41">1993</xref>; Farah et al., <xref ref-type="bibr" rid="B15">1998</xref>; Maurer et al., <xref ref-type="bibr" rid="B26">2002</xref>; Rossion, <xref ref-type="bibr" rid="B33">2008</xref>). The holistic/configural view has been mainly supported by behavioral studies showing that the perception of a given facial feature (e.g., a local element such as an eye, a distance between two elements, or even half of a face) is influenced by the position and the identity of other facial features (Sergent, <xref ref-type="bibr" rid="B38">1986</xref>; Young et al., <xref ref-type="bibr" rid="B47">1987</xref>; Tanaka and Farah, <xref ref-type="bibr" rid="B41">1993</xref>; Farah et al., <xref ref-type="bibr" rid="B15">1998</xref>).</p>
<p>Holistic/configural processing matures early on (e.g., Tanaka et al., <xref ref-type="bibr" rid="B42">1998</xref>; de Heering et al., <xref ref-type="bibr" rid="B11">2007</xref>) and its integrity depends on early visual experience (e.g., Maurer et al., <xref ref-type="bibr" rid="B27">2007</xref>). This is illustrated by studies of patients who missed early visual inputs because dense, opaque cataracts prevented any patterned visual input from reaching the retina from birth until the day of surgery. Specifically, when tested in adulthood, these cataract-reversal patients perform worse than age-matched controls when they have to discriminate faces that differ in terms of the relative distances between their features (Le Grand et al., <xref ref-type="bibr" rid="B21">2001</xref>). Unlike that of age-matched controls, their matching of one half of a face is also not influenced by the identity of the other half (absence of a composite face effect; Le Grand et al., <xref ref-type="bibr" rid="B22">2004</xref>).</p>
<p>An unresolved issue is whether the absence of early inputs in a modality other than vision, audition for instance, can modulate holistic/configural processing. There are a few reasons why this could be the case. First, experiments performed on animals have shown that specific regions of the auditory cortex of congenitally deaf cats mediate enhanced visual performance (e.g., Lomber et al., <xref ref-type="bibr" rid="B25">2010</xref>; for a review, see Rauschecker, <xref ref-type="bibr" rid="B32">1995</xref>). The same phenomenon has also been observed in humans (for a review, see Bavelier and Neville, <xref ref-type="bibr" rid="B5">2002</xref>). More specifically, the absence of audition, sometimes together with the use of sign language, has been identified as a potential factor favoring intermediate and high-level vision: deaf individuals are better than hearing controls at performing mental rotation (Emmorey et al., <xref ref-type="bibr" rid="B13">1993</xref>) and gestalt completion (Siple et al., <xref ref-type="bibr" rid="B39">1978</xref>), as well as at detecting (Loke and Song, <xref ref-type="bibr" rid="B24">1991</xref>; Stivalet et al., <xref ref-type="bibr" rid="B40">1998</xref>; Armstrong et al., <xref ref-type="bibr" rid="B1">2002</xref>) and discriminating (Neville and Lawson, <xref ref-type="bibr" rid="B30">1987</xref>) moving stimuli appearing in the periphery, but not in the center, of the visual field (Bavelier et al., <xref ref-type="bibr" rid="B6">2000</xref>, <xref ref-type="bibr" rid="B4">2006</xref>; Bosworth and Dobkins, <xref ref-type="bibr" rid="B8">2002</xref>; Buckley et al., <xref ref-type="bibr" rid="B9">2010</xref>; although, see Hauser et al., <xref ref-type="bibr" rid="B19">2007</xref>).</p>
<p>Regarding face recognition abilities, deaf individuals&#x02019; superiority over hearing individuals, as well as the exact role of early exposure to sign language, is still debated. Using a card memory game, Arnold and Murray (<xref ref-type="bibr" rid="B3">1998</xref>) found that deaf signers were better than hearing signers at matching faces, but not objects. In turn, hearing signers performed better than hearing non-signers. Given these results, the authors attributed deaf individuals&#x02019; superiority to their use of sign language and raised the appealing possibility that deafness and the long-term use of sign language might have additive effects. However, these authors did not control for the age of sign language acquisition, which makes strong conclusions about the importance of signing difficult to draw. Later on, Arnold and Mills (<xref ref-type="bibr" rid="B2">2001</xref>) compared the performance of a group of deaf signers, hearing signers, and hearing non-signers in a task where they had to memorize the location of objects, faces, and shoes. They found that deaf signers performed like hearing signers, both of whom were better than hearing non-signers on the face and shoe tasks. Along the same line, Bettger et al. (<xref ref-type="bibr" rid="B7">1997</xref>) reported that deaf individuals using American Sign Language (ASL) performed significantly better than hearing non-signers at discriminating face photographs presented under different views and lighting. In another experiment, the same authors showed that hearing signers born to deaf parents also performed better than hearing non-signers in this task, suggesting indirectly that the enhanced performance of deaf signers is linked to their experience with ASL rather than to their auditory deprivation. 
Interestingly, deaf signers&#x02019; expertise with upright faces does not extend to inverted faces (Bettger et al., <xref ref-type="bibr" rid="B7">1997</xref>), Mooney faces, or faces from the Warrington recognition memory test (1984), in which participants have to recognize previously memorized faces (McCullough and Emmorey, <xref ref-type="bibr" rid="B28">1997</xref>).</p>
<p>Although McCullough and Emmorey&#x02019;s (<xref ref-type="bibr" rid="B28">1997</xref>) study shows that deaf people are superior at detecting subtle manipulations introduced at the level of the mouth, the studies described above focus more on this population&#x02019;s overall level of performance with faces than on <italic>how</italic> they process faces. Here we addressed this issue by testing a group of deaf participants and a group of hearing participants in an inversion-matching task (Experiment 1) and a composite face task (Experiment 2). In Experiment 1, we used picture-plane inversion as a manipulation because it disrupts the ability to process faces to a greater extent than it does for non-face stimuli (Yin, <xref ref-type="bibr" rid="B46">1969</xref>). Specifically, it has been suggested that whereas upright faces are encoded as integrated wholes, inverted faces are processed feature-by-feature, in a piecemeal manner (e.g., Yin, <xref ref-type="bibr" rid="B46">1969</xref>; Sergent, <xref ref-type="bibr" rid="B37">1984</xref>; Farah et al., <xref ref-type="bibr" rid="B14">1995</xref>; Maurer et al., <xref ref-type="bibr" rid="B26">2002</xref>; Rossion, <xref ref-type="bibr" rid="B33">2008</xref>, <xref ref-type="bibr" rid="B34">2009</xref>). We hypothesized that, independently of whether the stimuli are presented simultaneously on the screen (Experiment 1A) or with a delay between the target and the probes (Experiment 1B), if deaf participants focus more than hearing participants on facial details such as the mouth, as previously suggested by McCullough and Emmorey (<xref ref-type="bibr" rid="B28">1997</xref>), they could show a reduced or abolished inversion effect. 
Indeed, it has been shown that individuals with acquired prosopagnosia, who rely more on some facial features than controls do, without being able to integrate them into a coherent template, show an abolished face inversion effect (see Busigny and Rossion, <xref ref-type="bibr" rid="B10">2010</xref> for a recent review). Alternatively, if deaf individuals integrate facial features to a greater extent than hearing controls because they are used to processing a widely distributed range of visual information (by simultaneously processing the mouth and the eyes to understand the syntactic structure of a sentence, for example), they could show an equally large or even enhanced face inversion effect compared to hearing controls. In Experiment 2, we used the composite face effect originally reported by Young et al. (<xref ref-type="bibr" rid="B47">1987</xref>). In the context of a matching task with unfamiliar faces (Hole, <xref ref-type="bibr" rid="B20">1994</xref>), it refers to the observation that two identical top parts of a face are perceived as slightly different if their respective bottom parts belong to different facial identities. This perceptual illusion is abolished or strongly reduced if the top and bottom parts of the face are laterally offset. Here we used the paradigm in its standard form, asking participants to pay attention to the top parts of the faces while their bottom halves differed (Experiment 2A). We expected hearing participants to show a strong composite effect on accuracy and/or correct response times (for empirical demonstrations, see for example, Hole, <xref ref-type="bibr" rid="B20">1994</xref>; Le Grand et al., <xref ref-type="bibr" rid="B22">2004</xref>; Goffaux and Rossion, <xref ref-type="bibr" rid="B16">2006</xref>; Michel et al., <xref ref-type="bibr" rid="B29">2006</xref>; Rossion and Boremanse, <xref ref-type="bibr" rid="B35">2008</xref>). 
Consistent with the predictions of Experiment 1, we also hypothesized that deaf individuals would show a larger composite effect than hearing controls if their holistic/configural face processing were enhanced. Conversely, their composite effect would be reduced if they relied more on facial features than hearing controls. Participants were also asked to judge the bottom parts of another set of composite faces (Experiment 2B). As previously shown (Young et al., <xref ref-type="bibr" rid="B47">1987</xref>; Ramon et al., <xref ref-type="bibr" rid="B31">2010</xref>), we expected hearing participants to show a smaller composite effect than in Experiment 2A. We also hypothesized that deaf participants would be less affected than hearing participants in this part of the experiment because of their everyday use of lip-reading.</p>
</sec>
<sec>
<title>Experiment 1: The Inversion Effect</title>
<sec>
<title>Methods</title>
<sec>
<title>Participants</title>
<p>The sample was composed of 35 deaf participants (mean age: 36&#x02009;years; 12 males) from Belgium (<italic>N</italic>&#x02009;&#x0003D;&#x02009;20; mean age: 38&#x02009;years; six males; two left-handed) and the United Kingdom (<italic>N</italic>&#x02009;&#x0003D;&#x02009;15; mean age: 35&#x02009;years; six males; all right-handed). None had a history of neurological disorders, and all were characterized by a severe to profound hearing loss (&#x0003E;80&#x02009;dB, based on a questionnaire; see Table <xref ref-type="table" rid="T1">1</xref>). We refer to our participants as early deaf because they were either congenitally deaf (<italic>N</italic>&#x02009;&#x0003D;&#x02009;25; 71%), became deaf between 9&#x02009;months and 13&#x02009;years of age (<italic>N</italic>&#x02009;&#x0003D;&#x02009;8; 23%), or were deaf from an undetermined period during childhood because of the absence of an early diagnosis (<italic>N</italic>&#x02009;&#x0003D;&#x02009;2; 6%). With the exception of two participants who became deaf during infancy/childhood and were not fluent signers, all participants used sign language, which they had learned from one or two deaf signing parents or through attending a school where sign language was promoted (for more details, see Table <xref ref-type="table" rid="T1">1</xref>). Thirty-five Belgian (<italic>N</italic>&#x02009;&#x0003D;&#x02009;20; mean age: 37&#x02009;years; five males; one left-handed) and British (<italic>N</italic>&#x02009;&#x0003D;&#x02009;15; mean age: 34&#x02009;years; six males; all right-handed) hearing adults were also tested. None of them had signing expertise. The hearing group matched the deaf group in sex and age [<italic>t</italic>(68)&#x02009;&#x0003D;&#x02009;0.227, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.821]. Every participant had normal or corrected-to-normal visual acuity.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>Characteristics of the deaf sample</bold>.</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr>
<td align="left" valign="top">Gender</td>
<td align="left" valign="top">Female</td>
<td align="right" valign="top">23</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">Male</td>
<td align="right" valign="top">12</td>
</tr>
<tr>
<td align="left" valign="top">Country</td>
<td align="left" valign="top">Belgium</td>
<td align="right" valign="top">20</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">United Kingdom</td>
<td align="right" valign="top">15</td>
</tr>
<tr>
<td align="left" valign="top">Origin of deafness</td>
<td align="left" valign="top">Birth</td>
<td align="right" valign="top">25</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">Infancy or childhood</td>
<td align="right" valign="top">&#x02009;&#x02009;&#x02009;8</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">Undefined</td>
<td align="right" valign="top">&#x02009;&#x02009;&#x02009;2</td>
</tr>
<tr>
<td align="left" valign="top">Causes of deafness</td>
<td align="left" valign="top">Heredity</td>
<td align="right" valign="top">15</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">Pregnancy related</td>
<td align="right" valign="top">&#x02009;&#x02009;&#x02009;2</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">German measles</td>
<td align="right" valign="top">&#x02009;&#x02009;&#x02009;2</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">Childhood illness</td>
<td align="right" valign="top">&#x02009;&#x02009;&#x02009;1</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">Meningitis</td>
<td align="right" valign="top">&#x02009;&#x02009;&#x02009;3</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">Others</td>
<td align="right" valign="top">&#x02009;&#x02009;&#x02009;4</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">Unknown</td>
<td align="right" valign="top">&#x02009;&#x02009;&#x02009;8</td>
</tr>
<tr>
<td align="left" valign="top">Sign language</td>
<td align="left" valign="top">Yes</td>
<td align="right" valign="top">33</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">No</td>
<td align="right" valign="top">&#x02009;&#x02009;&#x02009;2</td>
</tr>
<tr>
<td align="left" valign="top">Age sign language was acquired</td>
<td align="left" valign="top">Birth</td>
<td align="right" valign="top">11</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">Infancy</td>
<td align="right" valign="top">&#x02009;&#x02009;&#x02009;4</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">Childhood</td>
<td align="right" valign="top">12</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">Adulthood</td>
<td align="right" valign="top">&#x02009;&#x02009;&#x02009;6</td>
</tr>
<tr>
<td align="left" valign="top">Lip-reading</td>
<td align="left" valign="top">Yes</td>
<td align="right" valign="top">34</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">No</td>
<td align="right" valign="top">&#x02009;&#x02009;&#x02009;1</td>
</tr>
<tr>
<td align="left" valign="top">Hearing aid</td>
<td align="left" valign="top">One ear</td>
<td align="right" valign="top">&#x02009;&#x02009;&#x02009;9</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">Two ears</td>
<td align="right" valign="top">15</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">No</td>
<td align="right" valign="top">11</td>
</tr>
<tr>
<td align="left" valign="top">Family history of deafness</td>
<td align="left" valign="top">Yes</td>
<td align="right" valign="top">15</td>
</tr>
<tr>
<td align="left"/>
<td align="left" valign="top">No</td>
<td align="right" valign="top">20</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>Stimuli</title>
<p>In both experiments (Experiment 1A&#x02013;Experiment 1B), grayscale photographs of 24 individuals (12 women) and 24 cars were used. One full-front and one 3/4 profile of each face and car picture were created. Faces and cars subtended 5&#x02009;&#x000D7;&#x02009;7.8&#x000B0; and 7.1&#x02009;&#x000D7;&#x02009;5.7&#x000B0; of visual angle, respectively. All stimuli were displayed on a white background and presented either in upright or inverted orientation. More details about the stimuli are provided in Busigny and Rossion (<xref ref-type="bibr" rid="B10">2010</xref>).</p>
</sec>
<sec>
<title>Procedure</title>
<p>Participants were tested at home on a laptop computer (Belgium) or at the Department of Psychology of the University of Sheffield on a computer monitor (United Kingdom), at a viewing distance of 50&#x02009;cm. Stimulus presentation was controlled by E-prime 1.1. Participants had to identify the target among two 3/4 profile items presented at the bottom of the screen (simultaneous presentation, Experiment 1A; Figure <xref ref-type="fig" rid="F1">1</xref>), or on a second screen after the target had been presented centrally (delayed presentation, Experiment 1B; Figure <xref ref-type="fig" rid="F2">2</xref>). The order of the experiments was counterbalanced across participants. The probe and the distracter were presented in the same orientation (upright/inverted) as the target. In the simultaneous version of the test (Experiment 1A), a trial ended with the participant&#x02019;s response and was followed by a 1000-ms inter-stimulus interval. In the delayed version of the test (Experiment 1B), each trial started with a blank screen (1000&#x02009;ms), followed by a target (2000&#x02009;ms), an inter-stimulus interval (1000&#x02009;ms), and the probe together with a distracter, which remained on screen until the participant&#x02019;s response. In all cases, participants were instructed to select the stimulus corresponding to the target by pressing a key on a keyboard according to its position (left/right; Experiment 1A) or its similarity with the target (same/different; Experiment 1B). Experiment 1A was divided into two blocks of 72 randomized trials preceded by seven practice trials. Half of the trials (<italic>n</italic>&#x02009;&#x0003D;&#x02009;36; 1/2 upright and 1/2 inverted) were composed of face stimuli, the other half of car stimuli (<italic>n</italic>&#x02009;&#x0003D;&#x02009;36; 1/2 upright and 1/2 inverted). Experiment 1B was divided into two blocks of 48 randomized trials. 
Half of the trials were faces (<italic>n</italic>&#x02009;&#x0003D;&#x02009;24; 1/2 upright and 1/2 inverted), the other half were cars (<italic>n</italic>&#x02009;&#x0003D;&#x02009;24; 1/2 upright and 1/2 inverted).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>In Experiment 1A, participants had to decide which of the left or right stimulus (face or car) presented at the bottom of the panel corresponded to the target (face or car) presented at the top of the panel</bold>. The stimuli were presented in upright or inverted orientation.</p></caption>
<graphic xlink:href="fpsyg-03-00124-g001.tif"/>
</fig>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>In Experiment 1B, participants had to decide which of the left or right stimulus (face or car) presented on a second screen corresponded to the target (face or car) presented on the first screen</bold>. The stimuli were presented in upright or inverted orientation.</p></caption>
<graphic xlink:href="fpsyg-03-00124-g002.tif"/>
</fig>
</sec>
<sec>
<title>Analyses</title>
<p>Participants&#x02019; results (Deaf: <italic>N</italic>&#x02009;&#x0003D;&#x02009;35; Controls: <italic>N</italic>&#x02009;&#x0003D;&#x02009;35) were analyzed separately according to whether all three stimuli were presented simultaneously (Experiment 1A) or with a delay between the target and the probes (Experiment 1B). Participants&#x02019; accuracy (% of correct responses) and correct response times (ms) within 3 SDs of each participant&#x02019;s own average were included in the analyses. We performed repeated measures analyses of variance (ANOVAs) on participants&#x02019; accuracy (%) and correct response times (ms) for each experiment separately, with the <italic>orientation</italic> of the stimulus (upright vs. inverted) and the stimulus <italic>category</italic> (faces vs. cars) as within-subject factors, and the <italic>group</italic> (deaf vs. hearing) as the between-subjects factor. We further performed additional ANOVAs for each stimulus category separately, as well as independent <italic>t</italic>-tests to compare the groups. Finally, we replicated the analyses without the two non-signing participants who became deaf during infancy/childhood (Deaf: <italic>N</italic>&#x02009;&#x0003D;&#x02009;33; Controls: <italic>N</italic>&#x02009;&#x0003D;&#x02009;33). As their exclusion did not change the general pattern of results, their data were retained in the analyses.</p>
</sec>
</sec>
<sec>
<title>Results</title>
<sec>
<title>Experiment 1A: Simultaneous presentation</title>
<p>Participants were significantly more accurate with cars than with faces [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;175.958, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001] as well as with upright stimuli than with inverted stimuli [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;183.812, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001]. The inversion effect was significant for both faces [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;168.017, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001] and cars [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;20.540, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001] and was larger for faces than for cars, as evidenced by a significant two-way interaction between the category and the orientation of the stimulus [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;105.129, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001]. Deaf participants were also better than age-matched controls at matching stimuli presented simultaneously [Deaf: <italic>X</italic>&#x02009;&#x0003D;&#x02009;90%, SD&#x02009;&#x0003D;&#x02009;6; Controls: <italic>X</italic>&#x02009;&#x0003D;&#x02009;88%, SD&#x02009;&#x0003D;&#x02009;6; <italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;4.187, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.045]. There was no interaction between the group of participants and either the stimulus category or the stimulus orientation, nor between the group, the stimulus category, and the orientation of the stimulus (<italic>p</italic>s&#x02009;&#x0003E;&#x02009;0.05).</p>
<p>Overall, participants were also faster at matching cars than faces [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;105.318, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001] and upright stimuli than inverted stimuli [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;134.847, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001]. However, there was no significant difference between deaf and hearing subjects in terms of response times [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;3.510, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.065]. As for accuracy, there was a significant inversion effect for faces [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;99.277, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001] and for cars [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;29.447, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001]. Furthermore, the inversion effect was larger for faces than for cars, as illustrated by the significant interaction between the category and the orientation of the stimulus [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;49.741, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001]. A significant interaction between the group and the orientation of the stimulus [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;9.381, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.003] was also found but, crucially for the purpose of this study, there was a significant three-way interaction between category, orientation, and group [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;4.389, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.040]. As a follow-up on these interactions, we conducted ANOVAs for each stimulus category separately (faces/cars). 
We found a significant two-way interaction between the stimulus orientation and the group of participants for faces [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;7.607, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.007] but not for cars [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;1.218, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.274], because deaf participants were significantly slower than controls at matching inverted faces [<italic>t</italic>(68)&#x02009;&#x0003D;&#x02009;2.001, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.049] but not upright faces [<italic>t</italic>(68)&#x02009;&#x0003D;&#x02009;0.779, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.438; Figure <xref ref-type="fig" rid="F3">3</xref>].</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>Proportion of correct responses (A) and correct response times (B) of deaf participants (white) and hearing participants (black) when matching upright and inverted faces or cars presented simultaneously on the screen (Experiment 1A)</bold>. Bars represent SEs. Asterisks indicate significant differences between the two groups (<italic>p</italic>&#x02009;&#x0003C;&#x02009;0.05).</p></caption>
<graphic xlink:href="fpsyg-03-00124-g003.tif"/>
</fig>
</sec>
<sec>
<title>Experiment 1B: Delayed presentation</title>
<p>As in Experiment 1A, participants performed significantly better with cars than with faces [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;168.597, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001] as well as with upright than with inverted stimuli [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;199.781, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001]. These results led to significant inversion effects for both faces [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;209.746, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001] and cars [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;14.311, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001]. There was also a significant two-way interaction between the stimulus category and the orientation of the stimulus, indicating a larger inversion effect for faces than for cars [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;101.950, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001]. In addition, deaf subjects were significantly better than age-matched controls at matching the target to the probe item [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;4.461, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.038]. As in Experiment 1A, there was no three-way interaction between the category, the orientation, and the group, nor any interaction between the group and either the stimulus category or the orientation of the stimulus (<italic>p</italic>s&#x02009;&#x0003E;&#x02009;0.05).</p>
<p>With regard to response times, deaf participants were generally slower than controls in this version of the experiment [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;7.743, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.007]. The inversion effect was again significant for both faces [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;58.434, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001] and cars [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;76.602, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001], and was larger for faces, as shown by the category by orientation interaction [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;10.966, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001]. Here we found a significant two-way interaction between group and orientation [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;5.307, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.024], whereas the three-way interaction between category, orientation, and group just failed to reach statistical significance [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;3.087, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.083]. However, separate ANOVAs revealed that deaf and hearing subjects showed a significantly different face inversion effect [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;4.997, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.029], which was not the case when cars were involved [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;0.459, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.5]. As in Experiment 1A, deaf individuals were significantly slower than controls at matching inverted faces [<italic>t</italic>(68)&#x02009;&#x0003D;&#x02009;2.831, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.006] but not upright faces [<italic>t</italic>(68)&#x02009;&#x0003D;&#x02009;1.897, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.062; Figure <xref ref-type="fig" rid="F4">4</xref>].</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Proportion of correct responses (A) and correct response times (B) of deaf participants (white) and hearing participants (black) when matching upright and inverted faces or cars presented with a delay between the target and the probes (Experiment 1B)</bold>. Bars represent SEs. Asterisks indicate significant differences between the two groups (<italic>p</italic>&#x02009;&#x0003C;&#x02009;0.05).</p></caption>
<graphic xlink:href="fpsyg-03-00124-g004.tif"/>
</fig>
</sec>
<sec>
<title>Comparison between experiments</title>
<p>As previously suggested for some, but not all, aspects of vision (e.g., Bavelier et al., <xref ref-type="bibr" rid="B4">2006</xref>), deaf participants were generally more accurate than hearing controls at discriminating faces and cars. Their response times were slower than those of hearing participants when the stimuli were presented with a delay; conversely, they were as fast as controls when faces or cars appeared simultaneously on the screen (see Hauser et al., <xref ref-type="bibr" rid="B19">2007</xref> for similar results). The most interesting observation was that deaf participants showed an increased face inversion effect in response times compared to hearing participants, both during simultaneous matching and during delayed matching. This finding cannot be explained by a general effect of inverting a stimulus, because the inversion effect for cars was of the same magnitude in the two populations across the two experiments. Instead, the group difference suggests that deaf participants are more dependent on holistic/configural processing than hearing observers, taking significantly more time than controls when the inversion of the face prevents them from relying on holistic/configural processing.</p>
</sec>
</sec>
</sec>
<sec>
<title>Experiment 2: The Composite Face Effect</title>
<sec>
<title>Methods</title>
<sec>
<title>Participants</title>
<p>The 35 hearing and 35 deaf participants were the same as those tested in Experiment 1. The order of Experiments 2A and 2B was counterbalanced across participants, as was the order of Experiments 1 and 2.</p>
</sec>
<sec>
<title>Stimuli</title>
<p>Gray-scaled full-front pictures of 40 unfamiliar faces (20 women; neutral expression, no glasses or facial hair) were used to measure the magnitude of the composite face effect. Each face was divided into a top and a bottom segment by cutting it in the middle of the nose using Adobe Photoshop 7.0. These stimuli were considered as the original aligned faces (Figure <xref ref-type="fig" rid="F5">5</xref>), and as the original misaligned faces when their bottom part was laterally offset to the right so that the middle of the nose (bottom part) was vertically aligned with the contour of the top part. The aligned and misaligned stimuli subtended 9.9&#x02009;&#x000D7;&#x02009;7.8&#x000B0; and 9.9&#x02009;&#x000D7;&#x02009;11.3&#x000B0; of visual angle, respectively. All stimuli were displayed on a light gray background. Each original top part (or bottom part in Experiment 2B) was also combined with the bottom part (or top part in Experiment 2B) of another, randomly selected face to generate, together with the original aligned or misaligned faces, the 40 pairs used in the &#x0201C;same&#x0201D; condition, whose exemplars therefore differed only with respect to their bottom parts (or top parts in Experiment 2B). Conversely, both the top and the bottom face parts differed from the original faces in the 18 pairs composing the &#x0201C;different&#x0201D; condition. The 40 trials (1/2 aligned; 1/2 misaligned) requiring a &#x0201C;same&#x0201D; decision and the 18 trials (1/2 aligned; 1/2 misaligned) requiring a &#x0201C;different&#x0201D; decision were randomly presented in two blocks of 58 trials. This unequal proportion of same/different trials was introduced to increase the sensitivity of the composite face paradigm, because the composite face effect is computed from participants&#x02019; performance in the aligned and misaligned conditions on &#x0201C;same&#x0201D; trials only (see also Le Grand et al., <xref ref-type="bibr" rid="B22">2004</xref>; Michel et al., <xref ref-type="bibr" rid="B29">2006</xref>).</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>Time course and stimuli used to test participants&#x02019; composite face effect</bold>. Participants focused on the top parts (Experiment 2A) or the bottom parts (Experiment 2B) of two faces presented sequentially and decided whether they were the same or different.</p></caption>
<graphic xlink:href="fpsyg-03-00124-g005.tif"/>
</fig>
</sec>
<sec>
<title>Procedure</title>
<p>The same material and testing distance were used as in Experiment 1. After a short training period, participants completed a delayed matching task with composite faces. Each trial involved the consecutive presentation of two composite stimuli, both either aligned or misaligned. These two composite faces had to be matched with regard to the identity of the top part (Experiment 2A) or the bottom part (Experiment 2B). Participants were asked to decide as accurately and quickly as possible whether the instructed face part was of the same or of a different identity by pressing a left (same) or a right (different) key of the keyboard. Trials started with the presentation of a 300-ms fixation cross at the center of the computer screen. This fixation cross was followed by a blank interval (200&#x02009;ms), after which a target face was presented for 600&#x02009;ms. After a 300-ms inter-stimulus interval, a second stimulus was shown until a response was provided (Figure <xref ref-type="fig" rid="F5">5</xref>). The next trial was initiated 1000&#x02009;ms after a given response. To restrict the possibility of participants comparing specific locations of the display while performing the task, the target and the probe appeared at slightly different screen locations.</p>
</sec>
<sec>
<title>Analyses</title>
<p>Participants&#x02019; results were analyzed separately according to whether they were focusing on the top (Experiment 2A) or the bottom (Experiment 2B) parts of faces (Deaf: <italic>N</italic>&#x02009;&#x0003D;&#x02009;35; Controls: <italic>N</italic>&#x02009;&#x0003D;&#x02009;35). As in Experiment 1, we only took into account their accuracy (% of correct responses on same trials) and their correct response times (ms) that did not exceed 3 SDs from their own average. We then performed separate repeated measures ANOVAs on both of these dependent variables, with the <italic>alignment</italic> of the face parts (aligned vs. misaligned) as the within-subject factor and the <italic>group</italic> (deaf vs. hearing) as the between-subjects factor. As in Experiment 1, we also replicated the analyses without the two non-signer individuals who became deaf during infancy/childhood (Deaf: <italic>N</italic>&#x02009;&#x0003D;&#x02009;33; Controls: <italic>N</italic>&#x02009;&#x0003D;&#x02009;33). As their exclusion did not influence the general pattern of results, their data were included in the analyses reported below.</p>
</sec>
</sec>
<sec>
<title>Results</title>
<sec>
<title>Experiment 2A: Focus on the top</title>
<p>Deaf participants were as accurate as hearing controls when they had to match the top part of faces [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;2.615, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.110]. For the trials of interest (&#x0201C;same&#x0201D; decision; AS vs. MS trials), there was a main effect of the alignment of the face parts [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;17.447, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001]: participants performed, as expected, better on misaligned trials (MS) than on aligned trials (AS). The composite face effect did not differ between groups, as reflected by the absence of an interaction between the group and the alignment of the face parts [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;0.101, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.751; Figure <xref ref-type="fig" rid="F6">6</xref>A].</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p><bold>Proportion of correct responses (A) and correct response times (B) of deaf and hearing participants when the locus of attention was the top parts of faces (Experiment 2A)</bold>. Bars represent SEs.</p></caption>
<graphic xlink:href="fpsyg-03-00124-g006.tif"/>
</fig>
<p>Overall, deaf participants were also slower than age-matched controls at performing Experiment 2A [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;7.071, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.010]. Like hearing participants, they showed a significant composite face effect on correct response times [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;40.302, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001]. This composite effect did not differ between groups [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;0.137, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.712; Figure <xref ref-type="fig" rid="F6">6</xref>B].</p>
</sec>
<sec>
<title>Experiment 2B: Focus on the bottom</title>
<p>Deaf participants were as accurate as hearing participants when they had to match the bottom parts of faces [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;0.959, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.331]. For the trials of interest (&#x0201C;same&#x0201D; trials), there was a main effect of the alignment of the face parts [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;23.208, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001], with higher accuracy on misaligned trials (MS) than on aligned trials (AS). As in Experiment 2A, the composite face effect did not differ between groups [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;1.557, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.216; Figure <xref ref-type="fig" rid="F7">7</xref>A].</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p><bold>Proportion of correct responses (A) and correct response times (B) of deaf and hearing participants when the locus of attention was the bottom parts of faces (Experiment 2B)</bold>. Bars represent SEs.</p></caption>
<graphic xlink:href="fpsyg-03-00124-g007.tif"/>
</fig>
<p>Deaf participants also tended to be slower than age-matched controls at performing this version of the task [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;3.957, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.051]. As in controls, their response times revealed a significant composite face effect when they had to focus on the bottom of faces [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;20.284, <italic>p</italic>&#x02009;&#x0003C;&#x02009;0.0001], and this effect did not differ between the groups [<italic>F</italic>(1,68)&#x02009;&#x0003D;&#x02009;0.011, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.915; Figure <xref ref-type="fig" rid="F7">7</xref>B].</p>
</sec>
<sec>
<title>Comparison between experiments</title>
<p>Overall, deaf individuals were as accurate as controls, but slower, at performing the composite task, whether the locus of attention was the top (Experiment 2A) or the bottom (Experiment 2B) of the face. They were also as sensitive as controls to the alignment of the face parts, which can be taken as evidence that they are able to integrate two face parts into a single perceptual representation to the same extent as controls. Interestingly, when looking at the results of Experiment 2 in more detail, it also appears that the bottom part of the face is more salient for deaf than for hearing participants. Indeed, from Experiment 2B (focus on the bottom; Figure <xref ref-type="fig" rid="F7">7</xref>) to Experiment 2A (focus on the top; Figure <xref ref-type="fig" rid="F6">6</xref>), the interference effect remained stable in hearing participants (from 6 to 6%) but diminished in deaf participants (from 5 to 3%).</p>
</sec>
</sec>
</sec>
<sec>
<title>General Discussion</title>
<p>The aim of this study was to examine how early deaf participants process socially relevant visual stimuli such as faces, given the evidence that cortical functions reorganize in humans in cases of sensory deprivation such as deafness (for a review, see Bavelier and Neville, <xref ref-type="bibr" rid="B5">2002</xref>). To our knowledge, McCullough and Emmorey (<xref ref-type="bibr" rid="B28">1997</xref>) were the only authors who had investigated this topic. They observed that deaf individuals were significantly better than controls at detecting featural manipulations introduced at the level of the mouth. To date, however, no study had focused on deaf individuals&#x02019; ability to process faces holistically/configurally, even though this type of processing is thought to be at the heart of hearing individuals&#x02019; expertise with faces (e.g., Farah et al., <xref ref-type="bibr" rid="B15">1998</xref>; Maurer et al., <xref ref-type="bibr" rid="B26">2002</xref>; Van Belle et al., <xref ref-type="bibr" rid="B43">2010a</xref>). We therefore conducted two experiments to assess whether deaf individuals would show enhanced or reduced holistic/configural face processing compared to hearing controls.</p>
<p>The results of the inversion-matching task (Experiment 1) were threefold. First, deaf participants were slightly more accurate than hearing participants, but also generally slower, at matching visual stimuli such as cars and faces, except when these were simultaneously presented on the screen. Second, they showed an enhanced face inversion effect in response times compared to hearing participants, both during the simultaneous and the delayed matching of faces. Finally, they took longer than controls to process inverted faces, suggesting that they were more dependent on holistic/configural processing than non-deprived observers. In line with the perceptual field hypothesis of the face inversion effect (Rossion, <xref ref-type="bibr" rid="B33">2008</xref>, <xref ref-type="bibr" rid="B34">2009</xref>), according to which the inversion of a face reduces the size of the perceptual field to a single local feature, we hypothesize that hearing participants were able to focus directly on the most diagnostic feature of the face, namely the eyes, when matching inverted faces. Conversely, we think that deaf participants needed more time to do the same because their representation of a face is probably not as strongly biased toward the diagnostic eye region as it is in hearing participants. In other words, we believe that deaf individuals&#x02019; face individualization capabilities rely on cues that are more evenly distributed across the superior and inferior parts of the face than those of hearing individuals, owing to their long-term experience with lip-reading and with discriminating the grammatical facial expressions used in sign language (McCullough and Emmorey, <xref ref-type="bibr" rid="B28">1997</xref>; Letourneau and Mitchell, <xref ref-type="bibr" rid="B23">2011</xref>; but see Watanabe et al., <xref ref-type="bibr" rid="B45">2011</xref> for different results on Japanese deaf participants).</p>
<p>The results of the composite task (Experiment 2) indicated a composite face effect of the same magnitude in deaf and hearing participants, independently of whether the locus of attention was the top or the bottom of the face. Both populations could therefore rely on a holistic template to simultaneously extract information from the whole face configuration. This observation is compatible with Experiment 1, because the inversion-matching paradigm and the composite face paradigm do not measure exactly the same thing. Specifically, the inversion paradigm measures participants&#x02019; dependency on holistic/configural processing: the more a participant depends on holistic/configural processing to match faces, the more his/her performance drops for inverted faces, which prevent the recruitment of this kind of processing. In contrast, the composite face paradigm provides an index of how strongly the different parts of the face are integrated into a holistic representation when one part of the face (e.g., the bottom of the face) does not provide additional diagnostic information and is only there to interfere.</p>
<p>In sum, the current study suggests that early deafness neither enhances nor reduces the amount of holistic/configural processing devoted to faces but rather increases the dependency on this mode of processing. Future studies recording eye&#x02013;gaze fixations during upright and inverted face individualization could help clarify this issue. For example, gaze-contingent stimulation (Van Belle et al., <xref ref-type="bibr" rid="B43">2010a</xref>,<xref ref-type="bibr" rid="B44">b</xref>) could be used to test a group of early deaf individuals as well as a group of hearing participants with faces. If, as predicted by McCullough and Emmorey (<xref ref-type="bibr" rid="B28">1997</xref>), the absence of one sensory modality such as audition leads to an enhanced visual representation of the mouth in an upright face, then deaf individuals should be less impaired than hearing individuals in a condition where a gaze-contingent window reveals only this internal feature (central window condition). The same experiment with inverted faces would confirm or disconfirm that deaf individuals&#x02019; first fixation on an inverted face lands closer to the mouth than that of hearing individuals. Furthermore, if deaf individuals rely on a larger facial area than hearing controls when they process inverted faces and if, consequently, their dependency on holistic/configural face processing is particularly salient for this face category, they should be more impaired than controls in a condition where the central features of inverted faces, but not upright faces, are masked (central mask condition), which forces observers to rely on the whole face (Van Belle et al., <xref ref-type="bibr" rid="B43">2010a</xref>,<xref ref-type="bibr" rid="B44">b</xref>).</p>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>Ad&#x000E9;la&#x000EF;de de Heering and Bruno Rossion are supported by the Belgian National Fund for Scientific Research (FNRS). Olivier Pascalis is supported by ANR Plasticity and Multimodality in Oral Communication for the Deaf.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Armstrong</surname> <given-names>B. A.</given-names></name> <name><surname>Neville</surname> <given-names>H. J.</given-names></name> <name><surname>Hillyard</surname> <given-names>S. A.</given-names></name> <name><surname>Mitchell</surname> <given-names>T. V.</given-names></name></person-group> (<year>2002</year>). <article-title>Auditory deprivation affects processing of motion, but not color</article-title>. <source>Brain Res. Cogn. Brain Res.</source> <volume>14</volume>, <fpage>422</fpage>&#x02013;<lpage>434</lpage>.<pub-id pub-id-type="doi">10.1016/S0926-6410(02)00211-2</pub-id><pub-id pub-id-type="pmid">12421665</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arnold</surname> <given-names>P.</given-names></name> <name><surname>Mills</surname> <given-names>M.</given-names></name></person-group> (<year>2001</year>). <article-title>Memory for faces, shoes, and objects by deaf and hearing signers and hearing nonsigners</article-title>. <source>J. Psycholinguist. Res.</source> <volume>30</volume>, <fpage>185</fpage>&#x02013;<lpage>195</lpage>.<pub-id pub-id-type="doi">10.1023/A:1010329912848</pub-id><pub-id pub-id-type="pmid">11385825</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arnold</surname> <given-names>P.</given-names></name> <name><surname>Murray</surname> <given-names>C.</given-names></name></person-group> (<year>1998</year>). <article-title>Memory for faces and objects by deaf and hearing signers and hearing nonsigners</article-title>. <source>J. Psycholinguist. Res.</source> <volume>27</volume>, <fpage>481</fpage>&#x02013;<lpage>497</lpage>.<pub-id pub-id-type="doi">10.1023/A:1023277220438</pub-id><pub-id pub-id-type="pmid">9691334</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bavelier</surname> <given-names>D.</given-names></name> <name><surname>Dye</surname> <given-names>M. W. G.</given-names></name> <name><surname>Hauser</surname> <given-names>P. C.</given-names></name></person-group> (<year>2006</year>). <article-title>Do deaf individuals see better?</article-title> <source>Trends Cogn. Sci. (Regul. Ed.)</source> <volume>10</volume>, <fpage>512</fpage>&#x02013;<lpage>518</lpage>.<pub-id pub-id-type="doi">10.1016/j.tics.2006.09.006</pub-id><pub-id pub-id-type="pmid">17015029</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bavelier</surname> <given-names>D.</given-names></name> <name><surname>Neville</surname> <given-names>H. J.</given-names></name></person-group> (<year>2002</year>). <article-title>Cross-modal plasticity: where and how?</article-title> <source>Nat. Rev. Neurosci.</source> <volume>3</volume>, <fpage>443</fpage>&#x02013;<lpage>452</lpage>.<pub-id pub-id-type="pmid">12042879</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bavelier</surname> <given-names>D.</given-names></name> <name><surname>Tomann</surname> <given-names>A.</given-names></name> <name><surname>Hutton</surname> <given-names>C.</given-names></name> <name><surname>Mitchell</surname> <given-names>T.</given-names></name> <name><surname>Corina</surname> <given-names>D.</given-names></name> <name><surname>Liu</surname> <given-names>G.</given-names></name> <name><surname>Neville</surname> <given-names>H.</given-names></name></person-group> (<year>2000</year>). <article-title>Visual attention to the periphery is enhanced in congenitally deaf individuals</article-title>. <source>J. Neurosci.</source> <volume>20</volume>, <fpage>RC93</fpage>.<pub-id pub-id-type="pmid">10952732</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bettger</surname> <given-names>J.</given-names></name> <name><surname>Emmorey</surname> <given-names>K.</given-names></name> <name><surname>McCullough</surname> <given-names>S.</given-names></name> <name><surname>Bellugi</surname> <given-names>U.</given-names></name></person-group> (<year>1997</year>). <article-title>Enhanced facial discrimination: effects of experience with American sign language</article-title>. <source>J. Deaf Stud. Deaf Educ.</source> <volume>2</volume>, <fpage>223</fpage>&#x02013;<lpage>233</lpage>.<pub-id pub-id-type="doi">10.1093/oxfordjournals.deafed.a014328</pub-id><pub-id pub-id-type="pmid">15579850</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bosworth</surname> <given-names>R. G.</given-names></name> <name><surname>Dobkins</surname> <given-names>K. R.</given-names></name></person-group> (<year>2002</year>). <article-title>Visual field asymmetries for motion processing in deaf and hearing signers</article-title>. <source>Brain Cogn.</source> <volume>49</volume>, <fpage>170</fpage>&#x02013;<lpage>181</lpage>.<pub-id pub-id-type="doi">10.1006/brcg.2001.1497</pub-id><pub-id pub-id-type="pmid">12027401</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Buckley</surname> <given-names>D.</given-names></name> <name><surname>Codina</surname> <given-names>C.</given-names></name> <name><surname>Bhardwaj</surname> <given-names>P.</given-names></name> <name><surname>Pascalis</surname> <given-names>O.</given-names></name></person-group> (<year>2010</year>). <article-title>Action video game players and deaf observers have larger Goldmann visual fields</article-title>. <source>Vision Res.</source> <volume>50</volume>, <fpage>548</fpage>&#x02013;<lpage>556</lpage>.<pub-id pub-id-type="doi">10.1016/j.visres.2009.11.018</pub-id><pub-id pub-id-type="pmid">19962395</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Busigny</surname> <given-names>T.</given-names></name> <name><surname>Rossion</surname> <given-names>B.</given-names></name></person-group> (<year>2010</year>). <article-title>Acquired prosopagnosia abolishes the face inversion effect</article-title>. <source>Cortex</source> <volume>46</volume>, <fpage>965</fpage>&#x02013;<lpage>981</lpage>.<pub-id pub-id-type="doi">10.1016/j.cortex.2009.07.004</pub-id><pub-id pub-id-type="pmid">19683710</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>de Heering</surname> <given-names>A.</given-names></name> <name><surname>Houthuys</surname> <given-names>S.</given-names></name> <name><surname>Rossion</surname> <given-names>B.</given-names></name></person-group> (<year>2007</year>). <article-title>Holistic face processing is mature at 4 years of age: evidence from the composite face effect</article-title>. <source>J. Exp. Child. Psychol.</source> <volume>96</volume>, <fpage>57</fpage>&#x02013;<lpage>70</lpage>.<pub-id pub-id-type="doi">10.1016/j.jecp.2006.07.001</pub-id><pub-id pub-id-type="pmid">17007869</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ellis</surname> <given-names>H. D.</given-names></name></person-group> (<year>1975</year>). <article-title>Recognizing faces</article-title>. <source>Br. J. Psychol.</source> <volume>66</volume>, <fpage>409</fpage>&#x02013;<lpage>426</lpage>.<pub-id pub-id-type="doi">10.1111/j.2044-8295.1975.tb01437.x</pub-id><pub-id pub-id-type="pmid">1106805</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Emmorey</surname> <given-names>K.</given-names></name> <name><surname>Kosslyn</surname> <given-names>S. M.</given-names></name> <name><surname>Bellugi</surname> <given-names>U.</given-names></name></person-group> (<year>1993</year>). <article-title>Visual imagery and visual-spatial language: enhanced imagery abilities in deaf and hearing ASL signers</article-title>. <source>Cognition</source> <volume>46</volume>, <fpage>139</fpage>&#x02013;<lpage>181</lpage>.<pub-id pub-id-type="doi">10.1016/0010-0277(93)90017-P</pub-id><pub-id pub-id-type="pmid">8432094</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Farah</surname> <given-names>M. J.</given-names></name> <name><surname>Tanaka</surname> <given-names>J. W.</given-names></name> <name><surname>Drain</surname> <given-names>H. M.</given-names></name></person-group> (<year>1995</year>). <article-title>What causes the face inversion effect?</article-title> <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>21</volume>, <fpage>628</fpage>&#x02013;<lpage>634</lpage>.<pub-id pub-id-type="doi">10.1037/0096-1523.21.3.628</pub-id><pub-id pub-id-type="pmid">7790837</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Farah</surname> <given-names>M. J.</given-names></name> <name><surname>Wilson</surname> <given-names>K. D.</given-names></name> <name><surname>Drain</surname> <given-names>M.</given-names></name> <name><surname>Tanaka</surname> <given-names>J. N.</given-names></name></person-group> (<year>1998</year>). <article-title>What is &#x0201C;special&#x0201D; about face perception?</article-title> <source>Psychol. Rev.</source> <volume>105</volume>, <fpage>482</fpage>&#x02013;<lpage>498</lpage>.<pub-id pub-id-type="doi">10.1037/0033-295X.105.3.482</pub-id><pub-id pub-id-type="pmid">9697428</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goffaux</surname> <given-names>V.</given-names></name> <name><surname>Rossion</surname> <given-names>B.</given-names></name></person-group> (<year>2006</year>). <article-title>Faces are &#x0201C;spatial&#x0201D; &#x02013; holistic face perception is supported by low spatial frequencies</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>32</volume>, <fpage>1023</fpage>&#x02013;<lpage>1039</lpage>.<pub-id pub-id-type="doi">10.1037/0096-1523.32.4.1023</pub-id><pub-id pub-id-type="pmid">16846295</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gosselin</surname> <given-names>F.</given-names></name> <name><surname>Schyns</surname> <given-names>P. G.</given-names></name></person-group> (<year>2001</year>). <article-title>Bubbles: a technique to reveal the use of information in recognition tasks</article-title>. <source>Vision Res.</source> <volume>41</volume>, <fpage>2261</fpage>&#x02013;<lpage>2271</lpage>.<pub-id pub-id-type="doi">10.1016/S0042-6989(01)00097-9</pub-id><pub-id pub-id-type="pmid">11448718</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haig</surname> <given-names>N. D.</given-names></name></person-group> (<year>1984</year>). <article-title>The effect of feature displacement on face recognition</article-title>. <source>Perception</source> <volume>13</volume>, <fpage>505</fpage>&#x02013;<lpage>512</lpage>.<pub-id pub-id-type="doi">10.1068/p130505</pub-id><pub-id pub-id-type="pmid">6535975</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hauser</surname> <given-names>P. C.</given-names></name> <name><surname>Dye</surname> <given-names>M. W. G.</given-names></name> <name><surname>Boutla</surname> <given-names>M.</given-names></name> <name><surname>Green</surname> <given-names>S.</given-names></name> <name><surname>Bavelier</surname> <given-names>D.</given-names></name></person-group> (<year>2007</year>). <article-title>Deafness and visual enumeration: not all aspects of attention are modified by deafness</article-title>. <source>Brain Res.</source> <volume>1153</volume>, <fpage>178</fpage>&#x02013;<lpage>187</lpage>.<pub-id pub-id-type="doi">10.1016/j.brainres.2007.03.065</pub-id><pub-id pub-id-type="pmid">17467671</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hole</surname> <given-names>G. J.</given-names></name></person-group> (<year>1994</year>). <article-title>Configural factors in the perception of unfamiliar faces</article-title>. <source>Perception</source> <volume>23</volume>, <fpage>65</fpage>&#x02013;<lpage>74</lpage>.<pub-id pub-id-type="doi">10.1068/p230065</pub-id><pub-id pub-id-type="pmid">7936977</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Le Grand</surname> <given-names>R.</given-names></name> <name><surname>Mondloch</surname> <given-names>C.</given-names></name> <name><surname>Maurer</surname> <given-names>D.</given-names></name> <name><surname>Brent</surname> <given-names>H. P.</given-names></name></person-group> (<year>2001</year>). <article-title>Early visual experience and face processing</article-title>. <source>Nature</source> <volume>410</volume>, <fpage>890</fpage>.</citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Le Grand</surname> <given-names>R.</given-names></name> <name><surname>Mondloch</surname> <given-names>C. J.</given-names></name> <name><surname>Maurer</surname> <given-names>D.</given-names></name> <name><surname>Brent</surname> <given-names>H. P.</given-names></name></person-group> (<year>2004</year>). <article-title>Impairment in holistic face processing following early visual deprivation</article-title>. <source>Psychol. Sci.</source> <volume>15</volume>, <fpage>762</fpage>&#x02013;<lpage>768</lpage>.<pub-id pub-id-type="doi">10.1111/j.0956-7976.2004.00753.x</pub-id><pub-id pub-id-type="pmid">15482448</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Letourneau</surname> <given-names>S. M.</given-names></name> <name><surname>Mitchell</surname> <given-names>T. V.</given-names></name></person-group> (<year>2011</year>). <article-title>Gaze patterns during identity and emotion judgments in hearing adults and deaf users of American Sign Language</article-title>. <source>Perception</source> <volume>40</volume>, <fpage>563</fpage>&#x02013;<lpage>575</lpage>.<pub-id pub-id-type="doi">10.1068/p6858</pub-id><pub-id pub-id-type="pmid">21882720</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Loke</surname> <given-names>W. H.</given-names></name> <name><surname>Song</surname> <given-names>S.</given-names></name></person-group> (<year>1991</year>). <article-title>Central and peripheral visual processing in hearing and nonhearing individuals</article-title>. <source>Bull. Psychon. Soc.</source> <volume>29</volume>, <fpage>437</fpage>&#x02013;<lpage>440</lpage>.</citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lomber</surname> <given-names>S. G.</given-names></name> <name><surname>Meredith</surname> <given-names>M. A.</given-names></name> <name><surname>Kral</surname> <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>Cross-modal plasticity in specific auditory cortices underlies visual compensations in the deaf</article-title>. <source>Nat. Neurosci.</source> <volume>13</volume>, <fpage>1421</fpage>&#x02013;<lpage>1427</lpage>.<pub-id pub-id-type="doi">10.1038/nn.2653</pub-id><pub-id pub-id-type="pmid">20935644</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maurer</surname> <given-names>D.</given-names></name> <name><surname>Le Grand</surname> <given-names>R.</given-names></name> <name><surname>Mondloch</surname> <given-names>C. J.</given-names></name></person-group> (<year>2002</year>). <article-title>The many faces of configural processing</article-title>. <source>Trends Cogn. Sci. (Regul. Ed.)</source> <volume>6</volume>, <fpage>255</fpage>&#x02013;<lpage>260</lpage>.<pub-id pub-id-type="doi">10.1016/S1364-6613(02)01903-4</pub-id><pub-id pub-id-type="pmid">12039607</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maurer</surname> <given-names>D.</given-names></name> <name><surname>Mondloch</surname> <given-names>C. J.</given-names></name> <name><surname>Lewis</surname> <given-names>T. L.</given-names></name></person-group> (<year>2007</year>). <article-title>Sleeper effects</article-title>. <source>Dev. Sci.</source> <volume>10</volume>, <fpage>40</fpage>&#x02013;<lpage>47</lpage>.<pub-id pub-id-type="doi">10.1111/j.1467-7687.2007.00562.x</pub-id><pub-id pub-id-type="pmid">17181698</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McCullough</surname> <given-names>S.</given-names></name> <name><surname>Emmorey</surname> <given-names>K.</given-names></name></person-group> (<year>1997</year>). <article-title>Face processing by deaf ASL signers: evidence for expertise in distinguishing local features</article-title>. <source>J. Deaf Stud. Deaf Educ.</source> <volume>2</volume>, <fpage>212</fpage>&#x02013;<lpage>222</lpage>.<pub-id pub-id-type="doi">10.1093/oxfordjournals.deafed.a014327</pub-id><pub-id pub-id-type="pmid">15579849</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Michel</surname> <given-names>C.</given-names></name> <name><surname>Rossion</surname> <given-names>B.</given-names></name> <name><surname>Han</surname> <given-names>J.</given-names></name> <name><surname>Chung</surname> <given-names>C.-S.</given-names></name> <name><surname>Caldara</surname> <given-names>R.</given-names></name></person-group> (<year>2006</year>). <article-title>Holistic processing is finely tuned for faces of our own race</article-title>. <source>Psychol. Sci.</source> <volume>17</volume>, <fpage>608</fpage>&#x02013;<lpage>615</lpage>.<pub-id pub-id-type="doi">10.1111/j.1467-9280.2006.01752.x</pub-id><pub-id pub-id-type="pmid">16866747</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Neville</surname> <given-names>H. J.</given-names></name> <name><surname>Lawson</surname> <given-names>D.</given-names></name></person-group> (<year>1987</year>). <article-title>Attention to central and peripheral visual space in a movement detection task: an event-related potential and behavioral study. II. Congenitally deaf adults</article-title>. <source>Brain Res.</source> <volume>405</volume>, <fpage>268</fpage>&#x02013;<lpage>283</lpage>.<pub-id pub-id-type="doi">10.1016/0006-8993(87)90297-6</pub-id><pub-id pub-id-type="pmid">3567605</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ramon</surname> <given-names>M.</given-names></name> <name><surname>Busigny</surname> <given-names>T.</given-names></name> <name><surname>Rossion</surname> <given-names>B.</given-names></name></person-group> (<year>2010</year>). <article-title>Impaired holistic processing of unfamiliar individual faces in a case of acquired prosopagnosia</article-title>. <source>Neuropsychologia</source> <volume>48</volume>, <fpage>933</fpage>&#x02013;<lpage>944</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2009.11.014</pub-id><pub-id pub-id-type="pmid">19944710</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rauschecker</surname> <given-names>J. P.</given-names></name></person-group> (<year>1995</year>). <article-title>Compensatory plasticity and sensory substitution in the cerebral cortex</article-title>. <source>Trends Neurosci.</source> <volume>18</volume>, <fpage>36</fpage>&#x02013;<lpage>43</lpage>.<pub-id pub-id-type="doi">10.1016/0166-2236(95)93948-W</pub-id><pub-id pub-id-type="pmid">7535489</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rossion</surname> <given-names>B.</given-names></name></person-group> (<year>2008</year>). <article-title>Picture-plane inversion leads to qualitative changes of face perception</article-title>. <source>Acta Psychol. (Amst.)</source> <volume>128</volume>, <fpage>274</fpage>&#x02013;<lpage>289</lpage>.<pub-id pub-id-type="doi">10.1016/j.actpsy.2008.02.003</pub-id><pub-id pub-id-type="pmid">18396260</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rossion</surname> <given-names>B.</given-names></name></person-group> (<year>2009</year>). <article-title>Distinguishing the cause and consequence of face inversion: the perceptual field hypothesis</article-title>. <source>Acta Psychol. (Amst.)</source> <volume>132</volume>, <fpage>300</fpage>&#x02013;<lpage>312</lpage>.<pub-id pub-id-type="doi">10.1016/j.actpsy.2009.08.002</pub-id><pub-id pub-id-type="pmid">19747674</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rossion</surname> <given-names>B.</given-names></name> <name><surname>Boremanse</surname> <given-names>A.</given-names></name></person-group> (<year>2008</year>). <article-title>Nonlinear relationship between holistic processing of individual faces and picture-plane rotation: evidence from the face composite illusion</article-title>. <source>J. Vis.</source> <volume>8</volume>, <fpage>1</fpage>&#x02013;<lpage>13</lpage>.<pub-id pub-id-type="doi">10.1167/8.16.1</pub-id><pub-id pub-id-type="pmid">18484842</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sadr</surname> <given-names>J.</given-names></name> <name><surname>Jarudi</surname> <given-names>I.</given-names></name> <name><surname>Sinha</surname> <given-names>P.</given-names></name></person-group> (<year>2003</year>). <article-title>The role of eyebrows in face recognition</article-title>. <source>Perception</source> <volume>32</volume>, <fpage>285</fpage>&#x02013;<lpage>293</lpage>.<pub-id pub-id-type="doi">10.1068/p5027</pub-id><pub-id pub-id-type="pmid">12729380</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sergent</surname> <given-names>J.</given-names></name></person-group> (<year>1984</year>). <article-title>An investigation into component and configurational processes underlying face recognition</article-title>. <source>Br. J. Psychol.</source> <volume>75</volume>, <fpage>221</fpage>&#x02013;<lpage>242</lpage>.<pub-id pub-id-type="doi">10.1111/j.2044-8295.1984.tb01895.x</pub-id><pub-id pub-id-type="pmid">6733396</pub-id></citation></ref>
<ref id="B38"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Sergent</surname> <given-names>J.</given-names></name></person-group> (<year>1986</year>). <article-title>&#x0201C;Microgenesis of face perception,&#x0201D;</article-title> in <source>Aspects of Face Processing</source>, eds <person-group person-group-type="editor"><name><surname>Ellis</surname> <given-names>H. D.</given-names></name> <name><surname>Jeeves</surname> <given-names>M. A.</given-names></name> <name><surname>Newcombe</surname> <given-names>F.</given-names></name> <name><surname>Young</surname> <given-names>A. M.</given-names></name></person-group> (<publisher-loc>Dordrecht</publisher-loc>: <publisher-name>Kluwer</publisher-name>), <fpage>17</fpage>&#x02013;<lpage>33</lpage>.</citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Siple</surname> <given-names>P.</given-names></name> <name><surname>Hatfield</surname> <given-names>N.</given-names></name> <name><surname>Caccamise</surname> <given-names>F.</given-names></name></person-group> (<year>1978</year>). <article-title>The role of visual perceptual abilities in the acquisition and comprehension of sign language</article-title>. <source>Am. Ann. Deaf</source> <volume>123</volume>, <fpage>852</fpage>&#x02013;<lpage>856</lpage>.<pub-id pub-id-type="pmid">747154</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stivalet</surname> <given-names>P.</given-names></name> <name><surname>Moreno</surname> <given-names>Y.</given-names></name> <name><surname>Richard</surname> <given-names>J.</given-names></name> <name><surname>Barraud</surname> <given-names>P. A.</given-names></name> <name><surname>Raphel</surname> <given-names>C.</given-names></name></person-group> (<year>1998</year>). <article-title>Differences in visual search tasks between congenitally deaf and normally hearing adults</article-title>. <source>Brain. Res. Cogn. Brain Res.</source> <volume>6</volume>, <fpage>227</fpage>&#x02013;<lpage>232</lpage>.<pub-id pub-id-type="doi">10.1016/S0926-6410(97)00026-8</pub-id><pub-id pub-id-type="pmid">9479074</pub-id></citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tanaka</surname> <given-names>J. W.</given-names></name> <name><surname>Farah</surname> <given-names>M. J.</given-names></name></person-group> (<year>1993</year>). <article-title>Parts and wholes in face recognition</article-title>. <source>Q. J. Exp. Psychol. (Hove)</source> <volume>46A</volume>, <fpage>225</fpage>&#x02013;<lpage>245</lpage>.<pub-id pub-id-type="doi">10.1080/14640749308401045</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tanaka</surname> <given-names>J. W.</given-names></name> <name><surname>Kay</surname> <given-names>J. B.</given-names></name> <name><surname>Grinnell</surname> <given-names>E.</given-names></name> <name><surname>Stanfield</surname> <given-names>B.</given-names></name> <name><surname>Szechter</surname> <given-names>L.</given-names></name></person-group> (<year>1998</year>). <article-title>Face recognition in young children: when the whole is greater than the sum of its parts</article-title>. <source>Vis. Cogn.</source> <volume>5</volume>, <fpage>479</fpage>&#x02013;<lpage>496</lpage>.<pub-id pub-id-type="doi">10.1080/713756795</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van Belle</surname> <given-names>G.</given-names></name> <name><surname>de Graef</surname> <given-names>P.</given-names></name> <name><surname>Verfaillie</surname> <given-names>K.</given-names></name> <name><surname>Busigny</surname> <given-names>T.</given-names></name> <name><surname>Rossion</surname> <given-names>B.</given-names></name></person-group> (<year>2010a</year>). <article-title>Whole not hole: expert face recognition requires holistic perception</article-title>. <source>Neuropsychologia</source> <volume>48</volume>, <fpage>2609</fpage>&#x02013;<lpage>2620</lpage>.<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2010.04.034</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van Belle</surname> <given-names>G.</given-names></name> <name><surname>de Graef</surname> <given-names>P.</given-names></name> <name><surname>Verfaillie</surname> <given-names>K.</given-names></name> <name><surname>Rossion</surname> <given-names>B.</given-names></name> <name><surname>Lef&#x000E8;vre</surname> <given-names>P.</given-names></name></person-group> (<year>2010b</year>). <article-title>Face inversion impairs holistic perception: evidence from gaze-contingent stimulation</article-title>. <source>J. Vis.</source> <volume>10</volume>, <fpage>1</fpage>&#x02013;<lpage>13</lpage>.<pub-id pub-id-type="doi">10.1167/10.3.12</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Watanabe</surname> <given-names>K.</given-names></name> <name><surname>Matsuda</surname> <given-names>T.</given-names></name> <name><surname>Nishioka</surname> <given-names>T.</given-names></name> <name><surname>Namatame</surname> <given-names>M.</given-names></name></person-group> (<year>2011</year>). <article-title>Eye gaze during observation of static faces in deaf people</article-title>. <source>PLoS ONE</source> <volume>6</volume>, <fpage>e16919</fpage>.<pub-id pub-id-type="doi">10.1371/journal.pone.0016919</pub-id><pub-id pub-id-type="pmid">21359223</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yin</surname> <given-names>R. K.</given-names></name></person-group> (<year>1969</year>). <article-title>Looking at upside-down faces</article-title>. <source>J. Exp. Psychol.</source> <volume>81</volume>, <fpage>141</fpage>&#x02013;<lpage>145</lpage>.<pub-id pub-id-type="doi">10.1037/h0027474</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Young</surname> <given-names>A. W.</given-names></name> <name><surname>Hellawell</surname> <given-names>D.</given-names></name> <name><surname>Hay</surname> <given-names>D. C.</given-names></name></person-group> (<year>1987</year>). <article-title>Configural information in face perception</article-title>. <source>Perception</source> <volume>16</volume>, <fpage>747</fpage>&#x02013;<lpage>759</lpage>.<pub-id pub-id-type="doi">10.1068/p160747</pub-id><pub-id pub-id-type="pmid">3454432</pub-id></citation></ref>
</ref-list>
</back>
</article>
