<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2022.898027</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Artificial Intelligence Can&#x2019;t Be Charmed: The Effects of Impartiality on Laypeople&#x2019;s Algorithmic Preferences</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Claudy</surname> <given-names>Marius C.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1724494/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Aquino</surname> <given-names>Karl</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1841963/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Graso</surname> <given-names>Maja</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>College of Business, University College Dublin</institution>, <addr-line>Dublin</addr-line>, <country>Ireland</country></aff>
<aff id="aff2"><sup>2</sup><institution>Sauder School of Business, University of British Columbia</institution>, <addr-line>Vancouver, BC</addr-line>, <country>Canada</country></aff>
<aff id="aff3"><sup>3</sup><institution>Department of Management, University of Otago</institution>, <addr-line>Dunedin</addr-line>, <country>New Zealand</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Pablo Garc&#x00ED;a Ruiz, University of Zaragoza, Spain</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Ulrich Leicht-Deobald, University of St. Gallen, Switzerland; Sebastian Hafenbr&#x00E4;dl, University of Navarra, Spain</p></fn>
<corresp id="c001">&#x002A;Correspondence: Marius C. Claudy, <email>Marius.claudy@ucd.ie</email></corresp>
<fn fn-type="other" id="fn004"><p>This article was submitted to Organizational Psychology, a section of the journal Frontiers in Psychology</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>29</day>
<month>06</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>13</volume>
<elocation-id>898027</elocation-id>
<history>
<date date-type="received">
<day>16</day>
<month>03</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>24</day>
<month>05</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2022 Claudy, Aquino and Graso.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Claudy, Aquino and Graso</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Over the coming years, AI could increasingly replace humans in making complex decisions because of the promise it holds for standardizing and debiasing decision-making procedures. Despite intense debates regarding algorithmic fairness, little research has examined how laypeople react when resource-allocation decisions are turned over to AI. We address this question by examining the role of perceived impartiality as a factor that can influence the acceptance of AI as a replacement for human decision-makers. We posit that laypeople attribute greater impartiality to AI than to human decision-makers. Our investigation shows that people value impartiality in decision procedures that concern the allocation of scarce resources and that people perceive AI as more capable of impartiality than humans. Yet, paradoxically, laypeople prefer human decision-makers in allocation decisions. This preference reverses when potential human biases are made salient. The findings highlight the importance of impartiality in AI and thus hold implications for the design of policy measures.</p>
</abstract>
<kwd-group>
<kwd>algorithm aversion</kwd>
<kwd>artificial intelligence</kwd>
<kwd>procedural justice</kwd>
<kwd>decision-making</kwd>
<kwd>impartiality</kwd>
</kwd-group>
<counts>
<fig-count count="2"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="68"/>
<page-count count="10"/>
<word-count count="7952"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>Introduction</title>
<p>Allocating scarce resources between groups and individuals is a perennial challenge of social life (<xref ref-type="bibr" rid="B30">Hardin, 1968</xref>). Deciding who is worthy of university admission, a loan to start a new business, or even an organ transplant involves trade-offs among competing claims and values. Decision-makers charged with the task of allocating such scarce resources face a daunting challenge. Because resources are finite, allocation decisions will benefit some and disadvantage others, and outcomes will often be perceived as unfair by some affected parties (<xref ref-type="bibr" rid="B6">Camps et al., 2019</xref>). Historically, nearly all such decisions were made by humans. What reliably emerges from both research and the common experience is that human decision-makers are not always impartial and often show systematic biases in judgment (<xref ref-type="bibr" rid="B57">Simon, 1951</xref>; <xref ref-type="bibr" rid="B61">Tversky and Kahneman, 1973</xref>, <xref ref-type="bibr" rid="B62">1974</xref>; <xref ref-type="bibr" rid="B23">Gilovich et al., 2002</xref>). Studies show that desirable attributes of decision processes, like consistency, integrity, and impartiality (<xref ref-type="bibr" rid="B60">Thibaut and Walker, 1975</xref>; <xref ref-type="bibr" rid="B38">Leventhal, 1980</xref>), can be easily derailed by ingroup biases (<xref ref-type="bibr" rid="B3">Brewer, 1979</xref>; <xref ref-type="bibr" rid="B33">Hughes, 2017</xref>), biases against the outgroup (<xref ref-type="bibr" rid="B31">Hebl et al., 2020</xref>), or simple preference for those who offer instrumental value to the decision-maker (<xref ref-type="bibr" rid="B12">Cornelis et al., 2013</xref>; <xref ref-type="bibr" rid="B67">Zhao et al., 2015</xref>; <xref ref-type="bibr" rid="B33">Hughes, 2017</xref>). 
Regardless of the domain (e.g., HR or marketing), decisions are perceived as fairer when they are made without biases or prejudices, and when they are based on accurate information (<xref ref-type="bibr" rid="B10">Colquitt et al., 2013</xref>, <xref ref-type="bibr" rid="B11">2018</xref>; <xref ref-type="bibr" rid="B41">Matta et al., 2017</xref>). Emphasizing the importance of using fair procedures to make decisions has long been advocated as one of the most effective ways to counteract the many biases that can undermine the acceptance and legitimacy of allocation decisions (<xref ref-type="bibr" rid="B15">Cropanzano et al., 2001</xref>, <xref ref-type="bibr" rid="B14">2007</xref>; <xref ref-type="bibr" rid="B42">Miller, 2001</xref>; <xref ref-type="bibr" rid="B32">Helberger et al., 2020</xref>).</p>
<p>Recent advances in Artificial Intelligence (AI), computer systems that can sense, reason, and respond to their environment in real time, often with human-like intelligence (<xref ref-type="bibr" rid="B51">Robert et al., 2020</xref>), have made many optimistic that AI will soon eliminate human biases and overcome the limitations that often lead to injustice and suboptimal allocation decisions (<xref ref-type="bibr" rid="B22">Ghahramani, 2015</xref>; <xref ref-type="bibr" rid="B55">Silver et al., 2017</xref>; <xref ref-type="bibr" rid="B58">Singh et al., 2017</xref>; <xref ref-type="bibr" rid="B43">Mozer et al., 2019</xref>; <xref ref-type="bibr" rid="B26">Glikson and Woolley, 2020</xref>). Indeed, forecasts predict that decision-makers will increasingly turn to AI when allocating scarce resources in domains such as business, law, and healthcare (<xref ref-type="bibr" rid="B5">Brynjolfsson and Mitchell, 2017</xref>; <xref ref-type="bibr" rid="B20">Fountaine et al., 2019</xref>; <xref ref-type="bibr" rid="B21">Frank et al., 2019</xref>; <xref ref-type="bibr" rid="B50">Rawson et al., 2019</xref>). The use of algorithms that rely on big data holds the promise of debiasing decision-making procedures by removing the human subjectivity typically inherent in judging and comparing individuals (<xref ref-type="bibr" rid="B44">Newman et al., 2020</xref>). For example, much work has focused on utilizing AI to detect bribery and other forms of corruption to eliminate impartiality violations in governmental and other organizational contexts (<xref ref-type="bibr" rid="B36">K&#x00F6;bis and Mossink, 2021</xref>).</p>
<p>Despite its apparent advantages as a decision-making tool, people&#x2019;s trust in AI often lags behind its rising capabilities, and many are averse to turning allocation decisions over to non-human entities (<xref ref-type="bibr" rid="B26">Glikson and Woolley, 2020</xref>). This aversion has been traced to a belief that machines do not possess a complete mind and, therefore, cannot freely choose actions, nor can they adequately reflect on their consequences (<xref ref-type="bibr" rid="B34">Johnson, 2015</xref>; <xref ref-type="bibr" rid="B1">Bigman and Gray, 2018</xref>; <xref ref-type="bibr" rid="B2">Bigman et al., 2019</xref>). Furthermore, <xref ref-type="bibr" rid="B44">Newman et al. (2020)</xref> found that people perceive algorithmically-driven decisions as less fair because of AI&#x2019;s inability to consider and contextualize qualitative information. <xref ref-type="bibr" rid="B7">Castelo et al. (2019)</xref> also investigated people&#x2019;s aversion to relying on algorithms to perform tasks previously done by humans and found that algorithms are trusted and relied on more for tasks that require cognitive abilities and rationality than for tasks that depend more on emotional intelligence or intuition.</p>
<p>However, whether laypeople perceive AI as more impartial than human deciders has not been explicitly addressed. The aim of the present study is to bridge research on procedural justice (<xref ref-type="bibr" rid="B38">Leventhal, 1980</xref>) with research on algorithm aversion to explain how laypeople&#x2019;s impartiality perceptions of AI and human decision-makers differ (Study 1), and whether impartiality violations shift people&#x2019;s preferences for AI in allocation decisions (Studies 2 and 3).</p>
</sec>
<sec id="S2">
<title>Procedural Justice, Impartiality, and Algorithmic Preferences</title>
<p>Philosophers and scholars have offered several perspectives on why just procedures matter (<xref ref-type="bibr" rid="B49">Rawls, 1971/1999</xref>; <xref ref-type="bibr" rid="B37">Leventhal, 1976</xref>, <xref ref-type="bibr" rid="B38">1980</xref>; <xref ref-type="bibr" rid="B39">Lind and Lissak, 1985</xref>; <xref ref-type="bibr" rid="B65">Tyler et al., 1985</xref>; <xref ref-type="bibr" rid="B53">Sheppard and Lewicki, 1987</xref>; <xref ref-type="bibr" rid="B63">Tyler, 1988</xref>; <xref ref-type="bibr" rid="B54">Sheppard et al., 1992</xref>). In organizational and legal contexts, procedural justice is concerned with people&#x2019;s fairness perceptions regarding the processes or rules applied throughout the decision-making process (<xref ref-type="bibr" rid="B60">Thibaut and Walker, 1975</xref>). When it comes to the allocation of scarce resources like getting a job or securing a loan, procedures will be seen as fairer when they are impartial, i.e., if procedures are applied consistently and without bias; are based on accurate information; are correctable and ethical; and are representative of the relevant parties involved in the decision (<xref ref-type="bibr" rid="B38">Leventhal, 1980</xref>). Impartiality means that when people are making moral decisions (e.g., allocating scarce goods and resources) they should not give any special treatment to themselves or to members of their own ingroup, but should instead take a neutral and unbiased position (<xref ref-type="bibr" rid="B13">Cottingham, 1983</xref>).</p>
<p>Decades of organizational justice research show that impartiality perceptions are positively associated with cooperation, performance, or job satisfaction whilst reducing potentially damaging behaviors and attitudes such as retaliation, complaints, or negative word-of-mouth (<xref ref-type="bibr" rid="B8">Cohen-Charash and Spector, 2001</xref>; <xref ref-type="bibr" rid="B10">Colquitt et al., 2013</xref>; <xref ref-type="bibr" rid="B9">Colquitt and Zipay, 2015</xref>). Furthermore, knowing that procedures are impartial can make people more accepting of authorities, laws, and policies, even when the outcomes are disadvantageous to them (<xref ref-type="bibr" rid="B64">Tyler, 1994</xref>; <xref ref-type="bibr" rid="B59">Sunshine and Tyler, 2003</xref>).</p>
<p>One of the greatest hopes regarding algorithmically-driven decisions in organizational contexts lies in AI&#x2019;s ability to suppress or even eliminate common human biases that threaten the enactment of fair procedures (<xref ref-type="bibr" rid="B29">Graso et al., 2019</xref>). AI has the potential for standardizing decision-making processes, thereby eliminating many of the idiosyncrasies that can lead human decision-makers to depart from impartiality (<xref ref-type="bibr" rid="B27">Grace et al., 2018</xref>; <xref ref-type="bibr" rid="B48">Raisch and Krakowski, 2020</xref>). <xref ref-type="bibr" rid="B24">Giubilini and Savulescu (2018)</xref>, for example, argue that AI could serve as an &#x201C;artificial moral advisor&#x201D; because of its ability &#x201C;to take into account the human agent&#x2019;s own principles and values&#x201D; whilst making consistent judgments without human biases and prejudice. In principle, using AI in allocation decisions should thus increase impartiality by &#x201C;standardizing decision procedures and reducing potential conflicts through highly consistent and supposedly unbiased decisions&#x201D; (<xref ref-type="bibr" rid="B45">&#x00D6;tting and Maier, 2018</xref>), which directly correspond to tenets of fair procedures (<xref ref-type="bibr" rid="B38">Leventhal, 1980</xref>).</p>
<p>However, empirical evidence regarding laypeople&#x2019;s perceptions of impartiality in algorithmically-driven decisions is limited, and findings in adjacent domains are ambiguous (for an overview, see <xref ref-type="bibr" rid="B7">Castelo et al., 2019</xref>; <xref ref-type="bibr" rid="B26">Glikson and Woolley, 2020</xref>). For example, Newman and colleagues find that algorithm-driven (vs. human-driven) hiring and lay-off decisions are perceived as less fair because people view algorithms as reductionist and unable to contextualize information (<xref ref-type="bibr" rid="B44">Newman et al., 2020</xref>). Yet, the authors do not test for perceptual differences between human and AI deciders. Furthermore, <xref ref-type="bibr" rid="B45">&#x00D6;tting and Maier (2018)</xref> found that procedural justice had a positive impact on employee behaviors and attitudes, irrespective of whether the decider was a human, a robot, or a computer system. One limitation of their study was that the authors manipulated procedural justice (fair vs. unfair) and did not explicitly measure people&#x2019;s baseline perceptions regarding human vs. AI deciders. Other evidence suggests that people might associate greater impartiality with AI-based decision procedures. For example, people generally perceive robots and artificial intelligence systems as consistent and reliable (<xref ref-type="bibr" rid="B17">Dzindolet et al., 2003</xref>), and they are more likely to rely on algorithmic advice than on human advice, particularly when their expertise is limited (<xref ref-type="bibr" rid="B40">Logg et al., 2019</xref>). In comparison, there is ample evidence to suggest that human decisions are often biased, for example, by prejudice (<xref ref-type="bibr" rid="B31">Hebl et al., 2020</xref>) or favoritism toward people who are close to them (<xref ref-type="bibr" rid="B33">Hughes, 2017</xref>). 
It is, therefore, reasonable to posit that people are more likely to attribute greater impartiality to AI than to a human decision-maker because they will view the former as having more of the attributes that characterize an unbiased decision-maker. Formally, we thus argue that</p>
<list list-type="simple">
<list-item><p><italic>H1</italic> = <italic>Laypeople will associate greater impartiality with AI (vs. human) decision-makers in allocation decisions</italic>.</p>
</list-item>
</list>
<p>We are not assuming that people will believe that AI is entirely impartial, only that it will be viewed as <italic>more</italic> capable of approaching this standard than humans. If so, based on considerable evidence suggesting that people value impartiality and bias suppression in decision-makers (<xref ref-type="bibr" rid="B38">Leventhal, 1980</xref>), people should prefer an AI over human decision-makers in contexts in which impartiality by human deciders is potentially jeopardized. That is because implementing AI holds the potential to remove subjective (and potentially biased) judgments from allocation decisions and instead base those decisions on more objective or quantifiable competence criteria. Laypeople who may be subjected to biased evaluations, especially judgments based on prejudice or stereotypes, should therefore prefer AI decision-makers over humans. Formally, we posit that</p>
<list list-type="simple">
<list-item><p><italic>H2</italic> = <italic>When standards of impartiality are potentially violated, laypeople show greater preferences for AI (vs. human) decision-makers in allocation decisions</italic>.</p>
</list-item>
</list>
</sec>
<sec id="S3" sec-type="materials|methods">
<title>Materials and Methods: All Studies</title>
<p>In an exploratory study, we first test whether people perceive allocation decisions that involve AI as more impartial than procedures led by humans (Study 1). Next, we test whether laypeople&#x2019;s preference for AI (vs. human) decision-makers shifts if they are prompted to think that a human decision-maker might be partial (Studies 2 and 3). Our studies comply with ethical regulations regarding human research participants, and we received ethical approval from the Human Research Ethics Committee at a major European university. We obtained informed consent from all participants. We informed them that participation was voluntary and that they could stop their participation at any time. We recruited all participants from Prolific Academic (<xref ref-type="bibr" rid="B46">Peer et al., 2017</xref>). All our studies contain measures of gender and age. Demographic information and sample sizes for each study are presented in <xref ref-type="table" rid="T1">Table 1</xref>. Except for Study 1, all studies were pre-registered.</p>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>Samples&#x2019; demographic information.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="center"><italic>N</italic> recruited</td>
<td valign="top" align="center"><italic>N</italic> retained<xref ref-type="table-fn" rid="t1fns1">&#x002A;</xref></td>
<td valign="top" align="center">% Male</td>
<td valign="top" align="center">Age <italic>M</italic></td>
<td valign="top" align="center">Age <italic>SD</italic></td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Study 1</td>
<td valign="top" align="center">120</td>
<td valign="top" align="center">118</td>
<td valign="top" align="center">58.3</td>
<td valign="top" align="center">31.6</td>
<td valign="top" align="center">13.5</td>
</tr>
<tr>
<td valign="top" align="left">Study 2</td>
<td valign="top" align="center">440</td>
<td valign="top" align="center">369</td>
<td valign="top" align="center">51.1</td>
<td valign="top" align="center">32.7</td>
<td valign="top" align="center">11.5</td>
</tr>
<tr>
<td valign="top" align="left">Study 3</td>
<td valign="top" align="center">323</td>
<td valign="top" align="center">318</td>
<td valign="top" align="center">48.4</td>
<td valign="top" align="center">34.6</td>
<td valign="top" align="center">12.2</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="t1fns1"><p><italic>&#x002A;We eliminated responses from participants who failed attention or crucial comprehension check questions. We specify our elimination strategy for each study in the text.</italic></p></fn>
</table-wrap-foot>
</table-wrap>
<p>All measures, manipulations, and exclusions are disclosed in the &#x201C;Materials and Methods&#x201D; section and <xref ref-type="supplementary-material" rid="DS1">Supplementary Material</xref>. We determined sample sizes by assuming a medium effect (Cohen&#x2019;s <italic>d</italic> = 0.50), and we conducted a power analysis to calculate the required number of participants per condition to obtain a power of 0.95. Data collection was stopped once the pre-determined sample size was reached. All studies included attention or comprehension checks, which resulted in the exclusion of participants who failed those checks. The complete stimulus material and data can be publicly accessed in <xref ref-type="supplementary-material" rid="DS1">Supplementary Material</xref>. All statistical tests reported in this research are two-sided.</p>
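The a-priori power analysis described above can be sketched as follows. This is an illustration, not the authors' code; it assumes a two-sided independent-samples *t*-test at alpha = 0.05, and different tools may round the result slightly differently.

```python
# Sketch of the a-priori power analysis (assumed test: two-sided
# independent-samples t-test, alpha = 0.05). Not the authors' code.
from statsmodels.stats.power import TTestIndPower

n_per_condition = TTestIndPower().solve_power(
    effect_size=0.50,  # medium effect (Cohen's d)
    alpha=0.05,        # two-sided significance level
    power=0.95,        # desired statistical power
)
print(round(n_per_condition))  # roughly 105 per condition
```

The result (about 105 participants per condition) is close to the 104 reported in Study 2; small discrepancies between power-analysis tools are common.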
</sec>
<sec id="S4">
<title>Study 1</title>
<sec id="S4.SS1">
<title>Materials and Methods</title>
<p>In this exploratory study, we tested our hypothesis that laypeople perceive AI and human decision-makers differently regarding impartiality. We informed participants that they would be presented with two decision procedures and that they had to indicate which of these they would prefer if the decision were being made about them. Participants were asked to assume that they had applied for positions at two different companies. They then learned that &#x201C;both companies consider your qualifications, experience, and skills before making a decision&#x201D; and that &#x201C;the process also involves assessing the likelihood of you performing well on the job and your fit with the company culture.&#x201D; The description of both companies read as follows:</p>
<list list-type="simple">
<list-item><p><bold><italic>Company A</italic></bold> <italic>uses a skilled and experienced Human Resource (HR) manager to evaluate your application and to assess your suitability for the position. In this process, the HR manager uses his/her personal judgment to decide whether you should be hired.</italic></p>
</list-item>
</list>
<list list-type="simple">
<list-item><p><bold><italic>Company B</italic></bold> <italic>uses a highly sophisticated computer program that relies on Artificial Intelligence (AI) to automatically evaluate your application and to predict your suitability for the position. In this process, the computer program uses large amounts of data to decide whether you should be hired.</italic></p>
</list-item>
</list>
<p>The order in which Company A and B were presented was randomized.</p>
<sec id="S4.SS1.SSS1">
<title>Measures</title>
<sec id="S4.SS1.SSS1.Px1">
<title>Impartiality Perceptions</title>
<p>In this exploratory study, we asked participants to compare the two decision procedures on procedural justice dimensions (<xref ref-type="bibr" rid="B60">Thibaut and Walker, 1975</xref>; <xref ref-type="bibr" rid="B38">Leventhal, 1980</xref>), including impartiality. The survey included several items that are not relevant to the present investigation; these are nevertheless reported in <xref ref-type="supplementary-material" rid="DS1">Supplementary Material</xref>. We asked participants to use a slider scale to indicate whether they believed that the decision procedure involving the human (&#x2212;10) or the AI (+10) would be more impartial. Additionally, we asked participants to indicate which procedure would give them a better chance of getting the job.</p>
</sec>
<sec id="S4.SS1.SSS1.Px2">
<title>Choice</title>
<p>We asked participants to indicate which one of these decision procedures they would prefer if the decision was being made about them (1 = <italic>HR manager</italic>; 2 = <italic>AI</italic>).</p>
</sec>
<sec id="S4.SS1.SSS1.Px3">
<title>Open-Ended Rationale</title>
<p>We also asked participants to briefly explain their choice using a minimum of 100 characters. We used these answers to further inform our theorizing regarding the role of impartiality in allocation decisions.</p>
</sec>
</sec>
</sec>
<sec id="S4.SS2">
<title>Results</title>
<p>Supporting our prediction, results of the single-sample <italic>t</italic>-test showed that people perceived AI as more impartial than humans [<italic>t</italic>(118) = 8.82; <italic>p</italic> &#x003C; 0.001; <italic>M</italic> = 4.18, <italic>SD</italic> = 5.20, 95% CI = (3.31, 5.12); Cohen&#x2019;s <italic>d</italic> = 0.812]. Furthermore, a &#x03C7;<sup>2</sup>-test revealed that the majority of people (76.7%) preferred human deciders over AI if the decision was being made about them, &#x03C7;<sup>2</sup>(1) = 34.13, <italic>p</italic> &#x003C; 0.001. Notably, this preference emerged even though people perceived the AI as more impartial than a human. Finally, results showed that people associated a higher chance of getting the job with the human decider [<italic>t</italic>(118) = &#x2013;6.36; <italic>p</italic> &#x003C; 0.001; <italic>M</italic> = &#x2013;2.68, <italic>SD</italic> = 4.60, 95% CI = (&#x2013;3.52, &#x2013;1.85); Cohen&#x2019;s <italic>d</italic> = 0.58].</p>
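The reported goodness-of-fit chi-square can be reconstructed in a short sketch. The raw choice counts are not given in the text, so the counts below (92 vs. 28) are an assumption inferred from the reported 76.7% preference; with that assumption the statistic matches the reported value.

```python
# Sketch (assumed counts): a chi-square goodness-of-fit test of the observed
# human-vs-AI choices against an equal 50/50 split. The counts 92 and 28 are
# reconstructed from the reported 76.7%, not taken from the article.
from scipy.stats import chisquare

observed = [92, 28]            # human vs. AI choices (assumed)
stat, p = chisquare(observed)  # default expected frequencies: uniform
print(round(stat, 2), p < 0.001)  # 34.13 True
```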
<sec id="S4.SS2.SSS1">
<title>Qualitative Responses</title>
<p>The qualitative answers revealed that a large proportion (93%) of people who chose the AI decider did so because they felt it was less biased and more impartial than a human decision-maker. For example, one respondent wrote: <italic>&#x201C;I think humans are inherently biased</italic>&#x2026; <italic>I would prefer to be judged solely on my skills and experiences and I think a computer program would do a better job of this because it would not be swayed by my gender or appearance.&#x201D;</italic> In comparison, explanations for the choice of human deciders often involved opposite justifications. As one participant wrote: <italic>&#x201C;I think I would react better toward a person than a machine. The machine doesn&#x2019;t take into account my charm,&#x201D;</italic> hinting that they would have a better chance at influencing a human than a machine.</p>
</sec>
<sec id="S4.SS2.SSS2">
<title>Discussion</title>
<p>Our results support previous research on algorithm aversion by showing that most people prefer humans over AI to make allocation decisions. At the same time, participants believe that an AI would be more impartial than human decision-makers, thus supporting hypothesis 1.</p>
</sec>
</sec>
</sec>
<sec id="S5">
<title>Study 2</title>
<p>Study 1 showed that people perceive AI (vs. human) decision-makers to be more impartial. Based on the research reviewed above, which shows that people value impartial and unbiased decision-makers in allocation decisions, laypeople&#x2019;s preferences for AI decision-makers should shift when they believe that a human decision-maker might be partial. We test this possibility in Study 2 by examining people&#x2019;s preference for an AI over a human when the human is likely to be biased in their favor, biased against them, or known to be biased in an unknown direction. We expected people to prefer AI (vs. human) decision-makers when they know that a person is biased against (vs. in favor of) them.</p>
<sec id="S5.SS1">
<title>Materials and Methods</title>
<p>Assuming a medium effect size (Cohen&#x2019;s <italic>d</italic> = 0.50), power analysis suggested that we need 104 participants per condition to obtain a power of 0.95. To allow for participants failing the attention check, we aimed for 130 participants per condition. The study was pre-registered at: <ext-link ext-link-type="uri" xlink:href="https://aspredicted.org/blind.php?x=xi9es2">https://aspredicted.org/blind.php?x=xi9es2</ext-link>. Sample characteristics are provided in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
<p>We utilized a mixed design in which a single factor was varied between subjects: a human decision-maker who was <italic>partial in their favor</italic> (i.e., the decider prefers people who have certain characteristics which the participant possesses); <italic>partial in a way that disadvantaged them</italic> (i.e., the decider prefers people who have certain characteristics which the participant does not possess); or partial in an <italic>uncertain</italic> way that could favor or disadvantage them (i.e., the decider is partial, but it is unclear whether they are partial toward or against the participant).</p>
<p>As a within-subject factor, all participants responded to four scenarios presented in random order in which they had to specify whether they preferred a partial human or an impartial AI to make hiring, lay-off, university admission, and loan approval decisions. For example, in the university admission scenario, participants were informed that &#x201C;a university is deciding which students to admit to their incoming class.&#x201D; Participants then learned that the decision was either made by:</p>
<list list-type="simple">
<list-item>
<label>A.</label>
<p>An admissions officer who has preferences to admit the best students who also fit certain demographic categories such as race, gender, age or social class. It turns out that this admissions officer has preferences for characteristics (<italic>you possess/you do not possess/don&#x2019;t know if you possess or not</italic>), which makes it likely that the officer will be (<italic>favorable/unfavorable/either favorable or unfavorable</italic>) toward you when evaluating your application.</p>
</list-item>
</list>
<p>Or;</p>
<list list-type="simple">
<list-item>
<label>B.</label>
<p>A sophisticated computer program that has been programmed to automatically identify and admit the best students, irrespective of demographic factors such as nationality, gender, age, or social class. The program shows no preferences for demographic characteristics, which means that the program will evaluate your application solely on your academic qualifications. All admissions decisions will be made by the program, with no human input.</p>
</list-item>
</list>
<p>The full description of our vignettes can be found in <xref ref-type="supplementary-material" rid="DS1">Supplementary Material</xref>.</p>
<sec id="S5.SS1.SSS1">
<title>Measure</title>
<sec id="S5.SS1.SSS1.Px1">
<title>Choice</title>
<p>The dependent variable in this study was a dichotomous choice, where we asked people: &#x201C;Which decision procedure would you prefer if you wanted to secure this (outcome: get this job; obtain this loan; be admitted to this University course; keep your job)?&#x201D; (human vs. AI).</p>
</sec>
</sec>
</sec>
<sec id="S5.SS2">
<title>Results</title>
<p>Our pre-registered plan was to analyze the between-subjects factor and test our second hypothesis using a chi-square difference test. Across the four decision scenarios, a chi-squared test, &#x03C7;<sup>2</sup>(2, <italic>N</italic> = 369) = 60.88, <italic>p</italic> &#x003C; 0.001, revealed that people were more likely to choose a human (63%) when the decision-maker was partial in their favor and less likely to select a human (19%) when the decider was biased against them (<xref ref-type="fig" rid="F1">Figure 1</xref>). A majority chose the AI (63%) when they did not know whether the human decider would be biased favorably or unfavorably toward them (uncertain condition).</p>
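The chi-square test above can be sketched as follows. The cell counts below are hypothetical, back-calculated from the reported percentages (63%, 19%, 63%) assuming roughly equal cells of N = 369; they will not reproduce the published statistic exactly.

```python
# Illustrative chi-square test of independence on a 2 x 3 choice table.
# Counts are hypothetical reconstructions from the reported percentages,
# not the authors' raw data.
from scipy.stats import chi2_contingency

# Rows: chose human, chose AI; columns: favorable, unfavorable, uncertain
observed = [
    [77, 23, 46],   # human choices (~63%, ~19%, ~37% of ~123 per condition)
    [46, 100, 77],  # AI choices
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2e}")
```

With these reconstructed counts the statistic lands in the same region as the reported one, and the conclusion (choice depends on partiality condition) is unchanged.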
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p>Preferences for AI decision-makers in favorably biased, unfavorably biased and uncertain conditions (Study 2). &#x03C7;<sup>2</sup>(2, <italic>N</italic> = 369) = 60.88, <italic>p</italic> &#x003C; 0.001.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-898027-g001.tif"/>
</fig>
<p>To test the robustness of these findings and detect differences between the four decision contexts, we also conducted an additional (i.e., not pre-registered) repeated-measures ANOVA in which we treated the preference for AI in each scenario as a repeated measure. Although the dependent variable is binary, prior research has suggested that such violations of normality might be largely inconsequential (<xref ref-type="bibr" rid="B25">Glass et al., 1972</xref>). The ANOVA confirmed a significant main effect of condition on choice, [<italic>F</italic>(2, 366) = 50.38, <italic>p</italic> &#x003C; 0.001, partial &#x03B7;<sup>2</sup> = 0.22]. This finding was further supported by <italic>post hoc</italic> tests, which showed that choices in all conditions were significantly different. We found a small but significant main effect for the scenario, [<italic>F</italic>(3, 366) = 12.78, <italic>p</italic> &#x003C; 0.001, partial &#x03B7;<sup>2</sup> = 0.03], but not for the scenario &#x00D7; condition interaction, [<italic>F</italic>(6, 366) = 0.86, <italic>p</italic> = 0.523, partial &#x03B7;<sup>2</sup> = 0.01], providing further support that the results are robust across multiple decision contexts.</p>
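The within-subject logic of a repeated-measures ANOVA can be illustrated with a minimal sketch on toy data. This shows a one-way repeated-measures ANOVA only; it is not the authors' full scenario-by-condition mixed model, and the numbers are made up for illustration.

```python
# Minimal one-way repeated-measures ANOVA (numpy/scipy only) on toy data.
# Partitions total variability into condition, subject, and residual sums
# of squares; subject variance is removed from the error term.
import numpy as np
from scipy.stats import f as f_dist

# Rows = subjects, columns = repeated measurements (e.g., the four scenarios)
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 4.0],
              [3.0, 4.0, 6.0]])
n_subj, k = X.shape

grand = X.mean()
ss_cond = n_subj * ((X.mean(axis=0) - grand) ** 2).sum()  # between conditions
ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()       # between subjects
ss_total = ((X - grand) ** 2).sum()
ss_err = ss_total - ss_cond - ss_subj                     # residual

df_cond, df_err = k - 1, (n_subj - 1) * (k - 1)
F = (ss_cond / df_cond) / (ss_err / df_err)
p = f_dist.sf(F, df_cond, df_err)
print(f"F({df_cond}, {df_err}) = {F:.2f}, p = {p:.3f}")
```

The key design choice is subtracting the subject sum of squares from the error term, which is what distinguishes a repeated-measures ANOVA from an ordinary between-subjects one-way ANOVA.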
</sec>
<sec id="S5.SS3">
<title>Discussion</title>
<p>The results provide evidence that impartiality constitutes an essential determinant of laypeople&#x2019;s preferences for AI in allocation decisions. Specifically, people showed greater preferences for an AI when they believed that human decision-makers might show prejudice or negative biases toward them, thus supporting hypothesis 2. Only if the human was partial in their favor did people prefer the human decision-maker. The finding thus highlights an important boundary condition to people&#x2019;s algorithm aversion, which has been observed across a broad range of decision contexts (<xref ref-type="bibr" rid="B1">Bigman and Gray, 2018</xref>; <xref ref-type="bibr" rid="B66">Young and Monroe, 2019</xref>; <xref ref-type="bibr" rid="B26">Glikson and Woolley, 2020</xref>).</p>
</sec>
</sec>
<sec id="S6">
<title>Study 3</title>
<p>In our final study, we examined whether people&#x2019;s preferences for an AI (vs. human) might reverse when they believe that a human decision-maker might be partial against them because of their social status within their profession. We selected respect as a form of social resource (<xref ref-type="bibr" rid="B19">Foa and Foa, 1980</xref>; <xref ref-type="bibr" rid="B4">Brown et al., 2020</xref>) that may make a human decider more partial than an AI. The study thus aimed to replicate and advance findings from Study 2.</p>
<sec id="S6.SS1">
<title>Materials and Methods</title>
<p>In this study, we utilized a different manipulation of partiality. Specifically, we varied the degree to which people were respected by others within their profession based on their status. Assuming a medium effect size (Cohen&#x2019;s <italic>d</italic> = 0.50) and a statistical power level of 0.85, an a priori power analysis suggested that we needed a minimum of 142 participants per condition to obtain a power of 0.95 for a two-tailed hypothesis test. We recruited 320 participants (160 per condition) to allow for failed attention checks. The study was pre-registered at: <ext-link ext-link-type="uri" xlink:href="https://aspredicted.org/aw8ax.pdf">https://aspredicted.org/aw8ax.pdf</ext-link>. The full sample characteristics are shown in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
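A textbook normal-approximation version of such a sample-size calculation can be sketched as below. This is a generic approximation, not the authors' software or procedure, so the exact figure it produces will differ from the one reported above (power-analysis results depend on the test, tails, and tool used).

```python
# Normal-approximation sample size for a two-sample, two-tailed t-test:
# n per group ~ 2 * ((z_{1-alpha/2} + z_{power}) / d)^2.
# Generic textbook sketch; parameter values mirror the text (alpha assumed 0.05).
import math
from scipy.stats import norm

alpha, power, d = 0.05, 0.85, 0.50
z_alpha = norm.ppf(1 - alpha / 2)   # two-tailed critical value
z_beta = norm.ppf(power)            # quantile corresponding to target power
n_per_group = math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)
print(n_per_group)
```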
</sec>
<sec id="S6.SS2">
<title>Procedures</title>
<p>We utilized a between-subjects design in which we varied partiality (favorable vs. unfavorable). People were told that they were highly respected because of their status (partial in their favor), or looked down upon (partial against them) by others within their profession (<xref ref-type="bibr" rid="B4">Brown et al., 2020</xref>). Specifically, we asked people to imagine a situation that could occur in everyday work life. The scenario read as follows:</p>
<disp-quote>
<p>&#x201C;You are a retail manager. You&#x2019;ve been working at a large (<italic>small</italic>) and very prestigious (<italic>insignificant</italic>) retail store in the US for most of your career and generally enjoy what you do. Your role is (<italic>not</italic>) very well respected in your profession and generally, people in your industry highly respect (<italic>look down upon</italic>) you and your work. As a result, many (<italic>very few</italic>) people in your industry have supported you in your career progression.</p>
</disp-quote>
<disp-quote>
<p>Last week you applied for a new managerial position at a large department store, where you and other applicants will have to perform an online job interview. Overall, when you interview for new roles, you are very highly respected (<italic>looked down upon</italic>).&#x201D;</p>
</disp-quote>
<p>We then asked participants to write down three potential upsides (downsides) of having their current job when applying for a new position.</p>
<p>Next, participants learned that the manager of the retail store &#x201C;informed them that they can choose to be interviewed by the current manager or a highly sophisticated Artificial Intelligence (AI) software.&#x201D; We then provided participants with an explanation of the interview processes, informing them that &#x201C;during the human-led (AI-led) interview, the current manager (a highly sophisticated AI) will ask you a series of questions about your previous work experience and will use your answers to assess your suitability for the position. In this process, the manager (AI) will use personal judgment and experience (an advanced algorithm) to determine whether you are the right fit for the position. The final hiring decisions will be made by the manager (AI).&#x201D;</p>
<sec id="S6.SS2.SSS1">
<title>Measures</title>
<sec id="S6.SS2.SSS1.Px1">
<title>Choice</title>
<p>The dependent variable in this study was a dichotomous choice. We asked participants: &#x201C;Please select who you prefer to interview you for the position at the department store (manager vs. AI).&#x201D;</p>
</sec>
<sec id="S6.SS2.SSS1.Px2">
<title>Procedural Justice</title>
<p>Furthermore, we included the impartiality items from Studies 1 and 2 and asked people to indicate &#x201C;how important were the following criteria for you in choosing your interviewer?,&#x201D; and measured their responses on a scale from 1 (<italic>not at all important</italic>) to 5 (<italic>extremely important</italic>).</p>
</sec>
<sec id="S6.SS2.SSS1.Px3">
<title>Manipulation Check</title>
<p>To test whether our manipulation induced a sense of favorable (vs. unfavorable) status evaluations, we asked participants to complete a three-item scale (&#x03B1; = 0.87): &#x201C;I feel that the other job candidates are more qualified than I am&#x201D;; &#x201C;I feel that the other candidates have more status than I do&#x201D;; &#x201C;I feel that the other job candidates are more experienced than I am.&#x201D; They noted their responses on a scale from 1 (<italic>strongly disagree</italic>) to 5 (<italic>strongly agree</italic>).</p>
</sec>
<sec id="S6.SS2.SSS1.Px4">
<title>Realism Check</title>
<p>We assessed realism with two items (&#x03B1; = 0.75). Participants indicated the extent to which they agree that &#x201C;the presented scenario was realistic&#x201D; and whether they could &#x201C;imagine being in the described situation.&#x201D; They noted their responses on a scale from 1 (<italic>strongly disagree</italic>) to 7 (<italic>strongly agree</italic>).</p>
</sec>
</sec>
</sec>
<sec id="S6.SS3">
<title>Results</title>
<p>The manipulation check showed that people in the unfavorable partiality condition (<italic>M</italic> = 3.20, <italic>SD</italic> = 1.03) felt less qualified than those in the favorable condition (<italic>M</italic> = 2.20, <italic>SD</italic> = 0.89), [<italic>t</italic>(316) = &#x2013;9.37, <italic>p</italic> &#x003C; 0.001, <italic>d</italic> = 1.05]. Furthermore, participants felt that the presented scenario was realistic (<italic>M</italic> = 5.27, <italic>SD</italic> = 1.23).</p>
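The t statistic and Cohen's d above can be recomputed from the reported summary statistics alone. Group sizes are not reported per cell, so equal groups of 159 (matching df = 316) are assumed here; rounding in the published means and SDs means the sketch only approximates the published values.

```python
# Student's t and Cohen's d from summary statistics (M, SD, n per group).
# Equal group sizes of 159 are an assumption, not a reported figure.
import math

m1, sd1, n1 = 2.20, 0.89, 159   # favorable condition
m2, sd2, n2 = 3.20, 1.03, 159   # unfavorable condition

# Pooled standard deviation
sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
d = abs(m1 - m2) / sp           # Cohen's d with pooled SD
print(f"t({n1 + n2 - 2}) = {t:.2f}, d = {d:.2f}")
```

The recomputed values land close to the published t(316) = &#x2013;9.37, d = 1.05, which is a quick consistency check one can run on any set of reported summary statistics.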
<p>Our pre-registered plan was to analyze this between-subjects study and test our second hypothesis using a chi-square difference test. Across the two partiality conditions, a chi-squared test, &#x03C7;<sup>2</sup>(1, <italic>N</italic> = 318) = 18.62, <italic>p</italic> &#x003C; 0.001, revealed that only 19.3% of people wanted to be interviewed by the AI when they believed that others were partial in their favor. In comparison, 41.4% of people who thought that others might be partial against them preferred to be interviewed by an AI (<xref ref-type="fig" rid="F2">Figure 2</xref>). The results thus lend further support to our hypothesis that algorithmic preferences are conditional upon people&#x2019;s perception of the impartiality of human decision-makers.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p>Preferences for AI decision-makers in favorably biased vs. unfavorably biased conditions (Study 3). &#x03C7;<sup>2</sup>(1, <italic>N</italic> = 318) = 18.62, <italic>p</italic> &#x003C; 0.001.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-898027-g002.tif"/>
</fig>
<p>An independent samples <italic>t</italic>-test showed that participants in the unfavorable condition placed greater importance on bias suppression (<italic>M</italic> = 4.14, <italic>SD</italic> = 1.088) and impartiality, (<italic>M</italic> = 4.03, <italic>SD</italic> = 1.019) compared to people in the favorable condition (<italic>M</italic> = 3.84, <italic>SD</italic> = 1.175; <italic>M</italic> = 3.74, <italic>SD</italic> = 1.128), [<italic>t</italic>(316) = &#x2013;2.366, <italic>p</italic> = 0.019, <italic>d</italic> = 0.265]; [<italic>t</italic>(316) = &#x2013;2.414, <italic>p</italic> = 0.016, <italic>d</italic> = 0.271], respectively. No other differences were detected.</p>
</sec>
<sec id="S6.SS4">
<title>Discussion</title>
<p>Study 3 offered further evidence in support of hypothesis 2, i.e., people&#x2019;s preferences for algorithmically-driven allocation decisions depend on the impartiality of the decision-maker. In the unfavorable condition, more than twice as many people chose to be interviewed by an AI compared to the favorable group. The results suggest that these differences can be explained by the greater importance people place on bias suppression and impartiality when they are subjected to unfavorable evaluations by others.</p>
</sec>
</sec>
<sec id="S7" sec-type="discussion">
<title>General Discussion</title>
<p>Technological advances are expected to result in increased use of AI in decisions that distribute scarce goods and resources between competing parties. The transition toward AI-led decision-making raises important moral and ethical questions for businesses, many of which concern algorithmic fairness and transparency (<xref ref-type="bibr" rid="B47">Rahwan et al., 2019</xref>; <xref ref-type="bibr" rid="B48">Raisch and Krakowski, 2020</xref>). But despite the critical importance of these issues, we still have a limited understanding of how people perceive allocation decisions in which human deciders are replaced by artificial forms of intelligence. While a growing body of work has explored the cognitive-affective factors behind people&#x2019;s algorithm aversion (<xref ref-type="bibr" rid="B35">Khalil, 1993</xref>; <xref ref-type="bibr" rid="B26">Glikson and Woolley, 2020</xref>; <xref ref-type="bibr" rid="B48">Raisch and Krakowski, 2020</xref>; <xref ref-type="bibr" rid="B51">Robert et al., 2020</xref>), few studies have investigated the role of impartiality in human-AI interactions (<xref ref-type="bibr" rid="B38">Leventhal, 1980</xref>). Our study addresses this paucity and contributes to the literature by highlighting an important boundary condition to laypeople&#x2019;s algorithm aversion.</p>
<p>First, our study sheds new light on the role of impartiality in human-AI interactions. Since AI is not freighted with some of the characteristics that can lead humans to stray from impartiality, it holds the promise of making many decision procedures more accurate, more consistent, and less corruptible by social influence. Importantly, impartiality has been shown to influence the acceptance and legitimacy of decision procedures (<xref ref-type="bibr" rid="B65">Tyler et al., 1985</xref>). People consistently value impartiality and prefer deciders who make allocation decisions without biases, prejudices, or previously determined personal preferences (<xref ref-type="bibr" rid="B42">Miller, 2001</xref>; <xref ref-type="bibr" rid="B52">Shaw and Olson, 2014</xref>). While we provide further evidence that people prefer humans over AI in decisions concerning them (Study 1), we consistently show that laypeople associate greater impartiality with AI (vs. human) decision-makers. This finding is noteworthy because technologists, scholars, and policy-makers have often raised concerns regarding the prevalence of partiality in AI that stems from historically biased data and poorly designed algorithms (<xref ref-type="bibr" rid="B35">Khalil, 1993</xref>; <xref ref-type="bibr" rid="B16">De Cremer, 2020</xref>; <xref ref-type="bibr" rid="B51">Robert et al., 2020</xref>). Our findings suggest that laypeople perceive AI as more capable of achieving impartiality than humans (<xref ref-type="bibr" rid="B32">Helberger et al., 2020</xref>). This is not to say that laypeople believe that AI is completely unbiased&#x2014;it merely suggests that despite the many flaws and limitations within algorithmic decision making, laypeople still perceive AI to be less biased than humans.</p>
<p>Second, our findings show that impartiality concerns constitute an important boundary condition to people&#x2019;s algorithmic preferences (Studies 2 and 3). We show that when laypeople are concerned about the negative biases of human deciders, their preferences shift toward AI decision-makers. This is because they emphasize impartiality and bias suppression, qualities that AI is perceived to deliver better than human decision-makers. In other words, people who are potentially subjected to negative biases show greater preferences for AI deciders because AI increases impartiality and removes biases that might curb their chances of obtaining desired outcomes (e.g., securing a job or getting admitted to a university). The only exception is when human decision-makers are partial in people&#x2019;s favor, in which case most people prefer the human over an AI. For a person who potentially benefits from a partial decider, choosing an AI to make decisions about them might even be self-destructive. This finding also has important implications for AI ethics. While previous studies have suggested that designing policies and regulations that continue to build trust in AI is likely to further enhance the acceptance and legitimacy of AI-led decisions (<xref ref-type="bibr" rid="B35">Khalil, 1993</xref>; <xref ref-type="bibr" rid="B26">Glikson and Woolley, 2020</xref>), we have identified an important caveat to this goal. Namely, despite its positive features and potential for standardizing and debiasing decision procedures (<xref ref-type="bibr" rid="B68">Zou and Schiebinger, 2018</xref>), people might not actually wish to endorse AI if they believe that partial decision-makers will help them to attain desired outcomes. Therefore, AI&#x2019;s capabilities are likely to be valued more by those who experience negative evaluations or even prejudice in intra-human interactions.</p>
<p>In our studies, we only assessed prejudice (Study 2) and respect based on one&#x2019;s status (Study 3) as examples of partiality that may lead people to endorse AI decision-makers. Future research could examine whether other impartiality violations might lead people to favor AI over human decision-makers, and could also investigate the moderating role of related constructs like power (<xref ref-type="bibr" rid="B18">Fast and Schroeder, 2020</xref>). For example, in some instances, an AI might be perceived as less of a threat to one&#x2019;s position in the organizational hierarchy. Indeed, research has shown that people prefer to be replaced by robots (vs. humans) when their job loss is at stake (<xref ref-type="bibr" rid="B28">Granulo et al., 2019</xref>).</p>
<p>Our study also offers managerial implications. Despite significant value being placed on justice, fair procedures are still frequently ignored, and decision-makers routinely deviate from principles of impartiality that they often claim to value (<xref ref-type="bibr" rid="B29">Graso et al., 2019</xref>). When this is the case, implementing AI-led decision procedures might provide some way to improve the accuracy, consistency, and impartiality of organizational decision procedures. However, such changes might also be met by resistance from employees and other stakeholders who have little to gain from them. While people who are concerned with being evaluated negatively are more likely to endorse such changes, people who benefit from partiality in their organizations are more likely to resist handing over decisions to non-human entities. Indeed, our study suggests (and future research can test) that social resource-rich groups may be less invested in endorsing impartial decision-making tools such as AI. Future research should further explore how resistance to AI decision-makers, specifically among powerful individuals, can be overcome via algorithmic design or supporting procedures and policies.</p>
<p>In summary, our set of pre-registered studies conducted across diverse and complementary contexts has notable strengths. It builds on existing findings and advances our understanding of people&#x2019;s perceptions of AI vs. human decision-makers. Furthermore, we have identified perceived impartiality as an important boundary condition to people&#x2019;s algorithmic preferences in allocation decisions. Our simple design involved repeatedly contrasting the two decision-makers, and it allowed us to identify people&#x2019;s underlying and reflexive assumptions regarding the impartiality of AI as decision-makers, and how they influence preferences for AI in different contexts. Nonetheless, our study has caveats and limitations, which we discuss with the hope of encouraging future research.</p>
</sec>
<sec id="S8">
<title>Limitations and Opportunities for Future Research</title>
<p>First, we only assessed people&#x2019;s perceptions of AI systems that do not influence participants directly. Throughout our studies, we asked participants to assume that these decisions affect them. Still, as our participants were online platform users, their choice of an AI or a human decision-maker carried no real consequences. Scarce research resources permitting, we recommend that future studies assess whether impartiality influences people&#x2019;s preference for human or AI decision-makers when they themselves are invested in the outcome in question. An example would be giving employees in a large company an option to choose an AI or a human decision-maker when assessing promotions, raises, or bonuses. Furthermore, emerging research shows that in real-life scenarios, people might fail to reliably detect differences between algorithmically-generated and human-generated content, and that stated preferences might therefore diverge from actual behavior (<xref ref-type="bibr" rid="B36">K&#x00F6;bis and Mossink, 2021</xref>).</p>
<p>Second, while this is not explicitly tested in this study, it is possible that the use of AI in allocation decisions would have stronger support among people who believe that they might be disadvantaged or who experience prejudice because of their race, gender, or sexual orientation (<xref ref-type="bibr" rid="B68">Zou and Schiebinger, 2018</xref>). On the other hand, this support might be tempered by concerns that a more impartial decision-maker might not produce the outcomes they desire. We leave it to future research to investigate this possibility.</p>
<p>Finally, we focused on people&#x2019;s perceptions of the human vs. AI decision-makers, but we did not examine attitudes toward the calibrators behind the AI. Any algorithm-based system is only as good as its inputs, and those inputs are only as good as the person calibrating the system. Perhaps people&#x2019;s trust in AI as an impartial entity is inextricably linked with their trust in the AI&#x2019;s calibrator. The appeal of certain AI systems is that they are self-correcting and capable of learning (<xref ref-type="bibr" rid="B56">Silver and Silver, 2017</xref>; <xref ref-type="bibr" rid="B55">Silver et al., 2017</xref>) which should presumably increase people&#x2019;s perceptions of AI as distinct entities with minds of their own (<xref ref-type="bibr" rid="B1">Bigman and Gray, 2018</xref>; <xref ref-type="bibr" rid="B2">Bigman et al., 2019</xref>). Alternatively, we may witness a reality in which AI will remain simple reflections of their human masters and their calibrating powers, incapable of ever achieving true impartiality.</p>
</sec>
<sec id="S9" sec-type="data-availability">
<title>Data Availability Statement</title>
<p>The original contributions presented in this study are included in the article/<xref ref-type="supplementary-material" rid="DS1">Supplementary Material</xref>; further inquiries can be directed to the corresponding author/s.</p>
</sec>
<sec id="S10">
<title>Ethics Statement</title>
<p>The studies involving human participants were reviewed and approved by the UCD Office of Research Ethics, University College Dublin, Belfield, Dublin (No. HS-E-19-102-CLAUDY). The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="S11">
<title>Author Contributions</title>
<p>MC co-designed and ran the studies, wrote up the results, and wrote parts of introduction and discussion. KA provided overall guidance regarding the direction of the study, co-designed studies, and wrote parts of introduction and discussion. MG provided input on theoretical framing and study design and wrote parts of introduction and discussion. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec id="conf1" sec-type="COI-statement">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="pudiscl1" sec-type="disclaimer">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<sec id="S12" sec-type="supplementary-material">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fpsyg.2022.898027/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fpsyg.2022.898027/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.docx" id="DS1" mimetype="application/vnd.openxmlformats-officedocument.wordprocessingml.document" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Data_Sheet_2.xlsx" id="DS2" mimetype="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Data_Sheet_3.xlsx" id="DS3" mimetype="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Data_Sheet_4.xlsx" id="DS4" mimetype="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bigman</surname> <given-names>Y. E.</given-names></name> <name><surname>Gray</surname> <given-names>K.</given-names></name></person-group> (<year>2018</year>). <article-title>People are averse to machines making moral decisions.</article-title> <source><italic>Cognition</italic></source> <volume>181</volume> <fpage>21</fpage>&#x2013;<lpage>34</lpage>. <pub-id pub-id-type="doi">10.1016/j.cognition.2018.08.003</pub-id> <pub-id pub-id-type="pmid">30107256</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bigman</surname> <given-names>Y. E.</given-names></name> <name><surname>Waytz</surname> <given-names>A.</given-names></name> <name><surname>Alterovitz</surname> <given-names>R.</given-names></name> <name><surname>Gray</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>Holding robots responsible: the elements of machine morality.</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>23</volume> <fpage>365</fpage>&#x2013;<lpage>368</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2019.02.008</pub-id> <pub-id pub-id-type="pmid">30962074</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brewer</surname> <given-names>M. B.</given-names></name></person-group> (<year>1979</year>). <article-title>In-group bias in the minimal intergroup situation: a cognitive-motivational analysis.</article-title> <source><italic>Psychol. Bull.</italic></source> <volume>86</volume> <fpage>307</fpage>&#x2013;<lpage>324</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.86.2.307</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brown</surname> <given-names>Z. C.</given-names></name> <name><surname>Anicich</surname> <given-names>E. M.</given-names></name> <name><surname>Galinsky</surname> <given-names>A. D.</given-names></name></person-group> (<year>2020</year>). <article-title>Compensatory conspicuous communication: low status increases jargon use.</article-title> <source><italic>Organ. Behav. Hum. Decis. Process.</italic></source> <volume>161</volume> <fpage>274</fpage>&#x2013;<lpage>290</lpage>. <pub-id pub-id-type="doi">10.1016/j.obhdp.2020.07.001</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brynjolfsson</surname> <given-names>E.</given-names></name> <name><surname>Mitchell</surname> <given-names>T.</given-names></name></person-group> (<year>2017</year>). <article-title>What can machine learning do? Workforce implications.</article-title> <source><italic>Science</italic></source> <volume>358</volume>:<fpage>1530</fpage>. <pub-id pub-id-type="doi">10.1126/science.aap8062</pub-id> <pub-id pub-id-type="pmid">29269459</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Camps</surname> <given-names>J.</given-names></name> <name><surname>Graso</surname> <given-names>M.</given-names></name> <name><surname>Brebels</surname> <given-names>L.</given-names></name></person-group> (<year>2019</year>). <article-title>When organizational justice enactment is a zero sum game: a trade-off and self-concept maintenance perspective.</article-title> <source><italic>Acad. Manag. Perspect.</italic></source> <volume>36</volume>:<fpage>35</fpage>. <pub-id pub-id-type="doi">10.5465/amp.2018.0003</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Castelo</surname> <given-names>N.</given-names></name> <name><surname>Bos</surname> <given-names>M. W.</given-names></name> <name><surname>Lehmann</surname> <given-names>D. R.</given-names></name></person-group> (<year>2019</year>). <article-title>Task-dependent algorithm aversion.</article-title> <source><italic>J. Mark. Res.</italic></source> <volume>56</volume> <fpage>809</fpage>&#x2013;<lpage>825</lpage>. <pub-id pub-id-type="doi">10.1177/0022243719851788</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cohen-Charash</surname> <given-names>Y.</given-names></name> <name><surname>Spector</surname> <given-names>P. E.</given-names></name></person-group> (<year>2001</year>). <article-title>The role of justice in organizations: a meta-analysis.</article-title> <source><italic>Organ. Behav. Hum. Decis. Process.</italic></source> <volume>86</volume> <fpage>278</fpage>&#x2013;<lpage>321</lpage>. <pub-id pub-id-type="doi">10.1006/obhd.2001.2958</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Colquitt</surname> <given-names>J. A.</given-names></name> <name><surname>Zipay</surname> <given-names>K. P.</given-names></name></person-group> (<year>2015</year>). <article-title>Justice, fairness, and employee reactions.</article-title> <source><italic>Annu. Rev. Organ. Psychol. Organ. Behav.</italic></source> <volume>2</volume> <fpage>75</fpage>&#x2013;<lpage>99</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-orgpsych-032414-111457</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Colquitt</surname> <given-names>J. A.</given-names></name> <name><surname>Scott</surname> <given-names>B. A.</given-names></name> <name><surname>Rodell</surname> <given-names>J. B.</given-names></name> <name><surname>Long</surname> <given-names>D. M.</given-names></name> <name><surname>Zapata</surname> <given-names>C. P.</given-names></name> <name><surname>Conlon</surname> <given-names>D. E.</given-names></name><etal/></person-group> (<year>2013</year>). <article-title>Justice at the millennium, a decade later: a meta-analytic test of social exchange and affect-based perspectives.</article-title> <source><italic>J. Appl. Psychol.</italic></source> <volume>98</volume> <fpage>199</fpage>&#x2013;<lpage>236</lpage>. <pub-id pub-id-type="doi">10.1037/a0031757</pub-id> <pub-id pub-id-type="pmid">23458336</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Colquitt</surname> <given-names>J. A.</given-names></name> <name><surname>Zipay</surname> <given-names>K. P.</given-names></name> <name><surname>Lynch</surname> <given-names>J. W.</given-names></name> <name><surname>Outlaw</surname> <given-names>R.</given-names></name></person-group> (<year>2018</year>). <article-title>Bringing &#x201C;the beholder&#x201D; center stage: on the propensity to perceive overall fairness.</article-title> <source><italic>Organ. Behav. Hum. Decis. Process.</italic></source> <volume>148</volume> <fpage>159</fpage>&#x2013;<lpage>177</lpage>. <pub-id pub-id-type="doi">10.1016/j.obhdp.2018.08.001</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cornelis</surname> <given-names>I.</given-names></name> <name><surname>Hiel</surname> <given-names>A. V.</given-names></name> <name><surname>Cremer</surname> <given-names>D. D.</given-names></name> <name><surname>Mayer</surname> <given-names>D. M.</given-names></name></person-group> (<year>2013</year>). <article-title>When leaders choose to be fair: follower belongingness needs and leader empathy influences leaders&#x2019; adherence to procedural fairness rules.</article-title> <source><italic>J. Exp. Soc. Psychol.</italic></source> <volume>49</volume> <fpage>605</fpage>&#x2013;<lpage>613</lpage>. <pub-id pub-id-type="doi">10.1016/j.jesp.2013.02.016</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cottingham</surname> <given-names>J.</given-names></name></person-group> (<year>1983</year>). <article-title>The nature of political theory.</article-title> <source><italic>Philos. Books</italic></source> <volume>24</volume> <fpage>252</fpage>&#x2013;<lpage>254</lpage>. <pub-id pub-id-type="doi">10.1111/j.1468-0149.1983.tb02775.x</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cropanzano</surname> <given-names>R.</given-names></name> <name><surname>Bowen</surname> <given-names>D. E.</given-names></name> <name><surname>Gilliland</surname> <given-names>S. W.</given-names></name></person-group> (<year>2007</year>). <article-title>The management of organizational justice.</article-title> <source><italic>Acad. Manag. Perspect.</italic></source> <volume>21</volume> <fpage>34</fpage>&#x2013;<lpage>48</lpage>. <pub-id pub-id-type="doi">10.5465/amp.2007.27895338</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cropanzano</surname> <given-names>R.</given-names></name> <name><surname>Byrne</surname> <given-names>Z. S.</given-names></name> <name><surname>Bobocel</surname> <given-names>D. R.</given-names></name> <name><surname>Rupp</surname> <given-names>D. E.</given-names></name></person-group> (<year>2001</year>). <article-title>Moral virtues, fairness heuristics, social entities, and other denizens of organizational justice.</article-title> <source><italic>J. Vocat. Behav.</italic></source> <volume>58</volume> <fpage>164</fpage>&#x2013;<lpage>209</lpage>. <pub-id pub-id-type="doi">10.1006/jvbe.2001.1791</pub-id></citation></ref>
<ref id="B16"><citation citation-type="web"><person-group person-group-type="author"><name><surname>De Cremer</surname> <given-names>D.</given-names></name></person-group> (<year>2020</year>). <source><italic>What Does Building a Fair AI Really Entail? Harvard Business Review.</italic></source> Available online at: <ext-link ext-link-type="uri" xlink:href="https://hbr.org/2020/09/what-does-building-a-fair-ai-really-entail">https://hbr.org/2020/09/what-does-building-a-fair-ai-really-entail</ext-link> <comment>(accessed September 03, 2020)</comment>.</citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dzindolet</surname> <given-names>M. T.</given-names></name> <name><surname>Peterson</surname> <given-names>S. A.</given-names></name> <name><surname>Pomranky</surname> <given-names>R. A.</given-names></name> <name><surname>Pierce</surname> <given-names>L. G.</given-names></name> <name><surname>Beck</surname> <given-names>H. P.</given-names></name></person-group> (<year>2003</year>). <article-title>The role of trust in automation reliance.</article-title> <source><italic>Int. J. Hum. Comput. Stud.</italic></source> <volume>58</volume> <fpage>697</fpage>&#x2013;<lpage>718</lpage>. <pub-id pub-id-type="doi">10.1016/S1071-5819(03)00038-7</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fast</surname> <given-names>N. J.</given-names></name> <name><surname>Schroeder</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>Power and decision making: new directions for research in the age of artificial intelligence.</article-title> <source><italic>Curr. Opin. Psychol.</italic></source> <volume>33</volume> <fpage>172</fpage>&#x2013;<lpage>176</lpage>. <pub-id pub-id-type="doi">10.1016/j.copsyc.2019.07.039</pub-id> <pub-id pub-id-type="pmid">31473586</pub-id></citation></ref>
<ref id="B19"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Foa</surname> <given-names>E. B.</given-names></name> <name><surname>Foa</surname> <given-names>U. G.</given-names></name></person-group> (<year>1980</year>). &#x201C;<article-title>Resource theory: interpersonal behavior as exchange</article-title>,&#x201D; in <source><italic>Social Exchange</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Gergen</surname> <given-names>K. J.</given-names></name> <name><surname>Greenberg</surname> <given-names>M. S.</given-names></name> <name><surname>Willis</surname> <given-names>R. H.</given-names></name></person-group> (<publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>77</fpage>&#x2013;<lpage>94</lpage>. <pub-id pub-id-type="doi">10.1007/978-1-4613-3087-5_4</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fountaine</surname> <given-names>T.</given-names></name> <name><surname>McCarthy</surname> <given-names>B.</given-names></name> <name><surname>Saleh</surname> <given-names>T.</given-names></name></person-group> (<year>2019</year>). <article-title>Building the AI-powered organization.</article-title> <source><italic>Harv. Bus. Rev.</italic></source> <volume>97</volume> <fpage>62</fpage>&#x2013;<lpage>73</lpage>.</citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Frank</surname> <given-names>M. R.</given-names></name> <name><surname>Autor</surname> <given-names>D.</given-names></name> <name><surname>Bessen</surname> <given-names>J. E.</given-names></name> <name><surname>Brynjolfsson</surname> <given-names>E.</given-names></name> <name><surname>Cebrian</surname> <given-names>M.</given-names></name> <name><surname>Deming</surname> <given-names>D. J.</given-names></name><etal/></person-group> (<year>2019</year>). <article-title>Toward understanding the impact of artificial intelligence on labor.</article-title> <source><italic>Proc. Natl. Acad. Sci. U.S.A.</italic></source> <volume>116</volume> <fpage>6531</fpage>&#x2013;<lpage>6539</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1900949116</pub-id> <pub-id pub-id-type="pmid">30910965</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ghahramani</surname> <given-names>Z.</given-names></name></person-group> (<year>2015</year>). <article-title>Probabilistic machine learning and artificial intelligence.</article-title> <source><italic>Nature</italic></source> <volume>521</volume> <fpage>452</fpage>&#x2013;<lpage>459</lpage>. <pub-id pub-id-type="doi">10.1038/nature14541</pub-id> <pub-id pub-id-type="pmid">26017444</pub-id></citation></ref>
<ref id="B23"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Gilovich</surname> <given-names>T.</given-names></name> <name><surname>Griffin</surname> <given-names>D.</given-names></name> <name><surname>Kahneman</surname> <given-names>D.</given-names></name></person-group> <role>eds</role> (<year>2002</year>). <source><italic>Heuristics and Biases: The Psychology of Intuitive Judgment.</italic></source> <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. <pub-id pub-id-type="doi">10.1017/CBO9780511808098</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Giubilini</surname> <given-names>A.</given-names></name> <name><surname>Savulescu</surname> <given-names>J.</given-names></name></person-group> (<year>2018</year>). <article-title>The artificial moral advisor. The &#x201C;ideal observer&#x201D; meets artificial intelligence.</article-title> <source><italic>Philos. Technol.</italic></source> <volume>31</volume> <fpage>169</fpage>&#x2013;<lpage>188</lpage>. <pub-id pub-id-type="doi">10.1007/s13347-017-0285-z</pub-id> <pub-id pub-id-type="pmid">29974033</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Glass</surname> <given-names>G. V.</given-names></name> <name><surname>Peckham</surname> <given-names>P. D.</given-names></name> <name><surname>Sanders</surname> <given-names>J. R.</given-names></name></person-group> (<year>1972</year>). <article-title>Consequences of failure to meet assumptions underlying the fixed effects analyses of variance and covariance.</article-title> <source><italic>Rev. Educ. Res.</italic></source> <volume>42</volume> <fpage>237</fpage>&#x2013;<lpage>288</lpage>. <pub-id pub-id-type="doi">10.3102/00346543042003237</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Glikson</surname> <given-names>E.</given-names></name> <name><surname>Woolley</surname> <given-names>A. W.</given-names></name></person-group> (<year>2020</year>). <article-title>Human trust in artificial intelligence: review of empirical research.</article-title> <source><italic>Acad. Manag. Ann.</italic></source> <volume>14</volume> <fpage>627</fpage>&#x2013;<lpage>660</lpage>. <pub-id pub-id-type="doi">10.5465/annals.2018.0057</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grace</surname> <given-names>K.</given-names></name> <name><surname>Salvatier</surname> <given-names>J.</given-names></name> <name><surname>Dafoe</surname> <given-names>A.</given-names></name> <name><surname>Zhang</surname> <given-names>B.</given-names></name> <name><surname>Evans</surname> <given-names>O.</given-names></name></person-group> (<year>2018</year>). <article-title>Viewpoint: when will AI exceed human performance? Evidence from AI experts.</article-title> <source><italic>J. Artif. Intell. Res.</italic></source> <volume>62</volume> <fpage>729</fpage>&#x2013;<lpage>754</lpage>. <pub-id pub-id-type="doi">10.1613/jair.1.11222</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Granulo</surname> <given-names>A.</given-names></name> <name><surname>Fuchs</surname> <given-names>C.</given-names></name> <name><surname>Puntoni</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>Psychological reactions to human versus robotic job replacement.</article-title> <source><italic>Nat. Hum. Behav.</italic></source> <volume>3</volume> <fpage>1062</fpage>&#x2013;<lpage>1069</lpage>. <pub-id pub-id-type="doi">10.1038/s41562-019-0670-y</pub-id> <pub-id pub-id-type="pmid">31384025</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Graso</surname> <given-names>M.</given-names></name> <name><surname>Camps</surname> <given-names>J.</given-names></name> <name><surname>Strah</surname> <given-names>N.</given-names></name> <name><surname>Brebels</surname> <given-names>L.</given-names></name></person-group> (<year>2019</year>). <article-title>Organizational justice enactment: an agent-focused review and path forward.</article-title> <source><italic>J. Vocat. Behav.</italic></source> <volume>116</volume>:<fpage>103296</fpage>.</citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hardin</surname> <given-names>G.</given-names></name></person-group> (<year>1968</year>). <article-title>The tragedy of the commons.</article-title> <source><italic>Science</italic></source> <volume>162</volume> <fpage>1243</fpage>&#x2013;<lpage>1248</lpage>.</citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hebl</surname> <given-names>M.</given-names></name> <name><surname>Cheng</surname> <given-names>S. K.</given-names></name> <name><surname>Ng</surname> <given-names>L. C.</given-names></name></person-group> (<year>2020</year>). <article-title>Modern discrimination in organizations.</article-title> <source><italic>Annu. Rev. Organ. Psychol. Organ. Behav.</italic></source> <volume>7</volume> <fpage>257</fpage>&#x2013;<lpage>282</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-orgpsych-012119-044948</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Helberger</surname> <given-names>N.</given-names></name> <name><surname>Araujo</surname> <given-names>T.</given-names></name> <name><surname>de Vreese</surname> <given-names>C. H.</given-names></name></person-group> (<year>2020</year>). <article-title>Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making.</article-title> <source><italic>Comput. Law Secur. Rev.</italic></source> <volume>39</volume>:<fpage>105456</fpage>.</citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hughes</surname> <given-names>J. S.</given-names></name></person-group> (<year>2017</year>). <article-title>In a moral dilemma, choose the one you love: impartial actors are seen as less moral than partial ones.</article-title> <source><italic>Br. J. Soc. Psychol.</italic></source> <volume>56</volume> <fpage>561</fpage>&#x2013;<lpage>577</lpage>. <pub-id pub-id-type="doi">10.1111/bjso.12199</pub-id> <pub-id pub-id-type="pmid">28474440</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Johnson</surname> <given-names>D. G.</given-names></name></person-group> (<year>2015</year>). <article-title>Technology with no human responsibility?</article-title> <source><italic>J. Bus. Ethics</italic></source> <volume>127</volume> <fpage>707</fpage>&#x2013;<lpage>715</lpage>.</citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Khalil</surname> <given-names>O. E. M.</given-names></name></person-group> (<year>1993</year>). <article-title>Artificial decision-making and artificial ethics: a management concern.</article-title> <source><italic>J. Bus. Ethics</italic></source> <volume>12</volume> <fpage>313</fpage>&#x2013;<lpage>321</lpage>. <pub-id pub-id-type="doi">10.1007/BF01666535</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>K&#x00F6;bis</surname> <given-names>N.</given-names></name> <name><surname>Mossink</surname> <given-names>L. D.</given-names></name></person-group> (<year>2021</year>). <article-title>Artificial intelligence versus Maya Angelou: experimental evidence that people cannot differentiate AI-generated from human-written poetry.</article-title> <source><italic>Comput. Hum. Behav.</italic></source> <volume>114</volume>:<fpage>106553</fpage>. <pub-id pub-id-type="doi">10.1016/j.chb.2020.106553</pub-id></citation></ref>
<ref id="B37"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Leventhal</surname> <given-names>G. S.</given-names></name></person-group> (<year>1976</year>). &#x201C;<article-title>The distribution of rewards and resources in groups and organizations</article-title>,&#x201D; in <source><italic>Advances in Experimental Social Psychology</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Berkowitz</surname> <given-names>L.</given-names></name> <name><surname>Walster</surname> <given-names>E.</given-names></name></person-group> (<publisher-loc>Amsterdam</publisher-loc>: <publisher-name>Elsevier</publisher-name>), <fpage>91</fpage>&#x2013;<lpage>131</lpage>. <pub-id pub-id-type="doi">10.1016/S0065-2601(08)60059-3</pub-id></citation></ref>
<ref id="B38"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Leventhal</surname> <given-names>G. S.</given-names></name></person-group> (<year>1980</year>). &#x201C;<article-title>What should be done with equity theory?</article-title>,&#x201D; in <source><italic>Social Exchange</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Gergen</surname> <given-names>K. J.</given-names></name> <name><surname>Greenberg</surname> <given-names>M. S.</given-names></name> <name><surname>Willis</surname> <given-names>R. H.</given-names></name></person-group> (<publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>27</fpage>&#x2013;<lpage>55</lpage>. <pub-id pub-id-type="doi">10.1007/978-1-4613-3087-5_2</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lind</surname> <given-names>E. A.</given-names></name> <name><surname>Lissak</surname> <given-names>R. I.</given-names></name></person-group> (<year>1985</year>). <article-title>Apparent impropriety and procedural fairness judgments.</article-title> <source><italic>J. Exp. Soc. Psychol.</italic></source> <volume>21</volume> <fpage>19</fpage>&#x2013;<lpage>29</lpage>.</citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Logg</surname> <given-names>J. M.</given-names></name> <name><surname>Minson</surname> <given-names>J. A.</given-names></name> <name><surname>Moore</surname> <given-names>D. A.</given-names></name></person-group> (<year>2019</year>). <article-title>Algorithm appreciation: people prefer algorithmic to human judgment.</article-title> <source><italic>Organ. Behav. Hum. Decis. Process.</italic></source> <volume>151</volume> <fpage>90</fpage>&#x2013;<lpage>103</lpage>.</citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Matta</surname> <given-names>F. K.</given-names></name> <name><surname>Scott</surname> <given-names>B. A.</given-names></name> <name><surname>Colquitt</surname> <given-names>J. A.</given-names></name> <name><surname>Koopman</surname> <given-names>J.</given-names></name> <name><surname>Passantino</surname> <given-names>L. G.</given-names></name></person-group> (<year>2017</year>). <article-title>Is consistently unfair better than sporadically fair? An investigation of justice variability and stress.</article-title> <source><italic>Acad. Manag. J.</italic></source> <volume>60</volume> <fpage>743</fpage>&#x2013;<lpage>770</lpage>.</citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miller</surname> <given-names>D. T.</given-names></name></person-group> (<year>2001</year>). <article-title>Disrespect and the experience of injustice.</article-title> <source><italic>Annu. Rev. Psychol.</italic></source> <volume>52</volume> <fpage>527</fpage>&#x2013;<lpage>553</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.psych.52.1.527</pub-id> <pub-id pub-id-type="pmid">11148316</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mozer</surname> <given-names>M. C.</given-names></name> <name><surname>Wiseheart</surname> <given-names>M.</given-names></name> <name><surname>Novikoff</surname> <given-names>T. P.</given-names></name></person-group> (<year>2019</year>). <article-title>Artificial intelligence to support human instruction.</article-title> <source><italic>Proc. Natl. Acad. Sci. U.S.A.</italic></source> <volume>116</volume> <fpage>3953</fpage>&#x2013;<lpage>3955</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1900370116</pub-id> <pub-id pub-id-type="pmid">30782820</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Newman</surname> <given-names>D. T.</given-names></name> <name><surname>Fast</surname> <given-names>N. J.</given-names></name> <name><surname>Harmon</surname> <given-names>D. J.</given-names></name></person-group> (<year>2020</year>). <article-title>When eliminating bias isn&#x2019;t fair: algorithmic reductionism and procedural justice in human resource decisions.</article-title> <source><italic>Organ. Behav. Hum. Decis. Process.</italic></source> <volume>160</volume> <fpage>149</fpage>&#x2013;<lpage>167</lpage>. <pub-id pub-id-type="doi">10.1016/j.obhdp.2020.03.008</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>&#x00D6;tting</surname> <given-names>S. K.</given-names></name> <name><surname>Maier</surname> <given-names>G. W.</given-names></name></person-group> (<year>2018</year>). <article-title>The importance of procedural justice in human&#x2013;machine interactions: intelligent systems as new decision agents in organizations.</article-title> <source><italic>Comput. Hum. Behav.</italic></source> <volume>89</volume> <fpage>27</fpage>&#x2013;<lpage>39</lpage>.</citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Peer</surname> <given-names>E.</given-names></name> <name><surname>Brandimarte</surname> <given-names>L.</given-names></name> <name><surname>Samat</surname> <given-names>S.</given-names></name> <name><surname>Acquisti</surname> <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>Beyond the Turk: alternative platforms for crowdsourcing behavioral research.</article-title> <source><italic>J. Exp. Soc. Psychol.</italic></source> <volume>70</volume> <fpage>153</fpage>&#x2013;<lpage>163</lpage>. <pub-id pub-id-type="doi">10.1016/j.jesp.2017.01.006</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rahwan</surname> <given-names>I.</given-names></name> <name><surname>Cebrian</surname> <given-names>M.</given-names></name> <name><surname>Obradovich</surname> <given-names>N.</given-names></name> <name><surname>Bongard</surname> <given-names>J.</given-names></name> <name><surname>Bonnefon</surname> <given-names>J.-F.</given-names></name> <name><surname>Breazeal</surname> <given-names>C.</given-names></name><etal/></person-group> (<year>2019</year>). <article-title>Machine behaviour.</article-title> <source><italic>Nature</italic></source> <volume>568</volume> <fpage>477</fpage>&#x2013;<lpage>486</lpage>. <pub-id pub-id-type="doi">10.1038/s41586-019-1138-y</pub-id> <pub-id pub-id-type="pmid">31019318</pub-id></citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Raisch</surname> <given-names>S.</given-names></name> <name><surname>Krakowski</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>Artificial intelligence and management: the automation-augmentation paradox.</article-title> <source><italic>Acad. Manag. Rev.</italic></source> <volume>46</volume> <fpage>192</fpage>&#x2013;<lpage>210</lpage>. <pub-id pub-id-type="doi">10.5465/amr.2018.0072</pub-id></citation></ref>
<ref id="B49"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Rawls</surname> <given-names>J.</given-names></name></person-group> (<year>1971/1999</year>). <source><italic>A Theory of Justice</italic></source>, <edition>Revised Edn</edition>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Harvard University Press</publisher-name>.</citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rawson</surname> <given-names>T. M.</given-names></name> <name><surname>Ahmad</surname> <given-names>R.</given-names></name> <name><surname>Toumazou</surname> <given-names>C.</given-names></name> <name><surname>Georgiou</surname> <given-names>P.</given-names></name> <name><surname>Holmes</surname> <given-names>A. H.</given-names></name></person-group> (<year>2019</year>). <article-title>Artificial intelligence can improve decision-making in infection management.</article-title> <source><italic>Nat. Hum. Behav.</italic></source> <volume>3</volume> <fpage>543</fpage>&#x2013;<lpage>545</lpage>. <pub-id pub-id-type="doi">10.1038/s41562-019-0583-9</pub-id> <pub-id pub-id-type="pmid">31190023</pub-id></citation></ref>
<ref id="B51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Robert</surname> <given-names>L. P.</given-names></name> <name><surname>Pierce</surname> <given-names>C.</given-names></name> <name><surname>Marquis</surname> <given-names>L.</given-names></name> <name><surname>Kim</surname> <given-names>S.</given-names></name> <name><surname>Alahmad</surname> <given-names>R.</given-names></name></person-group> (<year>2020</year>). <article-title>Designing fair AI for managing employees in organizations: a review, critique, and design agenda.</article-title> <source><italic>Hum. Comput. Interact.</italic></source> <volume>35</volume> <fpage>545</fpage>&#x2013;<lpage>575</lpage>. <pub-id pub-id-type="doi">10.1080/07370024.2020.1735391</pub-id></citation></ref>
<ref id="B52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shaw</surname> <given-names>A.</given-names></name> <name><surname>Olson</surname> <given-names>K.</given-names></name></person-group> (<year>2014</year>). <article-title>Fairness as partiality aversion: the development of procedural justice.</article-title> <source><italic>J. Exp. Child Psychol.</italic></source> <volume>119</volume> <fpage>40</fpage>&#x2013;<lpage>53</lpage>. <pub-id pub-id-type="doi">10.1016/j.jecp.2013.10.007</pub-id> <pub-id pub-id-type="pmid">24291349</pub-id></citation></ref>
<ref id="B53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sheppard</surname> <given-names>B. H.</given-names></name> <name><surname>Lewicki</surname> <given-names>R. J.</given-names></name></person-group> (<year>1987</year>). <article-title>Toward general principles of managerial fairness.</article-title> <source><italic>Soc. Justice Res.</italic></source> <volume>1</volume> <fpage>161</fpage>&#x2013;<lpage>176</lpage>.</citation></ref>
<ref id="B54"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Sheppard</surname> <given-names>B. H.</given-names></name> <name><surname>Lewicki</surname> <given-names>R. J.</given-names></name> <name><surname>Minton</surname> <given-names>J. W.</given-names></name></person-group> (<year>1992</year>). <source><italic>Organizational Justice: The Search for Fairness in the Workplace.</italic></source> <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Lexington Books</publisher-name>.</citation></ref>
<ref id="B55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Silver</surname> <given-names>D.</given-names></name> <name><surname>Schrittwieser</surname> <given-names>J.</given-names></name> <name><surname>Simonyan</surname> <given-names>K.</given-names></name> <name><surname>Antonoglou</surname> <given-names>I.</given-names></name> <name><surname>Huang</surname> <given-names>A.</given-names></name> <name><surname>Guez</surname> <given-names>A.</given-names></name><etal/></person-group> (<year>2017</year>). <article-title>Mastering the game of Go without human knowledge.</article-title> <source><italic>Nature</italic></source> <volume>550</volume> <fpage>354</fpage>&#x2013;<lpage>359</lpage>. <pub-id pub-id-type="doi">10.1038/nature24270</pub-id> <pub-id pub-id-type="pmid">29052630</pub-id></citation></ref>
<ref id="B56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Silver</surname> <given-names>J. R.</given-names></name> <name><surname>Silver</surname> <given-names>E.</given-names></name></person-group> (<year>2017</year>). <article-title>Why are conservatives more punitive than liberals? A moral foundations approach.</article-title> <source><italic>Law Hum. Behav.</italic></source> <volume>41</volume> <fpage>258</fpage>&#x2013;<lpage>272</lpage>. <pub-id pub-id-type="doi">10.1037/lhb0000232</pub-id> <pub-id pub-id-type="pmid">28150974</pub-id></citation></ref>
<ref id="B57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Simon</surname> <given-names>H. A.</given-names></name></person-group> (<year>1951</year>). <article-title>A formal theory of the employment relationship.</article-title> <source><italic>Econometrica</italic></source> <volume>19</volume> <fpage>293</fpage>&#x2013;<lpage>305</lpage>. <pub-id pub-id-type="doi">10.2307/1906815</pub-id></citation></ref>
<ref id="B58"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Singh</surname> <given-names>S.</given-names></name> <name><surname>Okun</surname> <given-names>A.</given-names></name> <name><surname>Jackson</surname> <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>Learning to play Go from scratch.</article-title> <source><italic>Nature</italic></source> <volume>550</volume> <fpage>336</fpage>&#x2013;<lpage>337</lpage>. <pub-id pub-id-type="doi">10.1038/550336a</pub-id> <pub-id pub-id-type="pmid">29052631</pub-id></citation></ref>
<ref id="B59"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sunshine</surname> <given-names>J.</given-names></name> <name><surname>Tyler</surname> <given-names>T. R.</given-names></name></person-group> (<year>2003</year>). <article-title>The role of procedural justice and legitimacy in shaping public support for policing.</article-title> <source><italic>Law Soc. Rev.</italic></source> <volume>37</volume> <fpage>513</fpage>&#x2013;<lpage>548</lpage>.</citation></ref>
<ref id="B60"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Thibaut</surname> <given-names>J.</given-names></name> <name><surname>Walker</surname> <given-names>L.</given-names></name></person-group> (<year>1975</year>). <source><italic>Procedural Justice: A Psychological Analysis.</italic></source> <publisher-loc>Hillsdale, NJ</publisher-loc>: <publisher-name>Erlbaum</publisher-name>.</citation></ref>
<ref id="B61"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tversky</surname> <given-names>A.</given-names></name> <name><surname>Kahneman</surname> <given-names>D.</given-names></name></person-group> (<year>1973</year>). <article-title>Availability: a heuristic for judging frequency and probability.</article-title> <source><italic>Cogn. Psychol.</italic></source> <volume>5</volume> <fpage>207</fpage>&#x2013;<lpage>232</lpage>. <pub-id pub-id-type="doi">10.1016/0010-0285(73)90033-9</pub-id></citation></ref>
<ref id="B62"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tversky</surname> <given-names>A.</given-names></name> <name><surname>Kahneman</surname> <given-names>D.</given-names></name></person-group> (<year>1974</year>). <article-title>Judgment under uncertainty: heuristics and biases.</article-title> <source><italic>Science</italic></source> <volume>185</volume> <fpage>1124</fpage>&#x2013;<lpage>1131</lpage>. <pub-id pub-id-type="doi">10.1126/science.185.4157.1124</pub-id> <pub-id pub-id-type="pmid">17835457</pub-id></citation></ref>
<ref id="B63"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tyler</surname> <given-names>T. R.</given-names></name></person-group> (<year>1988</year>). <article-title>What is procedural justice?: criteria used by citizens to assess the fairness of legal procedures.</article-title> <source><italic>Law Soc. Rev.</italic></source> <volume>22</volume> <fpage>103</fpage>&#x2013;<lpage>135</lpage>.</citation></ref>
<ref id="B64"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tyler</surname> <given-names>T. R.</given-names></name></person-group> (<year>1994</year>). <article-title>Psychological models of the justice motive: antecedents of distributive and procedural justice.</article-title> <source><italic>J. Pers. Soc. Psychol.</italic></source> <volume>67</volume> <fpage>850</fpage>&#x2013;<lpage>863</lpage>.</citation></ref>
<ref id="B65"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tyler</surname> <given-names>T. R.</given-names></name> <name><surname>Rasinski</surname> <given-names>K. A.</given-names></name> <name><surname>Spodick</surname> <given-names>N.</given-names></name></person-group> (<year>1985</year>). <article-title>Influence of voice on satisfaction with leaders: exploring the meaning of process control.</article-title> <source><italic>J. Pers. Soc. Psychol.</italic></source> <volume>48</volume> <fpage>72</fpage>&#x2013;<lpage>81</lpage>. <pub-id pub-id-type="doi">10.1037/0022-3514.48.1.72</pub-id></citation></ref>
<ref id="B66"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Young</surname> <given-names>A. D.</given-names></name> <name><surname>Monroe</surname> <given-names>A. E.</given-names></name></person-group> (<year>2019</year>). <article-title>Autonomous morals: inferences of mind predict acceptance of AI behavior in sacrificial moral dilemmas.</article-title> <source><italic>J. Exp. Soc. Psychol.</italic></source> <volume>85</volume>:<fpage>103870</fpage>. <pub-id pub-id-type="doi">10.1016/j.jesp.2019.103870</pub-id></citation></ref>
<ref id="B67"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>G.</given-names></name> <name><surname>Chen</surname> <given-names>Y.-R.</given-names></name> <name><surname>Brockner</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). <article-title>What influences managers&#x2019; procedural fairness towards their subordinates? The role of subordinates&#x2019; trustworthiness.</article-title> <source><italic>J. Exp. Soc. Psychol.</italic></source> <volume>59</volume> <fpage>96</fpage>&#x2013;<lpage>112</lpage>. <pub-id pub-id-type="doi">10.1016/j.jesp.2015.04.002</pub-id></citation></ref>
<ref id="B68"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zou</surname> <given-names>J.</given-names></name> <name><surname>Schiebinger</surname> <given-names>L.</given-names></name></person-group> (<year>2018</year>). <article-title>AI can be sexist and racist &#x2013; it&#x2019;s time to make it fair.</article-title> <source><italic>Nature</italic></source> <volume>559</volume> <fpage>324</fpage>&#x2013;<lpage>326</lpage>. <pub-id pub-id-type="doi">10.1038/d41586-018-05707-8</pub-id> <pub-id pub-id-type="pmid">30018439</pub-id></citation></ref>
</ref-list>
</back>
</article>
