<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2023.1118723</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Check the box! How to deal with automation bias in AI-based personnel selection</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Kupfer</surname>
<given-names>Cordula</given-names>
</name>
<xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<xref rid="c001" ref-type="corresp"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/2008816/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Prassl</surname>
<given-names>Rita</given-names>
</name>
<xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Flei&#x00DF;</surname>
<given-names>J&#x00FC;rgen</given-names>
</name>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/2227074/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Malin</surname>
<given-names>Christine</given-names>
</name>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/2249672/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Thalmann</surname>
<given-names>Stefan</given-names>
</name>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kubicek</surname>
<given-names>Bettina</given-names>
</name>
<xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1436017/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Work and Organizational Psychology, Institute of Psychology, University of Graz</institution>, <addr-line>Graz</addr-line>, <country>Austria</country></aff>
<aff id="aff2"><sup>2</sup><institution>Business Analytics and Data Science-Center, University of Graz</institution>, <addr-line>Graz</addr-line>, <country>Austria</country></aff>
<author-notes>
<fn id="fn0001" fn-type="edited-by"><p>Edited by: Alejo Sison, University of Navarra, Spain</p></fn>
<fn id="fn0002" fn-type="edited-by"><p>Reviewed by: Benjamin Strenge, Bielefeld University, Germany; Piers D. L. Howe, The University of Melbourne, Australia</p></fn>
<corresp id="c001">&#x002A;Correspondence: Cordula Kupfer, <email>cordula.kupfer@uni-graz.at</email></corresp>
<fn id="fn0003" fn-type="other"><p>This article was submitted to Organizational Psychology, a section of the journal Frontiers in Psychology</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>05</day>
<month>04</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>14</volume>
<elocation-id>1118723</elocation-id>
<history>
<date date-type="received">
<day>07</day>
<month>12</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>10</day>
<month>03</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2023 Kupfer, Prassl, Flei&#x00DF;, Malin, Thalmann and Kubicek.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Kupfer, Prassl, Flei&#x00DF;, Malin, Thalmann and Kubicek</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Artificial Intelligence (AI) as decision support for personnel preselection, e.g., in the form of a dashboard, promises a more effective and fairer selection process. However, AI-based decision support systems might prompt decision makers to thoughtlessly accept the system&#x2019;s recommendation. As this so-called automation bias contradicts ethical and legal requirements of human oversight for the use of AI-based recommendations in personnel preselection, the present study investigates strategies to reduce automation bias and increase decision quality. Based on the Elaboration Likelihood Model, we assume that instructing decision makers about the possibility of system errors and their responsibility for the decision, as well as providing an appropriate level of data aggregation should encourage decision makers to process information systematically instead of heuristically. We conducted a 3 (general information, information about system errors, information about responsibility) &#x00D7; 2 (low vs. highly aggregated data) experiment to investigate which strategy can reduce automation bias and enhance decision quality. We found that less automation bias in terms of higher scores on verification intensity indicators correlated with higher objective decision quality, i.e., more suitable applicants selected. Decision makers who received information about system errors scored higher on verification intensity indicators and rated subjective decision quality higher, but decision makers who were informed about their responsibility, unexpectedly, did not. Regarding aggregation level of data, decision makers of the highly aggregated data group spent less time on the level of the dashboard where highly aggregated data were presented. Our results show that it is important to inform decision makers who interact with AI-based decision-support systems about potential system errors and provide them with less aggregated data to reduce automation bias and enhance decision quality.</p>
</abstract>
<kwd-group>
<kwd>automation bias</kwd>
<kwd>elaboration likelihood model</kwd>
<kwd>experiment</kwd>
<kwd>dashboard</kwd>
<kwd>decision support systems</kwd>
<kwd>recruiting</kwd>
</kwd-group>
<counts>
<fig-count count="4"/>
<table-count count="4"/>
<equation-count count="0"/>
<ref-count count="75"/>
<page-count count="16"/>
<word-count count="13657"/>
</counts>
</article-meta>
</front>
<body>
<sec id="sec1" sec-type="intro">
<label>1.</label>
<title>Introduction</title>
<p>An increasing number of organizations use systems based on artificial intelligence (AI) to support decision-making in personnel selection (<xref ref-type="bibr" rid="ref7">Black and van Esch, 2020</xref>). Many of those decision support systems focus on the preselection phase, e.g., resume screening (<xref ref-type="bibr" rid="ref33">Lacroux and Martin-Lacroux, 2022</xref>). Such systems are ascribed multiple benefits for both organizations and applicants, such as a more efficient personnel selection process (<xref ref-type="bibr" rid="ref68">Suen et al., 2019</xref>), as well as fairer and more accurate decisions (<xref ref-type="bibr" rid="ref50">Oberst et al., 2021</xref>). AI collects, analyzes and visualizes data that is then presented in a dashboard to provide a solid decision base for first-party users, i.e., people who interact with the output of AI-based systems to make the selection decision, such as HR professionals (<xref ref-type="bibr" rid="ref36">Langer and Landers, 2021</xref>). However, one major problem frequently reported with the use of AI-based decision support systems is the occurrence of automation bias, that is, the thoughtless acceptance of decisions or recommendations made by the system (e.g., <xref ref-type="bibr" rid="ref47">Mosier et al., 1996</xref>; <xref ref-type="bibr" rid="ref64">Skitka et al., 1999</xref>; <xref ref-type="bibr" rid="ref53">Parasuraman and Manzey, 2010</xref>; <xref ref-type="bibr" rid="ref24">Goddard et al., 2012</xref>). Automation bias can lead to system errors being overlooked and thus result in poor decision quality (<xref ref-type="bibr" rid="ref46">Mosier and Skitka, 1999</xref>; <xref ref-type="bibr" rid="ref5">Bahner et al., 2008</xref>).</p>
<p>Despite the potential negative effects of automation bias on decision quality, little is known about the factors that might counteract thoughtless acceptance of AI-based recommendations in personnel preselection. Thus far, most research on AI-based decision support systems in personnel selection focuses on reliability and efficiency (e.g., <xref ref-type="bibr" rid="ref12">Campion et al., 2016</xref>; <xref ref-type="bibr" rid="ref69">Suen et al., 2020</xref>) or fairness perception and acceptance by applicants (e.g., <xref ref-type="bibr" rid="ref25">Gonzalez et al., 2019</xref>; <xref ref-type="bibr" rid="ref35">Langer et al., 2019</xref>; <xref ref-type="bibr" rid="ref1">Acikgoz et al., 2020</xref>; <xref ref-type="bibr" rid="ref62">Schick and Fischer, 2021</xref>; <xref ref-type="bibr" rid="ref71">van Esch et al., 2021</xref>; for reviews see <xref ref-type="bibr" rid="ref36">Langer and Landers, 2021</xref>; <xref ref-type="bibr" rid="ref27">Hilliard et al., 2022</xref>). Only a few studies examine the decision makers of such systems in personnel selection (e.g., <xref ref-type="bibr" rid="ref34">Langer et al., 2021</xref>; <xref ref-type="bibr" rid="ref37">Li et al., 2021</xref>; <xref ref-type="bibr" rid="ref50">Oberst et al., 2021</xref>; <xref ref-type="bibr" rid="ref33">Lacroux and Martin-Lacroux, 2022</xref>). Those studies show that decision makers see the potential of a more efficient and fairer personnel selection process through AI-based systems (<xref ref-type="bibr" rid="ref37">Li et al., 2021</xref>), while, at the same time, they seem to prefer recommendations from other HR professionals over those from an AI-based system (<xref ref-type="bibr" rid="ref50">Oberst et al., 2021</xref>; <xref ref-type="bibr" rid="ref33">Lacroux and Martin-Lacroux, 2022</xref>). <xref ref-type="bibr" rid="ref34">Langer et al. (2021)</xref> demonstrated that proper timing of support from the system can influence decision makers&#x2019; satisfaction with the selection decision as well as self-efficacy. However, factors that can minimize automation bias and, by doing so, increase the decision quality in the context of AI-based decision support systems for personnel preselection have, to the best of our knowledge, not yet been studied. In other contexts of AI-based decision support systems, strategies such as increasing decision maker responsibility, providing training and briefings, or having a group of humans as decision makers who monitor each other, have been investigated (<xref ref-type="bibr" rid="ref75">Zerilli et al., 2019</xref>). An examination of these strategies in the personnel preselection context has yet to be conducted.</p>
<p>Providing empirical evidence on how to counteract automation bias and ensure a high decision quality in personnel preselection is important from ethical and legal perspectives. First, ethical standards call for human oversight of automation to address potential risks associated with AI use, e.g., system shortcomings (<xref ref-type="bibr" rid="ref28">Hunkenschroer and Luetge, 2022</xref>). Second, the proposed European Union (EU) AI act (<xref ref-type="bibr" rid="ref20">European Commission, 2021</xref>) requires human oversight in high-risk application areas of AI, such as personnel selection systems, which means a human investigation of each case and the possibility for decision makers to override AI recommendations. Moreover, Article 22 of the General Data Protection Regulation (GDPR), which applies in the EU, states that &#x201C;the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her [&#x2026;]&#x201D; (<xref ref-type="bibr" rid="ref70">The European Parliament and the Council of the European Union, 2016</xref>). For personnel preselection, this means that automatically processed information and AI-based decision-making must be reviewed by humans, a so-called human-in-the-loop, unless applicants voluntarily renounce their right. However, passive approval of automated processing falls short; human oversight needs to be an active assessment and verification (<xref ref-type="bibr" rid="ref39">Malgieri and Comand&#x00E9;, 2017</xref>). Therefore, it is essential to investigate how decision makers can be encouraged to actively review AI-based recommendations in personnel preselection and meet these ethical and legal requirements, instead of blindly following decisions made by the AI.</p>
<p>Hence, we examine whether the way users are instructed about an AI-based system (i.e., receiving information about potential system errors and being made aware of the responsibility for the decision) as well as how the data is visualized (high versus low level of aggregation) has an impact on automation bias and decision quality. We do so by conducting an experimental study where participants made personnel preselection decisions with the help of a dashboard. We base our assumptions on the Elaboration Likelihood Model (ELM; <xref ref-type="bibr" rid="ref55">Petty and Cacioppo, 1986</xref>), a dual-process theory that describes how human information processing can either follow the systematic central route or the heuristic peripheral route. We assume that organizational factors, i.e., the instruction about a system regarding errors and responsibility, and technological design factors, i.e., the level of data aggregation, can decrease the heuristic processing that is prone to automation bias and can increase decision quality.</p>
<p>Our study contributes to the literature on AI-based personnel preselection and automation bias in at least two ways. First, we provide recommendations on how AI-based decision support systems should be introduced and designed to support decision makers in overcoming automation bias and fulfilling the legal requirement of human oversight. Only then can such systems be used to their full potential. Second, by integrating research on intelligent systems, automation bias, and the ELM, we provide a solid theoretical basis to investigate which factors might counteract automation bias when interacting with AI-based decision support systems. This theoretical basis might also inspire future research on AI-based decision support systems in personnel preselection and other fields and business areas.</p>
<sec id="sec2">
<label>1.1.</label>
<title>AI-based systems in personnel preselection</title>
<p>Technological advances in AI have led to its growing use in various business areas, including human resource management and personnel selection (<xref ref-type="bibr" rid="ref7">Black and van Esch, 2020</xref>; <xref ref-type="bibr" rid="ref72">Vrontis et al., 2022</xref>). In personnel selection, AI-based systems can support all phases of the selection process. AI-based systems can be used for the identification and attraction of potential candidates <italic>via</italic> databases and social media, for the screening and assessment of applicants <italic>via</italic> chatbots, video- or resume-analysis tools, and can handle administration and communication with applicants throughout the process (<xref ref-type="bibr" rid="ref28">Hunkenschroer and Luetge, 2022</xref>). The most studied applications are chatbots (e.g., <xref ref-type="bibr" rid="ref18">Ei&#x00DF;er et al., 2020</xref>), resume screening tools (e.g., <xref ref-type="bibr" rid="ref1">Acikgoz et al., 2020</xref>; <xref ref-type="bibr" rid="ref48">Noble et al., 2021</xref>), and digital, highly automated video-interview tools that evaluate speech, facial expressions, and gestures (e.g., <xref ref-type="bibr" rid="ref35">Langer et al., 2019</xref>; <xref ref-type="bibr" rid="ref1">Acikgoz et al., 2020</xref>; <xref ref-type="bibr" rid="ref34">Langer et al., 2021</xref>). The use of AI-based systems can make the laborious and time-consuming task of identifying and assessing potential candidates easier for decision makers and at the same time ensure a more consistent and fair decision-making process (<xref ref-type="bibr" rid="ref33">Lacroux and Martin-Lacroux, 2022</xref>).</p>
<p>When it comes to the definition of AI, there is no consensus in the literature. AI is often used as an umbrella term for various approaches and techniques such as machine learning, deep learning or natural language processing (e.g., <xref ref-type="bibr" rid="ref56">Pillai and Sivathanu, 2020</xref>). In the literature on AI in personnel selection, many authors follow machine learning approaches (e.g., <xref ref-type="bibr" rid="ref34">Langer et al., 2021</xref>; <xref ref-type="bibr" rid="ref50">Oberst et al., 2021</xref>; <xref ref-type="bibr" rid="ref29">Kim and Heo, 2022</xref>) or have described AI as a technology that takes over tasks, particularly decision-making tasks, that previously required human intelligence (e.g., <xref ref-type="bibr" rid="ref1">Acikgoz et al., 2020</xref>; <xref ref-type="bibr" rid="ref50">Oberst et al., 2021</xref>; <xref ref-type="bibr" rid="ref52">Pan et al., 2022</xref>). Following the initial idea by <xref ref-type="bibr" rid="ref42">McCarthy et al. (1955)</xref>, we define AI as the science and engineering of making intelligent machines. It seems that it is this vision that computers can do intelligent tasks that unites the research field (<xref ref-type="bibr" rid="ref45">Moor, 2006</xref>).</p>
<p>Organizations use AI-based systems because they expect both efficient and impartial recommendations for selection decisions from them (<xref ref-type="bibr" rid="ref50">Oberst et al., 2021</xref>; <xref ref-type="bibr" rid="ref33">Lacroux and Martin-Lacroux, 2022</xref>). Hence, AI-based systems are intended to address a well-known challenge in personnel selection, namely the selection of applicants purely based on their qualifications and expected job performance, without (oftentimes unconscious and unintended) discrimination based on personal characteristics such as ethnicity (<xref ref-type="bibr" rid="ref57">Quillian et al., 2017</xref>) or gender (<xref ref-type="bibr" rid="ref13">Casta&#x00F1;o et al., 2019</xref>). This places AI-based systems in the tradition of other decision support systems, such as paper-pencil tests, standardized interviews, or mechanical, algorithmic approaches, which have been shown to be clearly superior to so-called holistic methods, such as intuitive decisions by HR professionals (<xref ref-type="bibr" rid="ref26">Highhouse, 2008</xref>; <xref ref-type="bibr" rid="ref32">Kuncel et al., 2013</xref>; <xref ref-type="bibr" rid="ref49">Nolan et al., 2016</xref>; <xref ref-type="bibr" rid="ref43">Meijer et al., 2020</xref>). While a considerable increase in the practical use of AI-based systems for personnel preselection decisions is to be expected in the upcoming years, research is still in its infancy (<xref ref-type="bibr" rid="ref52">Pan et al., 2022</xref>).</p>
<p>Systems to support human decision-making differ in their levels of automation, which refers to the balance of automation and human control in the decision-making process (<xref ref-type="bibr" rid="ref54">Parasuraman et al., 2000</xref>; <xref ref-type="bibr" rid="ref15">Cummings, 2017</xref>). Higher levels of automation provide fully automated decisions without a human decision maker involved. Lower levels of automation only provide recommendations as decision support and a human decision maker has control over which option is chosen (<xref ref-type="bibr" rid="ref15">Cummings, 2017</xref>). As higher levels of automation in decision-making violate the legal requirements of Article 22 of the GDPR (<xref ref-type="bibr" rid="ref70">The European Parliament and the Council of the European Union, 2016</xref>) and other ethical standards (<xref ref-type="bibr" rid="ref28">Hunkenschroer and Luetge, 2022</xref>) that demand human oversight and thus a human who reviews the data and has control over the decision being made, we focus on AI-based decision support systems. One example of such a system that supports decision-making in personnel preselection <italic>via</italic> recommendations is a dashboard. A dashboard is defined as a data-driven system, which analyzes and visually presents data in a specific format to support decision-making (<xref ref-type="bibr" rid="ref74">Yigitbasioglu and Velcu, 2012</xref>; <xref ref-type="bibr" rid="ref60">Sarikaya et al., 2019</xref>). These visualizations of the data can have different designs and aim to extract information relevant to the decision. In the context of personnel preselection, data visualization by a dashboard means the analysis of the applicants&#x2019; data, including filtering irrelevant information, highlighting specific keywords, and assessing the applicants&#x2019; suitability for the job in the form of a ranking list or a diagram (<xref ref-type="bibr" rid="ref58">Raghavan et al., 2020</xref>; <xref ref-type="bibr" rid="ref34">Langer et al., 2021</xref>). As research in visual analytics emphasizes, how the analyzed data is presented, i.e., which data visualization format is chosen, is essential for enabling effective information processing by the user (<xref ref-type="bibr" rid="ref14">Cui, 2019</xref>).</p>
</sec>
<sec id="sec3">
<label>1.2.</label>
<title>Human information processing and automation bias</title>
<p>Due to ethical standards and regulations such as Article 22 of the GDPR, human oversight is demanded and, unless explicitly waived by applicants, legally required for AI-based personnel selection systems. Decision makers have to interact with the system to check recommendations and detect possible system errors. Previous research on AI-based decision support systems requiring human oversight highlighted the risk of automation bias (<xref ref-type="bibr" rid="ref15">Cummings, 2017</xref>). Automation bias describes the tendency of people to thoughtlessly accept an automated decision or recommendation. Thus far, automation bias and its negative outcomes have primarily been investigated in aviation contexts (e.g., <xref ref-type="bibr" rid="ref46">Mosier and Skitka, 1999</xref>; <xref ref-type="bibr" rid="ref16">Davis et al., 2020</xref>) and medical contexts (e.g., <xref ref-type="bibr" rid="ref24">Goddard et al., 2012</xref>; <xref ref-type="bibr" rid="ref38">Lyell et al., 2018</xref>), but have also been found in the military domain and in process control (<xref ref-type="bibr" rid="ref5">Bahner et al., 2008</xref>; <xref ref-type="bibr" rid="ref53">Parasuraman and Manzey, 2010</xref>) as well as in quality control (<xref ref-type="bibr" rid="ref30">Kloker et al., 2022</xref>). However, automation bias can occur in every work field that includes human-system interaction (<xref ref-type="bibr" rid="ref24">Goddard et al., 2012</xref>). In the case of AI-based personnel preselection systems, the occurrence of automation bias means that users neither review the data nor actively make the decision, and hence the legal requirement of human control over the personnel selection decision is violated. It is thus crucial to investigate factors that might intensify verification behavior and thereby prevent the occurrence of automation bias during the use of AI-based systems for personnel preselection.</p>
<p>To understand the origins of systematic distortions in human judgment, such as automation bias, it is important to take a closer look at human information processing. Several so-called &#x2018;dual-process theories&#x2019; have described human information processing as a process with two distinct underlying systems (for an overview see <xref ref-type="bibr" rid="ref21">Evans, 2011</xref>). These theories overlap considerably in their theoretical foundations; however, we specifically base our assumptions on the ELM (<xref ref-type="bibr" rid="ref55">Petty and Cacioppo, 1986</xref>), as it provides a comprehensive basis for our study and has been used to explain the acceptance of AI-based recommendations before (<xref ref-type="bibr" rid="ref44">Michels et al., 2022</xref>). The ELM describes how information processing occurs either <italic>via</italic> the peripheral route, which is characterized by fast, uncritical and heuristic information processing, or the central route, which describes thorough and systematic information processing. While the peripheral route is applied under time pressure or when limited or ambiguous information is available, the central route is engaged whenever decision makers have enough time and personal interest or motivation to critically process information (<xref ref-type="bibr" rid="ref55">Petty and Cacioppo, 1986</xref>).</p>
<p>Automation bias aligns with the peripheral route of information processing according to the ELM. The users thereby use the automation&#x2019;s recommendation as a heuristic replacement for thoughtful information seeking and processing (<xref ref-type="bibr" rid="ref46">Mosier and Skitka, 1999</xref>). As this uncritical acceptance of system recommendations is to be avoided, it is imperative to promote information processing on the central route when using AI-based systems for personnel preselection.</p>
</sec>
<sec id="sec4">
<label>1.3.</label>
<title>Automation bias and decision quality</title>
<p>The use of AI-based systems, such as dashboards, in personnel preselection aims to enhance the efficiency and the quality of the decision-making process (<xref ref-type="bibr" rid="ref34">Langer et al., 2021</xref>; <xref ref-type="bibr" rid="ref37">Li et al., 2021</xref>). High decision quality relies on the critical analysis of all applicants and the selection of the applicant who best matches the job requirements (<xref ref-type="bibr" rid="ref31">Kowalczyk and Gerlach, 2015</xref>; <xref ref-type="bibr" rid="ref34">Langer et al., 2021</xref>). With regard to the ELM, a systematic and critical elaboration of the applicants&#x2019; information is thereby crucial to ensure high decision quality (<xref ref-type="bibr" rid="ref31">Kowalczyk and Gerlach, 2015</xref>). Moreover, a dashboard serves as additional input for the decision maker, which helps mitigate the unconscious biases of the recruiter and increase the organization&#x2019;s diversity. If the systems are used as assistance and recommendations are critically scrutinized, the additional input might disrupt fast and heuristic decision making and encourage the user to review hastily overlooked applicants more carefully (<xref ref-type="bibr" rid="ref58">Raghavan et al., 2020</xref>; <xref ref-type="bibr" rid="ref37">Li et al., 2021</xref>). Of course, this is only true if AI-based systems are not biased themselves. AI-based systems trained with insufficient or distorted data will fail to make correct predictions (<xref ref-type="bibr" rid="ref29">Kim and Heo, 2022</xref>). However, the proposed European Union (EU) AI act (<xref ref-type="bibr" rid="ref20">European Commission, 2021</xref>) aims to prevent these cases by setting quality criteria for training, validation and testing data sets for AI-based systems in high-risk areas, such as personnel selection. Optimally, the combination of human and AI-based information processing leads to a less biased and more thorough decision-making process (<xref ref-type="bibr" rid="ref34">Langer et al., 2021</xref>).</p>
<p>One factor that affects user decision quality is the reliability and correctness of the system. If the decision recommended by the system is correct, users are more likely to efficiently make good decisions (<xref ref-type="bibr" rid="ref8">Brauner et al., 2016</xref>). However, if the system&#x2019;s recommendations are incorrect, the users&#x2019; decision quality is negatively affected. Users receiving incorrect advice show lower accuracy and longer decision times than people who did not receive any support (<xref ref-type="bibr" rid="ref9">Brauner et al., 2019</xref>). This impact on the decision quality can be explained by automation bias. Due to automation bias, users could either blindly follow the system&#x2019;s incorrect recommendation or check necessary information and still follow the incorrect advice of the system (<xref ref-type="bibr" rid="ref41">Manzey et al., 2012</xref>). This means that users do not systematically process the complete data but use the system&#x2019;s recommendation as a heuristic decision technique to avoid cognitive effort (<xref ref-type="bibr" rid="ref53">Parasuraman and Manzey, 2010</xref>). Therefore, automation bias, including not seeking out confirmatory or contradictory information, can lead the user to follow a recommendation, even if it is not the best choice (<xref ref-type="bibr" rid="ref46">Mosier and Skitka, 1999</xref>; <xref ref-type="bibr" rid="ref5">Bahner et al., 2008</xref>), resulting in poor decision quality. On the other hand, decision makers who thoroughly process available information and thus exhibit a high verification intensity should reach better decision quality.</p>
<disp-quote>
<p><italic>H1</italic>: Verification intensity indicators are positively associated with decision quality when using an imperfect system.</p>
</disp-quote>
</sec>
<sec id="sec5">
<label>1.4.</label>
<title>Factors to mitigate automation bias and foster decision quality</title>
<p>Several previous studies addressing decision makers&#x2019; views on AI-based personnel selection systems have identified technological, organizational, and environmental factors for successful deployment (e.g., <xref ref-type="bibr" rid="ref56">Pillai and Sivathanu, 2020</xref>; <xref ref-type="bibr" rid="ref52">Pan et al., 2022</xref>). However, those studies were cross-sectional surveys in companies on HR professionals&#x2019; perception of AI-based personnel preselection systems. They do not provide information about the actual interaction with the systems during work processes or about how good decision quality can be achieved. Additionally, automation bias has, to our knowledge, not yet been studied in the context of AI-based personnel preselection. However, strategies to avoid automation bias have been tested in other application areas, especially in the aviation and medical context. It was found that responsibility for overall performance or decision accuracy can reduce automation bias in flight simulations (<xref ref-type="bibr" rid="ref65">Skitka et al., 2000a</xref>). In another flight simulation study, joint decision-making in crews was compared with that of a single decision maker (<xref ref-type="bibr" rid="ref66">Skitka et al., 2000b</xref>). However, team decision-making did not prove to be a suitable strategy to reduce automation bias; both crews and single decision makers were equally subject to automation bias. In the same study, some participants were instructed about the phenomenon of automation bias and encouraged to verify the system. These participants performed better than participants in the control group and those who were prompted to verify the system (<xref ref-type="bibr" rid="ref66">Skitka et al., 2000b</xref>). Other studies with process control tasks had participants go through training in which they experienced that the supporting system was erroneous. This training led them to rely less on the system later in the test situation (<xref ref-type="bibr" rid="ref40">Manzey et al., 2006</xref>; <xref ref-type="bibr" rid="ref5">Bahner et al., 2008</xref>; <xref ref-type="bibr" rid="ref41">Manzey et al., 2012</xref>). A review by <xref ref-type="bibr" rid="ref24">Goddard et al. (2012)</xref> on automation bias and clinical decision support systems also emphasized responsibility, information and training as successful mitigation strategies. In addition, the design of the system, for example the dominant positioning of a recommendation on the screen, also had an impact on automation bias. To examine whether these mitigation strategies are also successful in the context of AI-based personnel selection, we conducted a work design study, focusing on organizational factors, i.e., information about system errors and responsibility, and technological design factors, i.e., the aggregation level of presented data (see <xref rid="fig1" ref-type="fig">Figure 1</xref>).</p>
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>Proposed research model.</p>
</caption>
<graphic xlink:href="fpsyg-14-1118723-g001.tif"/>
</fig>
</sec>
<sec id="sec6">
<label>1.5.</label>
<title>Information about system errors and automation bias</title>
<p>When introducing AI-based systems in personnel preselection contexts, it can be crucial to inform users about possible system errors and encourage them to reflect on system recommendations more thoroughly. According to the ELM, the credibility of the information source has an impact on whether the presented information is either scrutinized or accepted uncritically (<xref ref-type="bibr" rid="ref55">Petty and Cacioppo, 1986</xref>). In terms of using technology, the unawareness of the system&#x2019;s capacities, i.e., its reliability, can lead to an overestimation of the system&#x2019;s credibility as users might heuristically decide to trust a system without systematically evaluating its capacities (<xref ref-type="bibr" rid="ref10">Bu&#x00E7;inca et al., 2021</xref>). The overestimation of the system&#x2019;s capacities results in an inappropriately high level of trust and a heuristic reliance on the system, and thus enhances automation bias (<xref ref-type="bibr" rid="ref24">Goddard et al., 2012</xref>; <xref ref-type="bibr" rid="ref10">Bu&#x00E7;inca et al., 2021</xref>). In line with that, prior research showed that increasing the users&#x2019; awareness of system errors and weaknesses can decrease automation bias: Users who had already experienced system errors during an initial training session showed more verification behavior and thus less automation bias while later using the system (<xref ref-type="bibr" rid="ref40">Manzey et al., 2006</xref>; <xref ref-type="bibr" rid="ref5">Bahner et al., 2008</xref>). Consequently, making users aware of the system&#x2019;s capacity encourages them to process the system&#x2019;s recommendation more thoughtfully and check it more carefully. Therefore, users who are informed about potential system errors should show less automation bias in terms of higher verification intensity.</p>
<disp-quote>
<p><italic>H2</italic>: Participants who are made aware of system errors score higher on verification intensity indicators than the control group.</p>
</disp-quote>
</sec>
<sec id="sec7">
<label>1.6.</label>
<title>Information about system errors and decision quality</title>
<p>More information about the AI-based system&#x2019;s capacities, including its reliability, might stimulate a more critical investigation of the system&#x2019;s recommendations, which positively influences decision quality (<xref ref-type="bibr" rid="ref5">Bahner et al., 2008</xref>; <xref ref-type="bibr" rid="ref73">Wickens et al., 2015</xref>; <xref ref-type="bibr" rid="ref61">Sauer et al., 2016</xref>). As stated before, the unawareness of the system&#x2019;s capacities might result in an overreliance on the system and thereby a heuristic acceptance of its recommendations (<xref ref-type="bibr" rid="ref53">Parasuraman and Manzey, 2010</xref>; <xref ref-type="bibr" rid="ref10">Bu&#x00E7;inca et al., 2021</xref>). In the context of personnel preselection, decision makers might solely focus on best-ranked candidates while ignoring other lower-ranked but suitable candidates (<xref ref-type="bibr" rid="ref19">Endsley, 2017</xref>; <xref ref-type="bibr" rid="ref34">Langer et al., 2021</xref>). However, increasing the users&#x2019; awareness of the system&#x2019;s reliability and possible system errors might increase the users&#x2019; motivation to critically engage with all the available information (<xref ref-type="bibr" rid="ref5">Bahner et al., 2008</xref>; <xref ref-type="bibr" rid="ref61">Sauer et al., 2016</xref>; <xref ref-type="bibr" rid="ref19">Endsley, 2017</xref>). This systematic information processing enhances decision quality as the decision maker verifies the system&#x2019;s recommendation and is less likely to follow a wrong recommendation (<xref ref-type="bibr" rid="ref53">Parasuraman and Manzey, 2010</xref>; <xref ref-type="bibr" rid="ref31">Kowalczyk and Gerlach, 2015</xref>; <xref ref-type="bibr" rid="ref10">Bu&#x00E7;inca et al., 2021</xref>).</p>
<disp-quote>
<p><italic>H3</italic>: Participants who are made aware of system errors show a higher decision quality than the control group when using an imperfect system.</p>
</disp-quote>
</sec>
<sec id="sec8">
<label>1.7.</label>
<title>Responsibility and automation bias</title>
<p>Many guidelines, laws and regulations, such as the GDPR, demand human oversight and thus users must be made aware of their responsibility and accountability for the decision-making process and their obligation to monitor and control decisions from an AI-based system. Accountability and responsibility are two terms that are often used interchangeably but are in fact two distinct constructs. Accountability refers to a person&#x2019;s obligation to explain and justify their decision and often arises from legislative or organizational sources. Responsibility, however, is more strongly related to the duty of completing a certain task and can be taken on by individuals themselves. In the context of personnel preselection, an HR professional is responsible for the task of selecting qualified personnel and he or she can be held accountable for the decision (<xref ref-type="bibr" rid="ref2">Adensamer et al., 2021</xref>). We use the term responsibility in our study, as being held accountable for something also presumes being responsible for it in the first place.</p>
<p>One reason why automation bias might occur is the diffusion of responsibility mechanism. Diffusion of responsibility describes the psychological phenomenon of a decreased feeling of responsibility within a shared task as people unconsciously delegate their responsibility to their co-workers (<xref ref-type="bibr" rid="ref65">Skitka et al., 2000a</xref>). Diffusion of responsibility also occurs in tasks humans share with automatic systems (<xref ref-type="bibr" rid="ref65">Skitka et al., 2000a</xref>; <xref ref-type="bibr" rid="ref75">Zerilli et al., 2019</xref>). Consequently, people who share a decision-making task with an AI-based decision support system, feel less responsible for the decision and reduce their cognitive effort. This leads to a more heuristic and peripheral information processing, which increases automation bias (<xref ref-type="bibr" rid="ref65">Skitka et al., 2000a</xref>; <xref ref-type="bibr" rid="ref53">Parasuraman and Manzey, 2010</xref>).</p>
<p>Conversely, people who feel responsible for the outcome of the decision tend to critically engage with and scrutinize the given information (<xref ref-type="bibr" rid="ref55">Petty and Cacioppo, 1986</xref>). <xref ref-type="bibr" rid="ref65">Skitka et al. (2000a)</xref> found that increasing the person&#x2019;s responsibility for the decision can induce deeper information processing. People who were made responsible for the quality of the decision before the decision-making process engaged in more careful and deep information processing. This resulted in more verification of the system&#x2019;s recommendation and thus decreased automation bias. Therefore, we propose that people who are made responsible for a decision show less automation bias in terms of higher verification intensity.</p>
<disp-quote>
<p><italic>H4</italic>: Participants who are made aware of their responsibility for the decision score higher on verification intensity indicators than the control group.</p>
</disp-quote>
</sec>
<sec id="sec9">
<label>1.8.</label>
<title>Responsibility and decision quality</title>
<p>When sharing the selection task with an AI-based decision support system, decision makers might not attribute the decision outcome to their own effort (<xref ref-type="bibr" rid="ref49">Nolan et al., 2016</xref>). This reduced feeling of responsibility may lead to a decrease in motivation and cognitive effort, and consequently impair decision quality (<xref ref-type="bibr" rid="ref53">Parasuraman and Manzey, 2010</xref>). According to the ELM, the feeling of responsibility increases the central processing of given information. Therefore, a stronger feeling of responsibility for the outcome of the decision should lead to a more critical engagement with the information, resulting in a more careful decision-making process and higher decision quality. In line with this argument, <xref ref-type="bibr" rid="ref65">Skitka et al. (2000a)</xref> found that people who were specifically made responsible for the overall performance in a decision-making task made significantly better decisions than people who were not aware of their responsibility. Therefore, we propose that people who are made responsible for a decision show higher decision quality.</p>
<disp-quote>
<p><italic>H5</italic>: Participants who are made aware of their responsibility for the decision show a higher decision quality than the control group when using an imperfect system.</p>
</disp-quote>
</sec>
<sec id="sec10">
<label>1.9.</label>
<title>Level of data aggregation and automation bias</title>
<p>Drawing from the field of visual analytics, the amount and format of the data presented by an AI-based system, such as a dashboard, can have a significant impact on how users process the information and on the quality of the jointly reached decisions (<xref ref-type="bibr" rid="ref19">Endsley, 2017</xref>; <xref ref-type="bibr" rid="ref67">Sosulski, 2018</xref>). Presenting too much data at one point can negatively impact the readability and understandability of the data visualization. The user might not be able to filter the relevant information and understand the key message of the visualization correctly (<xref ref-type="bibr" rid="ref67">Sosulski, 2018</xref>). Conversely, presenting too little information, or information that is highly aggregated, can decrease transparency and limit critical elaboration of the data (<xref ref-type="bibr" rid="ref19">Endsley, 2017</xref>; <xref ref-type="bibr" rid="ref60">Sarikaya et al., 2019</xref>). Therefore, it is crucial to find the right level of data aggregation to enable an effective but reflected decision-making process.</p>
<p>AI-based data visualization refers to the dashboard&#x2019;s capability to screen a large amount of data, summarize it, and present only the most important information contained in the data (<xref ref-type="bibr" rid="ref54">Parasuraman et al., 2000</xref>; <xref ref-type="bibr" rid="ref60">Sarikaya et al., 2019</xref>). In the context of personnel preselection, this includes a summary of applicants&#x2019; qualifications and an assessment of their suitability for the position (<xref ref-type="bibr" rid="ref58">Raghavan et al., 2020</xref>). Such a summary might be highly aggregated, presenting only an overall matching score of the candidates&#x2019; suitability, or it might be less aggregated, presenting information on the candidates&#x2019; suitability in different areas such as qualification, abilities, and personality factors. According to the ELM, the presentation of strongly aggregated data might induce a more peripheral information processing, as presenting only specific parts of the data might lead users to pay less attention to the entire underlying data (<xref ref-type="bibr" rid="ref19">Endsley, 2017</xref>). Moreover, the presentation of a specific recommendation, for example, a ranking list, might lead the users to solely focus on the AI-based recommendation, e.g., the best-ranked applicants (<xref ref-type="bibr" rid="ref34">Langer et al., 2021</xref>). This means that the users reduce their information processing effort and use the system&#x2019;s recommendation as a heuristic to make a quick decision with relatively little cognitive effort. The reduced effort, however, increases automation bias (<xref ref-type="bibr" rid="ref53">Parasuraman and Manzey, 2010</xref>; <xref ref-type="bibr" rid="ref51">Onnasch et al., 2014</xref>). Thus, it can be argued that the display of more strongly aggregated data induces a heuristic information processing, which is expressed by accepting the recommended assessment without seeking and verifying background information, i.e., low verification intensity.</p>
<disp-quote>
<p><italic>H6</italic>: Participants who see highly aggregated data visualizations score lower on verification intensity indicators than participants who see less aggregated data visualizations.</p>
</disp-quote>
</sec>
<sec id="sec11">
<label>1.10.</label>
<title>Level of data aggregation and decision quality</title>
<p>The format of data visualization affects how users interpret the underlying data and thereby influences their decision-making process (<xref ref-type="bibr" rid="ref19">Endsley, 2017</xref>; <xref ref-type="bibr" rid="ref67">Sosulski, 2018</xref>). Highly aggregated data, such as a single matching score, might on the one hand increase the users&#x2019; efficiency, as it provides a simple overview of the applicants&#x2019; suitability for the position (<xref ref-type="bibr" rid="ref34">Langer et al., 2021</xref>). On the other hand, it decreases the users&#x2019; ability to validate the data. Therefore, a system error, i.e., an imperfect recommendation, might not be detected, resulting in the acceptance of a flawed decision (<xref ref-type="bibr" rid="ref54">Parasuraman et al., 2000</xref>; <xref ref-type="bibr" rid="ref3">Alberdi et al., 2009</xref>; <xref ref-type="bibr" rid="ref41">Manzey et al., 2012</xref>).</p>
<p>Moreover, highlighting information and visualizing this information in a highly aggregated form can be problematic, as users tend to strongly focus on the highlighted information while ignoring contradictory information (<xref ref-type="bibr" rid="ref3">Alberdi et al., 2009</xref>; <xref ref-type="bibr" rid="ref19">Endsley, 2017</xref>). This means that users do not critically engage with the total information, but solely focus on information which the system deemed relevant (<xref ref-type="bibr" rid="ref53">Parasuraman and Manzey, 2010</xref>; <xref ref-type="bibr" rid="ref19">Endsley, 2017</xref>). Hence, presenting a highly aggregated summary of the candidates&#x2019; suitability for a job position might encourage a peripheral and heuristic elaboration of the presented data as not all data is taken into consideration, which decreases the soundness of the decision (<xref ref-type="bibr" rid="ref31">Kowalczyk and Gerlach, 2015</xref>). Therefore, we propose that people who are presented with a highly aggregated data visualization, i.e., an overall matching score, show lower decision quality than people who are presented with a less aggregated data visualization, i.e., a 5-point rating of three key indicators.</p>
<disp-quote>
<p><italic>H7</italic>: Participants who see highly aggregated data visualizations show a lower decision quality than participants who see less aggregated data visualizations when using an imperfect system.</p>
</disp-quote>
</sec>
</sec>
<sec id="sec12" sec-type="methods">
<label>2.</label>
<title>Methods</title>
<sec id="sec13">
<label>2.1.</label>
<title>Research design</title>
<p>We conducted an experimental study using a 3&#x2009;&#x00D7;&#x2009;2 design, with the two between-subject factors system instruction (control group vs. error-awareness vs. responsibility) and data visualization (matching score vs. 5-point rating). The control group only received basic information about the dashboard and its functions. The error-awareness group additionally received more detailed information about the dashboard and a warning about possible system errors. The responsibility group received the basic information and information about their responsibility for the decision, as prescribed by the GDPR. They were told that they had to justify their decision at the end of the experiment. <xref rid="tab1" ref-type="table">Table 1</xref> provides the instruction texts of all groups. Concerning dashboard design (see <xref rid="fig2" ref-type="fig">Figure 2</xref>), the matching score group received an overall assessment of the candidates in the form of a percentage score referring to the suitability of the candidates for the position. The other group received a 5-point rating of the candidates&#x2019; suitability concerning three key indicators, namely education, abilities and personality.</p>
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption>
<p>Instruction presented to the participants in the different instruction conditions.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Condition</th>
<th align="left" valign="top">Instruction</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Control group</td>
<td align="left" valign="middle">This dashboard assesses applicants&#x2019; suitability for the position. The dashboard contains three different levels. The first level gives you an overview of all applicants, who applied for the position and their calculated suitability. The second level shows you more information about each applicant regarding their education, abilities, and personality. At the third level can access a protocol of a conversation between a chatbot and an applicant, in which the applicant answered questions about his or her personality. You can switch between the levels and the applicants any number of times.</td>
</tr>
<tr>
<td align="left" valign="top">Error-awareness group</td>
<td align="left" valign="middle">[In addition to the information of the control group]. Applicants&#x2019; information has been processed and evaluated through artificial intelligence. The information has been extracted from the application using intelligent language processing. An algorithm compared this information with the job requirements and calculated applicants&#x2019; suitability. The calculation results are presented in the graphs. Prior studies have shown that when using similar systems, errors might occur. Therefore, it is essential to verify the dashboard&#x2019;s assessment by checking all relevant information before decision-making.</td>
</tr>
<tr>
<td align="left" valign="top">Responsibility group</td>
<td align="left" valign="middle">[In addition to the information of the control group] Please note that the dashboard is a decision support system and does not make the final decision. Article 22 of the General Data Protection Regulation (GDPR) states that subjects (here the applicants) shall have the right not to be subject to a decision based solely on automated processing. This means that you are obligated to verify the dashboard&#x2019;s assessment. You are responsible for the selection decision.<break/>After the selection task, you will answer questions about the reasoning behind your selection decision.</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>Instruction texts are translated from German.</p>
</table-wrap-foot>
</table-wrap>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Dashboard level 1 &#x2013; highly aggregated matching score (above) and less aggregated 5-point rating (below).</p>
</caption>
<graphic xlink:href="fpsyg-14-1118723-g002.tif"/>
</fig>
</sec>
<sec id="sec14">
<label>2.2.</label>
<title>Participants</title>
<p>The sample size for the study was determined by an <italic>a priori</italic> power analysis using G&#x002A;Power (<xref ref-type="bibr" rid="ref22">Faul et al., 2009</xref>). Assuming a power of 1&#x2013;&#x03B2;&#x2009;=&#x2009;0.80 and an &#x03B1;-error of 0.05, we calculated a required sample size of 90 participants to reliably detect medium-sized effects of <italic>d</italic>&#x2009;=&#x2009;0.4. To account for potential dropouts, we recruited 100 participants <italic>via</italic> a student mail distribution list, flyers, and social media. Inclusion criteria for the study were the age of majority, the ability to understand German, and a general interest in personnel selection. Three participants were excluded from further analysis because they reported technical problems with the dashboard at the end of the study. After carefully checking the data, we removed an additional four participants as they had response times 1.5 SD below average in both experimental tasks. The final sample consisted of <italic>N</italic>&#x2009;=&#x2009;93 participants (68% female and 2% diverse). The majority (90%) were full-time students, of which 82% studied psychology and 12% studied business administration. The remaining participants were full-time employees. The mean age was 23&#x2009;years (SD&#x2009;=&#x2009;3.89). Ten participants reported prior experience in human resource management. Psychology students received course credits for participating in the study.</p>
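<p>For illustration, the reported sample size can be approximated with standard power-analysis software. The following minimal sketch assumes that the G&#x002A;Power calculation targeted an omnibus <italic>F</italic>-test across the six experimental cells, with the reported effect size of 0.4 read as Cohen&#x2019;s <italic>f</italic>; the Python package statsmodels is used here purely as an illustrative stand-in for G&#x002A;Power.</p>
<preformat>
# Minimal sketch of the a priori power analysis, assuming the calculation
# targeted an omnibus F-test across the six cells of the 3 x 2 design with
# an effect size of 0.4 read as Cohen's f (an assumption; the article
# reports the G*Power result of N = 90 directly).
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.4,  # assumed Cohen's f
    alpha=0.05,       # Type I error rate
    power=0.80,       # 1 - beta
    k_groups=6,       # 3 instruction x 2 visualization conditions
)
print(round(n_total))  # approximately 90 participants in total
</preformat>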
</sec>
<sec id="sec15">
<label>2.3.</label>
<title>Procedure</title>
<p>The study was conducted in a computer room at the university, where six participants could participate at the same time. Participants were randomly assigned to one of the six conditions. Participants were seated in front of a computer and were asked to read printed instructions. They were told to imagine themselves as HR professionals. Depending on the system instruction treatment, they either received basic information, information highlighting the potential error-proneness of the system, or information highlighting the participants&#x2019; responsibility for the decision. After reading the instruction, participants had to complete two personnel preselection tasks for two different positions using either the low or highly aggregated ranking of the dashboard. Everyone completed the tasks in the same order. The first task was filling the position of head of the marketing department. The second task was filling the position of branch manager of a psychosocial facility. Participants were provided with a printed job description (see <xref ref-type="supplementary-material" rid="SM1">Supplementary material</xref>) with the requirements for each position. Participants had to select five out of ten applicants for each task and rank them according to how likely they would be to hire them. For each task, we intentionally included three errors. Two of the ten applicants were overrated by the dashboard, as they did not fulfil an essential requirement, while one applicant was underrated because the dashboard did not recognize the applicant&#x2019;s academic title (&#x201C;Magister&#x201D;). Participants could make their selection choice and end the task at any time but had a maximum of ten minutes to complete each task. After finishing the two experimental tasks, participants had to complete a series of questionnaires outlined in the measures subsection below. At the end of the experiment, the responsibility group received a debriefing, as they did not have to justify their decision as announced in the instruction.</p>
</sec>
<sec id="sec16">
<label>2.4.</label>
<title>The dashboard</title>
<p>The dashboard was designed with the software Preely (<xref ref-type="bibr" rid="ref4">Testlab ApS, 2020</xref>). The main interface of the dashboard gave an overview of the ten applicants, an assessment of the applicants&#x2019; suitability for the position, and a few keywords from their CV (level 1, see <xref rid="fig2" ref-type="fig">Figure 2</xref>). By clicking on each applicant, the participants could access an overview of the applicants&#x2019; professional background (level 2, see <xref rid="fig3" ref-type="fig">Figure 3A</xref>). This overview contained more detailed information about the applicants&#x2019; education, prior work experience, and personality traits. From this interface, the participants could access an even more detailed interface for each key indicator (level 2 detail, see <xref rid="fig3" ref-type="fig">Figures 3B</xref>&#x2013;<xref rid="fig3" ref-type="fig">D</xref>). The detailed interface contained a radar chart displaying how well the applicants matched the job requirements regarding the key indicators. The dashboard was designed in such a way that the decision makers could quickly make a decision using the AI-based assessment at level 1, a realistic scenario in personnel selection. However, a decision based only on this assessment would mean that there would be no verification behavior by the decision makers. While level 1 provided an overview of all applicants, only level 2 provided enough information to thoroughly evaluate the applicants&#x2019; suitability for the position. Moreover, the integrated system errors could only be discovered at level 2. Therefore, level 2 had to be accessed to verify the dashboard&#x2019;s assessment. Proceeding from the level 2 interface, the participants could access a protocol of a conversation between a chatbot and the applicants, in which the applicants&#x2019; answers to questions from a personality inventory were displayed (level 3, see <xref rid="fig4" ref-type="fig">Figure 4</xref>). The participants were allowed to access every level and every applicant as often as they wanted. The dashboard was presented on screens with a resolution of 1,680&#x2009;&#x00D7;&#x2009;1,050 pixels.</p>
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption>
<p>Dashboard level 2 &#x2013; overview <bold>(A)</bold>, detail level education <bold>(B)</bold>, detail level abilities <bold>(C)</bold>, and detail level personality <bold>(D)</bold>.</p>
</caption>
<graphic xlink:href="fpsyg-14-1118723-g003.tif"/>
</fig>
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption>
<p>Dashboard level 3 &#x2013; chatbot protocol.</p>
</caption>
<graphic xlink:href="fpsyg-14-1118723-g004.tif"/>
</fig>
</sec>
<sec id="sec17">
<label>2.5.</label>
<title>Measures</title>
<p>All measurements were administered in German. Unless stated otherwise, all items were answered on a 5-point scale ranging from (1) <italic>strongly disagree</italic> to (5) <italic>strongly agree</italic>.</p>
<p>We included a manipulation check after the experimental tasks to verify whether the experimental manipulation was effective. One item measured the effect of the warning presented to the error-awareness group: &#x201C;I controlled the dashboard&#x2019;s assessment because I was aware that system errors might occur.&#x201D; Another item examined the feeling of responsibility: &#x201C;I controlled the dashboard&#x2019;s assessment as I felt responsible for the selection decision due to the GDPR.&#x201D;</p>
<p><italic>Verification intensity indicators</italic> were operationalized with three verification behavior variables: the time spent, the number of clicks, and the number of pages visited at each level during the decision-making process. All three were recorded with the software Preely (<xref ref-type="bibr" rid="ref4">Testlab ApS, 2020</xref>).</p>
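<p>For illustration, the following minimal Python sketch (not the authors&#x2019; pipeline; the log format and all column names are hypothetical, as Preely&#x2019;s export format is not documented here) shows how raw interaction logs could be aggregated into the three indicators per participant and dashboard level:</p>
<preformat>
# Hedged sketch: aggregate hypothetical click logs into the three
# verification intensity indicators (time, clicks, pages per level).
# Each row represents one click event; all names are illustrative.
import pandas as pd

log = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2],
    "level":       ["level2", "level2_detail", "level3", "level2", "level2"],
    "page":        ["cand_03", "cand_03_edu", "cand_03_chat", "cand_07", "cand_01"],
    "duration_s":  [42.0, 18.5, 12.0, 30.0, 25.5],
})

indicators = log.groupby(["participant", "level"]).agg(
    time_spent_s=("duration_s", "sum"),  # time spent per level
    n_clicks=("page", "size"),           # number of click events per level
    n_pages=("page", "nunique"),         # number of distinct pages visited
)
print(indicators)
</preformat>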
<p><italic>Decision quality</italic> was measured in an objective and a subjective way. <italic>Objective decision quality</italic> was assessed by the number of correctly selected applicants. Five out of the ten applicants were designed to be better suited for each position than the other five applicants. Participants received one point for each correctly selected applicant, resulting in a possible score from 0 to 5. In addition, we assessed <italic>subjective decision quality</italic> by asking participants to rate their performance on the tasks using four self-developed items (see <xref rid="tab2" ref-type="table">Table 2</xref>). A sample item is: &#x201C;With the help of the dashboard, I selected the most suitable applicants.&#x201D; The scale had an acceptable internal consistency (Cronbach &#x03B1;&#x2009;=&#x2009;0.73).</p>
<table-wrap position="float" id="tab2">
<label>Table 2</label>
<caption>
<p>Scale for subjective decision quality.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Nr.</th>
<th align="left" valign="top">Item</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">1</td>
<td align="left" valign="middle">With the help of the dashboard, I have invited the most suitable applicants for the position of head of the marketing department.</td>
</tr>
<tr>
<td align="left" valign="top">2</td>
<td align="left" valign="middle">With the help of the dashboard, I have invited the most suitable applicants for the position of branch manager of a psychosocial facility.</td>
</tr>
<tr>
<td align="left" valign="top">3</td>
<td align="left" valign="middle">I considered all the information before making a decision.</td>
</tr>
<tr>
<td align="left" valign="top">4</td>
<td align="left" valign="middle">I accepted the dashboard assessments without checking them. (&#x2212;)</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>Item Nr. 4 is inverted; high agreement indicates low subjective decision quality. All items are translated from German.</p>
</table-wrap-foot>
</table-wrap>
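<p>As an illustration of the scoring logic, the following sketch (simulated responses, not the study data) reverse-scores the inverted item 4 and computes Cronbach&#x2019;s &#x03B1; from the standard formula:</p>
<preformat>
# Illustrative only: with these simulated, uncorrelated responses alpha
# will be near zero; the reported alpha of 0.73 comes from the real data.
import numpy as np

rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(93, 4)).astype(float)  # 4 items, 1-5 scale

items[:, 3] = 6 - items[:, 3]  # reverse-score inverted item 4 (1 maps to 5)

# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / var(total score))
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha = {alpha:.2f}")
</preformat>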
<p>To control for confounding factors, we measured participants&#x2019; technical affinity and conscientiousness. <italic>Technical affinity</italic> was measured with the Affinity for Technology Interaction Scale (<xref ref-type="bibr" rid="ref23">Franke et al., 2019</xref>). The questionnaire was answered on a 6-point scale ranging from (1) <italic>strongly disagree</italic> to (6) <italic>strongly agree</italic> (Cronbach &#x03B1;&#x2009;=&#x2009;0.93). <italic>Conscientiousness</italic> was measured with the extra-short form of the Big-Five-Inventory-2 (<xref ref-type="bibr" rid="ref59">Rammstedt et al., 2013</xref>). The scale had an acceptable internal consistency (Cronbach &#x03B1;&#x2009;=&#x2009;0.76). In addition, we recorded participants&#x2019; gender (1&#x2009;=&#x2009;female), age (in years), current occupation, highest education, and prior experience in human resource management (1&#x2009;=&#x2009;yes).</p>
</sec>
</sec>
<sec id="sec18" sec-type="results">
<label>3.</label>
<title>Results</title>
<sec id="sec19">
<label>3.1.</label>
<title>Manipulation check</title>
<p>To test whether the experimental manipulation of the instruction was effective, we conducted two analyses of variance (ANOVA). The manipulation check showed a significant difference between the instruction groups concerning the awareness of system errors (<italic>F</italic>(2,57)&#x2009;=&#x2009;7.47, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.01, partial <italic>&#x03B7;</italic><sup>2</sup>&#x2009;=&#x2009;0.13). Dunnett&#x2019;s post-hoc tests revealed that participants of the error-awareness group (<italic>M</italic>&#x2009;=&#x2009;4.03, SD&#x2009;=&#x2009;1.10) were more aware of possible system errors than participants of the responsibility group (<italic>M</italic>&#x2009;=&#x2009;3.00, SD&#x2009;=&#x2009;1.39, <italic>M</italic><sub>diff</sub>&#x2009;=&#x2009;1.03, 95% CI [0.26, 1.80], <italic>p</italic>&#x2009;=&#x2009;0.01) or the control group (<italic>M</italic>&#x2009;=&#x2009;3.00, SD&#x2009;=&#x2009;1.44, <italic>M</italic><sub>diff</sub>&#x2009;=&#x2009;1.03, 95% CI [0.21, 1.85], <italic>p</italic>&#x2009;=&#x2009;0.01).</p>
<p>Moreover, there was a significant difference between the instruction groups concerning the feeling of responsibility (<italic>F</italic>(2,59)&#x2009;=&#x2009;5.53, <italic>p</italic>&#x2009;=&#x2009;0.02, partial <italic>&#x03B7;</italic><sup>2</sup>&#x2009;=&#x2009;0.09). Dunnett&#x2019;s post-hoc tests showed that participants of the responsibility group (<italic>M</italic>&#x2009;=&#x2009;3.50, SD&#x2009;=&#x2009;1.24) had a stronger feeling of responsibility than participants of the error-awareness group (<italic>M</italic>&#x2009;=&#x2009;2.70, SD&#x2009;=&#x2009;1.26, <italic>M</italic><sub>diff</sub>&#x2009;=&#x2009;0.83, 95% CI [0.04, 1.66], <italic>p</italic>&#x2009;=&#x2009;0.036) and the control group (<italic>M</italic>&#x2009;=&#x2009;2.64, SD&#x2009;=&#x2009;1.28, <italic>M</italic><sub>diff</sub>&#x2009;=&#x2009;0.86, 95% CI [0.05, 1.57], <italic>p</italic>&#x2009;=&#x2009;0.03).</p>
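<p>A minimal sketch of this analysis pattern (samples simulated from the reported group sizes, means, and SDs; <monospace>scipy.stats.dunnett</monospace> requires SciPy 1.11 or later):</p>
<preformat>
# Hedged sketch: one-way ANOVA across the three instruction groups,
# followed by Dunnett's post-hoc test against the control group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control        = rng.normal(3.0, 1.4, 28)  # simulated 5-point ratings
error_aware    = rng.normal(4.0, 1.1, 33)
responsibility = rng.normal(3.0, 1.4, 32)

f, p = stats.f_oneway(control, error_aware, responsibility)
print(f"ANOVA: F = {f:.2f}, p = {p:.3f}")

# Compare both treatment groups against the control group.
res = stats.dunnett(error_aware, responsibility, control=control)
print("Dunnett p-values:", res.pvalue)
</preformat>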
</sec>
<sec id="sec20">
<label>3.2.</label>
<title>Verification intensity and decision quality</title>
<p>To examine hypothesis 1, stating that verification intensity indicators would be positively associated with decision quality when using an imperfect system, we computed Pearson correlations. For this purpose, we correlated the verification intensity indicators indicative of automation bias, i.e., time spent, number of clicks, and pages visited, with objective and self-rated decision quality.</p>
<p>The verification intensity indicators correlated significantly with objective decision quality in both tasks for all level 2 interactions and for some level 3 interactions (see <xref rid="tab3" ref-type="table">Table 3</xref>). In general, the longer the time spent, the greater the number of clicks, and the greater the number of pages visited, the better the objective decision quality. Therefore, hypothesis 1 was partially supported in the case of objective decision quality.</p>
<table-wrap position="float" id="tab3">
<label>Table 3</label>
<caption>
<p>Pearson correlations between verification intensity indicators and objective and subjective decision quality.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Variable</th>
<th align="center" valign="top">Objective decision quality task 1</th>
<th align="center" valign="top">Objective decision quality task 2</th>
<th align="center" valign="top">Subjective decision quality</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top"><italic>Time</italic></td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top">Level 2</td>
<td align="char" valign="top" char=".">0.39&#x002A;&#x002A;</td>
<td align="char" valign="top" char=".">0.35&#x002A;&#x002A;</td>
<td align="char" valign="top" char=".">0.06</td>
</tr>
<tr>
<td align="left" valign="top">Level 2 detail</td>
<td align="char" valign="top" char=".">0.43&#x002A;&#x002A;</td>
<td align="char" valign="top" char=".">0.28&#x002A;&#x002A;</td>
<td align="char" valign="top" char=".">0.23&#x002A;</td>
</tr>
<tr>
<td align="left" valign="top">Level 3</td>
<td align="char" valign="top" char=".">0.13</td>
<td align="char" valign="top" char=".">0.18</td>
<td align="char" valign="top" char=".">0.08</td>
</tr>
<tr>
<td align="left" valign="top"><italic>Clicks</italic></td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top">Level 2</td>
<td align="char" valign="top" char=".">0.39&#x002A;&#x002A;</td>
<td align="char" valign="top" char=".">0.34&#x002A;&#x002A;</td>
<td align="char" valign="top" char=".">0.18</td>
</tr>
<tr>
<td align="left" valign="top">Level 2 detail</td>
<td align="char" valign="top" char=".">0.40&#x002A;&#x002A;</td>
<td align="char" valign="top" char=".">0.28&#x002A;&#x002A;</td>
<td align="char" valign="top" char=".">0.21&#x002A;</td>
</tr>
<tr>
<td align="left" valign="top">Level 3</td>
<td align="char" valign="top" char=".">0.12</td>
<td align="char" valign="top" char=".">0.23&#x002A;</td>
<td align="char" valign="top" char=".">0.05</td>
</tr>
<tr>
<td align="left" valign="top"><italic>Pages visited</italic></td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top">Level 2</td>
<td align="char" valign="top" char=".">0.42&#x002A;&#x002A;</td>
<td align="char" valign="top" char=".">0.36&#x002A;&#x002A;</td>
<td align="char" valign="top" char=".">0.11</td>
</tr>
<tr>
<td align="left" valign="top">Level 2 detail</td>
<td align="char" valign="top" char=".">0.49&#x002A;&#x002A;</td>
<td align="char" valign="top" char=".">0.34&#x002A;&#x002A;</td>
<td align="char" valign="top" char=".">0.21&#x002A;</td>
</tr>
<tr>
<td align="left" valign="top">Level 3</td>
<td align="char" valign="top" char=".">0.18</td>
<td align="char" valign="top" char=".">0.22&#x002A;</td>
<td align="char" valign="top" char=".">0.10</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>N</italic>&#x2009;=&#x2009;93; &#x002A;<italic>p</italic>&#x2009;&#x003C;&#x2009;0.05, &#x002A;&#x002A;<italic>p</italic>&#x2009;&#x003C;&#x2009;0.01 (two-tailed).</p>
</table-wrap-foot>
</table-wrap>
<p>For subjective decision quality, there were only significant correlations between self-rated decision quality and the time spent (<italic>r</italic>&#x2009;=&#x2009;0.23, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.05), the number of clicks (<italic>r</italic>&#x2009;=&#x2009;0.21, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.05), and the number of pages visited (<italic>r</italic>&#x2009;=&#x2009;0.21, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.05) at the level 2 detail interfaces (see <xref rid="tab3" ref-type="table">Table 3</xref>). Verification intensity indicators at other levels did not have significant associations with self-rated decision quality. Consequently, hypothesis 1 was partially supported for subjective decision quality.</p>
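<p>The correlational analysis in Table 3 follows a simple pattern that can be sketched as follows (simulated data, not the study data; variable names are illustrative):</p>
<preformat>
# Hedged sketch: Pearson correlation between one verification intensity
# indicator and objective decision quality (score 0-5), two-tailed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 93
time_level2 = rng.gamma(2.0, 90.0, n)                  # seconds at level 2
quality = np.clip(np.round(2.5 + 0.004 * time_level2   # loosely coupled,
                           + rng.normal(0, 0.8, n)), 0, 5)  # illustrative

r, p = stats.pearsonr(time_level2, quality)
print(f"r = {r:.2f}, p = {p:.3f} (two-tailed)")
</preformat>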
</sec>
<sec id="sec21">
<label>3.3.</label>
<title>Information about system errors, responsibility, and verification intensity</title>
<p>To test whether participants who were made aware of the occurrence of system errors (hypothesis 2) or made responsible for the decision (hypothesis 4) show less automation bias in terms of higher verification intensity indicators than participants of the control group, we conducted a multivariate analysis of variance (MANOVA) and subsequent ANOVAs. We controlled for interactions between the instruction and data aggregation conditions but did not find significant interaction effects.<xref rid="fn0004" ref-type="fn"><sup>1</sup></xref> For this purpose, we assessed the effect of the system instruction treatment on the verification intensity indicators of automation bias, again including the time spent at each level, the number of clicks, and the number of pages visited. <xref rid="tab4" ref-type="table">Table 4</xref> provides the means and standard deviations of the verification intensity indicators for each instruction group.</p>
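<p>A hedged sketch of such a MANOVA with statsmodels (simulated data; group sizes and variable names are illustrative, not the study data):</p>
<preformat>
# Hedged sketch: the three indicators modeled jointly as a function of
# instruction group, mirroring the omnibus test reported below.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "group": np.repeat(["control", "error_aware", "responsibility"], 31),
    "time_s": rng.gamma(2.0, 60.0, 93),
    "clicks": rng.poisson(30, 93),
    "pages": rng.poisson(8, 93),
})

mv = MANOVA.from_formula("time_s + clicks + pages ~ group", data=df)
print(mv.mv_test())  # Wilks' lambda, Pillai's trace, etc. per term
</preformat>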
<table-wrap position="float" id="tab4">
<label>Table 4</label>
<caption>
<p>Means and standard deviations of verification intensity indicators for each instruction group.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Task 1</th>
<th align="center" valign="top">Control group (<italic>n</italic>&#x2009;=&#x2009;28)</th>
<th align="center" valign="top">Error-awareness group (<italic>n</italic>&#x2009;=&#x2009;33)</th>
<th align="center" valign="top">Responsibility group (<italic>n</italic>&#x2009;=&#x2009;32)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top"><italic>Time (s)</italic></td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top">Level 2</td>
<td align="char" valign="top" char="(">234.35 (185.29)</td>
<td align="char" valign="top" char="(">121.05 (144.65)</td>
<td align="char" valign="top" char="(">206.80 (163.89)</td>
</tr>
<tr>
<td align="left" valign="top">Level 2 detail</td>
<td align="char" valign="top" char="(">47.74 (76.67)</td>
<td align="char" valign="top" char="(">73.77 (61.56)</td>
<td align="char" valign="top" char="(">54.51 (66.82)</td>
</tr>
<tr>
<td align="left" valign="top">Level 3</td>
<td align="char" valign="top" char="(">27.47 (37.89)</td>
<td align="char" valign="top" char="(">35.95 (41.59)</td>
<td align="char" valign="top" char="(">24.47 (26.82)</td>
</tr>
<tr>
<td align="left" valign="top"><italic>Clicks</italic></td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top">Level 2</td>
<td align="char" valign="top" char="(">27.54 (20.57)</td>
<td align="char" valign="top" char="(">33.30 (23.35)</td>
<td align="char" valign="top" char="(">32.00 (26.41)</td>
</tr>
<tr>
<td align="left" valign="top">Level 2 detail</td>
<td align="char" valign="top" char="(">12.46 (18.93)</td>
<td align="char" valign="top" char="(">22.09 (17.72)</td>
<td align="char" valign="top" char="(">14.50 (15.19)</td>
</tr>
<tr>
<td align="left" valign="top">Level 3</td>
<td align="char" valign="top" char="(">1.71 (2.39)</td>
<td align="char" valign="top" char="(">3.09 (4.03)</td>
<td align="char" valign="top" char="(">1.84 (2.16)</td>
</tr>
<tr>
<td align="left" valign="top"><italic>Pages visited</italic></td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top">Level 2</td>
<td align="char" valign="top" char="(">7.43 (4.11)</td>
<td align="char" valign="top" char="(">8.12 (3.73)</td>
<td align="char" valign="top" char="(">7.44 (4.31)</td>
</tr>
<tr>
<td align="left" valign="top">Level 2 detail</td>
<td align="char" valign="top" char="(">8.32 (9.05)&#x002A;</td>
<td align="char" valign="top" char="(">15.18 (11.06)&#x002A;</td>
<td align="char" valign="top" char="(">10.47 (10.43)</td>
</tr>
<tr>
<td align="left" valign="top">Level 3</td>
<td align="char" valign="top" char="(">1.43 (1.85)</td>
<td align="char" valign="top" char="(">2.76 (2.93)</td>
<td align="char" valign="top" char="(">1.66 (1.86)</td>
</tr>
<tr>
<td align="left" valign="top">Task 2</td>
<td align="char" valign="top" char="(">Control group</td>
<td align="char" valign="top" char="(">Error-awareness group</td>
<td align="char" valign="top" char="(">Responsibility group</td>
</tr>
<tr>
<td align="left" valign="top"><italic>Time (s)</italic></td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top">Level 2</td>
<td align="char" valign="top" char="(">284.61 (190.24)</td>
<td align="char" valign="top" char="(">244.29 (135.80)</td>
<td align="char" valign="top" char="(">268.59 (172.96)</td>
</tr>
<tr>
<td align="left" valign="top">Level 2 detail</td>
<td align="char" valign="top" char="(">44.25 (62.69)</td>
<td align="char" valign="top" char="(">60.50 (54.45)</td>
<td align="char" valign="top" char="(">48.10 (67.10)</td>
</tr>
<tr>
<td align="left" valign="top">Level 3</td>
<td align="char" valign="top" char="(">15.22 (19.64) &#x002A;</td>
<td align="char" valign="top" char="(">32.22 (31.97) &#x002A;</td>
<td align="char" valign="top" char="(">17.77 (20.34)</td>
</tr>
<tr>
<td align="left" valign="top"><italic><italic>Clicks</italic>
</italic></td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top">Level 2</td>
<td align="char" valign="top" char="(">28.89 (20.69)</td>
<td align="char" valign="top" char="(">34.82 (23.86)</td>
<td align="char" valign="top" char="(">34.50 (29.84)</td>
</tr>
<tr>
<td align="left" valign="top">Level 2 detail</td>
<td align="char" valign="top" char="(">13.71 (20.01)</td>
<td align="char" valign="top" char="(">21.42 (18.28)</td>
<td align="char" valign="top" char="(">12.13 (13.07)</td>
</tr>
<tr>
<td align="left" valign="top">Level 3</td>
<td align="char" valign="top" char="(">1.32 (1.83)&#x002A;</td>
<td align="char" valign="top" char="(">2.91 (2.92)&#x002A;</td>
<td align="char" valign="top" char="(">1.91 (2.31)</td>
</tr>
<tr>
<td align="left" valign="top"><italic>Pages visited</italic></td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top">Level 2</td>
<td align="char" valign="top" char="(">8.25 (3.60)</td>
<td align="char" valign="top" char="(">8.58 (3.33)</td>
<td align="char" valign="top" char="(">8.22 (3.65)</td>
</tr>
<tr>
<td align="left" valign="top">Level 2 detail</td>
<td align="char" valign="top" char="(">8.43 (9.61)</td>
<td align="char" valign="top" char="(">14.76 (12.04)</td>
<td align="char" valign="top" char="(">9.91 (10.13)</td>
</tr>
<tr>
<td align="left" valign="top">Level 3</td>
<td align="char" valign="top" char="(">1.21 (1.62)&#x002A;</td>
<td align="char" valign="top" char="(">2.33 (2.31)&#x002A;</td>
<td align="char" valign="top" char="(">1.75 (2.00)</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>Time measures are expressed in seconds. &#x002A;Means are significantly different at <italic>p</italic>&#x2009;&#x003C;&#x2009;0.05.</p>
</table-wrap-foot>
</table-wrap>
<p>For the first task, the time spent at each level (<italic>F</italic>(2,87)&#x2009;=&#x2009;1.20, <italic>p</italic>&#x2009;=&#x2009;0.18) and the number of clicks (<italic>F</italic>(2,87)&#x2009;=&#x2009;2.69, <italic>p</italic>&#x2009;=&#x2009;0.07) did not differ significantly between the instruction groups. However, a significant difference between the groups was found for the number of pages visited (<italic>F</italic>(2,60)&#x2009;=&#x2009;3.59, <italic>p</italic>&#x2009;=&#x2009;0.03, partial <italic>&#x03B7;</italic><sup>2</sup>&#x2009;=&#x2009;0.07). The error-awareness group visited more pages at the level 2 detail interface (<italic>M</italic>&#x2009;=&#x2009;15.18, SD&#x2009;=&#x2009;11.06) than the control group (<italic>M</italic>&#x2009;=&#x2009;8.32, SD&#x2009;=&#x2009;9.05, <italic>M</italic><sub>diff</sub>&#x2009;=&#x2009;6.86, 95% CI [0.67, 13.05], <italic>p</italic>&#x2009;=&#x2009;0.03, <italic>d</italic>&#x2009;=&#x2009;0.67). There was no significant difference between the responsibility group and the control group (<italic>M</italic><sub>diff</sub>&#x2009;=&#x2009;2.15, 95% CI [&#x2212;3.90, 8.20], <italic>p</italic>&#x2009;=&#x2009;0.67).</p>
<p>For the second task, significant differences between the instruction groups were found for the time spent at each level (<italic>F</italic>(2,59)&#x2009;=&#x2009;3.35, <italic>p</italic>&#x2009;=&#x2009;0.04, partial <italic>&#x03B7;</italic><sup>2</sup>&#x2009;=&#x2009;0.09), the number of clicks (<italic>F</italic>(2,60)&#x2009;=&#x2009;3.32, <italic>p</italic>&#x2009;=&#x2009;0.04, <italic>&#x03B7;</italic><sup>2</sup>&#x2009;=&#x2009;0.07), and the number of pages visited (<italic>F</italic>(6, 172)&#x2009;=&#x2009;2.03, <italic>p</italic>&#x2009;=&#x2009;0.06, <italic>&#x03B7;</italic><sup>2</sup>&#x2009;=&#x2009;0.07). The error-awareness group spent significantly more time at level 3 (<italic>M</italic><sub>diff</sub>&#x2009;=&#x2009;17.00, 95% CI [0.88, 33.12], <italic>p</italic>&#x2009;=&#x2009;0.04, <italic>d</italic>&#x2009;=&#x2009;0.63), made significantly more clicks at level 3 (<italic>M</italic><sub>diff</sub>&#x2009;=&#x2009;1.59, 95% CI [0.11, 3.07], <italic>p</italic>&#x2009;=&#x2009;0.04, <italic>d</italic>&#x2009;=&#x2009;0.64), and visited significantly more pages at level 3 (<italic>M</italic><sub>diff</sub>&#x2009;=&#x2009;1.12, 95% CI [0.07, 3.10], <italic>p</italic>&#x2009;=&#x2009;0.04, <italic>d</italic>&#x2009;=&#x2009;0.55) than the control group. These effects remained significant after adjusting for technical affinity and conscientiousness through analysis of covariance (<italic>F</italic>(2,88)&#x2009;=&#x2009;3.48, <italic>p</italic>&#x2009;=&#x2009;0.04, partial <italic>&#x03B7;</italic><sup>2</sup>&#x2009;=&#x2009;0.07). However, there were no significant differences between the responsibility group and the control group with regard to the time spent at each level (<italic>M</italic><sub>diff</sub>&#x2009;=&#x2009;2.55, 95% CI [&#x2212;9.88, 14.98], <italic>p</italic>&#x2009;=&#x2009;0.88), the number of clicks (<italic>M</italic><sub>diff</sub>&#x2009;=&#x2009;0.59, 95% CI [&#x2212;0.70, 1.87], <italic>p</italic>&#x2009;=&#x2009;0.52), or the number of pages visited (<italic>M</italic><sub>diff</sub>&#x2009;=&#x2009;0.59, 95% CI [&#x2212;0.70, 1.87], <italic>p</italic>&#x2009;=&#x2009;0.52).</p>
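<p>As a worked check of the reported effect sizes, the following sketch reproduces Cohen&#x2019;s <italic>d</italic>&#x2009;=&#x2009;0.63 for time spent at level 3 in task 2 from the group means and SDs in Table 4; the sums of squares used for partial <italic>&#x03B7;</italic><sup>2</sup> are hypothetical:</p>
<preformat>
# Worked sketch of the two effect sizes reported above.
import numpy as np

# partial eta^2 = SS_effect / (SS_effect + SS_error); values hypothetical
ss_effect, ss_error = 12.0, 150.0
eta_p2 = ss_effect / (ss_effect + ss_error)

# Cohen's d from means, SDs, and group sizes in Table 4 (task 2, level 3)
m1, s1, n1 = 32.22, 31.97, 33   # error-awareness group
m2, s2, n2 = 15.22, 19.64, 28   # control group
pooled_sd = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = (m1 - m2) / pooled_sd
print(f"partial eta^2 = {eta_p2:.2f}, d = {d:.2f}")  # d = 0.63 as reported
</preformat>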
<p>To sum up, decision makers of the error-awareness group tended to score higher on verification intensity indicators, which means they showed less automation bias. While this tendency appeared for all indicators at all levels, only the number of pages visited at the level 2 detail interface in the first task and all indicators at level 3 in the second task differed significantly between the groups. Thus, hypothesis 2 was partly supported. However, the responsibility group did not significantly differ from the control group in their verification behavior. Thus, hypothesis 4 had to be rejected.</p>
</sec>
<sec id="sec22">
<label>3.4.</label>
<title>Information about system errors, responsibility, and decision quality</title>
<p>To test whether participants who were made aware of the occurrence of system errors (hypothesis 3) or made responsible for the decision (hypothesis 5) showed a higher decision quality than the control group when using an imperfect system, we conducted two ANOVAs, one for objective and one for subjective decision quality. Again, we controlled for interactions between the instruction and data aggregation conditions, none of which were significant.</p>
<p>For objective decision quality, i.e., the number of correctly selected applicants, no significant difference between the system instruction groups was found, neither in the first task (<italic>F</italic>(2,90)&#x2009;=&#x2009;1.22, <italic>p</italic>&#x2009;=&#x2009;0.30) nor in the second task (<italic>F</italic>(2,90)&#x2009;=&#x2009;1.20, <italic>p</italic>&#x2009;=&#x2009;0.32). In both tasks, the error-awareness group (<italic>M</italic><sub>task1</sub>&#x2009;=&#x2009;4.39, SD<sub>task1</sub>&#x2009;=&#x2009;0.86; <italic>M</italic><sub>task2</sub>&#x2009;=&#x2009;4.39, SD<sub>task2</sub>&#x2009;=&#x2009;0.86) and the responsibility group (<italic>M</italic><sub>task1</sub>&#x2009;=&#x2009;4.16, SD<sub>task1</sub>&#x2009;=&#x2009;0.72; <italic>M</italic><sub>task2</sub>&#x2009;=&#x2009;4.19, SD<sub>task2</sub>&#x2009;=&#x2009;0.64) selected as many correct applicants as the control group (<italic>M</italic><sub>task1</sub>&#x2009;=&#x2009;4.10, SD<sub>task1</sub>&#x2009;=&#x2009;0.74; <italic>M</italic><sub>task2</sub>&#x2009;=&#x2009;4.46, SD<sub>task2</sub>&#x2009;=&#x2009;0.64). Thus, hypotheses 3 and 5 had to be rejected for objective decision quality.</p>
<p>However, for subjective decision quality, i.e., self-rated decision quality assessed at the end of the experiment, a significant difference between the instruction groups was found (<italic>F</italic>(2,90)&#x2009;=&#x2009;4.08, <italic>p</italic>&#x2009;=&#x2009;0.02, partial <italic>&#x03B7;</italic><sup>2</sup>&#x2009;=&#x2009;0.08). Post-hoc testing revealed that the error-awareness group rated their decision quality significantly higher (<italic>M</italic>&#x2009;=&#x2009;4.26, SD&#x2009;=&#x2009;0.67) than the control group (<italic>M</italic>&#x2009;=&#x2009;3.72, SD&#x2009;=&#x2009;1.06, <italic>M</italic><sub>diff</sub>&#x2009;=&#x2009;0.54, 95% CI [0.03, 1.01], <italic>p</italic>&#x2009;=&#x2009;0.05, <italic>d</italic>&#x2009;=&#x2009;0.62). This effect remained significant after controlling for conscientiousness through an ANCOVA (<italic>F</italic>(2,89)&#x2009;=&#x2009;4.63, <italic>p</italic>&#x2009;=&#x2009;0.01, partial <italic>&#x03B7;</italic><sup>2</sup>&#x2009;=&#x2009;0.09). Consequently, hypothesis 3 was supported for subjective decision quality. Again, there was no significant difference between the responsibility group (<italic>M</italic>&#x2009;=&#x2009;4.16, SD&#x2009;=&#x2009;0.51) and the control group (<italic>M</italic>&#x2009;=&#x2009;3.72, SD&#x2009;=&#x2009;1.06, <italic>M</italic><sub>diff</sub>&#x2009;=&#x2009;0.44, 95% CI [&#x2212;0.10, 0.97], <italic>p</italic>&#x2009;=&#x2009;0.13). Consequently, hypothesis 5 had to be rejected for subjective decision quality.</p>
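<p>A minimal ANCOVA sketch (simulated data, not the study data) of the covariance adjustment described above, with conscientiousness as the covariate:</p>
<preformat>
# Hedged sketch: subjective decision quality regressed on instruction
# group plus the conscientiousness covariate, then a type-II ANOVA table.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "group": np.repeat(["control", "error_aware", "responsibility"], 31),
    "conscientiousness": rng.normal(3.5, 0.6, 93),
})
df["subj_quality"] = (3.7 + 0.5 * (df["group"] == "error_aware")
                      + 0.2 * df["conscientiousness"]
                      + rng.normal(0, 0.8, 93))

model = smf.ols("subj_quality ~ C(group) + conscientiousness", data=df).fit()
print(anova_lm(model, typ=2))  # F-test for group, adjusted for the covariate
</preformat>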
</sec>
<sec id="sec23">
<label>3.5.</label>
<title>Data aggregation and verification intensity</title>
<p>To test hypothesis 6, postulating that participants who receive a more aggregated data visualization show a stronger automation bias, i.e., lower scores on verification intensity indicators, than participants who are presented with a less aggregated data visualization, we conducted a MANOVA and subsequent <italic>t</italic>-tests for independent samples. For this, we examined differences between the data visualization groups in the verification intensity indicators, including the time spent at each level, the number of clicks, and the number of pages visited.</p>
<p>For time spent at level 1, significant differences between the data visualization groups were found in the first task (<italic>t</italic>(78)&#x2009;=&#x2009;2.74, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.01, <italic>d</italic>&#x2009;=&#x2009;0.57) and the second task (<italic>t</italic>(69)&#x2009;=&#x2009;2.10, <italic>p</italic>&#x2009;=&#x2009;0.04, <italic>d</italic>&#x2009;=&#x2009;0.44). The group with the highly aggregated data visualization spent significantly less time inspecting the level 1 interface (<italic>M</italic><sub>task1</sub>&#x2009;=&#x2009;106.28&#x2009;s, SD<sub>task1</sub>&#x2009;=&#x2009;99.97; <italic>M</italic><sub>task2</sub>&#x2009;=&#x2009;62.96&#x2009;s, SD<sub>task2</sub>&#x2009;=&#x2009;63.58) than the group with the 5-point rating of the key indicators (<italic>M</italic><sub>task1</sub>&#x2009;=&#x2009;180.93&#x2009;s, SD<sub>task1</sub>&#x2009;=&#x2009;157.43; <italic>M</italic><sub>task2</sub>&#x2009;=&#x2009;105.78&#x2009;s, SD<sub>task2</sub>&#x2009;=&#x2009;123.27). However, no significant differences between the data visualization groups were found in the time spent at the other levels, the number of clicks, or the number of pages visited (<italic>F</italic>(3,85)&#x2009;=&#x2009;0.23, <italic>p</italic>&#x2009;=&#x2009;0.87), indicating no difference in verification intensity and thus no difference in the tendency toward automation bias. Thus, hypothesis 6 was not supported.</p>
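<p>A sketch of this comparison (samples simulated from the reported task 1 means and SDs; the group sizes of 47 and 46 are assumptions, as the exact split is not reported here):</p>
<preformat>
# Hedged sketch: independent-samples t-test for time spent at level 1
# between the two data visualization groups (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
high_agg = rng.normal(106.28, 99.97, 47)   # highly aggregated visualization
low_agg  = rng.normal(180.93, 157.43, 46)  # 5-point rating of key indicators

t, p = stats.ttest_ind(high_agg, low_agg)  # Student's t, equal variances
print(f"t = {t:.2f}, p = {p:.3f}")
</preformat>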
</sec>
<sec id="sec24">
<label>3.6.</label>
<title>Data aggregation and decision quality</title>
<p>To examine hypothesis 7, stating that participants who were presented with a more aggregated data visualization would have a lower decision quality than participants who were presented with a less aggregated data visualization when using an imperfect system, we conducted three <italic>t</italic>-tests for independent samples.</p>
<p>For objective decision quality, i.e., the number of correctly selected applicants, no significant difference between the data visualization groups was found in the first task (<italic>t</italic>(91)&#x2009;=&#x2009;&#x2212;0.63, <italic>p</italic>&#x2009;=&#x2009;0.53). The group with the overall matching score selected as many correct applicants (<italic>M</italic>&#x2009;=&#x2009;4.17, SD&#x2009;=&#x2009;0.74) as the group with a 5-point rating of the key indicators (<italic>M</italic>&#x2009;=&#x2009;4.28, SD&#x2009;=&#x2009;0.83). Similarly, no significant difference in the number of correctly selected applicants between the data visualization groups was found in the second task (<italic>t</italic>(91)&#x2009;=&#x2009;1.09, <italic>p</italic>&#x2009;=&#x2009;0.28). Again, the group with the overall matching score (<italic>M</italic>&#x2009;=&#x2009;4.26, SD&#x2009;=&#x2009;0.77) selected as many correct applicants as the group with a 5-point rating of the key indicators (<italic>M</italic>&#x2009;=&#x2009;4.43, SD&#x2009;=&#x2009;0.68). Thus, hypothesis 7 had to be rejected with regard to objective decision quality.</p>
<p>For subjective decision quality, i.e., self-rated decision quality assessed at the end of the experiment, no significant difference between the data visualization groups was found (<italic>t</italic>(91)&#x2009;=&#x2009;&#x2212;0.03, <italic>p</italic>&#x2009;=&#x2009;0.98). Participants who were presented with an overall matching score (<italic>M</italic>&#x2009;=&#x2009;4.06, SD&#x2009;=&#x2009;0.77) rated their decision quality just as highly as participants in the group with the 5-point rating of the key indicators (<italic>M</italic>&#x2009;=&#x2009;4.05, SD&#x2009;=&#x2009;0.82). Thus, hypothesis 7 also had to be rejected with regard to subjective decision quality.</p>
</sec>
</sec>
<sec id="sec25" sec-type="discussions">
<label>4.</label>
<title>Discussion</title>
<p>Given the importance of human oversight in AI-supported decision-making in high-risk use cases, this study focused on counteracting automation bias in the context of AI-based personnel preselection. We investigated how different organizational and technological design factors of an AI-based dashboard for personnel preselection influenced decision makers&#x2019; behavior concerning different verification intensity indicators and decision quality. Our experimental study showed that decision makers who scored lower on verification intensity indicators (i.e., less time spent on pages, lower number of clicks and pages visited), and thus had higher automation bias, selected fewer correct applicants. Lower scores on verification intensity indicators were associated with lower subjective decision quality. Organizational factors partially influenced verification intensity and decision quality: Information about system errors led in part to higher scores on verification intensity indicators and higher subjective decision quality, but unexpectedly not to higher objective decision quality. Contrary to our expectations, responsibility for the decision did not lead to higher scores on verification intensity indicators or higher objective and subjective decision quality. Data aggregation, as a design factor, did influence verification intensity at level 1 of the dashboard. Decision makers who viewed the more aggregated dashboard design spent less time at level 1 than those who viewed more detailed information at level 1. However, no differences in other verification intensity indicators and in objective and subjective decision quality were found.</p>
<p>Our study contributes to the literature on AI-based decision support systems by demonstrating the risk of automation bias in the context of AI-based personnel preselection. Automation bias has been found to lead to adverse effects on decision outcomes in several other contexts (e.g., <xref ref-type="bibr" rid="ref53">Parasuraman and Manzey, 2010</xref>; <xref ref-type="bibr" rid="ref24">Goddard et al., 2012</xref>). This underscores the importance of identifying strategies to avoid this bias in AI-based personnel preselection as well. To the best of our knowledge, our study is the first to explore strategies that have previously been investigated in other application areas, such as raising responsibility and raising awareness about system errors (<xref ref-type="bibr" rid="ref75">Zerilli et al., 2019</xref>), in the context of personnel preselection. If decision makers do not verify a system recommendation sufficiently, they might exclude suitable candidates from the personnel selection process, which is not only a loss for the organization, but also seriously affects candidates&#x2019; professional lives. From a legal perspective, these candidates could claim that they are being screened out by automated profiling due to insufficient human oversight.</p>
<p>Moreover, lower verification intensity is also partially connected with lower subjective decision quality. This means that decision makers who do not check detailed candidate information and simply follow system recommendations, thus engaging in heuristic information processing, do not believe in their own good performance, i.e., decision quality. <xref ref-type="bibr" rid="ref34">Langer et al. (2021)</xref> also found that decision makers who received an automated ranking of candidates before they could even process candidate information themselves were less satisfied with their decision and had a lower feeling of self-efficacy compared to those who first processed candidate information and received an automated ranking later on. Possibly, decision makers who do not engage in thorough information processing along the central route, but rather take the peripheral, heuristic route and follow system recommendations, do not feel they have contributed to the decision, which could be reflected in dissatisfaction with decision quality. Our study thus indicates possible detrimental effects on decision makers supported by AI-based systems that have previously been described in the literature on AI-based system use. <xref ref-type="bibr" rid="ref11">Burton et al. (2020)</xref> attribute the misuse of AI-based systems partly to unaddressed psychological needs of decision makers, like agency, autonomy, or control.</p>
<p>Furthermore, our study contributes to research on automation bias by using the ELM (<xref ref-type="bibr" rid="ref55">Petty and Cacioppo, 1986</xref>) to provide a solid theoretical foundation for understanding automation bias avoidance strategies. We found evidence that information about system errors influences decision makers&#x2019; verification intensity in the expected direction: decision makers who knew about possible system errors sought out more detailed information about candidates. Knowing that the system is not 100% reliable encourages users to critically check recommendations and to use the central, systematic route of information processing instead of the peripheral, heuristic route (<xref ref-type="bibr" rid="ref55">Petty and Cacioppo, 1986</xref>). Previous studies demonstrated that decision makers who encountered system errors subsequently showed more verification behavior (<xref ref-type="bibr" rid="ref40">Manzey et al., 2006</xref>; <xref ref-type="bibr" rid="ref5">Bahner et al., 2008</xref>). Accordingly, the same experimental group rated their subjective decision quality higher, which reflects the increased effort they put into decision-making. However, this group did not select more suitable applicants than the control group.</p>
<p>Contrary to our expectations, we did not find an effect of the responsibility condition on verification intensity indicators or on objective and subjective decision quality. One potential explanation comes from prior research showing that performance improves when decision makers are responsible for the decision-making process, but not when they are responsible for the decision-making outcome (<xref ref-type="bibr" rid="ref17">Doney and Armstrong, 1995</xref>; <xref ref-type="bibr" rid="ref63">Siegel-Jacobs and Yates, 1996</xref>). Additionally, <xref ref-type="bibr" rid="ref65">Skitka et al. (2000a)</xref> found in their studies that it is difficult to manipulate responsibility in experiments, as participants expect to be evaluated in experimental settings, partly due to instructions that are designed to encourage participants to take the experimental task seriously. Such evaluation concerns might have raised feelings of responsibility beyond those elicited in the experimental group (which was informed about legal requirements due to the GDPR) and were thus not captured by our manipulation check.</p>
<p>Lastly, we contribute to the literature on visual analytics (<xref ref-type="bibr" rid="ref14">Cui, 2019</xref>) by providing an evaluation of different dashboard visualizations concerning the effect of data aggregation on automation bias in terms of verification intensity and decision quality. Decision makers who received a highly aggregated matching score spent significantly less time on the first level of the dashboard than the group who received the less aggregated 5-point rating of three key indicators. This finding suggests that the highly aggregated score did not convey sufficient information to fulfil the tasks, because participants of this group quickly switched to the other levels that presented more detailed information. However, this result could also reflect the cognitive effort required by decision makers to process more information in the less aggregated group compared to the highly aggregated group. We found no differences in other verification intensity indicators (i.e., on other system levels). This is in line with previous research, where users in a simulated process control task reduced the verification of additional parameters over time, but still checked the necessary parameters (<xref ref-type="bibr" rid="ref41">Manzey et al., 2012</xref>). In our study, decision makers of both groups were able to change levels and actively access more detailed information, so both groups were able to achieve the same decision quality. Therefore, no differences in objective and subjective decision quality were found between the high and low aggregated design. However, according to <xref ref-type="bibr" rid="ref74">Yigitbasioglu and Velcu (2012)</xref>, a good fit between data visualization and users&#x2019; information needs, as well as a balance between complexity and utility of the information visualization, are required to enable effective information processing by dashboard users. Our findings help to understand the needs of decision makers regarding the level of data aggregation in AI-based decision support systems. They suggest that highly aggregated information does not provide added value to decision makers and thus should be avoided.</p>
<sec id="sec26">
<label>4.1.</label>
<title>Practical implications</title>
<p>As described above, the highly aggregated design did not lead to peripheral, heuristic information processing and thus to less verification intensity. Instead, we observed that decision makers ignored level 1 of the dashboard with the aggregated score and searched for further information on the other levels. These results emphasize that highly aggregated data alone are not enough for decision-making and that AI-based systems should give decision makers the option of accessing detailed information. To present information as parsimoniously as possible, we suggest that highly aggregated data be avoided, because they often do not convey sufficient information to reach a decision and can lead to oversimplification and automation bias.</p>
<p>In addition to technological design factors of AI-based personnel preselection tools, companies can adopt organizational strategies to reduce automation bias and promote high verification intensity. In her ethical framework for AI in human resource management, <xref ref-type="bibr" rid="ref6">Bankins (2021)</xref> proposes that organizations must align actual AI use with its intended use by instructing employees on how to interact with and rely on AI. Based on our results, we suggest that organizations inform decision makers about the actual capabilities of AI-based systems and raise their awareness of system errors to encourage high verification intensity. Since automation bias tends to occur especially when the system is perceived as highly reliable (<xref ref-type="bibr" rid="ref75">Zerilli et al., 2019</xref>) and has been working error-free for a long time, i.e., has low failure rates (<xref ref-type="bibr" rid="ref53">Parasuraman and Manzey, 2010</xref>), companies should make decision makers aware of possible system failures not once, but on a regular basis. However, only informing users of possible errors might not reduce automation bias sufficiently. Experiencing system failures can have a stronger impact on user behavior (<xref ref-type="bibr" rid="ref5">Bahner et al., 2008</xref>). Thus, an alternative strategy is to deliberately program errors into AI-based decision support systems and point them out when they are overlooked. This way, the design of the system can support the attention of decision makers by varying reliability over time (<xref ref-type="bibr" rid="ref24">Goddard et al., 2012</xref>; <xref ref-type="bibr" rid="ref75">Zerilli et al., 2019</xref>).</p>
</sec>
<sec id="sec27">
<label>4.2.</label>
<title>Limitations and future research</title>
<p>As with other research, this study is not without limitations. First, participants were students and not actual HR professionals. HR professionals might utilize a system for decision support differently, as prior experience with personnel selection tasks is related to higher confidence in one&#x2019;s own decisions, resulting in a lower reliance on the system and less automation bias (<xref ref-type="bibr" rid="ref24">Goddard et al., 2012</xref>; <xref ref-type="bibr" rid="ref34">Langer et al., 2021</xref>). However, systems to aid decision-making might especially be considered for novice HR professionals as these systems tend to improve the decision-making quality of less experienced decision makers (<xref ref-type="bibr" rid="ref24">Goddard et al., 2012</xref>; <xref ref-type="bibr" rid="ref34">Langer et al., 2021</xref>). Therefore, we think that students are a suitable sample to reflect the target group of inexperienced HR professionals.</p>
<p>Second, the task was an isolated experimental task and not integrated into the stressful work situation of HR professionals. Automation bias often occurs in a multitasking setting and under a high workload, as it serves as a decision-making heuristic, which saves time and cognitive effort (<xref ref-type="bibr" rid="ref53">Parasuraman and Manzey, 2010</xref>; <xref ref-type="bibr" rid="ref15">Cummings, 2017</xref>). We tried to simulate these conditions by limiting the available time for processing the tasks. In practice, the interventions, i.e., the instruction and the data visualization, might have a stronger impact on the mitigation of automation bias. Future studies should investigate these interventions in real work settings.</p>
<p>Third, the personnel preselection task appeared to be rather simple, as the mean for objective decision quality, i.e., correctly selected applicants, was more than four out of five possible points in all experimental groups. Our personnel preselection scenario contained only ten applicants per task. In a real-life personnel preselection task, a higher number of applicants can be expected, which increases the effort for information processing and makes decision-making more difficult (<xref ref-type="bibr" rid="ref7">Black and van Esch, 2020</xref>). Possible impacts on decision quality due to an unreflective use of the system, as well as the uncritical acceptance of its recommendations, might only arise under a higher workload (<xref ref-type="bibr" rid="ref53">Parasuraman and Manzey, 2010</xref>). More complex tasks need to be explored in future studies.</p>
<p>In addition, future research should also have a closer look at individual differences between decision makers. We observed high standard deviations for verification intensity indicators within experimental groups (see <xref rid="tab4" ref-type="table">Table 4</xref>). Underlying differences at the individual level, like personality traits, individual information seeking styles, or visualization preferences, could have influenced verification intensity in addition to our experimental conditions. We controlled all analyses for conscientiousness and technical affinity, but found no differences. The ELM also points to individual differences, such as the need for cognition, that influence information processing (<xref ref-type="bibr" rid="ref55">Petty and Cacioppo, 1986</xref>). Future studies should thus consider other individual variables that could explain decision makers&#x2019; interaction with the dashboard.</p>
<p>Finally, our finding that decision makers of the highly aggregated data group spent less time on level 1 than the less aggregated data group remains open to interpretation. The result could mean that highly aggregated data convey too little information to support the decision makers. However, it could also simply reflect the cognitive effort required to process the presented information: people in the highly aggregated data group may have moved more quickly to other levels because they had less information to process than the other group. Future research should address this open question.</p>
</sec>
</sec>
<sec id="sec28" sec-type="conclusions">
<label>5.</label>
<title>Conclusion</title>
<p>Automation bias has been found to be a serious problem in contexts of AI-based decision support systems (<xref ref-type="bibr" rid="ref46">Mosier and Skitka, 1999</xref>; <xref ref-type="bibr" rid="ref24">Goddard et al., 2012</xref>; <xref ref-type="bibr" rid="ref38">Lyell et al., 2018</xref>; <xref ref-type="bibr" rid="ref16">Davis et al., 2020</xref>), and it violates ethical recommendations (<xref ref-type="bibr" rid="ref28">Hunkenschroer and Luetge, 2022</xref>) as well as legal requirements like Article 22 of the GDPR (<xref ref-type="bibr" rid="ref70">The European Parliament and the Council of the European Union, 2016</xref>) or the EU AI act (<xref ref-type="bibr" rid="ref20">European Commission, 2021</xref>) that call for human oversight. Studies that previously examined AI-based personnel preselection tools from the perspective of decision makers have not yet addressed automation bias. An empirical investigation of automation bias in AI-based personnel preselection, and moreover of strategies to avoid it, is thus overdue. Our study confirmed that automation bias in terms of verification intensity influences decision quality in AI-based personnel preselection. Furthermore, we provide a first exploration of possible strategies to avoid automation bias in personnel preselection and first evidence that both organizational and technological design factors need to be considered when mitigating automation bias. Our study contributes to the literature by extending existing ELM and automation bias research to the context of AI-based personnel preselection tools.</p>
</sec>
<sec id="sec29" sec-type="data-availability">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="sec30">
<title>Ethics statement</title>
<p>Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="sec31">
<title>Author contributions</title>
<p>CK, RP, JF, CM, ST, and BK contributed to the conception and design of the study. RP collected data and performed the statistical analysis. CK and RP wrote the first draft of the manuscript. BK wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.</p>
</sec>
<sec id="sec32" sec-type="funding-information">
<title>Funding</title>
<p>This research was partly funded by the Field of Excellence Smart Regulation of the University of Graz. The authors acknowledge the financial support for open-access publication by the University of Graz.</p>
</sec>
<sec id="conf1" sec-type="COI-statement">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="sec100" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<sec id="sec34" sec-type="supplementary-material">
<title>Supplementary material</title>
<p>The supplementary material for this article can be found online at: <ext-link xlink:href="https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1118723/full#supplementary-material" ext-link-type="uri">https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1118723/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.pdf" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="ref1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Acikgoz</surname> <given-names>Y.</given-names></name> <name><surname>Davison</surname> <given-names>K. H.</given-names></name> <name><surname>Compagnone</surname> <given-names>M.</given-names></name> <name><surname>Laske</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>Justice perceptions of artificial intelligence in selection</article-title>. <source>Int. J. Sel. Assess.</source> <volume>28</volume>, <fpage>399</fpage>&#x2013;<lpage>416</lpage>. doi: <pub-id pub-id-type="doi">10.1111/ijsa.12306</pub-id></citation></ref>
<ref id="ref2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Adensamer</surname> <given-names>A.</given-names></name> <name><surname>Gsenger</surname> <given-names>R.</given-names></name> <name><surname>Klausner</surname> <given-names>L. D.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x201C;Computer says no&#x201D;: algorithmic decision support and organisational responsibility</article-title>. <source>J. Respons. Technol.</source> <volume>7-8</volume>:<fpage>100014</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.jrt.2021.100014</pub-id></citation></ref>
<ref id="ref3"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Alberdi</surname> <given-names>E.</given-names></name> <name><surname>Strigini</surname> <given-names>L.</given-names></name> <name><surname>Povyakalo</surname> <given-names>A. A.</given-names></name> <name><surname>Ayton</surname> <given-names>P.</given-names></name></person-group> (<year>2009</year>). &#x201C;<article-title>Why are People&#x2019;s decisions sometimes worse with computer support?</article-title>&#x201D; in <source>Computer Safety, Reliability, and Security. SAFECOMP 2009. Lecture Notes in Computer Science</source>. eds. <person-group person-group-type="editor"><name><surname>Buth</surname> <given-names>B.</given-names></name> <name><surname>Rabe</surname> <given-names>G.</given-names></name> <name><surname>Seyfarth</surname> <given-names>T.</given-names></name></person-group>, vol. <volume>5775</volume> (<publisher-loc>Berlin, Heidelberg</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>18</fpage>&#x2013;<lpage>31</lpage>.</citation></ref>
<ref id="ref5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bahner</surname> <given-names>J. E.</given-names></name> <name><surname>H&#x00FC;per</surname> <given-names>A.-D.</given-names></name> <name><surname>Manzey</surname> <given-names>D.</given-names></name></person-group> (<year>2008</year>). <article-title>Misuse of automated decision aids: complacency, automation bias and the impact of training experience</article-title>. <source>Int. J. Human Comput. Stud.</source> <volume>66</volume>, <fpage>688</fpage>&#x2013;<lpage>699</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.ijhcs.2008.06.001</pub-id></citation></ref>
<ref id="ref6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bankins</surname> <given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>The ethical use of artificial intelligence in human resource management: a decision-making framework</article-title>. <source>Ethics Inf. Technol.</source> <volume>23</volume>, <fpage>841</fpage>&#x2013;<lpage>854</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10676-021-09619-6</pub-id></citation></ref>
<ref id="ref7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Black</surname> <given-names>J. S.</given-names></name> <name><surname>van Esch</surname> <given-names>P.</given-names></name></person-group> (<year>2020</year>). <article-title>AI-enabled recruiting: what is it and how should a manager use it?</article-title> <source>Bus. Horiz.</source> <volume>63</volume>, <fpage>215</fpage>&#x2013;<lpage>226</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.bushor.2019.12.001</pub-id></citation></ref>
<ref id="ref8"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Brauner</surname> <given-names>P.</given-names></name> <name><surname>Calero Valdez</surname> <given-names>A.</given-names></name> <name><surname>Philipsen</surname> <given-names>R.</given-names></name> <name><surname>Ziefle</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). &#x201C;<article-title>Defective still deflective&#x2013;how correctness of decision support systems influences User&#x2019;s performance in production environments</article-title>&#x201D; in <source>HCI in Business, Government, and Organizations: Information Systems. HCIBGO 2016. Lecture Notes in Computer Science</source>. eds. <person-group person-group-type="editor"><name><surname>Nah</surname> <given-names>F. H.</given-names></name> <name><surname>Tan</surname> <given-names>C. H.</given-names></name></person-group>, vol. <volume>9752</volume> (<publisher-loc>New York</publisher-loc>: <publisher-name>Springer Cham</publisher-name>), <fpage>16</fpage>&#x2013;<lpage>27</lpage>.</citation></ref>
<ref id="ref9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brauner</surname> <given-names>P.</given-names></name> <name><surname>Philipsen</surname> <given-names>R.</given-names></name> <name><surname>Calero Valdez</surname> <given-names>A.</given-names></name> <name><surname>Ziefle</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). <article-title>What happens when decision support systems fail? &#x2014; the importance of usability on performance in erroneous systems</article-title>. <source>Behav. Inform. Technol.</source> <volume>38</volume>, <fpage>1225</fpage>&#x2013;<lpage>1242</lpage>. doi: <pub-id pub-id-type="doi">10.1080/0144929X.2019.1581258</pub-id></citation></ref>
<ref id="ref10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bu&#x00E7;inca</surname> <given-names>Z.</given-names></name> <name><surname>Malaya</surname> <given-names>M. B.</given-names></name> <name><surname>Gajos</surname> <given-names>K. Z.</given-names></name></person-group> (<year>2021</year>). <article-title>To trust or to think: cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making</article-title>. <source>Proc. ACM Human Comput. Interact.</source> <volume>5</volume>, <fpage>1</fpage>&#x2013;<lpage>21</lpage>. doi: <pub-id pub-id-type="doi">10.1145/3449287</pub-id></citation></ref>
<ref id="ref11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Burton</surname> <given-names>J. W.</given-names></name> <name><surname>Stein</surname> <given-names>M.-K.</given-names></name> <name><surname>Jensen</surname> <given-names>T. B.</given-names></name></person-group> (<year>2020</year>). <article-title>A systematic review of algorithm aversion in augmented decision making</article-title>. <source>J. Behav. Decis. Mak.</source> <volume>33</volume>, <fpage>220</fpage>&#x2013;<lpage>239</lpage>. doi: <pub-id pub-id-type="doi">10.1002/bdm.2155</pub-id></citation></ref>
<ref id="ref12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Campion</surname> <given-names>M. C.</given-names></name> <name><surname>Campion</surname> <given-names>M. A.</given-names></name> <name><surname>Campion</surname> <given-names>E. D.</given-names></name> <name><surname>Reider</surname> <given-names>M. H.</given-names></name></person-group> (<year>2016</year>). <article-title>Initial investigation into computer scoring of candidate essays for personnel selection</article-title>. <source>J. Appl. Psychol.</source> <volume>101</volume>, <fpage>958</fpage>&#x2013;<lpage>975</lpage>. doi: <pub-id pub-id-type="doi">10.1037/apl0000108</pub-id>, PMID: <pub-id pub-id-type="pmid">27077525</pub-id></citation></ref>
<ref id="ref13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Casta&#x00F1;o</surname> <given-names>A. M.</given-names></name> <name><surname>Fontanil</surname> <given-names>Y.</given-names></name> <name><surname>Garc&#x00ED;a-Izquierdo</surname> <given-names>A. L.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x201C;Why Can&#x2019;t I become a manager?&#x201D; &#x2013; a systematic review of gender stereotypes and organizational discrimination</article-title>. <source>Int. J. Environ. Res. Public Health</source> <volume>16</volume>:<fpage>1813</fpage>. doi: <pub-id pub-id-type="doi">10.3390/ijerph16101813</pub-id>, PMID: <pub-id pub-id-type="pmid">31121842</pub-id></citation></ref>
<ref id="ref14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cui</surname> <given-names>W.</given-names></name></person-group> (<year>2019</year>). <article-title>Visual analytics: a comprehensive overview</article-title>. <source>IEEE Access</source> <volume>7</volume>, <fpage>81555</fpage>&#x2013;<lpage>81573</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2019.2923736</pub-id></citation></ref>
<ref id="ref15"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Cummings</surname> <given-names>M. L.</given-names></name></person-group> (<year>2017</year>). &#x201C;<article-title>Automation bias in intelligent time critical decision support systems</article-title>&#x201D; in <source>Decision Making in Aviation</source>. eds. <person-group person-group-type="editor"><name><surname>Harris</surname> <given-names>D.</given-names></name> <name><surname>Li</surname> <given-names>W.-C.</given-names></name></person-group> (<publisher-loc>London</publisher-loc>: <publisher-name>Routledge</publisher-name>), <fpage>289</fpage>&#x2013;<lpage>294</lpage>.</citation></ref>
<ref id="ref16"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Davis</surname> <given-names>J.</given-names></name> <name><surname>Atchley</surname> <given-names>A.</given-names></name> <name><surname>Smitherman</surname> <given-names>H.</given-names></name> <name><surname>Simon</surname> <given-names>H.</given-names></name> <name><surname>Tenhundfeld</surname> <given-names>N.</given-names></name></person-group> (<year>2020</year>). <source>Measuring Automation Bias and Complacency in an X-ray Screening Task. 2020 Systems and Information Engineering Design Symposium (SIEDS)</source>. <publisher-name>IEEE</publisher-name>: <publisher-loc>Charlottesville, VA</publisher-loc></citation></ref>
<ref id="ref17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Doney</surname> <given-names>P. M.</given-names></name> <name><surname>Armstrong</surname> <given-names>G. M.</given-names></name></person-group> (<year>1995</year>). <article-title>Effects of accountability on symbolic information search and information analysis by organizational buyers</article-title>. <source>J. Acad. Mark. Sci.</source> <volume>24</volume>, <fpage>57</fpage>&#x2013;<lpage>65</lpage>. doi: <pub-id pub-id-type="doi">10.1177/009207039602400105</pub-id></citation></ref>
<ref id="ref18"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Ei&#x00DF;er</surname> <given-names>J.</given-names></name> <name><surname>Torrini</surname> <given-names>M.</given-names></name> <name><surname>B&#x00F6;hm</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>Automation anxiety as a barrier to workplace automation: an empirical analysis of the example of recruiting Chatbots in Germany</article-title>. <conf-name>Proceedings of the 2020 ACM SIGMIS on Computers and People Research Conference</conf-name>, <fpage>47</fpage>&#x2013;<lpage>51</lpage></citation></ref>
<ref id="ref19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Endsley</surname> <given-names>M. R.</given-names></name></person-group> (<year>2017</year>). <article-title>From here to autonomy: lessons learned from human&#x2013;automation research</article-title>. <source>Hum. Factors</source> <volume>59</volume>, <fpage>5</fpage>&#x2013;<lpage>27</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0018720816681350</pub-id></citation></ref>
<ref id="ref20"><citation citation-type="other"><person-group person-group-type="author"><collab id="coll1">European Commission</collab></person-group> (<year>2021</year>). Proposal for a regulation of the European Parliament and of the council laying down harmonised rules on artificial intelligence (Articifical intelligence act) and amending certain union legislative acts. COM (2021) 206 final. Brussels. Available at: <ext-link xlink:href="https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence" ext-link-type="uri">https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence</ext-link> (Accessed November 30, 2022).</citation></ref>
<ref id="ref21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Evans</surname> <given-names>J. S. B. T.</given-names></name></person-group> (<year>2011</year>). <article-title>Dual-process theories of reasoning: contemporary issues and developmental applications</article-title>. <source>Dev. Rev.</source> <volume>31</volume>, <fpage>86</fpage>&#x2013;<lpage>102</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.dr.2011.07.007</pub-id></citation></ref>
<ref id="ref22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Faul</surname> <given-names>F.</given-names></name> <name><surname>Erdfelder</surname> <given-names>E.</given-names></name> <name><surname>Buchner</surname> <given-names>A.</given-names></name> <name><surname>Lang</surname> <given-names>A.-G.</given-names></name></person-group> (<year>2009</year>). <article-title>Statistical power analyses using G&#x002A;power 3.1: tests for correlation and regression analyses</article-title>. <source>Behav. Res. Methods</source> <volume>41</volume>, <fpage>1149</fpage>&#x2013;<lpage>1160</lpage>. doi: <pub-id pub-id-type="doi">10.3758/BRM.41.4.1149</pub-id>, PMID: <pub-id pub-id-type="pmid">19897823</pub-id></citation></ref>
<ref id="ref23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Franke</surname> <given-names>T.</given-names></name> <name><surname>Attig</surname> <given-names>C.</given-names></name> <name><surname>Wessel</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>A personal resource for technology interaction: development and validation of the affinity for technology interaction (ATI) scale</article-title>. <source>Int. J. Human Comput. Interact.</source> <volume>35</volume>, <fpage>456</fpage>&#x2013;<lpage>467</lpage>. doi: <pub-id pub-id-type="doi">10.1080/10447318.2018.1456150</pub-id></citation></ref>
<ref id="ref24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goddard</surname> <given-names>K.</given-names></name> <name><surname>Roudsari</surname> <given-names>A.</given-names></name> <name><surname>Wyatt</surname> <given-names>J. C.</given-names></name></person-group> (<year>2012</year>). <article-title>Automation bias: a systematic review of frequency, effect mediators, and mitigators</article-title>. <source>J. Am. Med. Inform. Assoc.</source> <volume>19</volume>, <fpage>121</fpage>&#x2013;<lpage>127</lpage>. doi: <pub-id pub-id-type="doi">10.1136/amiajnl-2011-000089</pub-id>, PMID: <pub-id pub-id-type="pmid">21685142</pub-id></citation></ref>
<ref id="ref25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gonzalez</surname> <given-names>M. F.</given-names></name> <name><surname>Capman</surname> <given-names>J. F.</given-names></name> <name><surname>Oswald</surname> <given-names>F. L.</given-names></name> <name><surname>Theys</surname> <given-names>E. R.</given-names></name> <name><surname>Tomczak</surname> <given-names>D. L.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x201C;Where&#x2019;s the I-O?&#x201D; artificial intelligence and machine learning in talent management systems</article-title>. <source>Pers. Assess. Decis.</source> <volume>5</volume>, <fpage>33</fpage>&#x2013;<lpage>44</lpage>. doi: <pub-id pub-id-type="doi">10.25035/pad.2019.03.005</pub-id></citation></ref>
<ref id="ref26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Highhouse</surname> <given-names>S.</given-names></name></person-group> (<year>2008</year>). <article-title>Stubborn reliance on intuition and subjectivity in employee selection</article-title>. <source>Ind. Organ. Psychol.</source> <volume>1</volume>, <fpage>333</fpage>&#x2013;<lpage>342</lpage>. doi: <pub-id pub-id-type="doi">10.1111/j.1754-9434.2008.00058.x</pub-id></citation></ref>
<ref id="ref27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hilliard</surname> <given-names>A.</given-names></name> <name><surname>Guenole</surname> <given-names>N.</given-names></name> <name><surname>Leutner</surname> <given-names>F.</given-names></name></person-group> (<year>2022</year>). <article-title>Robots are judging me: perceived fairness of algorithmic recruitment tools</article-title>. <source>Front. Psychol.</source> <volume>13</volume>:<fpage>940456</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2022.940456</pub-id>, PMID: <pub-id pub-id-type="pmid">35959005</pub-id></citation></ref>
<ref id="ref28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hunkenschroer</surname> <given-names>A. L.</given-names></name> <name><surname>Luetge</surname> <given-names>C.</given-names></name></person-group> (<year>2022</year>). <article-title>Ethics of AI-enabled recruiting and selection: a review and research agenda</article-title>. <source>J. Bus. Ethics</source> <volume>178</volume>, <fpage>977</fpage>&#x2013;<lpage>1007</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10551-022-05049-6</pub-id></citation></ref>
<ref id="ref29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>J.-Y.</given-names></name> <name><surname>Heo</surname> <given-names>W.</given-names></name></person-group> (<year>2022</year>). <article-title>Artificial intelligence video interviewing for employment: perspectives from applicants, companies, developer and academicians</article-title>. <source>Inf. Technol. People</source> <volume>35</volume>, <fpage>861</fpage>&#x2013;<lpage>878</lpage>. doi: <pub-id pub-id-type="doi">10.1108/ITP-04-2019-0173</pub-id></citation></ref>
<ref id="ref30"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Kloker</surname> <given-names>A.</given-names></name> <name><surname>Flei&#x00DF;</surname> <given-names>J.</given-names></name> <name><surname>Koeth</surname> <given-names>C.</given-names></name> <name><surname>Kloiber</surname> <given-names>T.</given-names></name> <name><surname>Ratheiser</surname> <given-names>P.</given-names></name> <name><surname>Thalmann</surname> <given-names>S.</given-names></name></person-group>. (<year>2022</year>). <article-title>Caution or trust in AI? How to design XAI in sensitive use cases?</article-title> <conf-name>AMCIS 2022 Proceedings</conf-name>. <fpage>16</fpage>.</citation></ref>
<ref id="ref31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kowalczyk</surname> <given-names>M.</given-names></name> <name><surname>Gerlach</surname> <given-names>J. P.</given-names></name></person-group> (<year>2015</year>). <article-title>Business Intelligence &#x0026; Analytics and decision quality&#x2013;insights on analytics specialization and information processing modes</article-title>. <source>Europ. Conf. Inform. Syst.</source> <volume>110</volume>, <fpage>1</fpage>&#x2013;<lpage>18</lpage>. doi: <pub-id pub-id-type="doi">10.18151/7217398</pub-id></citation></ref>
<ref id="ref32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuncel</surname> <given-names>N. R.</given-names></name> <name><surname>Klieger</surname> <given-names>D. M.</given-names></name> <name><surname>Connelly</surname> <given-names>B. S.</given-names></name> <name><surname>Ones</surname> <given-names>D. S.</given-names></name></person-group> (<year>2013</year>). <article-title>Mechanical versus clinical data combination in selection and admissions decisions: a meta-analysis</article-title>. <source>J. Appl. Psychol.</source> <volume>98</volume>, <fpage>1060</fpage>&#x2013;<lpage>1072</lpage>. doi: <pub-id pub-id-type="doi">10.1037/a0034156</pub-id>, PMID: <pub-id pub-id-type="pmid">24041118</pub-id></citation></ref>
<ref id="ref33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lacroux</surname> <given-names>A.</given-names></name> <name><surname>Martin-Lacroux</surname> <given-names>C.</given-names></name></person-group> (<year>2022</year>). <article-title>Should I trust the artificial intelligence to recruit? Recruiters&#x2019; perceptions and behavior when faced with algorithm-based recommendation systems during resume screening</article-title>. <source>Front. Psychol.</source> <volume>13</volume>:<fpage>895997</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2022.895997</pub-id>, PMID: <pub-id pub-id-type="pmid">35874355</pub-id></citation></ref>
<ref id="ref34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Langer</surname> <given-names>M.</given-names></name> <name><surname>K&#x00F6;nig</surname> <given-names>C. J.</given-names></name> <name><surname>Busch</surname> <given-names>V.</given-names></name></person-group> (<year>2021</year>). <article-title>Changing the means of managerial work: effects of automated decision support systems on personnel selection tasks</article-title>. <source>J. Bus. Psychol.</source> <volume>36</volume>, <fpage>751</fpage>&#x2013;<lpage>769</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10869-020-09711-6</pub-id></citation></ref>
<ref id="ref35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Langer</surname> <given-names>M.</given-names></name> <name><surname>K&#x00F6;nig</surname> <given-names>C. J.</given-names></name> <name><surname>Papathanasiou</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). <article-title>Highly automated job interviews: acceptance under the influence of stakes</article-title>. <source>Int. J. Sel. Assess.</source> <volume>27</volume>, <fpage>217</fpage>&#x2013;<lpage>234</lpage>. doi: <pub-id pub-id-type="doi">10.1111/ijsa.12246</pub-id></citation></ref>
<ref id="ref36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Langer</surname> <given-names>M.</given-names></name> <name><surname>Landers</surname> <given-names>R. N.</given-names></name></person-group> (<year>2021</year>). <article-title>The future of artificial intelligence at work: a review on effects of decision automation and augmentation on workers targeted by algorithms and third-party observers</article-title>. <source>Comput. Hum. Behav.</source> <volume>123</volume>, <fpage>106878</fpage>&#x2013;<lpage>106820</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.chb.2021.106878</pub-id></citation></ref>
<ref id="ref37"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>L.</given-names></name> <name><surname>Lassiter</surname> <given-names>T.</given-names></name> <name><surname>Oh</surname> <given-names>J.</given-names></name> <name><surname>Lee</surname> <given-names>M. K.</given-names></name></person-group>. (<year>2021</year>). <article-title>Algorithmic hiring in practice: recruiter and HR Professional&#x2019;s perspectives on AI use in hiring</article-title>. <italic>Proceedings of the 2021 AAAI/ACM conference on AI, Ethics, and Society</italic>, <fpage>166</fpage>&#x2013;<lpage>176</lpage>.</citation></ref>
<ref id="ref38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lyell</surname> <given-names>D.</given-names></name> <name><surname>Magrabi</surname> <given-names>F.</given-names></name> <name><surname>Coiera</surname> <given-names>E.</given-names></name></person-group> (<year>2018</year>). <article-title>The effect of cognitive load and task complexity on automation bias in electronic prescribing</article-title>. <source>Hum. Factors</source> <volume>60</volume>, <fpage>1008</fpage>&#x2013;<lpage>1021</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0018720818781224</pub-id>, PMID: <pub-id pub-id-type="pmid">29939764</pub-id></citation></ref>
<ref id="ref39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Malgieri</surname> <given-names>G.</given-names></name> <name><surname>Comand&#x00E9;</surname> <given-names>G.</given-names></name></person-group> (<year>2017</year>). <article-title>Why a right to legibility of automated decision-making exists in the general data protection regulation</article-title>. <source>Int. Data Privacy Law</source> <volume>7</volume>, <fpage>243</fpage>&#x2013;<lpage>265</lpage>. doi: <pub-id pub-id-type="doi">10.1093/idpl/ipx019</pub-id></citation></ref>
<ref id="ref40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Manzey</surname> <given-names>D.</given-names></name> <name><surname>Bahner</surname> <given-names>J. E.</given-names></name> <name><surname>Hueper</surname> <given-names>A.-D.</given-names></name></person-group> (<year>2006</year>). <article-title>Misuse of automated aids in process control: complacency, automation bias and possible training interventions</article-title>. <source>Proc. Human Fact. Ergonom. Soc. Ann. Meet.</source> <volume>50</volume>, <fpage>220</fpage>&#x2013;<lpage>224</lpage>. doi: <pub-id pub-id-type="doi">10.1177/154193120605000303</pub-id></citation></ref>
<ref id="ref41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Manzey</surname> <given-names>D.</given-names></name> <name><surname>Reichenbach</surname> <given-names>J.</given-names></name> <name><surname>Onnasch</surname> <given-names>L.</given-names></name></person-group> (<year>2012</year>). <article-title>Human performance consequences of automated decision aids: the impact of degree of automation and system experience</article-title>. <source>J. Cogn. Eng. Decis. Making</source> <volume>6</volume>, <fpage>57</fpage>&#x2013;<lpage>87</lpage>. doi: <pub-id pub-id-type="doi">10.1177/1555343411433844</pub-id></citation></ref>
<ref id="ref42"><citation citation-type="other"><person-group person-group-type="author"><name><surname>McCarthy</surname> <given-names>J.</given-names></name> <name><surname>Minsky</surname> <given-names>M. L.</given-names></name> <name><surname>Rochester</surname> <given-names>N.</given-names></name> <name><surname>Shannon</surname> <given-names>C. E.</given-names></name></person-group> (<year>1955</year>). A proposal for the Dartmouth summer research project on artificial intelligence. Available at: <ext-link xlink:href="https://web.archive.org/web/20070826230310/http:/www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html" ext-link-type="uri">https://web.archive.org/web/20070826230310/http:/www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html</ext-link> (Accessed February 15, 2023).</citation></ref>
<ref id="ref43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meijer</surname> <given-names>R. R.</given-names></name> <name><surname>Neumann</surname> <given-names>M.</given-names></name> <name><surname>Hemker</surname> <given-names>B. T.</given-names></name> <name><surname>Niessen</surname> <given-names>A. S. M.</given-names></name></person-group> (<year>2020</year>). <article-title>A tutorial on mechanical decision-making for personnel and educational selection</article-title>. <source>Front. Psychol.</source> <volume>10</volume>:<fpage>3002</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2019.03002</pub-id>, PMID: <pub-id pub-id-type="pmid">32038385</pub-id></citation></ref>
<ref id="ref44"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Michels</surname> <given-names>L.</given-names></name> <name><surname>Ochmann</surname> <given-names>J.</given-names></name> <name><surname>Tiefenbeck</surname> <given-names>V.</given-names></name> <name><surname>Laumer</surname> <given-names>S.</given-names></name></person-group>. (<year>2022</year>). <article-title>The acceptance of AI-based recommendations: an elaboration likelihood perspective</article-title>. <conf-name>Wirtschaftsinformatik 2022 Proceedings</conf-name>. <fpage>1</fpage>.</citation></ref>
<ref id="ref45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Moor</surname> <given-names>J.</given-names></name></person-group> (<year>2006</year>). <article-title>The Dartmouth College artificial intelligence conference: the next fifty years</article-title>. <source>AI Mag.</source> <volume>27</volume>, <fpage>87</fpage>&#x2013;<lpage>91</lpage>. doi: <pub-id pub-id-type="doi">10.1609/aimag.v27i4.1911</pub-id></citation></ref>
<ref id="ref46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mosier</surname> <given-names>K. L.</given-names></name> <name><surname>Skitka</surname> <given-names>L. J.</given-names></name></person-group> (<year>1999</year>). <article-title>Automation use and automation bias</article-title>. <source>Proc. Human Fact. Ergonom. Soc. Ann. Meet.</source> <volume>43</volume>, <fpage>344</fpage>&#x2013;<lpage>348</lpage>. doi: <pub-id pub-id-type="doi">10.1177/154193129904300346</pub-id></citation></ref>
<ref id="ref47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mosier</surname> <given-names>K. L.</given-names></name> <name><surname>Skitka</surname> <given-names>L. J.</given-names></name> <name><surname>Burdick</surname> <given-names>M. D.</given-names></name> <name><surname>Heers</surname> <given-names>S. T.</given-names></name></person-group> (<year>1996</year>). <article-title>Automation bias, accountability, and verification behaviors</article-title>. <source>Proc. Human Fact. Ergonom. Soc. Ann. Meet.</source> <volume>40</volume>, <fpage>204</fpage>&#x2013;<lpage>208</lpage>. doi: <pub-id pub-id-type="doi">10.1177/154193129604000413</pub-id></citation></ref>
<ref id="ref48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Noble</surname> <given-names>S. M.</given-names></name> <name><surname>Foster</surname> <given-names>L. L.</given-names></name> <name><surname>Craig</surname> <given-names>S. B.</given-names></name></person-group> (<year>2021</year>). <article-title>The procedural and interpersonal justice of automated application and resume screening</article-title>. <source>Int. J. Sel. Assess.</source> <volume>29</volume>, <fpage>139</fpage>&#x2013;<lpage>153</lpage>. doi: <pub-id pub-id-type="doi">10.1111/ijsa.12320</pub-id></citation></ref>
<ref id="ref49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nolan</surname> <given-names>K. P.</given-names></name> <name><surname>Carter</surname> <given-names>N. T.</given-names></name> <name><surname>Dalal</surname> <given-names>D. K.</given-names></name></person-group> (<year>2016</year>). <article-title>Threat of technological unemployment: are hiring managers discounted for using standardized employee selection practices?</article-title> <source>Pers. Assess. Decis.</source> <volume>2</volume>:<fpage>4</fpage>. doi: <pub-id pub-id-type="doi">10.25035/pad.2016.004</pub-id></citation></ref>
<ref id="ref50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oberst</surname> <given-names>U.</given-names></name> <name><surname>De Quintana</surname> <given-names>M.</given-names></name> <name><surname>Del Cerro</surname> <given-names>S.</given-names></name> <name><surname>Chamarro</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>Recruiters prefer expert recommendations over digital hiring algorithm: a choice-based conjoint study in a pre-employment screening scenario</article-title>. <source>Manag. Res. Rev.</source> <volume>44</volume>, <fpage>625</fpage>&#x2013;<lpage>641</lpage>. doi: <pub-id pub-id-type="doi">10.1108/MRR-06-2020-0356</pub-id></citation></ref>
<ref id="ref51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Onnasch</surname> <given-names>L.</given-names></name> <name><surname>Wickens</surname> <given-names>C. D.</given-names></name> <name><surname>Li</surname> <given-names>H.</given-names></name> <name><surname>Manzey</surname> <given-names>D.</given-names></name></person-group> (<year>2014</year>). <article-title>Human performance consequences of stages and levels of automation: an integrated meta-analysis</article-title>. <source>Hum. Factors</source> <volume>56</volume>, <fpage>476</fpage>&#x2013;<lpage>488</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0018720813501549</pub-id>, PMID: <pub-id pub-id-type="pmid">24930170</pub-id></citation></ref>
<ref id="ref52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pan</surname> <given-names>Y.</given-names></name> <name><surname>Froese</surname> <given-names>F.</given-names></name> <name><surname>Liu</surname> <given-names>N.</given-names></name> <name><surname>Hu</surname> <given-names>Y.</given-names></name> <name><surname>Ye</surname> <given-names>M.</given-names></name></person-group> (<year>2022</year>). <article-title>The adoption of artificial intelligence in employee recruitment: the influence of contextual factors</article-title>. <source>Int. J. Hum. Resour. Manag.</source> <volume>33</volume>, <fpage>1125</fpage>&#x2013;<lpage>1147</lpage>. doi: <pub-id pub-id-type="doi">10.1080/09585192.2021.1879206</pub-id></citation></ref>
<ref id="ref53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Parasuraman</surname> <given-names>R.</given-names></name> <name><surname>Manzey</surname> <given-names>D. H.</given-names></name></person-group> (<year>2010</year>). <article-title>Complacency and bias in human use of automation: an attentional integration</article-title>. <source>Hum. Factors</source> <volume>52</volume>, <fpage>381</fpage>&#x2013;<lpage>410</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0018720810376055</pub-id>, PMID: <pub-id pub-id-type="pmid">21077562</pub-id></citation></ref>
<ref id="ref54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Parasuraman</surname> <given-names>R.</given-names></name> <name><surname>Sheridan</surname> <given-names>T. B.</given-names></name> <name><surname>Wickens</surname> <given-names>C. D.</given-names></name></person-group> (<year>2000</year>). <article-title>A model for types and levels of human interaction with automation</article-title>. <source>IEEE Trans. Syst. Man Cybernet A</source> <volume>30</volume>, <fpage>286</fpage>&#x2013;<lpage>297</lpage>. doi: <pub-id pub-id-type="doi">10.1109/3468.844354</pub-id></citation></ref>
<ref id="ref55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Petty</surname> <given-names>R. E.</given-names></name> <name><surname>Cacioppo</surname> <given-names>J. T.</given-names></name></person-group> (<year>1986</year>). <article-title>The elaboration likelihood model of persuasion</article-title>. <source>Adv. Exp. Soc. Psychol.</source> <volume>19</volume>, <fpage>123</fpage>&#x2013;<lpage>205</lpage>. doi: <pub-id pub-id-type="doi">10.1016/S0065-2601(08)60214-2</pub-id></citation></ref>
<ref id="ref56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pillai</surname> <given-names>R.</given-names></name> <name><surname>Sivathanu</surname> <given-names>B.</given-names></name></person-group> (<year>2020</year>). <article-title>Adoption of artificial intelligence (AI) for talent acquisition in IT/IteS organizations</article-title>. <source>BIJ</source> <volume>27</volume>, <fpage>2599</fpage>&#x2013;<lpage>2629</lpage>. doi: <pub-id pub-id-type="doi">10.1108/BIJ-04-2020-0186</pub-id></citation></ref>
<ref id="ref57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Quillian</surname> <given-names>L.</given-names></name> <name><surname>Pager</surname> <given-names>D.</given-names></name> <name><surname>Hexel</surname> <given-names>O.</given-names></name> <name><surname>Midtb&#x00F8;en</surname> <given-names>A. H.</given-names></name></person-group> (<year>2017</year>). <article-title>Meta-analysis of field experiments shows no change in racial discrimination in hiring over time</article-title>. <source>PNAS</source> <volume>114</volume>, <fpage>10870</fpage>&#x2013;<lpage>10875</lpage>. doi: <pub-id pub-id-type="doi">10.1073/pnas.1706255114</pub-id>, PMID: <pub-id pub-id-type="pmid">28900012</pub-id></citation></ref>
<ref id="ref58"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Raghavan</surname> <given-names>M.</given-names></name> <name><surname>Barocas</surname> <given-names>S.</given-names></name> <name><surname>Kleinberg</surname> <given-names>J.</given-names></name> <name><surname>Levy</surname> <given-names>K.</given-names></name></person-group>. (<year>2020</year>). <article-title>Mitigating bias in algorithmic hiring: evaluating claims and practices</article-title>. <italic>Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency</italic>. <fpage>469</fpage>&#x2013;<lpage>481</lpage>.</citation></ref>
<ref id="ref59"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rammstedt</surname> <given-names>B.</given-names></name> <name><surname>Kemper</surname> <given-names>C. J.</given-names></name> <name><surname>Klein</surname> <given-names>M. C.</given-names></name> <name><surname>Beierlein</surname> <given-names>C.</given-names></name> <name><surname>Kovaleva</surname> <given-names>A.</given-names></name></person-group> (<year>2013</year>). <article-title>Eine kurze Skala zur Messung der f&#x00FC;nf Dimensionen der Pers&#x00F6;nlichkeit: Big-Five-Inventory-10 (BFI-10)</article-title>. <source>Methoden Daten Analysen</source> <volume>7</volume>, <fpage>233</fpage>&#x2013;<lpage>249</lpage>. doi: <pub-id pub-id-type="doi">10.12758/mda.2013.013</pub-id></citation></ref>
<ref id="ref60"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sarikaya</surname> <given-names>A.</given-names></name> <name><surname>Correll</surname> <given-names>M.</given-names></name> <name><surname>Bartram</surname> <given-names>L.</given-names></name> <name><surname>Tory</surname> <given-names>M.</given-names></name> <name><surname>Fisher</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>What do we talk about when we talk about dashboards?</article-title> <source>IEEE Trans. Vis. Comput. Graph.</source> <volume>25</volume>, <fpage>682</fpage>&#x2013;<lpage>692</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TVCG.2018.2864903</pub-id>, PMID: <pub-id pub-id-type="pmid">30136958</pub-id></citation></ref>
<ref id="ref61"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sauer</surname> <given-names>J.</given-names></name> <name><surname>Chavaillaz</surname> <given-names>A.</given-names></name> <name><surname>Wastell</surname> <given-names>D.</given-names></name></person-group> (<year>2016</year>). <article-title>Experience of automation failures in training: effects on trust, automation bias, complacency and performance</article-title>. <source>Ergonomics</source> <volume>59</volume>, <fpage>767</fpage>&#x2013;<lpage>780</lpage>. doi: <pub-id pub-id-type="doi">10.1080/00140139.2015.1094577</pub-id>, PMID: <pub-id pub-id-type="pmid">26374396</pub-id></citation></ref>
<ref id="ref62"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schick</surname> <given-names>J.</given-names></name> <name><surname>Fischer</surname> <given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>Dear computer on my desk, which candidate fits best? An assessment of candidates&#x2019; perception of assessment quality when using AI in personnel selection</article-title>. <source>Front. Psychol.</source> <volume>12</volume>:<fpage>739711</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2021.739711</pub-id>, PMID: <pub-id pub-id-type="pmid">34777128</pub-id></citation></ref>
<ref id="ref63"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Siegel-Jacobs</surname> <given-names>K.</given-names></name> <name><surname>Yates</surname> <given-names>J. F.</given-names></name></person-group> (<year>1996</year>). <article-title>Effects of procedural and outcome accountability on judgment quality</article-title>. <source>Organ. Behav. Hum. Decis. Process.</source> <volume>65</volume>, <fpage>1</fpage>&#x2013;<lpage>17</lpage>. doi: <pub-id pub-id-type="doi">10.1006/obhd.1996.0001</pub-id></citation></ref>
<ref id="ref64"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Skitka</surname> <given-names>L. J.</given-names></name> <name><surname>Mosier</surname> <given-names>K. L.</given-names></name> <name><surname>Burdick</surname> <given-names>M.</given-names></name></person-group> (<year>1999</year>). <article-title>Does automation bias decision-making?</article-title> <source>Int. J. Human Comput. Stud.</source> <volume>51</volume>, <fpage>991</fpage>&#x2013;<lpage>1006</lpage>. doi: <pub-id pub-id-type="doi">10.1006/ijhc.1999.0252</pub-id></citation></ref>
<ref id="ref65"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Skitka</surname> <given-names>L. J.</given-names></name> <name><surname>Mosier</surname> <given-names>K.</given-names></name> <name><surname>Burdick</surname> <given-names>M. D.</given-names></name></person-group> (<year>2000a</year>). <article-title>Accountability and automation bias</article-title>. <source>Int. J. Human Comput. Stud.</source> <volume>52</volume>, <fpage>701</fpage>&#x2013;<lpage>717</lpage>. doi: <pub-id pub-id-type="doi">10.1006/ijhc.1999.0349</pub-id></citation></ref>
<ref id="ref66"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Skitka</surname> <given-names>L. J.</given-names></name> <name><surname>Mosier</surname> <given-names>K.</given-names></name> <name><surname>Burdick</surname> <given-names>M. D.</given-names></name> <name><surname>Rosenblatt</surname> <given-names>B.</given-names></name></person-group> (<year>2000b</year>). <article-title>Automation bias and errors: are crews better than individuals?</article-title> <source>Int. J. Aviat. Psychol.</source> <volume>10</volume>, <fpage>85</fpage>&#x2013;<lpage>97</lpage>. doi: <pub-id pub-id-type="doi">10.1207/S15327108IJAP1001_5</pub-id>, PMID: <pub-id pub-id-type="pmid">11543300</pub-id></citation></ref>
<ref id="ref67"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Sosulski</surname> <given-names>K.</given-names></name></person-group> (<year>2018</year>). <source>Data Visualization Made Simple: Insights Into Becoming Visual</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Routledge</publisher-name>.</citation></ref>
<ref id="ref68"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Suen</surname> <given-names>H.-Y.</given-names></name> <name><surname>Chen</surname> <given-names>M. Y.-C.</given-names></name> <name><surname>Lu</surname> <given-names>S.-H.</given-names></name></person-group> (<year>2019</year>). <article-title>Does the use of synchrony and artificial intelligence in video interviews affect interview ratings and applicant attitudes?</article-title> <source>Comput. Hum. Behav.</source> <volume>98</volume>, <fpage>93</fpage>&#x2013;<lpage>101</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.chb.2019.04.012</pub-id></citation></ref>
<ref id="ref69"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Suen</surname> <given-names>H.-Y.</given-names></name> <name><surname>Hung</surname> <given-names>K.-E.</given-names></name> <name><surname>Lin</surname> <given-names>C.-L.</given-names></name></person-group> (<year>2020</year>). <article-title>Intelligent video interview agent used to predict communication skill and perceived personality traits</article-title>. <source>HCIS</source> <volume>10</volume>, <fpage>1</fpage>&#x2013;<lpage>12</lpage>. doi: <pub-id pub-id-type="doi">10.1186/s13673-020-0208-3</pub-id></citation></ref>
<ref id="ref4"><citation citation-type="other"><person-group person-group-type="author"><collab id="coll111">Testlab ApS</collab></person-group> (<year>2020</year>). Preely. Available at: <ext-link xlink:href="https://preely.com" ext-link-type="uri">https://preely.com</ext-link> (Accessed November 30, 2022).</citation></ref>
<ref id="ref70"><citation citation-type="other"><person-group person-group-type="author"><collab id="coll2">The European Parliament and the Council of the European Union</collab></person-group> (<year>2016</year>). Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing directive 95/46/EC (general data protection regulation). Available at: <ext-link xlink:href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02016R0679-20160504" ext-link-type="uri">https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02016R0679-20160504</ext-link> (Accessed November 30, 2022).</citation></ref>
<ref id="ref71"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van Esch</surname> <given-names>P.</given-names></name> <name><surname>Black</surname> <given-names>J. S.</given-names></name> <name><surname>Arli</surname> <given-names>D.</given-names></name></person-group> (<year>2021</year>). <article-title>Job candidates&#x2019; reactions to AI-enabled job application processes</article-title>. <source>AI Ethics</source> <volume>1</volume>, <fpage>119</fpage>&#x2013;<lpage>130</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s43681-020-00025-0</pub-id></citation></ref>
<ref id="ref72"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vrontis</surname> <given-names>D.</given-names></name> <name><surname>Christofi</surname> <given-names>M.</given-names></name> <name><surname>Pereira</surname> <given-names>V.</given-names></name> <name><surname>Tarba</surname> <given-names>S.</given-names></name> <name><surname>Makrides</surname> <given-names>A.</given-names></name> <name><surname>Trichina</surname> <given-names>E.</given-names></name></person-group> (<year>2022</year>). <article-title>Artificial intelligence, robotics, advanced technologies and human resource management: a systematic review</article-title>. <source>Int. J. Hum. Resour. Manag.</source> <volume>33</volume>, <fpage>1237</fpage>&#x2013;<lpage>1266</lpage>. doi: <pub-id pub-id-type="doi">10.1080/09585192.2020.1871398</pub-id></citation></ref>
<ref id="ref73"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wickens</surname> <given-names>C. D.</given-names></name> <name><surname>Clegg</surname> <given-names>B. A.</given-names></name> <name><surname>Vieane</surname> <given-names>A. Z.</given-names></name> <name><surname>Sebok</surname> <given-names>A. L.</given-names></name></person-group> (<year>2015</year>). <article-title>Complacency and automation bias in the use of imperfect automation</article-title>. <source>Hum. Factors</source> <volume>57</volume>, <fpage>728</fpage>&#x2013;<lpage>739</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0018720815581940</pub-id>, PMID: <pub-id pub-id-type="pmid">25886768</pub-id></citation></ref>
<ref id="ref74"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yigitbasioglu</surname> <given-names>O. M.</given-names></name> <name><surname>Velcu</surname> <given-names>O.</given-names></name></person-group> (<year>2012</year>). <article-title>A review of dashboards in performance management: implications for design and research</article-title>. <source>Int. J. Account. Inf. Syst.</source> <volume>13</volume>, <fpage>41</fpage>&#x2013;<lpage>59</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.accinf.2011.08.002</pub-id></citation></ref>
<ref id="ref75"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zerilli</surname> <given-names>J.</given-names></name> <name><surname>Knott</surname> <given-names>A.</given-names></name> <name><surname>Maclaurin</surname> <given-names>J.</given-names></name> <name><surname>Gavaghan</surname> <given-names>C.</given-names></name></person-group> (<year>2019</year>). <article-title>Algorithmic decision-making and the control problem</article-title>. <source>Mind. Mach.</source> <volume>29</volume>, <fpage>555</fpage>&#x2013;<lpage>578</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s11023-019-09513-7</pub-id></citation></ref>
</ref-list>
<fn-group><fn id="fn0004"><p><sup>1</sup>There was only one significant interaction between instruction and data aggregation in task 1 on level 2 (<italic>F</italic>(6,172)&#x2009;=&#x2009;2.16, <italic>p</italic>&#x2009;=&#x2009;0.049, <italic>&#x03B7;</italic><sup>2</sup>&#x2009;=&#x2009;0.04). However, as main effects were not significant and the effect size is small, we would be cautious to interpret this result.</p></fn>
</fn-group>
</back>
</article>