<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2022.859534</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Fake news zealots: Effect of perception of news on online sharing behavior</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>t&#x00027;Serstevens</surname> <given-names>Fran&#x000E7;ois</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1641010/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Piccillo</surname> <given-names>Giulia</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1491709/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Grigoriev</surname> <given-names>Alexander</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1814627/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Data Analytics and Digitalisation, Maastricht University</institution>, <addr-line>Maastricht</addr-line>, <country>Netherlands</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Economics, Maastricht University</institution>, <addr-line>Maastricht</addr-line>, <country>Netherlands</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Jens Koed Madsen, London School of Economics and Political Science, United Kingdom</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Ozen Bas, Kadir Has University, Turkey; Concetta Papapicco, University of Bari Aldo Moro, Italy</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Fran&#x000E7;ois t&#x00027;Serstevens <email>f.tserstevens&#x00040;maastrichtuniversity.nl</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Cognition, a section of the journal Frontiers in Psychology</p></fn></author-notes>
<pub-date pub-type="epub">
<day>26</day>
<month>07</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>13</volume>
<elocation-id>859534</elocation-id>
<history>
<date date-type="received">
<day>21</day>
<month>01</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>04</day>
<month>07</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2022 t&#x00027;Serstevens, Piccillo and Grigoriev.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>t&#x00027;Serstevens, Piccillo and Grigoriev</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Why do we share fake news? Despite a growing body of freely available knowledge and information, fake news has managed to spread more widely and deeply than before. This paper seeks to understand why this is the case. More specifically, using an experimental setting, we aim to quantify the effect of veracity and perception on reaction likelihood. To examine the nature of this relationship, we set up an experiment that mimics the mechanics of Twitter, allowing us to observe users&#x00027; perceptions, their reactions to the shown claims, and the factual veracity of those claims. We find that perceived veracity significantly predicts how likely a user is to react, with higher perceived veracity leading to higher reaction rates. Additionally, we confirm that fake news is inherently more likely to be shared than other types of news. Lastly, we identify an activist-type behavior: belief in fake news is associated with significantly disproportionate spreading compared to belief in true news.</p></abstract>
<kwd-group>
<kwd>social media</kwd>
<kwd>veracity assessment</kwd>
<kwd>sharing behavior</kwd>
<kwd>fake news</kwd>
<kwd>perceived veracity</kwd>
</kwd-group>
<counts>
<fig-count count="3"/>
<table-count count="6"/>
<equation-count count="0"/>
<ref-count count="42"/>
<page-count count="9"/>
<word-count count="5979"/>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Highlights</title>
<list list-type="simple">
<list-item><p>&#x02013; The veracity of a tweet negatively impacts its reaction likelihood.</p></list-item>
<list-item><p>&#x02013; A higher perceived veracity leads to an increased reaction likelihood.</p></list-item>
<list-item><p>&#x02013; We find a dichotomy: fake news is more likely to be shared, but users primarily share tweets they perceive as true.</p></list-item>
<list-item><p>&#x02013; We find evidence of an activist-type behavior associated with belief in fake news: the effect of belief on reaction likelihood (liking, retweeting, or commenting) is amplified for false tweets.</p></list-item>
</list>
</sec>
<sec sec-type="intro" id="s2">
<title>1. Introduction</title>
<p>The fake news controversy has become an increasingly central societal problem, with false and misleading information circulating ever more widely on online media (Albright, <xref ref-type="bibr" rid="B4">2017</xref>; Lazer et al., <xref ref-type="bibr" rid="B16">2018</xref>; Allen et al., <xref ref-type="bibr" rid="B6">2020</xref>). Recently, false information drove hyper-partisans to storm the Capitol in the wake of the 2020 United States presidential election (Pennycook and Rand, <xref ref-type="bibr" rid="B32">2021a</xref>), and United Nations secretary-general Antonio Guterres has labeled misinformation the &#x0201C;enemy&#x0201D; in the fight against COVID-19 (Lederer, <xref ref-type="bibr" rid="B17">2020</xref>; Papapicco, <xref ref-type="bibr" rid="B26">2020</xref>). The term &#x0201C;fake news&#x0201D; itself has been subject to considerable debate in both the academic and political communities. To ensure a common understanding of the term, this paper uses Lazer et al. (<xref ref-type="bibr" rid="B16">2018</xref>)&#x00027;s definition of fake news: &#x0201C;&#x02026; fabricated information that mimics news media content in form but not in organizational process or intent.&#x0201D; Within this context, the literature further divides fake news into mis- and disinformation, the difference lying in whether the original creator intends to deceive the audience. Spreading falsehoods by design is disinformation, whereas doing so by mistake is misinformation (Wardle, <xref ref-type="bibr" rid="B42">2018</xref>).</p>
<p>Despite the recent rise of the fake news phenomenon, false and inaccurate information has always been part of our political landscape. The rise of social media over the last decade and the political events of 2016 (the Brexit referendum and the US presidential election) contributed to the recognition and scale of the matter (Allcott and Gentzkow, <xref ref-type="bibr" rid="B5">2017</xref>; Rose, <xref ref-type="bibr" rid="B35">2017</xref>; Guess et al., <xref ref-type="bibr" rid="B13">2018b</xref>). Misinformation and its newfound scope have pushed partisans toward increasingly polarized political views (Vicario et al., <xref ref-type="bibr" rid="B40">2019</xref>; Osmundsen et al., <xref ref-type="bibr" rid="B25">2021</xref>), with partisan disagreement magnified even on basic facts (e.g., facemasks reducing COVID-19 transmission). Before the 2016 US presidential election, a Gallup survey found that the top 20 false news stories on Facebook were more likely to be shared than the top 20 real news stories (Silverman et al., <xref ref-type="bibr" rid="B36">2016</xref>). Further analysis revealed that the spread of fake news was unlikely to be primarily driven by bots, but rather by the users themselves (Vosoughi et al., <xref ref-type="bibr" rid="B41">2018</xref>). In this digital era, where the veracity of most notable political controversies can be readily and freely verified on fact-checking websites, it is startling that misinformation spreads more effectively than real news.</p>
<p>Though the accuracy of a claim is central to a user&#x00027;s decision to share it (Pennycook et al., <xref ref-type="bibr" rid="B29">2021</xref>), falsehoods and outlandish claims are known to spread more broadly than their true counterparts (Vosoughi et al., <xref ref-type="bibr" rid="B41">2018</xref>). This paper therefore seeks to explain why fake news spreads more deeply on social media. More specifically, we aim to understand the effects of veracity and perceived veracity on the likelihood of reacting to political (fake) news. We include likes and comments in our analysis because they bolster a tweet&#x00027;s popularity, indirectly promoting it. To this end, we design an experiment that mimics the mechanics of Twitter and additionally asks participants to rate the perceived veracity of every shown claim.</p>
</sec>
<sec id="s3">
<title>2. Hypothesis definition</title>
<p><xref ref-type="fig" rid="F1">Figure 1</xref> provides an overview of the hypotheses outlined in this section. Pennycook et al. (<xref ref-type="bibr" rid="B29">2021</xref>) assessed how important veracity is to social media users when deciding whether to share a piece of content. The authors find that accuracy, a close substitute for veracity, is a central factor in content sharing. Because of this latent uncertainty aversion, undermining or nudging down the perceived veracity of an online claim often reduces the number of shares (Pennycook et al., <xref ref-type="bibr" rid="B30">2020</xref>; Park et al., <xref ref-type="bibr" rid="B27">2021</xref>; Pennycook and Rand, <xref ref-type="bibr" rid="B32">2021a</xref>). This is further confirmed by the findings that headlines and deepfakes perceived as untrustworthy are shared less often (Ahmed, <xref ref-type="bibr" rid="B3">2021</xref>) and that retweets, or sharing actions themselves, are indicators of trust (Metaxas et al., <xref ref-type="bibr" rid="B22">2014</xref>). Altay et al. (<xref ref-type="bibr" rid="B7">2022</xref>) argue that this aversion to claims perceived as inaccurate also stems from possible reputational damage, as sharing fake news diminishes the sharer&#x00027;s online reputation. We therefore predict that higher perceived accuracy of political tweets results in higher user reaction rates. We define this mechanism as an activist-type behavior, whereby belief in a claim leads to a greater chance of sharing it.</p>
<list list-type="simple">
<list-item><p><bold>Hypothesis 1</bold>: Higher levels of perceived accuracy of political tweets<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref> result in higher reaction rates.</p></list-item>
</list>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Hypothesis summary.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-859534-g0001.tif"/>
</fig>
<p>Fake news spreads more widely than real news on social media (Silverman et al., <xref ref-type="bibr" rid="B36">2016</xref>; Vosoughi et al., <xref ref-type="bibr" rid="B41">2018</xref>; Lee et al., <xref ref-type="bibr" rid="B18">2019</xref>). This difference in spread could be partly attributed to social media network effects, i.e., the algorithms that place a post in a user&#x00027;s feed (Quattrociocchi et al., <xref ref-type="bibr" rid="B34">2016</xref>). However, social media data do not allow researchers to control for such effects. Although social media amplified the spread of fake news, misinformation has existed for a long time (Burkhardt, <xref ref-type="bibr" rid="B9">2017</xref>). This suggests that the spread of fake news is not exclusive to social media and its network effects. We expect that, even within the setting of a controlled experiment, people react more to fake news than to other types of news.</p>
<list list-type="simple">
<list-item><p><bold>Hypothesis 2</bold>: People react to fake news more often than to other types of news, independently of perceived veracity.</p></list-item>
</list>
<p>Fake news primarily spreads through small user groups on social media and is mostly absent from individuals&#x00027; feeds (Allcott and Gentzkow, <xref ref-type="bibr" rid="B5">2017</xref>; Grinberg et al., <xref ref-type="bibr" rid="B10">2019</xref>; Tandoc, <xref ref-type="bibr" rid="B38">2019</xref>). Yet, despite a smaller initial user base, fake news spreads more widely across social media than its real news counterpart (Vosoughi et al., <xref ref-type="bibr" rid="B41">2018</xref>; Acemoglu et al., <xref ref-type="bibr" rid="B2">2021</xref>). One proposed rationale is the existence of echo chambers. Quattrociocchi et al. (<xref ref-type="bibr" rid="B34">2016</xref>) have shown that political polarization and echo chambers have played a role in the rise of fake news. However, the true extent of echo chambers&#x00027; effect on political polarization is uncertain (Spohr, <xref ref-type="bibr" rid="B37">2017</xref>; Guess et al., <xref ref-type="bibr" rid="B12">2018a</xref>). We suggest that behavioral factors coexist with network effects and contribute significantly to the wider spread of fake news. We hypothesize that the activist-type behavior defined above (Hypothesis 1) is reinforced for fake claims.</p>
<list list-type="simple">
<list-item><p><bold>Hypothesis 3</bold>: Factual veracity (i.e., whether the claim is fake news) moderates the relationship between perceived veracity and reaction rate. Specifically, when fake news is perceived as real news, it is shared disproportionately more than real news.</p></list-item>
</list>
</sec>
<sec id="s4">
<title>3. Experimental design</title>
<p>The experiment compiled 32 claims that varied ideologically (neutral, Republican-leaning, and Democrat-leaning) and in degree of veracity (true, misleading, and fake). As in other studies on perception and veracity, we treat misleading news as another form of mis- or disinformation, one that is not false but incorporates bias and inaccuracies (Pennycook and Rand, <xref ref-type="bibr" rid="B33">2021b</xref>). Claims were shown in rounds of four tweets per page, with all tweets in a round relating to the same subject (e.g., hydroxychloroquine export in India). Participants were first asked to react to all claims as they would on Twitter, with the option to ignore, like, retweet, and comment on each claim. They were subsequently asked to rate the veracity of all 32 claims. All materials necessary for the analysis are available online (<ext-link ext-link-type="uri" xlink:href="https://osf.io/2k5tm/?view_only=cf12258ff2744c95ba074869e7244cd6">https://osf.io/2k5tm/?view_only=cf12258ff2744c95ba074869e7244cd6</ext-link>).</p>
<sec>
<title>3.1. Participants</title>
<p>The experiment featured a representative sample of 150 participants recruited through Prolific, an online participant recruitment platform. In total, 121 entries were used for the analysis; entries with failed attention checks or extraordinarily rapid completion times were excluded. Though we used online sampling methods, previous work has shown that results from similar platforms (e.g., MTurk) have wide external validity (Krupnikov and Levine, <xref ref-type="bibr" rid="B15">2014</xref>; Mullinix et al., <xref ref-type="bibr" rid="B24">2015</xref>). The filtered sample comprised 61 females, 59 males, and 1 participant of another gender, with an average age of 45.81 (&#x003C3; &#x0003D; 15.89). The experiment was rolled out in July 2020 on a UK-based sample<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref>. Participants were paid a fixed fee for participating and could earn additional monetary rewards during the experiment.</p>
</sec>
<sec>
<title>3.2. Materials and procedure</title>
<p>All news items were originally tweets posted by well-known and trusted media outlets (e.g., Bloomberg, Reuters, the Economist). To remove any residual wording bias, the tweets were translated into several languages and back to English using DeepL; several tweets remained unchanged in this process. All selected tweets related to American politics, both domestic and foreign, and were factual depictions of reality. From each original tweet, we then derived a shorter version which, though less information-rich, remained an accurate depiction of the original and was thus true. Both the original and short tweets represented true tweets in the experiment. Besides the short version, we additionally created misleading versions of the original tweet, one for each political bias (Democrat- and Republican-leaning). These misleading versions, though factually correct, presented the information in favor of their political alignment. Lastly, we derived fake versions of the original tweet; as fake news, these heavily favored a political party and featured factually incorrect information. <xref ref-type="table" rid="T1">Table 1</xref> summarizes this transformation and creation process; <xref ref-type="table" rid="T2">Table 2</xref> shows the result of this process, from an original claim to a fake version.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Tweet types.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left"><bold>Veracity</bold></th>
<th valign="top" align="left"><bold>Tweet type</bold></th>
<th valign="top" align="left"><bold>Tweet construction method</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">True</td>
<td valign="top" align="left">Original</td>
<td valign="top" align="left">The first tweet is the original after it has gone through the bias removal process. In the experiment, this tweet is considered the truth.</td>
</tr>
<tr>
<td valign="top" align="left">True</td>
<td valign="top" align="left">Short</td>
<td valign="top" align="left">This tweet is similar to the original tweet but omits a piece of information and/or rewrites the tweet using fewer characters.</td>
</tr>
<tr>
<td valign="top" align="left">Misleading</td>
<td valign="top" align="left">Misleading</td>
<td valign="top" align="left">This category holds two different tweets, left- and right-biased. <break/> The tweets are written in such a way that they present the information favorably for their bias and intentionally mislead the participants.</td>
</tr>
<tr>
<td valign="top" align="left">False</td>
<td valign="top" align="left">Fake</td>
<td valign="top" align="left">Like the misleading tweets, the fake tweets exist in left- and right-biased versions.<break/> These tweets are extremely misleading and fundamentally false. <break/> In the context of this experiment, they represent the strongest form of fake news.</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Tweet examples.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left"><bold>Tweet type</bold></th>
<th valign="top" align="left"><bold>Tweet example</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Original</td>
<td valign="top" align="left">India put a total ban on exports of hydroxychloroquine, a malaria drug that President Trump has touted as a &#x0201C;game changer&#x0201D; in the fight against COVID-19</td>
</tr>
<tr>
<td valign="top" align="left">Translated</td>
<td valign="top" align="left">India has imposed a total ban on the export of hydroxychloroquine, an anti-malarial drug that President Trump has described as a &#x0201C;turning point&#x0201D; in the fight against COVID-19</td>
</tr>
<tr>
<td valign="top" align="left">Short</td>
<td valign="top" align="left">India banned export of anti-malarial drug</td>
</tr>
<tr>
<td valign="top" align="left">Misleading (Democrat&#x02013;Biased)</td>
<td valign="top" align="left">India banned export of hydroxychloroquine, an anti-malarial drug to prevent country wide shortage</td>
</tr>
<tr>
<td valign="top" align="left">Misleading (Republican&#x02013;Biased)</td>
<td valign="top" align="left">India banned US from importing anti-malarial drug crucial in the fight against COVID-19</td>
</tr>
<tr>
<td valign="top" align="left">Fake (Democrat&#x02013;Biased)</td>
<td valign="top" align="left">Because Trump described the anti-malarial drug as a &#x0201C;turning point&#x0201D;,<break/> India has banned exportation to US</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The experiment consisted of two main phases. In the first phase, the reaction phase, participants were asked to react to the tweets as they would on Twitter under normal circumstances. To ensure that participants engaged with every item, they had to click an onscreen &#x0201C;ignore&#x0201D; option for any tweet they did not want to respond to. Additionally, participants could react with any combination of like, retweet, and comment, as they are able to on Twitter.</p>
<p>In the second phase, the veracity phase, participants were tasked with assessing the veracity of each claim and incentivized to do so accurately: correct identification led to additional monetary rewards. Participants classified each claim as either true, misleading, or fake, and were shown basic definitions of these terms at the start of the phase. They were informed of their accuracy at the end of the study.</p>
<p>After completing both phases, participants were asked for demographic information, along with questions assessing the effect of the COVID-19 pandemic on their mental state, their risk and ambiguity aversion, and their self-reported political leaning<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref>.</p>
<p>The reaction and veracity phases each featured 8 rounds of 4 tweets; the tweets remained the same across both phases. Every round featured an original tweet and a short tweet, both of which were verifiably true. In addition to these true claims, participants were shown one or two misleading tweets (correct but politically biased claims). In three out of eight rounds, only one misleading claim was shown; the second misleading claim was replaced by its homologue fake version. That is, if a round did not contain a Republican-biased misleading tweet, it would contain a Republican-biased fake tweet. All tweets within a round, across both phases, were shown in random order. The full experiment and list of tweets are provided in Appendix 1. It is worth noting that the experiment features an equal number of true and false tweets, following the tradition of lab-based experiments on fake news (Bond and DePaulo, <xref ref-type="bibr" rid="B8">2006</xref>; Pennycook et al., <xref ref-type="bibr" rid="B28">2017</xref>; Luo et al., <xref ref-type="bibr" rid="B20">2022</xref>). Because partisanship is a crucial determinant of reaction type and reaction rate (Mour&#x000E3;o and Robertson, <xref ref-type="bibr" rid="B23">2019</xref>), an equal number of Democrat- and Republican-leaning tweets was selected.</p>
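<p>One possible reading of the round composition described above can be sketched in Python. This is an illustrative reconstruction, not the authors&#x00027; materials; the field names and the build_round helper are our own, and the published materials on OSF remain authoritative.</p>

```python
import random

def build_round(subject, with_fake):
    """Compose one round of four same-subject tweets (illustrative only)."""
    biases = ["democrat", "republican"]
    random.shuffle(biases)
    tweets = [
        {"subject": subject, "type": "original", "veracity": "true"},
        {"subject": subject, "type": "short", "veracity": "true"},
        # at least one misleading (correct but politically biased) tweet
        {"subject": subject, "type": "misleading", "bias": biases[0],
         "veracity": "misleading"},
    ]
    if with_fake:
        # the second misleading slot is replaced by its homologue fake version
        tweets.append({"subject": subject, "type": "fake",
                       "bias": biases[1], "veracity": "fake"})
    else:
        tweets.append({"subject": subject, "type": "misleading",
                       "bias": biases[1], "veracity": "misleading"})
    random.shuffle(tweets)  # tweets appear in random order within a round
    return tweets

# three of the eight rounds swap a misleading tweet for a fake one
rounds = [build_round(f"subject_{i}", with_fake=(i < 3)) for i in range(8)]
```

<p>Under this reading, the 8 rounds of 4 tweets yield 16 true tweets (original and short) against 16 non-true tweets (misleading plus fake).</p>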
</sec>
</sec>
<sec id="s5">
<title>4. Results and discussion</title>
<sec>
<title>4.1. Dataset structure</title>
<p>Using a method similar to Park et al. (<xref ref-type="bibr" rid="B27">2021</xref>), the experimental data frame was structured on a tweet-participant basis instead of a participant basis. That is, every data entry represents one participant&#x00027;s decisions on a given tweet, multiplying the total number of entries by the number of tweets in the experiment. We use a set of parametric statistical tests to verify the outlined hypotheses. To account for this dataset transformation, subsequent regression analyses control for participant and tweet fixed effects. This restructuring allows for a more interpretable representation of the variability.</p>
<p>The dataset featured three main variables: (i) the reaction binary, which was activated if a participant did not ignore a tweet in the reaction phase and serves as the dependent variable in the subsequent models; (ii) the (factual) veracity, a categorical variable indicating the veracity of the tweet reacted to (true, misleading, or fake); and (iii) the perceived veracity, each claim being rated by the participant as either real, misleading, or fake news. The dataset also featured general demographic information, such as age, gender, and nationality, as well as political leaning. In total, the dataset comprised 3,872 decisions (<italic>N</italic> = 3,872) from 121 participants. <xref ref-type="table" rid="T3">Tables 3</xref>, <xref ref-type="table" rid="T4">4</xref> provide an overview of the descriptive statistics of the sample.</p>
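<p>The tweet-participant structure can be illustrated with a minimal sketch: one row per participant-tweet pair, so that 121 participants each deciding on 32 tweets yield 3,872 rows. The data below are synthetic stand-ins with hypothetical field names; the actual dataset is available in the OSF materials.</p>

```python
# Stand-ins: 3 participants and 4 tweets instead of 121 and 32.
participants = [f"p{i}" for i in range(3)]
tweets = [{"id": f"t{j}", "veracity": ("true", "misleading", "fake")[j % 3]}
          for j in range(4)]

# One row per (participant, tweet) decision.
rows = [
    {
        "participant": p,           # participant fixed effect
        "tweet": t["id"],           # tweet fixed effect
        "veracity": t["veracity"],  # factual veracity (categorical)
        "perceived": None,          # rating from the veracity phase
        "reaction": 0,              # 1 if the tweet was not ignored
    }
    for p in participants
    for t in tweets
]
```

<p>The row count is the product of the two sample sizes, which is why the regressions control for both participant and tweet fixed effects.</p>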
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>Tweet perception and reaction rates.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="3"><bold>Perceived as:</bold></th>
</tr>
<tr>
<th valign="top" align="left"><bold>Tweet</bold></th>
<th valign="top" align="center"><bold>Real</bold></th>
<th valign="top" align="center"><bold>Misleading</bold></th>
<th valign="top" align="center"><bold>Fake</bold></th>
<th valign="top" align="center"><bold>Reaction</bold></th>
</tr>
<tr>
<th valign="top" align="left"><bold>type</bold></th>
<th valign="top" align="center"><bold>news (%)</bold></th>
<th valign="top" align="center"><bold>news (%)</bold></th>
<th valign="top" align="center"><bold>news (%)</bold></th>
<th valign="top" align="center"><bold>rate (%)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">All tweets</td>
<td valign="top" align="center">42.51</td>
<td valign="top" align="center">20.87</td>
<td valign="top" align="center">36.62</td>
<td valign="top" align="center">40.70</td>
</tr>
<tr>
<td valign="top" align="left">True tweets</td>
<td valign="top" align="center">49.02</td>
<td valign="top" align="center">35.74</td>
<td valign="top" align="center">15.24</td>
<td valign="top" align="center">37.50</td>
</tr>
<tr>
<td valign="top" align="left">Misleading tweets</td>
<td valign="top" align="center">41.62</td>
<td valign="top" align="center">38.17</td>
<td valign="top" align="center">20.21</td>
<td valign="top" align="center">40.20</td>
</tr>
<tr>
<td valign="top" align="left">Fake tweets</td>
<td valign="top" align="center">23.64</td>
<td valign="top" align="center">36.03</td>
<td valign="top" align="center">40.33</td>
<td valign="top" align="center">52.07</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Observations = 3,872, Participants = 121</italic>.</p>
</table-wrap-foot>
</table-wrap>
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p>Descriptive statistics.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left"><bold>Statistic</bold></th>
<th valign="top" align="center"><bold>Mean</bold></th>
<th valign="top" align="center"><bold>St. dev</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Political leaning<sup>&#x0002A;</sup></td>
<td valign="top" align="center">4.53</td>
<td valign="top" align="center">2.12</td>
</tr>
<tr>
<td valign="top" align="left">Age</td>
<td valign="top" align="center">45.81</td>
<td valign="top" align="center">15.90</td>
</tr>
<tr>
<td valign="top" align="left">Hours per day on social media</td>
<td valign="top" align="center">2.21</td>
<td valign="top" align="center">2.14</td>
</tr>
<tr>
<td valign="top" align="left">Familiarity with American politics</td>
<td valign="top" align="center">36.86%</td>
<td valign="top" align="center">24.97%</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Participants = 121; &#x0002A;A range from 0 to 10 from left to right leaning</italic>.</p>
</table-wrap-foot>
</table-wrap>
</sec>
<sec>
<title>4.2. Results</title>
<p>Correlation analysis reveals a significant positive correlation between perceived veracity and reaction likelihood [<italic>r</italic><sub>(3,871)</sub> = 0.067, <italic>p</italic> &#x0003C; 0.001]. Though this analysis hints at the confirmation of Hypothesis 1, it fails to account for participant and tweet characteristics. <xref ref-type="table" rid="T5">Table 5</xref> presents multiple logit models testing the hypothesis; unlike the correlation analysis, the logit models account for the fixed effects of both tweets and participants. In all models, perceived veracity significantly predicted reaction likelihood. This supports our first hypothesis and is in line with the current academic literature (Metaxas et al., <xref ref-type="bibr" rid="B22">2014</xref>; Pennycook et al., <xref ref-type="bibr" rid="B30">2020</xref>).</p>
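<p>As an illustration of the model family in Table 5, the following sketch fits a logit of the reaction binary on perceived-veracity dummies by plain gradient ascent on synthetic data. The reaction probabilities are invented for the example and this is not the authors&#x00027; estimation code; participant and tweet fixed effects would enter as further indicator columns in the same design matrix.</p>

```python
import math
import random

random.seed(0)

# Synthetic decisions: reaction probability falls with lower perceived
# veracity (illustrative numbers, not the experimental estimates).
p_react = {"real": 0.50, "misleading": 0.40, "fake": 0.35}
data = []
for _ in range(3000):
    perceived = random.choice(["real", "misleading", "fake"])
    y = 1 if random.random() < p_react[perceived] else 0
    x = [1.0,                                        # intercept
         1.0 if perceived == "misleading" else 0.0,  # dummy: misleading
         1.0 if perceived == "fake" else 0.0]        # dummy: fake
    data.append((x, y))

# Plain gradient ascent on the logit log-likelihood.
beta = [0.0, 0.0, 0.0]
for _ in range(300):
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-sum(b * xi for b, xi in zip(beta, x))))
        for k in range(3):
            grad[k] += (y - p) * x[k]
    beta = [b + g / len(data) for b, g in zip(beta, grad)]

# Both dummy coefficients come out negative, mirroring the sign pattern
# of the perceived-veracity rows in Table 5.
```

<p>Negative coefficients on the misleading and fake dummies mean that, relative to claims perceived as real, lower perceived veracity reduces the odds of reacting.</p>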
<table-wrap position="float" id="T5">
<label>Table 5</label>
<caption><p>Logit regression: reaction likelihood.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="6"><bold>Dependent variable: Reaction likelihood&#x0002A;</bold></th>
</tr>
<tr>
<th/>
<th valign="top" align="center"><bold>Model A</bold></th>
<th valign="top" align="center"><bold>Model B</bold></th>
<th valign="top" align="center"><bold>Model C</bold></th>
<th valign="top" align="center"><bold>Model D</bold></th>
<th valign="top" align="center"><bold>Model E</bold></th>
<th valign="top" align="center"><bold>Model F</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Perceived veracity: misleading</td>
<td valign="top" align="center">&#x02212;0.42<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;0.43<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;0.40<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;0.40<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;0.37<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;0.43<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
</tr>
<tr>
<td/>
<td valign="top" align="center">(0.08)</td>
<td valign="top" align="center">(0.09)</td>
<td valign="top" align="center">(0.11)</td>
<td valign="top" align="center">(0.11)</td>
<td valign="top" align="center">(0.09)</td>
<td valign="top" align="center">(0.10)</td>
</tr>
<tr>
<td valign="top" align="left">Perceived veracity: fake</td>
<td valign="top" align="center">&#x02212;0.46<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;0.54<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;0.35<xref ref-type="table-fn" rid="TN2"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;0.35<xref ref-type="table-fn" rid="TN2"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;0.27<xref ref-type="table-fn" rid="TN2"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;0.33<xref ref-type="table-fn" rid="TN2"><sup>&#x0002A;&#x0002A;</sup></xref></td>
</tr>
<tr>
<td/>
<td valign="top" align="center">(0.09)</td>
<td valign="top" align="center">(0.12)</td>
<td valign="top" align="center">(0.17)</td>
<td valign="top" align="center">(0.17)</td>
<td valign="top" align="center">(0.14)</td>
<td valign="top" align="center">(0.16)</td>
</tr>
<tr>
<td valign="top" align="left">Misleading news</td>
<td valign="top" align="center">0.15<xref ref-type="table-fn" rid="TN2"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.51<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td/>
<td valign="top" align="center">0.52</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">0.01</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">(0.07)</td>
<td valign="top" align="center">(0.31)</td>
<td/>
<td valign="top" align="center">(0.37)</td>
<td valign="top" align="center">(0.15)</td>
<td valign="top" align="center">(0.17)</td>
</tr>
<tr>
<td valign="top" align="left">Fake news</td>
<td valign="top" align="center">0.72<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.12</td>
<td/>
<td valign="top" align="center">&#x02212;0.32</td>
<td valign="top" align="center">0.31<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.38<xref ref-type="table-fn" rid="TN2"><sup>&#x0002A;&#x0002A;</sup></xref></td>
</tr>
<tr>
<td/>
<td valign="top" align="center">(0.10)</td>
<td valign="top" align="center">(0.31)</td>
<td/>
<td valign="top" align="center">(0.35)</td>
<td valign="top" align="center">(0.16)</td>
<td valign="top" align="center">(0.18)</td>
</tr>
<tr>
<td valign="top" align="left">Perceived veracity&#x0002A;misleading news</td>
<td/>
<td/>
<td valign="top" align="center">&#x02212;0.01</td>
<td valign="top" align="center">&#x02212;0.01</td>
<td valign="top" align="center">0.04</td>
<td valign="top" align="center">0.14</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td valign="top" align="center">(0.13)</td>
<td valign="top" align="center">(0.13)</td>
<td valign="top" align="center">(0.10)</td>
<td valign="top" align="center">(0.11)</td>
</tr>
<tr>
<td valign="top" align="left">Perceived veracity&#x0002A; fake news</td>
<td/>
<td/>
<td valign="top" align="center">0.56<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.56<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.46<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.61<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td valign="top" align="center">(0.16)</td>
<td valign="top" align="center">(0.16)</td>
<td valign="top" align="center">(0.13)</td>
<td valign="top" align="center">(0.16)</td>
</tr>
<tr>
<td valign="top" align="left">Participant fixed effects</td>
<td valign="top" align="center">No</td>
<td valign="top" align="center">Yes</td>
<td valign="top" align="center">Yes</td>
<td valign="top" align="center">Yes</td>
<td valign="top" align="center">No</td>
<td valign="top" align="center">Yes</td>
</tr>
<tr>
<td valign="top" align="left">Tweet fixed effects</td>
<td valign="top" align="center">No</td>
<td valign="top" align="center">Yes</td>
<td valign="top" align="center">Yes</td>
<td valign="top" align="center">Yes</td>
<td valign="top" align="center">No</td>
<td valign="top" align="center">No</td>
</tr>
<tr>
<td valign="top" align="left">Control variables</td>
<td valign="top" align="center">No</td>
<td valign="top" align="center">No</td>
<td valign="top" align="center">No</td>
<td valign="top" align="center">No</td>
<td valign="top" align="center">Yes</td>
<td valign="top" align="center">No</td>
</tr>
<tr>
<td valign="top" align="left">Constant</td>
<td valign="top" align="center">&#x02212;0.30<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.02<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.07<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.07<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;0.39<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.33<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
</tr>
<tr>
<td/>
<td valign="top" align="center">(0.06)</td>
<td valign="top" align="center">(0.66)</td>
<td valign="top" align="center">(0.66)</td>
<td valign="top" align="center">(0.66)</td>
<td valign="top" align="center">(0.07)</td>
<td valign="top" align="center">(0.62)</td>
</tr>
<tr>
<td valign="top" align="left">Observations</td>
<td valign="top" align="center">3,872</td>
<td valign="top" align="center">3,872</td>
<td valign="top" align="center">3,872</td>
<td valign="top" align="center">3,872</td>
<td valign="top" align="center">3,872</td>
<td valign="top" align="center">3,872</td>
</tr>
<tr>
<td valign="top" align="left">Log likelihood</td>
<td valign="top" align="center">&#x02212;2576.12</td>
<td valign="top" align="center">&#x02212;1922.59</td>
<td valign="top" align="center">&#x02212;1915.72</td>
<td valign="top" align="center">&#x02212;1915.72</td>
<td valign="top" align="center">&#x02212;2522.41</td>
<td valign="top" align="center">&#x02212;2045.68</td>
</tr>
<tr>
<td valign="top" align="left">Akaike inf. Crit.</td>
<td valign="top" align="center">5162.24</td>
<td valign="top" align="center">4153.18</td>
<td valign="top" align="center">4143.43</td>
<td valign="top" align="center">4143.43</td>
<td valign="top" align="center">5072.82</td>
<td valign="top" align="center">4345.37</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="TN1"><label>&#x0002A;</label><p><italic>p&#x0003C;0.1</italic>;</p></fn>
<fn id="TN2"><label>&#x0002A;&#x0002A;</label><p><italic>p&#x0003C;0.05</italic>;</p></fn>
<fn id="TN3"><label>&#x0002A;&#x0002A;&#x0002A;</label><p><italic>p&#x0003C;0.01</italic>.</p></fn>
<p><italic><sup>&#x0002A;</sup>A reaction is defined as any combination or use of likes, retweets, and comments</italic>.</p>
<p><italic>This table was created using the stargazer package (Hlavac, <xref ref-type="bibr" rid="B14">2018</xref>)</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>The second hypothesis tested whether fake news is intrinsically more likely to be shared than real news. <xref ref-type="fig" rid="F2">Figure 2</xref> shows graphically that this is indeed the case, with fake news being reacted to more often than other types of news. We note that the reaction likelihood for fake news is higher despite its lower perceived accuracy. Moreover, a one-way ANOVA confirmed this difference [<italic>F</italic><sub>(1, 3,871)</sub> = 17.19, <italic>p</italic> &#x0003C; 0.001]. Model evidence, without fixed effects, is also in line with hypothesis 2, as shown in <xref ref-type="table" rid="T5">Table 5</xref>. However, when including tweet fixed effects, the statistical significance of the effect is reduced. This partial confirmation is in line with the reality of social media platforms, where fake news spreads more widely than its true counterpart (Silverman et al., <xref ref-type="bibr" rid="B36">2016</xref>; Vosoughi et al., <xref ref-type="bibr" rid="B41">2018</xref>). Because the experiment displayed multiple claims of varied political biases within a round (and thus provided equal information), we note that this difference in reaction likelihood holds even outside possible &#x0201C;echo chambers&#x0201D; and &#x0201C;filter bubbles&#x0201D;.</p>
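<p>For reference, the one-way ANOVA compares mean reaction rates across news types by relating between-group to within-group variation. A minimal sketch of the computation follows; the 0/1 reaction vectors are made up for illustration and are not the experiment's data.</p>

```python
import numpy as np

def one_way_anova_F(groups):
    """Return the one-way ANOVA F statistic for a list of 1-D samples:
    F = (SS_between / (k - 1)) / (SS_within / (N - k))."""
    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()
    k, n_total = len(groups), len(all_obs)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Hypothetical 0/1 reaction indicators for fake vs. real tweets.
fake = np.array([1, 1, 0, 1, 1, 0, 1, 1])
real = np.array([0, 1, 0, 0, 1, 0, 0, 1])
F = one_way_anova_F([fake, real])  # large F => group means differ
```

<p>With two groups this F statistic is equivalent to a squared two-sample t statistic; the paper's test simply applies it to the 3,872 observed reactions split by factual veracity.</p>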
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Hypothesis 1&#x02013;2&#x02014;Reaction rate per perceived and factual veracity.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-859534-g0002.tif"/>
</fig>
<p>Through the confirmation of hypotheses 1 and 2, we note a surprising distinction: whilst participants are most likely to react to tweets that they perceive to be true, fake news remains the most likely to gather reactions.</p>
<p>Hypothesis 3 tested for the interaction effect of tweet veracity and perceived veracity on reaction likelihood. <xref ref-type="fig" rid="F3">Figure 3</xref> shows the variables in this hypothesis and their interaction. Across all biases, tweets considered to be &#x0201C;Real News&#x0201D; by the participants were the most likely to be reacted to; for fake tweets, this effect was further magnified. <xref ref-type="table" rid="T5">Table 5</xref> shows the results of the analysis. The interaction term of tweet type (fake) and perceived veracity (true) is statistically significant across all models, confirming that the effect of perceived veracity is magnified for fake news. This result provides a mechanism for the well-known stylized fact that fake news spreads more widely than real news. Specifically, even when controlling for tweet and individual fixed effects, the perceived veracity of specifically fake news is associated with a higher reaction likelihood.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Hypothesis 3&#x02014;Reaction rate per (categorized) perceived and factual veracity.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-859534-g0003.tif"/>
</fig>
</sec>
<sec>
<title>4.3. Discussion</title>
<p>This paper derives three main findings from its analysis; they are synthesized in <xref ref-type="table" rid="T6">Table 6</xref>. We first confirm that (i) higher perceived veracity of a claim leads to a higher reaction likelihood to said claim. We define such behavior as activist behavior, where belief in a claim leads to an increased reaction likelihood. (ii) Fake news is more likely to be reacted to than real news. Lastly, we find (iii) a statistically significant interaction effect of a claim&#x00027;s factual veracity on perceived veracity, i.e., the activist behavior is amplified for claims that are factually false.</p>
<table-wrap position="float" id="T6">
<label>Table 6</label>
<caption><p>Hypothesis summary.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th valign="top" align="left"><bold>Hypothesis</bold></th>
<th valign="top" align="left"><bold>Support</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">H1</td>
<td valign="top" align="left">News perceived as true is most likely to be reacted to</td>
<td valign="top" align="left">Yes</td>
</tr>
<tr>
<td valign="top" align="left">H2</td>
<td valign="top" align="left">Fake news has the highest reaction rate</td>
<td valign="top" align="left">Yes <break/> (Excluding tweet fixed effects)</td>
</tr>
<tr>
<td valign="top" align="left">H3</td>
<td valign="top" align="left">Factual veracity negatively moderates the effect of perceived veracity</td>
<td valign="top" align="left">Yes</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Understanding the reasons that drive social media users to share (fake) news is central to limiting the spread of fake news. The literature suggests that veracity is central to a user&#x00027;s decision whether to share news. Yet this finding is often studied by asking users about their own behavior, not about their perception of a particular news item, i.e., its perceived veracity (Metaxas et al., <xref ref-type="bibr" rid="B22">2014</xref>; Pennycook and Rand, <xref ref-type="bibr" rid="B31">2019</xref>). The experimental setting of this paper allows us to study users&#x00027; perceived veracity of all claims. We confirm the initial finding of the literature.</p>
<p>Social media data suggest that fake news spreads more widely than real news (Silverman et al., <xref ref-type="bibr" rid="B36">2016</xref>; Vosoughi et al., <xref ref-type="bibr" rid="B41">2018</xref>). However, this difference could also be explained, at least in part, by social media network effects (i.e., the latent algorithms used to place posts in a user&#x00027;s feed). We confirm that users react more to fake news even outside the typical social media environment. This entails that the popularity of fake news cannot be solely attributed to the network effects present in social media; rather, it has an inherent individual component.</p>
<p>Hypothesis 3 (activist behavior is amplified for fake news) provides a behavioral explanation for the sharing of fake news. It partially explains why fake news spreads more widely than real news despite an initially smaller user base (Grinberg et al., <xref ref-type="bibr" rid="B10">2019</xref>; Guess et al., <xref ref-type="bibr" rid="B11">2019</xref>). This oversharing of fake news is commonly attributed to online echo chambers, which are known to be present across social media platforms (Quattrociocchi et al., <xref ref-type="bibr" rid="B34">2016</xref>). However, the true magnitude of their effect remains uncertain (Spohr, <xref ref-type="bibr" rid="B37">2017</xref>; Guess et al., <xref ref-type="bibr" rid="B12">2018a</xref>). We suggest that these network effects coexist with behavioral reasons and that they simultaneously contribute to the wider spread of fake news. This characterization lends support to the headlines blaming zealotry (i.e., a stronger version of activism) for the role of social media in spreading fake news (Aaronovitch, <xref ref-type="bibr" rid="B1">2017</xref>; Lohr, <xref ref-type="bibr" rid="B19">2018</xref>).</p>
<p>Besides the analysis presented in this section, Appendix 2 also finds that hypotheses 1 and 3 hold true using the results of Pennycook and Rand&#x00027;s (<xref ref-type="bibr" rid="B31">2019</xref>) experiment. Pennycook and Rand initially derived from their results that belief in fake news was caused by a lack of thinking rather than by hyper-partisanship.</p>
<p>Notwithstanding the contributions of this paper to the literature, some limitations should be highlighted. First, this is an experiment rather than a natural study. It is therefore tied to the early infodemic and pandemic context of summer 2020, in which the experiment took place. The information overload present at the time could potentially have affected participants&#x00027; opinions on some of the tweets in our study (Papapicco, <xref ref-type="bibr" rid="B26">2020</xref>).</p>
<p>Second, the experiment features a UK-based sample whilst the topics cover American politics. Though participants may not have been as informed on the topic as an American public would have been (we control for familiarity with US politics), this also allowed them to hold a more detached opinion with less extreme emotions. To the extent that strong emotions drive individual reaction decisions, our results could be seen as a conservative benchmark for a US sample.</p>
<p>Third, as noted in the experimental design, the experiment displayed tweets in rounds of four centered on the same topic. Hence, the tweets seen by participants within a round were diverse in both veracity and political bias. This can affect our results in two ways. On the one hand, this might not be reflective of online echo chambers, where participants would supposedly be shown tweets that fit their profile specifications. On the other hand, the participants&#x00027; perception could be affected by the display of diversified tweets. This could be seen as a form of inoculation against fake news (van Der Linden et al., <xref ref-type="bibr" rid="B39">2020</xref>), leading to a reduced impact of the false information used in the experiment.</p>
<p>Furthermore, participants remained uninformed of their monetary gains and performance throughout the veracity phase. Future studies could look at the effect of informing participants after each trial.</p>
</sec>
</sec>
<sec sec-type="conclusions" id="s6">
<title>5. Conclusion</title>
<p>In this era of growing misinformation, it is crucial that we understand why social media users share fake news. This paper seeks to identify the mechanisms through which fake news spreads more than real news. We analyze how veracity (both factual and perceived) influences reaction likelihood, testing three hypotheses on reactions to political fake news. Firstly, we show that perceived veracity significantly influences reaction likelihood, with higher perceived veracity leading to higher reaction rates. This supports the claim that self-assessed accuracy is the most important reason behind users&#x00027; sharing decisions (Pennycook et al., <xref ref-type="bibr" rid="B29">2021</xref>). Secondly, we demonstrate that fake news is intrinsically more likely to be shared (Silverman et al., <xref ref-type="bibr" rid="B36">2016</xref>; Vosoughi et al., <xref ref-type="bibr" rid="B41">2018</xref>). Lastly, we find that the effect of perceived veracity is amplified for fake claims.</p>
<p>The present results explain why fake news is more likely to be reacted to, even though users place great importance on the veracity of claims when deciding to react. The explanation is that fake news perceived as true is spread more often than real news perceived as true, pointing to an activist-type behavior in the case of fake news. This work has implications for the fight against fake news. In line with Facebook&#x00027;s current strategy (Lyons, <xref ref-type="bibr" rid="B21">2017</xref>), it suggests that strategies that focus on debunking fake news, instead of hiding it, might prove more effective in limiting its spread.</p>
</sec>
<sec sec-type="data-availability" id="s7">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="s8">
<title>Ethics statement</title>
<p>Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="s9">
<title>Author contributions</title>
<p>Ft&#x00027;S and GP contributed to the research framework, statistical analysis, and manuscript revisions. All authors contributed to the conception and design of the study.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ack><p>We acknowledge funding for the online experiment from Maastricht Working on Europe (<ext-link ext-link-type="uri" xlink:href="https://studioeuropamaastricht.nl/">https://studioeuropamaastricht.nl/</ext-link>). We thank the editors and the reviewers for their insightful and useful contributions to this article.</p>
</ack>
<sec sec-type="supplementary-material" id="s11">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fpsyg.2022.859534/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fpsyg.2022.859534/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.pdf" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Aaronovitch</surname> <given-names>D.</given-names></name></person-group> (<year>2017</year>). <source>Social Media Zealots Are Waging War on Truth</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Times</publisher-name>.</citation>
</ref>
<ref id="B2">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Acemoglu</surname> <given-names>D.</given-names></name> <name><surname>Ozdaglar</surname> <given-names>A.</given-names></name> <name><surname>Siderius</surname> <given-names>J.</given-names></name></person-group> (<year>2021</year>). <source>Misinformation: Strategic Sharing, Homophily, and Endogenous Echo Chambers</source>. Technical report, National Bureau of Economic Research. <pub-id pub-id-type="doi">10.2139/ssrn.3861413</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ahmed</surname> <given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>Fooled by the fakes: cognitive differences in perceived claim accuracy and sharing intention of non-political deepfakes</article-title>. <source>Pers. Individ. Diff</source>. 182, 111074. <pub-id pub-id-type="doi">10.1016/j.paid.2021.111074</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Albright</surname> <given-names>J.</given-names></name></person-group> (<year>2017</year>). <article-title>Welcome to the era of fake news</article-title>. <source>Media Commun</source>. <volume>5</volume>, <fpage>87</fpage>&#x02013;<lpage>89</lpage>. <pub-id pub-id-type="doi">10.17645/mac.v5i2.977</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Allcott</surname> <given-names>H.</given-names></name> <name><surname>Gentzkow</surname> <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>Social media and fake news in the 2016 election</article-title>. <source>J. Econ. Perspect</source>. <volume>31</volume>, <fpage>211</fpage>&#x02013;<lpage>236</lpage>. <pub-id pub-id-type="doi">10.3386/w23089</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Allen</surname> <given-names>J.</given-names></name> <name><surname>Howland</surname> <given-names>B.</given-names></name> <name><surname>Mobius</surname> <given-names>M.</given-names></name> <name><surname>Rothschild</surname> <given-names>D.</given-names></name> <name><surname>Watts</surname> <given-names>D. J.</given-names></name></person-group> (<year>2020</year>). <article-title>Evaluating the fake news problem at the scale of the information ecosystem</article-title>. <source>Sci. Adv</source>. 6, eaay3539. <pub-id pub-id-type="doi">10.1126/sciadv.aay3539</pub-id><pub-id pub-id-type="pmid">32284969</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Altay</surname> <given-names>S.</given-names></name> <name><surname>Hacquin</surname> <given-names>A. -S.</given-names></name> <name><surname>Mercier</surname> <given-names>H.</given-names></name></person-group> (<year>2022</year>). <article-title>Why do so few people share fake news? It hurts their reputation</article-title>. <source>New Med. Soc</source>. <volume>24</volume>, <fpage>1303</fpage>&#x02013;<lpage>1324</lpage>. <pub-id pub-id-type="doi">10.1177/1461444820969893</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bond</surname> <given-names>C. F.</given-names> <suffix>Jr.</suffix></name> <name><surname>DePaulo</surname> <given-names>B. M.</given-names></name></person-group> (<year>2006</year>). <article-title>Accuracy of deception judgments</article-title>. <source>Pers. Soc. Psychol. Rev</source>. <volume>10</volume>, <fpage>214</fpage>&#x02013;<lpage>234</lpage>. <pub-id pub-id-type="doi">10.1207/s15327957pspr1003_2</pub-id><pub-id pub-id-type="pmid">16859438</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Burkhardt</surname> <given-names>J. M.</given-names></name></person-group> (<year>2017</year>). <article-title>History of fake news</article-title>. <source>Library Technol. Rep</source>. <volume>53</volume>, <fpage>5</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.5860/ltr.53n8</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grinberg</surname> <given-names>N.</given-names></name> <name><surname>Joseph</surname> <given-names>K.</given-names></name> <name><surname>Friedland</surname> <given-names>L.</given-names></name> <name><surname>Swire-Thompson</surname> <given-names>B.</given-names></name> <name><surname>Lazer</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>Fake news on twitter during the 2016 US presidential election</article-title>. <source>Science</source> <volume>363</volume>, <fpage>374</fpage>&#x02013;<lpage>378</lpage>. <pub-id pub-id-type="doi">10.1126/science.aau2706</pub-id><pub-id pub-id-type="pmid">30602729</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Guess</surname> <given-names>A.</given-names></name> <name><surname>Nagler</surname> <given-names>J.</given-names></name> <name><surname>Tucker</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>Less than you think: prevalence and predictors of fake news dissemination on facebook</article-title>. <source>Sci. Adv</source>. 5, eaau4586. <pub-id pub-id-type="doi">10.1126/sciadv.aau4586</pub-id><pub-id pub-id-type="pmid">30662946</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guess</surname> <given-names>A.</given-names></name> <name><surname>Nyhan</surname> <given-names>B.</given-names></name> <name><surname>Lyons</surname> <given-names>B.</given-names></name> <name><surname>Reifler</surname> <given-names>J.</given-names></name></person-group> (<year>2018a</year>). <article-title>Avoiding the echo chamber about echo chambers</article-title>. <source>Knight Found</source>. <volume>2</volume>, <fpage>1</fpage>&#x02013;<lpage>25</lpage>.</citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guess</surname> <given-names>A.</given-names></name> <name><surname>Nyhan</surname> <given-names>B.</given-names></name> <name><surname>Reifler</surname> <given-names>J.</given-names></name></person-group> (<year>2018b</year>). <article-title>Selective exposure to misinformation: evidence from the consumption of fake news during the 2016 US presidential campaign</article-title>. <source>Eur. Res. Council</source> <volume>9</volume>, <fpage>4</fpage>.</citation>
</ref>
<ref id="B14">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Hlavac</surname> <given-names>M.</given-names></name></person-group> (<year>2018</year>). <source>stargazer: Well-Formatted Regression and Summary Statistics Tables. Central European Labour Studies Institute (CELSI). Bratislava: R package</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://CRAN.R-project.org/package=stargazer">https://CRAN.R-project.org/package=stargazer</ext-link>.</citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krupnikov</surname> <given-names>Y.</given-names></name> <name><surname>Levine</surname> <given-names>A. S.</given-names></name></person-group> (<year>2014</year>). <article-title>Cross-sample comparisons and external validity</article-title>. <source>J. Exp. Polit. Sci</source>. <volume>1</volume>, <fpage>59</fpage>&#x02013;<lpage>80</lpage>. <pub-id pub-id-type="doi">10.1017/xps.2014.7</pub-id><pub-id pub-id-type="pmid">27921348</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lazer</surname> <given-names>D. M.</given-names></name> <name><surname>Baum</surname> <given-names>M. A.</given-names></name> <name><surname>Benkler</surname> <given-names>Y.</given-names></name> <name><surname>Berinsky</surname> <given-names>A. J.</given-names></name> <name><surname>Greenhill</surname> <given-names>K. M.</given-names></name> <name><surname>Menczer</surname> <given-names>F.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>The science of fake news</article-title>. <source>Science</source> <volume>359</volume>, <fpage>1094</fpage>&#x02013;<lpage>1096</lpage>. <pub-id pub-id-type="doi">10.1126/science.aao2998</pub-id><pub-id pub-id-type="pmid">29590025</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Lederer</surname> <given-names>E.</given-names></name></person-group> (<year>2020</year>). <source>UN Chief Says Misinformation About COVID-19 Is New Enemy</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>ABC News</publisher-name>.</citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>C. L.</given-names></name> <name><surname>Wong</surname> <given-names>J.-D. J.</given-names></name> <name><surname>Lim</surname> <given-names>Z. Y.</given-names></name> <name><surname>Tho</surname> <given-names>B. S.</given-names></name> <name><surname>Kwek</surname> <given-names>S. S.</given-names></name> <name><surname>Shim</surname> <given-names>K. J.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;How does fake news spread: raising awareness &#x00026; educating the public with a simulation tool,&#x0201D;</article-title> in <source>2019 IEEE International Conference on Big Data (Big Data)</source> (<publisher-loc>Los Angeles, CA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>6119</fpage>&#x02013;<lpage>6121</lpage>. <pub-id pub-id-type="doi">10.1109/BigData47090.2019.9005953</pub-id><pub-id pub-id-type="pmid">33720841</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Lohr</surname> <given-names>S.</given-names></name></person-group> (<year>2018</year>). <source>It&#x00027;s True: False News Spreads Faster and wider. And humans are to Blame</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>New York Times</publisher-name>.</citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>M.</given-names></name> <name><surname>Hancock</surname> <given-names>J. T.</given-names></name> <name><surname>Markowitz</surname> <given-names>D. M.</given-names></name></person-group> (<year>2022</year>). <article-title>Credibility perceptions and detection accuracy of fake news headlines on social media: effects of truth-bias and endorsement cues</article-title>. <source>Commun. Res</source>. <volume>49</volume>, <fpage>171</fpage>&#x02013;<lpage>295</lpage>. <pub-id pub-id-type="doi">10.1177/0093650220921321</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Lyons</surname> <given-names>T.</given-names></name></person-group> (<year>2017</year>). <source>Replacing Disputed Flags With Related Articles</source>. <publisher-loc>Menlo Park, CA</publisher-loc>: <publisher-name>Meta</publisher-name>.</citation>
</ref>
<ref id="B22">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Metaxas</surname> <given-names>P. T.</given-names></name> <name><surname>Mustafaraj</surname> <given-names>E.</given-names></name> <name><surname>Wong</surname> <given-names>K.</given-names></name> <name><surname>Zeng</surname> <given-names>L.</given-names></name> <name><surname>O&#x00027;Keefe</surname> <given-names>M.</given-names></name> <name><surname>Finn</surname> <given-names>S.</given-names></name></person-group> (<year>2014</year>). <article-title>Do retweets indicate interest, trust, agreement?</article-title> <italic>arXiv preprint arXiv:1411.3555</italic>.</citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mour&#x000E3;o</surname> <given-names>R. R.</given-names></name> <name><surname>Robertson</surname> <given-names>C. T.</given-names></name></person-group> (<year>2019</year>). <article-title>Fake news as discursive integration: an analysis of sites that publish false, misleading, hyperpartisan and sensational information</article-title>. <source>J. Stud</source>. <volume>20</volume>, <fpage>2077</fpage>&#x02013;<lpage>2095</lpage>. <pub-id pub-id-type="doi">10.1080/1461670X.2019.1566871</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mullinix</surname> <given-names>K. J.</given-names></name> <name><surname>Leeper</surname> <given-names>T. J.</given-names></name> <name><surname>Druckman</surname> <given-names>J. N.</given-names></name> <name><surname>Freese</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). <article-title>The generalizability of survey experiments</article-title>. <source>J. Exp. Polit. Sci</source>. <volume>2</volume>, <fpage>109</fpage>&#x02013;<lpage>138</lpage>. <pub-id pub-id-type="doi">10.1017/XPS.2015.19</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Osmundsen</surname> <given-names>M.</given-names></name> <name><surname>Bor</surname> <given-names>A.</given-names></name> <name><surname>Vahlstrup</surname> <given-names>P. B.</given-names></name> <name><surname>Bechmann</surname> <given-names>A.</given-names></name> <name><surname>Petersen</surname> <given-names>M. B.</given-names></name></person-group> (<year>2021</year>). <article-title>Partisan polarization is the primary psychological motivation behind political fake news sharing on Twitter</article-title>. <source>Am. Polit. Sci. Rev</source>. <volume>115</volume>, <fpage>999</fpage>&#x02013;<lpage>1015</lpage>. <pub-id pub-id-type="doi">10.1017/S0003055421000290</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Papapicco</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>Informative contagion: the coronavirus (COVID-19) in Italian journalism</article-title>. <source>Online J. Commun. Media Technol</source>. <volume>10</volume>, <fpage>e202014</fpage>. <pub-id pub-id-type="doi">10.29333/ojcmt/7938</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Park</surname> <given-names>S.</given-names></name> <name><surname>Park</surname> <given-names>J. Y.</given-names></name> <name><surname>Chin</surname> <given-names>H.</given-names></name> <name><surname>Kang</surname> <given-names>J.-H.</given-names></name> <name><surname>Cha</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;An experimental study to understand user experience and perception bias occurred by fact-checking messages,&#x0201D;</article-title> in <source>Proceedings of the Web Conference 2021</source>, <publisher-loc>Ljubljana</publisher-loc>, <fpage>2769</fpage>&#x02013;<lpage>2780</lpage>. <pub-id pub-id-type="doi">10.1145/3442381.3450121</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pennycook</surname> <given-names>G.</given-names></name> <name><surname>Bear</surname> <given-names>A.</given-names></name> <name><surname>Collins</surname> <given-names>E. T.</given-names></name> <name><surname>Rand</surname> <given-names>D. G.</given-names></name></person-group> (<year>2017</year>). <article-title>The implied truth effect: attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings</article-title>. <source>Manage. Sci</source>. <volume>66</volume>, <fpage>4944</fpage>&#x02013;<lpage>4957</lpage>. <pub-id pub-id-type="doi">10.1287/mnsc.2019.3478</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pennycook</surname> <given-names>G.</given-names></name> <name><surname>Epstein</surname> <given-names>Z.</given-names></name> <name><surname>Mosleh</surname> <given-names>M.</given-names></name> <name><surname>Arechar</surname> <given-names>A. A.</given-names></name> <name><surname>Eckles</surname> <given-names>D.</given-names></name> <name><surname>Rand</surname> <given-names>D. G.</given-names></name></person-group> (<year>2021</year>). <article-title>Shifting attention to accuracy can reduce misinformation online</article-title>. <source>Nature</source> <volume>592</volume>, <fpage>590</fpage>&#x02013;<lpage>595</lpage>. <pub-id pub-id-type="doi">10.1038/s41586-021-03344-2</pub-id><pub-id pub-id-type="pmid">33731933</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pennycook</surname> <given-names>G.</given-names></name> <name><surname>McPhetres</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Lu</surname> <given-names>J. G.</given-names></name> <name><surname>Rand</surname> <given-names>D. G.</given-names></name></person-group> (<year>2020</year>). <article-title>Fighting covid-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention</article-title>. <source>Psychol. Sci</source>. <volume>31</volume>, <fpage>770</fpage>&#x02013;<lpage>780</lpage>. <pub-id pub-id-type="doi">10.1177/0956797620939054</pub-id><pub-id pub-id-type="pmid">32603243</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pennycook</surname> <given-names>G.</given-names></name> <name><surname>Rand</surname> <given-names>D. G.</given-names></name></person-group> (<year>2019</year>). <article-title>Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning</article-title>. <source>Cognition</source> <volume>188</volume>, <fpage>39</fpage>&#x02013;<lpage>50</lpage>. <pub-id pub-id-type="doi">10.1016/j.cognition.2018.06.011</pub-id><pub-id pub-id-type="pmid">29935897</pub-id></citation></ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pennycook</surname> <given-names>G.</given-names></name> <name><surname>Rand</surname> <given-names>D. G.</given-names></name></person-group> (<year>2021a</year>). <article-title>Examining false beliefs about voter fraud in the wake of the 2020 presidential election</article-title>. <source>Harvard Kennedy School Misinformation Rev</source>. <pub-id pub-id-type="doi">10.37016/mr-2020-51</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pennycook</surname> <given-names>G.</given-names></name> <name><surname>Rand</surname> <given-names>D. G.</given-names></name></person-group> (<year>2021b</year>). <article-title>The psychology of fake news</article-title>. <source>Trends Cogn. Sci</source>. <volume>25</volume>, <fpage>388</fpage>&#x02013;<lpage>402</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2021.02.007</pub-id><pub-id pub-id-type="pmid">33736957</pub-id></citation></ref>
<ref id="B34">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Quattrociocchi</surname> <given-names>W.</given-names></name> <name><surname>Scala</surname> <given-names>A.</given-names></name> <name><surname>Sunstein</surname> <given-names>C. R.</given-names></name></person-group> (<year>2016</year>). <source>Echo Chambers on Facebook</source>. <pub-id pub-id-type="doi">10.2139/ssrn.2795110</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rose</surname> <given-names>J.</given-names></name></person-group> (<year>2017</year>). <article-title>Brexit, trump, and post-truth politics</article-title>. <source>Public Integr</source>. <volume>19</volume>, <fpage>555</fpage>&#x02013;<lpage>558</lpage>. <pub-id pub-id-type="doi">10.1080/10999922.2017.1285540</pub-id><pub-id pub-id-type="pmid">28812811</pub-id></citation></ref>
<ref id="B36">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Silverman</surname> <given-names>C.</given-names></name> <name><surname>Strapagiel</surname> <given-names>L.</given-names></name> <name><surname>Shaban</surname> <given-names>H.</given-names></name> <name><surname>Hall</surname> <given-names>E.</given-names></name> <name><surname>Singer-Vine</surname> <given-names>J.</given-names></name></person-group> (<year>2016</year>). <source>Hyperpartisan Facebook Pages Are Publishing False and Misleading Information at an Alarming Rate</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>BuzzFeed</publisher-name>.</citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Spohr</surname> <given-names>D.</given-names></name></person-group> (<year>2017</year>). <article-title>Fake news and ideological polarization: filter bubbles and selective exposure on social media</article-title>. <source>Bus. Inform. Rev</source>. <volume>34</volume>, <fpage>150</fpage>&#x02013;<lpage>160</lpage>. <pub-id pub-id-type="doi">10.1177/0266382117722446</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tandoc</surname> <given-names>E. C. Jr.</given-names></name></person-group> (<year>2019</year>). <article-title>The facts of fake news: a research review</article-title>. <source>Sociol. Compass</source> <volume>13</volume>, <fpage>e12724</fpage>. <pub-id pub-id-type="doi">10.1111/soc4.12724</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>van der Linden</surname> <given-names>S.</given-names></name> <name><surname>Roozenbeek</surname> <given-names>J.</given-names></name> <name><surname>Compton</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>Inoculating against fake news about COVID-19</article-title>. <source>Front. Psychol</source>. <volume>11</volume>, <fpage>2928</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2020.566790</pub-id><pub-id pub-id-type="pmid">33192844</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vicario</surname> <given-names>M. D.</given-names></name> <name><surname>Quattrociocchi</surname> <given-names>W.</given-names></name> <name><surname>Scala</surname> <given-names>A.</given-names></name> <name><surname>Zollo</surname> <given-names>F.</given-names></name></person-group> (<year>2019</year>). <article-title>Polarization and fake news: early warning of potential misinformation targets</article-title>. <source>ACM Trans. Web</source> <volume>13</volume>, <fpage>1</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1145/3316809</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vosoughi</surname> <given-names>S.</given-names></name> <name><surname>Roy</surname> <given-names>D.</given-names></name> <name><surname>Aral</surname> <given-names>S.</given-names></name></person-group> (<year>2018</year>). <article-title>The spread of true and false news online</article-title>. <source>Science</source> <volume>359</volume>, <fpage>1146</fpage>&#x02013;<lpage>1151</lpage>. <pub-id pub-id-type="doi">10.1126/science.aap9559</pub-id><pub-id pub-id-type="pmid">29590045</pub-id></citation></ref>
<ref id="B42">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wardle</surname> <given-names>C.</given-names></name></person-group> (<year>2018</year>). <source>Information Disorder: The Essential Glossary</source>. <publisher-loc>Harvard, MA</publisher-loc>: <publisher-name>Shorenstein Center on Media, Politics, and Public Policy, Harvard Kennedy School</publisher-name>.</citation>
</ref>
</ref-list>
<fn-group>
<fn id="fn0001"><p><sup>1</sup>We define political tweets as any tweet that mentions a political entity or an ongoing news story related to it.</p></fn>
<fn id="fn0002"><p><sup>2</sup>This was part of a larger experiment involving a total of 301 participants, of whom only 150 encountered the setting described here.</p></fn>
<fn id="fn0003"><p><sup>3</sup>Political leaning was assessed on a 0 to 10 scale, with 5 representing the center and 0 and 10 representing extreme left and right bias, respectively. All experiment-related materials can be found on the OSF page.</p></fn>
</fn-group>
</back>
</article>