<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2024.1478176</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Readable and neutral? Reliability of crowdsourced misinformation debunking through linguistic and psycholinguistic cues</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Yao</surname> <given-names>Mengni</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2230012/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Tian</surname> <given-names>Sha</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1875981/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Zhong</surname> <given-names>Wenming</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2231311/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>College of Foreign Languages, Nankai University</institution>, <addr-line>Tianjin</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>School of Foreign Languages, Central South University</institution>, <addr-line>Changsha, Hunan</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Andrew K. F. Cheung, Hong Kong Polytechnic University, Hong Kong SAR, China</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Feiwen Xiao, The Pennsylvania State University (PSU), United States</p>
<p>Huaqing Hong, Nanyang Technological University, Singapore</p>
<p>Jie Shi, The University of Electro-Communications, Japan</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Wenming Zhong <email>zhongwenming&#x00040;csu.edu.cn</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>13</day>
<month>11</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>15</volume>
<elocation-id>1478176</elocation-id>
<history>
<date date-type="received">
<day>09</day>
<month>08</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>21</day>
<month>10</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2024 Yao, Tian and Zhong.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Yao, Tian and Zhong</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<sec>
<title>Background</title>
<p>In the face of the proliferation of misinformation during the COVID-19 pandemic, crowdsourced debunking has surfaced as a counter-infodemic measure to complement efforts from professionals and regular individuals. In 2021, X (formerly Twitter) initiated its community-driven fact-checking program, named Community Notes (formerly Birdwatch). This program allows users to create contextual and corrective notes for misleading posts and rate the helpfulness of others&#x00027; contributions. The effectiveness of the platform has been preliminarily verified, but mixed findings on reliability indicate the need for further research.</p></sec>
<sec>
<title>Objective</title>
<p>The study aims to assess the reliability of Community Notes by comparing the readability and language neutrality of helpful and unhelpful notes.</p></sec>
<sec>
<title>Methods</title>
<p>A total of 7,705 helpful notes and 2,091 unhelpful notes spanning from January 20, 2021, to May 30, 2023, were collected. Measures of reading ease, analytical thinking, affect and authenticity were derived by means of Wordless and Linguistic Inquiry and Word Count (LIWC). Subsequently, the non-parametric Mann&#x02013;Whitney <italic>U</italic>-test was employed to evaluate the differences between the helpful and unhelpful groups.</p></sec>
<sec>
<title>Results</title>
<p>Both groups of notes are easy to read, with no notable difference. Helpful notes show significantly greater logical thinking, authenticity, and emotional restraint than unhelpful ones. As such, the reliability of Community Notes is validated in terms of readability and neutrality. Nevertheless, the prevalence of prepared, negative and swear language in unhelpful notes indicates manipulative and abusive attempts on the platform. The wide value range in the unhelpful group and the overall limited consensus on note helpfulness also suggest a complex information ecology within the crowdsourced platform, highlighting the necessity of further guidance and management.</p></sec>
<sec>
<title>Conclusion</title>
<p>Based on the statistical analysis of the linguistic and psycholinguistic characteristics, the study validated the reliability of Community Notes and identified room for improvement. Future endeavors could explore the psychological motivations underlying volunteering, gaming, or even manipulative behaviors, enhance the crowdsourced debunking system and integrate it with broader efforts in infodemic management.</p></sec></abstract>
<kwd-group>
<kwd>misinformation debunking</kwd>
<kwd>crowdsourcing</kwd>
<kwd>Community Notes</kwd>
<kwd>LIWC</kwd>
<kwd>linguistic features</kwd>
</kwd-group>
<contract-sponsor id="cn001">National Social Science Fund of China<named-content content-type="fundref-id">10.13039/501100012456</named-content></contract-sponsor>
<counts>
<fig-count count="3"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="82"/>
<page-count count="11"/>
<word-count count="9430"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Psychology of Language</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>Misinformation pervades a multitude of topical domains, spanning from health discourses to political narratives, and rapidly disseminates through diverse media channels (Southwell et al., <xref ref-type="bibr" rid="B62">2018</xref>). Individuals, due to psychological and sociological predispositions, are susceptible to misleading information (Ecker et al., <xref ref-type="bibr" rid="B19">2022</xref>) and easily affected by inflammatory and sensational language (Rashkin et al., <xref ref-type="bibr" rid="B57">2017</xref>). During the COVID-19 pandemic, the rampant dissemination of heterogeneous and unverified information impeded interpersonal and intercultural communication, further exacerbating societal divisions (Chong et al., <xref ref-type="bibr" rid="B13">2022</xref>; Liu and Cheung, <xref ref-type="bibr" rid="B39">2023</xref>). With the recent advance of generative artificial intelligence (AI), large language models enable the rapid and extensive generation of human-like and personalized misinformation, exacerbating the complexity of the issue (Kreps et al., <xref ref-type="bibr" rid="B35">2022</xref>). Given this, misinformation debunking, the pillar of infodemic management (Eysenbach, <xref ref-type="bibr" rid="B20">2020</xref>), has emerged as a critical focus in academic circles.</p>
<p>Three prevalent fact-checking practices can be identified from the perspective of implementation timing. Firstly, proactive measures such as early warnings and educational interventions (Guess et al., <xref ref-type="bibr" rid="B25">2020</xref>), rooted in the psychological theory of inoculation, can preemptively cultivate and fortify users&#x00027; resilience to misinformation (Jiang et al., <xref ref-type="bibr" rid="B32">2022</xref>; Lewandowsky and Van Der Linden, <xref ref-type="bibr" rid="B36">2021</xref>). Nevertheless, prebunking, as a tricky and long-term task, has been shown to be less efficacious than reactive debunking (Tay et al., <xref ref-type="bibr" rid="B67">2022</xref>). The second approach involves training a classification model on the distinct characteristics of fake information and subsequently applying it to real scenarios, so as to monitor, label, down-rank or even remove false claims and suppress their proliferation where circumstances permit. Such classification methods are hindered by the scarcity of fine-grained, pre-annotated and up-to-date training data (Carrasco-Farr&#x000E9;, <xref ref-type="bibr" rid="B9">2022</xref>) and experience a performance drop when identifying human-generated misinformation compared to AI-generated misinformation (Zhou et al., <xref ref-type="bibr" rid="B82">2023</xref>), indicating room for improvement (A&#x000EF;meur et al., <xref ref-type="bibr" rid="B2">2023</xref>). The last line of work addresses misinformation after its emergence, utilizing professional, layperson-based and crowdsourced efforts. News personnel and domain experts can provide informative and authoritative content. However, there are inherent limitations to professional fact-checking, particularly regarding coverage and speed (Martel et al., <xref ref-type="bibr" rid="B41">2024</xref>).
In contrast, the feasibility of layperson-based debunking has been preliminarily validated (Bhuiyan et al., <xref ref-type="bibr" rid="B6">2020</xref>; Pennycook and Rand, <xref ref-type="bibr" rid="B50">2019</xref>; Wineburg and McGrew, <xref ref-type="bibr" rid="B75">2019</xref>), implying the promise of organized public engagement as a supplementary strategy.</p>
<p>Community Notes represents X&#x00027;s innovative fact-checking initiative, designed to swiftly and properly combat misinformation through extensive public participation. At the beginning of 2021, X introduced Community Notes, previously branded as Birdwatch. The platform was initially accessible solely to pilot users within the U.S., and gradually expanded its reach to moderators from other regions after December 2022. Within this community-driven framework, a set of rules has been designed to build a well-structured and healthy information ecosystem, ensuring that informative notes contributed by users can be attached to suspicious tweets. Despite Community Notes&#x00027; efforts in platform governance and the impressive claims for its transparency, challenges and risks such as data poisoning, algorithmic exploitation, and coordinated malicious attacks persist (Benjamin, <xref ref-type="bibr" rid="B5">2021</xref>). It is necessary to assess the reliability of Community Notes and explore ways to enhance crowdsourced debunking.</p>
<p>The paper aims to evaluate the reliability of misinformation debunking on Community Notes in terms of readability and neutrality. &#x0201C;Easy to understand&#x0201D; and &#x0201C;neutral language&#x0201D; are outlined as helpful attributes according to the official guideline,<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref> and also recognized as effective language patterns in similar contexts. Considering that language use has been demonstrated to be informative by linguistics research, particularly in psychology (Pennebaker and King, <xref ref-type="bibr" rid="B49">1999</xref>), these two variables are adopted to examine platform priorities in regulating online content and scrutinize whether helpful voices are amplified or marginalized. The study poses two research questions (RQs):</p>
<p>RQ1: How reliable is Community Notes in terms of readability?</p>
<p>RQ2: How reliable is Community Notes in terms of neutrality?</p>
<p>To address the research questions, the study collected helpful and unhelpful notes from the open Community Notes dataset, and analyzed the linguistic and psycholinguistic characteristics of the two groups using Wordless and Linguistic Inquiry and Word Count (LIWC). The helpful and unhelpful groups display equal levels of readability; moreover, the former demonstrates significantly greater logical thinking, authenticity, and emotional restraint. These findings validate the reliability of Community Notes. However, the unhelpful group shows a notable presence of prepared, negative, and swear language, along with a wide range of values in the measures. Additionally, the overall consensus on note helpfulness is limited. These point to areas for improvement in the crowdsourcing management system. The study contributes to the understanding of the reliability and potential challenges of crowdsourced debunking and provides insights into platform management and its integration into broader efforts.</p></sec>
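The group comparison at the core of the study, a non-parametric Mann&#x02013;Whitney <italic>U</italic>-test over linguistic scores, can be sketched in pure Python. The score lists below are invented for illustration (they are not the paper's data), and a real analysis would use an established implementation such as SciPy's <monospace>scipy.stats.mannwhitneyu</monospace>; this sketch uses the normal approximation without a tie correction.

```python
import math

def rankdata(values):
    """Ranks starting at 1, with ties assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U-test via the normal approximation
    (no tie correction; a sketch, not suited to small samples)."""
    n1, n2 = len(x), len(y)
    ranks = rankdata(list(x) + list(y))
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)
    mean_u = n1 * n2 / 2
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mean_u) / sd_u
    p = 1 + math.erf(-abs(z) / math.sqrt(2))  # 2 * Phi(-|z|)
    return u, p

# Hypothetical "analytical thinking" scores for helpful vs. unhelpful notes
helpful = [78, 85, 90, 72, 88, 81, 79, 86]
unhelpful = [55, 60, 48, 65, 52, 58, 61, 50]
u_stat, p_value = mann_whitney_u(helpful, unhelpful)
```

Because the test compares rank distributions rather than means, it makes no normality assumption, which suits the skewed, bounded scores that readability and LIWC measures typically produce.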
<sec id="s2">
<title>2 Literature review</title>
<sec>
<title>2.1 Crowdsourced misinformation debunking</title>
<p>Professional and non-professional debunkers have employed various methods to dismantle and mitigate the impact of fake information, achieving certain degrees of success. Debunking refers to the provision of corrective information to establish that a previous message is incorrect or misleading. This is a complex process in which different cognitive frames compete and collide with each other. Debunking practices from professionals, such as those in the governmental sector, public health, journalism and specialized fact-checking organizations, have long been an integral part of infodemic management on social media. Authorities and experts are considered effective in enhancing the public&#x00027;s awareness of crisis severity (Van der Meer and Jin, <xref ref-type="bibr" rid="B69">2020</xref>) and maintaining the overall stability of public opinion (Zhong and Yao, <xref ref-type="bibr" rid="B81">2023</xref>). However, studies have indicated that official sources are also criticized for being slow, obsolescent and invisible, thereby leading to limited and delayed dissemination and even fostering mistrust (Micallef et al., <xref ref-type="bibr" rid="B42">2020</xref>). In view of this, recent studies on online misinformation have highlighted the potential for regular people to leverage their advantages in countering misleading information. The capability of non-experts to discern between highly credible and less credible news sources (Bhuiyan et al., <xref ref-type="bibr" rid="B6">2020</xref>; Pennycook and Rand, <xref ref-type="bibr" rid="B50">2019</xref>), as well as the verification procedures employed by individuals with different educational backgrounds and identities (Wineburg and McGrew, <xref ref-type="bibr" rid="B75">2019</xref>), have been validated as prerequisites. Additionally, individuals are willing to share information which they have personally searched for and verified (Li and Xiao, <xref ref-type="bibr" rid="B37">2022</xref>).</p>
<p>Crowdsourced debunking, a new form of non-professional debunking, has emerged and is expected to play a complementary role with faster speed, greater volume and more systematic management. Crowdsourcing allows individuals or organizations to outsource tasks to a specific population of actors, akin to the operational mode of Wikipedia and Stack Overflow. Its advantages stem from low cost, high efficiency, anonymity, and a strong user-platform connection. In particular, the accumulation of user knowledge could potentially come closer to the truth than individual efforts and even those of experts (Bhuiyan et al., <xref ref-type="bibr" rid="B6">2020</xref>; Woolley et al., <xref ref-type="bibr" rid="B77">2010</xref>). There are two types of crowdsourced fact-checking: one involves recruiting ordinary individuals from crowdsourcing marketplaces such as Amazon Mechanical Turk to evaluate and annotate the accuracy of content, commonly used as an experimental method in academic research (Saeed et al., <xref ref-type="bibr" rid="B59">2022</xref>); the other motivates the public to collaboratively and voluntarily generate novel knowledge and insights in the form of fact-checks, which is the focus of this paper.</p>
<p>There have been some attempts in this regard. In the experimental context, Pinto et al. (<xref ref-type="bibr" rid="B55">2019</xref>) proposed a fact-checking workflow, which can be sustained and overseen by the crowd itself, and advocated for the utilization of a diverse workforce and resources to increase the volume and reach of refutation efforts. In practical settings, Cofacts, a community-driven fact-checking platform in Taiwan, China, has attracted researchers&#x00027; attention. Zhao and Naaman (<xref ref-type="bibr" rid="B79">2023a</xref>) found that it performed on par with two professional fact-checking sites in terms of veracity and viability, while surpassing them in velocity. Zhao and Naaman (<xref ref-type="bibr" rid="B80">2023b</xref>) further observed that Cofacts&#x00027; sustainability was intrinsically linked to Taiwan&#x00027;s dynamic civic tech culture and longstanding tradition of crowdsourcing activism. The findings indicate that while crowdsourced debunking holds substantial promise, it demands considerable labor and continuous engagement.</p>
</sec>
<sec>
<title>2.2 Operating mechanism of Community Notes</title>
<p>During the COVID-19 pandemic, X leveraged its large user base and well-established interactive frameworks to launch the crowdsourcing platform, marking a fresh attempt to combat misinformation.</p>
<p>On Community Notes, users are encouraged to assess the veracity of suspicious tweets and provide contextual evidence, termed notes. Individuals who engage in the process are referred to as contributors or debunkers. They constitute a voluntary community in which a stringent mechanism regulates user participation. Newcomers start with an initial Rating Impact score of zero and must consistently rate submitted notes on their level of helpfulness to gain eligibility to write notes themselves. Subsequently, contributors can accumulate their Writing and Rating Impact scores by producing helpful notes and evaluating those written by others. However, their writing privileges may be temporarily suspended by the system if their notes are frequently deemed unhelpful. That is to say, the dynamic system generates a reputation impact based on users&#x00027; track records, which in turn influences their qualifications in subsequent periods (Pr&#x000F6;llochs, <xref ref-type="bibr" rid="B56">2022</xref>).</p>
<p>Regarding notes, if a consensus can be reached among a broad and diverse group of contributors, the note will be transferred to X and displayed directly below the suspicious tweet for all X users. The note&#x00027;s status is updated as new ratings are received until it is locked after 2 weeks. This bridging-based ranking system, designed to make it more difficult for accounts to spam the system with low-quality ratings, allows for the better identification of higher-quality content (Wojcik et al., <xref ref-type="bibr" rid="B76">2022</xref>).</p>
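The bridging intuition, that a note is published only when raters who normally disagree nonetheless agree on it, can be illustrated with a toy rule: every viewpoint cluster must, on average, find the note helpful. The cluster labels, threshold, and status strings below are invented for illustration; the platform's actual published algorithm is far more sophisticated.

```python
from collections import defaultdict

def note_status(ratings, threshold=0.6, min_clusters=2):
    """Toy bridging rule (illustrative only).
    ratings: list of (cluster_id, is_helpful) pairs from individual raters."""
    by_cluster = defaultdict(list)
    for cluster_id, is_helpful in ratings:
        by_cluster[cluster_id].append(1.0 if is_helpful else 0.0)
    if len(by_cluster) < min_clusters:  # not yet rated by a diverse group
        return "NEEDS_MORE_RATINGS"
    # Helpful only when EVERY cluster's mean rating clears the threshold
    if all(sum(votes) / len(votes) >= threshold for votes in by_cluster.values()):
        return "HELPFUL"
    return "NOT_HELPFUL"
```

Under this rule, a note endorsed unanimously by one cluster but rejected by another stays unpublished, which captures why spamming the system with like-minded ratings does not promote a note.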
<p>Additionally, Community Notes has implemented several measures to enhance the system. For instance, it encourages individuals with diverse perspectives to participate in rating. When establishing what constitutes different perspectives, Community Notes does not consider demographics such as location, gender, or political affiliation, nor does it use data from X as indicators. Instead, it focuses on how individuals rated notes in the past. All of these operational mechanisms are supported by rigorous and complex algorithms, and Community Notes continuously updates its rules.</p>
<p>The emergence of Community Notes has raised concerns about its effectiveness and reliability. Effectiveness, a common issue in the misinformation field, focuses on the final outcome of rebuttal. It examines the influence of corrective messages on receivers, such as the spread curve of misinformation and changes in receivers&#x00027; conceptions or behaviors (Pr&#x000F6;llochs, <xref ref-type="bibr" rid="B56">2022</xref>). Since people often fall for misinformation due to a lack of careful reasoning, relevant knowledge, and reliance on heuristics such as familiarity, corrective notes are expected to help them discern truth (Pennycook and Rand, <xref ref-type="bibr" rid="B51">2021</xref>). As for reliability, in the context of expert and layperson-based debunking, it is often associated with terms such as accuracy, quality, credibility and trustworthiness (Adams, <xref ref-type="bibr" rid="B1">2006</xref>). However, social media platforms, as black boxes, are often suspected of manipulation and abusive use (Ferrara et al., <xref ref-type="bibr" rid="B22">2020</xref>). Hence, in this study, the reliability of crowdsourced misinformation debunking is defined as the platform&#x00027;s ability to foster a transparent and healthy information environment while providing information beneficial for debunking as much as possible. Crowdsourcing requires proper management; otherwise, each step in the process may jeopardize reliability. Benjamin (<xref ref-type="bibr" rid="B5">2021</xref>) outlined a set of potential risks on Community Notes. To name a few: are there instances of fake or sock puppet accounts? Is there any coordinated manipulation attempting to oversee, filter, and regulate user access to notes? Is there indication that contributors&#x00027; political party affiliations might impact their personal opinions and value judgments, consequently contributing to polarization?</p>
<p>In general, Community Notes represents a new effort in crowdsourced debunking, and there is limited research on it.</p>
</sec>
<sec>
<title>2.3 Evaluation of Community Notes</title>
<p>For this emerging platform, some studies have conducted preliminary research on its effectiveness and reliability. Regarding effectiveness, research indicated that misleading tweets accompanied by notes tended to spread less virally compared with those without such annotations (Drolsbach and Pr&#x000F6;llochs, <xref ref-type="bibr" rid="B17">2023</xref>). Furthermore, individuals exposed to corrective notes exhibited a 25&#x02013;34% lower likelihood of liking, replying to or resharing misinformation compared to those who were not, suggesting observable changes in user behavior (Wojcik et al., <xref ref-type="bibr" rid="B76">2022</xref>). Compared with professional fact-checking, Community Notes demonstrated relatively good performance as well. Pilarski et al. (<xref ref-type="bibr" rid="B54">2024</xref>) analyzed the differences between Community Notes and Snoping, a conversational fact-checking approach primarily built upon professional judgments. Their study revealed that note contributors and Snopers paid attention to different tweets, thereby extending fact-checking coverage across a broad spectrum of social media posts. Meanwhile, the overlapping cases also demonstrated a notable level of consensus on veracity. Nevertheless, Chuai et al. (<xref ref-type="bibr" rid="B14">2023</xref>) also pointed out that Community Notes may not act swiftly enough to curb the dissemination of misinformation during its initial and highly contagious phase. Overall, the platform&#x00027;s effectiveness appears promising at present, albeit with some response delay.</p>
<p>As for reliability, a few concerns have been addressed. The quality and relevance of evidence presented in notes have received significant academic attention. Evidence such as URLs and citations is a crucial component frequently integrated into corrective messages (He et al., <xref ref-type="bibr" rid="B27">2023</xref>), proving valuable in rectifying misperceptions across social media platforms (Vraga and Bode, <xref ref-type="bibr" rid="B71">2018</xref>). Saeed et al. (<xref ref-type="bibr" rid="B59">2022</xref>) delved into the sources of evidence mentioned in the notes and assessed their reliability. The study collected 12,909 links from the Community Notes dataset and extracted a total of 2,014 domains. Through manual review by journalists, it was found that note links upvoted as high quality by Community Notes users consistently garnered high journalist scores. Allen et al. (<xref ref-type="bibr" rid="B4">2024</xref>) also focused on the quality of citations. They double-rated the credibility of sources across three tiers: high, moderate and low. It was found that only 7% of notes cited low-credibility sources, such as blogs or tabloids. In addition to manual review of evidence credibility by professionals, Simpson (<xref ref-type="bibr" rid="B61">2022</xref>) adopted Kullback&#x02013;Leibler divergence over document probability distributions to investigate the relevance of notes to tweets. There was a significant topic overlap between tweets and notes with higher note ratings. Therefore, the reliability of Community Notes has been preliminarily verified through evidence use.</p>
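The relevance idea behind Simpson's analysis, comparing word probability distributions of tweets and notes with Kullback&#x02013;Leibler divergence, can be sketched as follows. The example texts and the smoothing constant are invented for illustration and are not drawn from the study.

```python
import math
from collections import Counter

def word_dist(text, vocab, alpha=0.01):
    """Smoothed unigram probability distribution over a shared vocabulary.
    Additive smoothing keeps the divergence finite when a word is absent."""
    counts = Counter(text.lower().split())
    total = sum(counts[w] for w in vocab)
    return {w: (counts[w] + alpha) / (total + alpha * len(vocab)) for w in vocab}

def kl_divergence(p, q):
    """D(p || q): low when q's wording covers p's, high otherwise."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

tweet = "vaccine causes illness says viral post"
note = "the vaccine does not cause illness per official data"
unrelated = "stock market rallies on strong earnings news"
vocab = set((tweet + " " + note + " " + unrelated).lower().split())

# A topically related note sits closer to the tweet than an unrelated text
d_related = kl_divergence(word_dist(tweet, vocab), word_dist(note, vocab))
d_unrelated = kl_divergence(word_dist(tweet, vocab), word_dist(unrelated, vocab))
```

Note that KL divergence is asymmetric: D(tweet&#x02016;note) penalizes tweet vocabulary the note fails to cover, which matches the question of whether a note actually addresses the post's claim.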
<p>Additionally, some scholars evaluated the reliability of Community Notes based on its built-in voting and ranking system. Ovadya (<xref ref-type="bibr" rid="B48">2022</xref>) held that the platform surpassed many engagement-based ranking systems. However, Allen et al. (<xref ref-type="bibr" rid="B3">2022</xref>) investigated the influence of partisanship among participants and discovered that they exhibited a tendency to assign negative annotations to tweets from those with opposing political affiliations and to perceive their annotations as less helpful. Mujumdar and Kumar (<xref ref-type="bibr" rid="B44">2021</xref>) also identified a loophole: a small number of fake accounts could elevate an arbitrary note to top-ranked status. To address this, they introduced a novel reputation system called HawkEye. The system incorporates a cold-start-aware graph-based recursive algorithm and evaluates the intrinsic quality of user trust, note credibility, and tweet accuracy, in order to mitigate the vulnerability of Community Notes to adversarial attacks.</p>
<p>The effectiveness of Community Notes has gained a degree of agreement. However, the ongoing controversy regarding its reliability underscores the urgent need for further research.</p>
</sec>
<sec>
<title>2.4 Readability and neutrality as helpful attributes</title>
<p>Note writing and voting requirements officially outlined by Community Notes provide insight into establishing measures of reliability. Community Notes has delineated note requirements in its user guidelines and instructed all contributors to write and rate notes as helpful or unhelpful accordingly. They list the following helpful attributes:</p>
<list list-type="simple">
<list-item><p>(1) Cites high-quality sources;</p></list-item>
<list-item><p>(2) Easy to understand;</p></list-item>
<list-item><p>(3) Directly addresses the post&#x00027;s claim;</p></list-item>
<list-item><p>(4) Provides important context;</p></list-item>
<list-item><p>(5) Neutral or unbiased language.</p></list-item>
</list>
<p>The above requirements guide the entire process of note creation and ranking. Hence, they can be adopted as reliability measures to explore whether users write and rate helpful notes as required and whether Community Notes amplifies the helpful voices on X. The indicators cover two aspects. One pertains to what notes convey, which corresponds to the first, third and fourth attributes, dealing with the credibility, relevance and coverage of notes, respectively; the other concerns how notes are conveyed, reflected in the second and fifth attributes. These refer to the readability and neutrality of notes. Given that the former aspect has been extensively studied, as summarized above, this paper specifically examines the reliability of Community Notes in terms of readability and neutrality.</p>
<sec>
<title>2.4.1 Readability</title>
<p>Readability refers to &#x0201C;the ease of understanding or comprehension due to the style of writing&#x0201D; (Klare, <xref ref-type="bibr" rid="B34">1963</xref>), which can be derived from readability formulas with various purposes and settings (DuBay, <xref ref-type="bibr" rid="B18">2004</xref>). Reading ease determines whether receivers process a debunking message via the central route. Receivers must possess the necessary cognitive capacity and linguistic comprehension. Once the language or message complexity exceeds their cognitive capabilities, individuals are less inclined to engage in extensive elaboration (Petty et al., <xref ref-type="bibr" rid="B52">1986</xref>) and are likely to form negative judgments toward corrective messages (Schwarz, <xref ref-type="bibr" rid="B60">1998</xref>). Wang et al. (<xref ref-type="bibr" rid="B73">2022</xref>) examined the impact of readability on the acceptance of rebuttal texts on Sina Weibo, often called &#x0201C;Chinese Twitter&#x0201D;. Using the frequency of common characters in the Chinese dictionary to evaluate readability, the study indicated that greater readability had a positive influence on the public&#x00027;s acceptance of the rebuttal. Furthermore, corrective messages often involve specialized terms and knowledge. The manner in which new scientific and technological advancements and evolving epidemiological information are presented is significant (Daraz et al., <xref ref-type="bibr" rid="B16">2018</xref>). A digestible format not only builds trust among recipients but also facilitates dissemination on social media, especially supporting highly vulnerable refugee, immigrant, and migrant communities with limited language proficiency (Feinberg et al., <xref ref-type="bibr" rid="B21">2023</xref>).</p>
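As one concrete instance of such a formula, the classic Flesch Reading Ease score combines average sentence length with average syllables per word. The sketch below uses a crude vowel-group heuristic for syllable counting, so scores are approximate; the sample sentences are invented for illustration.

```python
import re

def count_syllables(word):
    """Rough heuristic: count vowel groups, discounting a trailing silent 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher means easier (roughly 0-100 for normal prose)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

plain = "The cat sat on the mat. The dog ran fast."
dense = "Epidemiological misinformation proliferates internationally notwithstanding countermeasures."
```

Short sentences of short words push the score up, while long polysyllabic sentences drag it down, which is exactly the property that makes such formulas usable as a proxy for whether a note stays within readers' processing capacity.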
<p>This underscores the importance of readability in effective persuasion and refutation. Therefore, to address the first research question (RQ1), the study hypothesizes the following:</p>
<p><bold>H1:</bold> Helpful notes are more readable than the unhelpful ones.</p></sec>
<sec>
<title>2.4.2 Neutrality</title>
<p>Another crucial attribute is neutral language, which concerns how users present their notes. Content neutrality, such as avoiding the selection, omission or exaggeration of facts (Hamborg et al., <xref ref-type="bibr" rid="B26">2019</xref>), is conscious, controllable, and easy to report (Wilson et al., <xref ref-type="bibr" rid="B74">2000</xref>). By contrast, language bias is implicit and unconscious. It is frequently associated with specific linguistic features, such as the abstraction level of words based on the linguistic category model (Maass et al., <xref ref-type="bibr" rid="B40">1989</xref>), hedges, subjective intensifiers (Recasens et al., <xref ref-type="bibr" rid="B58">2013</xref>), referring expressions (Cheung, <xref ref-type="bibr" rid="B12">2014</xref>), direct and reported speech (Cheung, <xref ref-type="bibr" rid="B11">2012</xref>), a lack of logical and analytical thinking (Huang and Wang, <xref ref-type="bibr" rid="B28">2022</xref>; Vraga et al., <xref ref-type="bibr" rid="B72">2019</xref>), as well as praising, selling, inflammatory, or hateful expressions (Recasens et al., <xref ref-type="bibr" rid="B58">2013</xref>). Bias detection can be achieved with natural language processing tools like LIWC (Hube and Fetahu, <xref ref-type="bibr" rid="B29">2018</xref>; Niven and Kao, <xref ref-type="bibr" rid="B47">2020</xref>) and machine learning techniques (Spinde et al., <xref ref-type="bibr" rid="B63">2023</xref>; Vallejo et al., <xref ref-type="bibr" rid="B68">2024</xref>).</p>
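The dictionary-based approach behind tools like LIWC can be illustrated with a toy scorer that reports the percentage of words in a text falling into each category. The category names echo LIWC's, but the word lists here are tiny invented stand-ins, not the proprietary LIWC dictionary.

```python
# Tiny illustrative category lexicons (NOT the proprietary LIWC dictionary).
CATEGORIES = {
    "negemo": {"bad", "wrong", "hate", "awful", "lie", "fake"},
    "cogproc": {"because", "therefore", "however", "thus", "think", "know"},
    "swear": {"damn", "crap"},
}

def liwc_style_scores(text):
    """Percentage of words falling into each category, LIWC-style."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    return {cat: 100 * sum(w in lexicon for w in words) / len(words)
            for cat, lexicon in CATEGORIES.items()}

scores = liwc_style_scores("This claim is wrong because the source is a lie.")
```

Real LIWC dictionaries contain thousands of validated entries per category and handle word stems, but the output format, a per-category percentage of total words, is the same kind of measure compared across helpful and unhelpful notes in this study.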
<p>The importance of neutral language has been emphasized in complex information dissemination settings. Neutrally phrased language has been found crucial for avoiding disagreement among parties on Wikipedia, in news media, and in political debates (Hamborg et al., <xref ref-type="bibr" rid="B26">2019</xref>; Hube and Fetahu, <xref ref-type="bibr" rid="B30">2019</xref>; Iyyer et al., <xref ref-type="bibr" rid="B31">2014</xref>). Similarly, the issue has been examined in the field of misinformation debunking from different angles. Since debunking relies heavily on causal explanations, logic-based corrections can effectively reduce the credibility of misinformation (Vraga et al., <xref ref-type="bibr" rid="B72">2019</xref>) and wield greater influence in changing attitudes and behavioral intentions than the narrative-based approach (Huang and Wang, <xref ref-type="bibr" rid="B28">2022</xref>). Furthermore, studies have also validated the association between emotion and bias. Although Cappella et al. (<xref ref-type="bibr" rid="B8">2015</xref>) explored the possibility of using emotional messages to counteract the emotional aspect of belief echoes, emotionally charged statements, especially swear words, have proven unsuitable for social media platforms (Vo and Lee, <xref ref-type="bibr" rid="B70">2019</xref>), owing to their tendency to provoke stronger emotional contagion and conflicts (Clore and Huntsinger, <xref ref-type="bibr" rid="B15">2007</xref>).</p>
<p>The language neutrality of corrective messages on Community Notes remains largely unexplored. Only Pr&#x000F6;llochs (<xref ref-type="bibr" rid="B56">2022</xref>) found that notes were more negative toward misleading tweets than toward non-misleading ones, necessitating further studies. Given the platform&#x00027;s expectation of neutral notes and the observed gap, the study proposes the following hypothesis:</p>
<p><bold>H2:</bold> Helpful notes are more neutral than the unhelpful ones.</p></sec></sec>
</sec>
<sec id="s3">
<title>3 Methodology</title>
<p>The study gathered the open-sourced notes voted as helpful and unhelpful by users and evaluated the reliability of Community Notes through quantitative features grounded in linguistic and psychological sciences.</p>
<sec>
<title>3.1 Data collection and preprocessing</title>
<sec>
<title>3.1.1 Data collection</title>
<p>First, four separate files, i.e., Notes, Ratings, Note Status History, and User Enrollment, were downloaded from the Community Notes&#x00027; public data page<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref> on June 25, 2023. Second, these tables were merged into a unified dataset encompassing note ID, creation time, note content, and locked status. Since the focus is on notes that reached a consensus among a sufficient number of raters and were assigned locked statuses after a period of 2 weeks, other information, such as rating history and rating reasons, was not taken into consideration. Third, hundreds of thousands of notes labeled as NEEDS_MORE_RATINGS and a few written in languages other than English were removed. Consequently, a total of 7,705 helpful notes and 2,091 not helpful notes were collected, spanning from January 20, 2021 to May 30, 2023.</p>
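<p>The merge-and-filter pipeline described above can be sketched as follows. This is an illustrative reconstruction rather than the authors&#x00027; code; the column and status names (e.g., noteId, lockedStatus, CURRENTLY_RATED_HELPFUL) are assumptions based on the public data schema and may differ.</p>

```python
import pandas as pd

# Toy stand-ins for the downloaded tables; the real TSV files would be read
# with pd.read_csv("notes-00000.tsv", sep="\t"). Column and status names
# here are assumptions, not verified against the current schema.
notes = pd.DataFrame({
    "noteId": [1, 2, 3],
    "createdAtMillis": [1611100000000, 1650000000000, 1680000000000],
    "summary": ["Note A text", "Note B text", "Note C text"],
})
status = pd.DataFrame({
    "noteId": [1, 2, 3],
    "lockedStatus": [
        "CURRENTLY_RATED_HELPFUL",      # consensus: helpful
        "NEEDS_MORE_RATINGS",           # no consensus -> removed
        "CURRENTLY_RATED_NOT_HELPFUL",  # consensus: not helpful
    ],
})

# Merge the tables into a unified dataset keyed on note ID.
merged = notes.merge(status, on="noteId", how="inner")

# Keep only notes whose status was locked as helpful or not helpful.
locked = merged[merged["lockedStatus"].isin(
    ["CURRENTLY_RATED_HELPFUL", "CURRENTLY_RATED_NOT_HELPFUL"]
)]
```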
<p>It is noteworthy that if writers delete notes and ratings, the metadata remains documented in the Note Status History file, but the textual content of the notes is no longer officially available. Moreover, Community Notes invites public and scholarly scrutiny of its performance by making all of the data accessible and downloadable online. Therefore, the study, using only public data, was exempted from ethical review.</p></sec>
<sec>
<title>3.1.2 Data preprocessing</title>
<p>External links were removed, and escape characters such as &#x00026;quot; and &#x00026;amp; were converted back to their normal forms, because these would otherwise influence the results for the linguistic features, and citation sources are not the focus of this paper.</p>
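<p>This preprocessing step can be sketched as below. It is a minimal illustration, assuming Python with the standard library&#x00027;s html.unescape for escape characters and a simple regular expression for external links; the authors&#x00027; exact procedure is not specified.</p>

```python
import html
import re

def preprocess_note(text: str) -> str:
    """Unescape HTML entities and strip external links (illustrative sketch)."""
    text = html.unescape(text)                # &quot; -> "  and  &amp; -> &
    text = re.sub(r"https?://\S+", "", text)  # drop external links
    return re.sub(r"\s+", " ", text).strip()  # tidy leftover whitespace
```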
</sec>
</sec>
<sec>
<title>3.2 Measures</title>
<p>Flesch Reading Ease scores for readability were computed with Wordless. LIWC was employed to measure three characteristics relevant to neutrality: analytical thinking, authenticity, and affect.</p>
<sec>
<title>3.2.1 Readability</title>
<sec>
<title>3.2.1.1 Wordless</title>
<p>Wordless is an integrated corpus tool that allows users to explore prevalent linguistic features within textual data, such as readability, counts, lengths, keywords, concordance, and collocation (Ye, <xref ref-type="bibr" rid="B78">2024</xref>). Version 3.4.0 was adopted.</p></sec>
<sec>
<title>3.2.1.2 Flesch Reading Ease</title>
<p>Wordless was utilized to obtain Flesch Reading Ease scores and assess the readability of the notes. Compared with readability measures that are tailored to specific domains, impose minimum word-count thresholds, or rely on fixed dictionaries, the Flesch Reading Ease is flexible and comprehensive. It is therefore highly recommended across sectors and disciplines (DuBay, <xref ref-type="bibr" rid="B18">2004</xref>). Flesch scores primarily consider two factors: the average number of syllables per word and the average number of words per sentence (Flesch, <xref ref-type="bibr" rid="B23">1949</xref>). For the Flesch Reading Ease, a higher value indicates easier readability, contrary to the majority of readability formulas, where a lower value signifies easier readability. Readability values generally fall within the 0&#x02013;100 range; however, owing to the computational mechanism of the formula, values may exceed this range if a text is either extremely simple or extremely complex.</p></sec></sec>
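<p>The Flesch Reading Ease formula itself is public: FRE = 206.835 &#x02212; 1.015 &#x000D7; (words per sentence) &#x02212; 84.6 &#x000D7; (syllables per word) (Flesch, 1949). A minimal sketch follows; the vowel-group syllable counter is a crude stand-in for the more careful syllabification used by tools such as Wordless.</p>

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count contiguous vowel groups; at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = sum(count_syllables(w) for w in words) / len(words)
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
```

<p>Consistent with the computational mechanism noted above, a very simple text can score above 100 and a very complex one below 0.</p>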
<sec>
<title>3.2.2 Neutrality</title>
<sec>
<title>3.2.2.1 LIWC</title>
<p>Linguistic Inquiry and Word Count (LIWC) is a lexicon- and rule-based software package designed to analyze psychological and emotional constructs in texts. Language patterns have strong diagnostic power for style and for people&#x00027;s underlying social and psychological worlds (Tausczik and Pennebaker, <xref ref-type="bibr" rid="B66">2010</xref>). On this basis, LIWC builds an internally consistent language dictionary with enhanced psychometric properties. It functions by matching each word in a text against the dictionary and quantifying the percentage of matched words assigned to different features (Boyd et al., <xref ref-type="bibr" rid="B7">2022</xref>).</p>
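<p>This word-counting mechanism can be illustrated with a toy sketch. The dictionary below is purely illustrative; the real LIWC-22 lexicon is proprietary and far larger, and it additionally handles word stems, phrases, and emoticons.</p>

```python
import re

# Toy category word lists in the spirit of LIWC (illustrative only; not
# the actual LIWC-22 lexicon).
TOY_DICT = {
    "affect": {"good", "love", "happy", "hope", "bad", "hate"},
    "negate": {"no", "not", "never"},
}

def category_percentages(text: str, dictionary: dict) -> dict:
    """Percentage of words in `text` matched by each category's word list."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {cat: 0.0 for cat in dictionary}
    return {
        cat: 100.0 * sum(w in entries for w in words) / len(words)
        for cat, entries in dictionary.items()
    }
```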
<p>In terms of applicability, the software has demonstrated effectiveness in quantifying, understanding, and elucidating biased statements in news media (Niven and Kao, <xref ref-type="bibr" rid="B47">2020</xref>), crowdsourced knowledge generation (Hube and Fetahu, <xref ref-type="bibr" rid="B29">2018</xref>), and professional misinformation debunking (Vo and Lee, <xref ref-type="bibr" rid="B70">2019</xref>). Furthermore, the current iteration of LIWC is no longer constrained by text length. With the inclusion of emoticons, short phrases, and netspeak, LIWC can generate reliable and accurate results when analyzing tweets, Facebook posts, and SMS-like modes of communication (Boyd et al., <xref ref-type="bibr" rid="B7">2022</xref>).</p>
<p>LIWC-22 was employed to calculate scores for analytical thinking, authenticity, and affect in both helpful and unhelpful notes.</p></sec>
<sec>
<title>3.2.2.2 Analytical thinking</title>
<p>Logic-based corrections are found effective in reducing the credibility of misinformation and changing the attitudes and behavioral intentions of recipients (Vraga et al., <xref ref-type="bibr" rid="B72">2019</xref>; Huang and Wang, <xref ref-type="bibr" rid="B28">2022</xref>). Hence, <italic>Analytic</italic>, a summary feature in LIWC-22, was adopted to capture the extent to which individuals employ words indicative of formal, logical, and hierarchical thinking patterns. The analytical thinking formula encompasses various categories of words, including articles, prepositions, pronouns, auxiliary verbs, conjunctions, adverbs, and negations (Ta et al., <xref ref-type="bibr" rid="B65">2022</xref>). For instance, connectives are vital for conveying implicit interclausal relations and the underlying logic (Cheung, <xref ref-type="bibr" rid="B10">2009</xref>; Li et al., <xref ref-type="bibr" rid="B38">2022</xref>).</p></sec>
<sec>
<title>3.2.2.3 Authenticity</title>
<p><italic>Authentic</italic>, also a summary feature in LIWC-22, refers to the extent to which individuals communicate in alignment with their true selves (Newman and Dhar, <xref ref-type="bibr" rid="B46">2014</xref>). That is to say, authenticity is unrelated to the exact content or whether it is true or false; it concerns perceived genuineness. Specifically, the authenticity formula incorporates elements common in sincere speech, such as first-person pronouns, relativity words, and the present tense (Fox and Royne Stafford, <xref ref-type="bibr" rid="B24">2021</xref>). This definition was applied in the study. Authenticity can gauge the extent to which users on Community Notes freely and naturally express their beliefs and values, which is particularly crucial for identifying whether notes have been prepared, filtered, or manipulated because of political and social inhibitions (Allen et al., <xref ref-type="bibr" rid="B3">2022</xref>; Benjamin, <xref ref-type="bibr" rid="B5">2021</xref>).</p></sec>
<sec>
<title>3.2.2.4 Affect</title>
<p>In light of the fact that emotionally charged statements readily provoke stronger emotional contagion on social media (Clore and Huntsinger, <xref ref-type="bibr" rid="B15">2007</xref>; Vo and Lee, <xref ref-type="bibr" rid="B70">2019</xref>), <italic>Affect</italic> was adopted to investigate whether helpful notes exhibit greater affective restraint. Unlike the aforementioned summary features, <italic>Affect</italic> comprises several subordinate categories: <italic>tone</italic> (emotional tone), <italic>emo_pos</italic> (positive emotion), <italic>emo_neg</italic> (negative emotion), <italic>emo_anx</italic> (anxiety), <italic>emo_anger</italic> (anger), <italic>emo_sad</italic> (sadness), and <italic>swear</italic> (swear words), among others. Words such as good, love, happy, and hope, along with other emotion-related word stems, phrases, and emoticons, are included in the LIWC dictionary for calculation (Boyd et al., <xref ref-type="bibr" rid="B7">2022</xref>).</p>
<p>SPSSAU was used to conduct the statistical analysis. Given the non-normal distribution of the data, the non-parametric Mann&#x02013;Whitney <italic>U</italic>-test was employed to examine the statistical differences in the above measures between the helpful and unhelpful groups.</p></sec></sec></sec>
</sec>
<sec id="s4">
<title>4 Results</title>
<sec>
<title>4.1 Reading ease for both groups</title>
<p>In what follows, results of the non-parametric Mann&#x02013;Whitney <italic>U</italic>-test are presented with the median, 1st quartile, 3rd quartile, <italic>z</italic>-value, and <italic>p</italic>-value. The study first investigates the difference in readability between helpful and unhelpful notes for RQ1. <xref ref-type="fig" rid="F1">Figure 1</xref> illustrates the distribution of Flesch Reading Ease values for the two groups. The median readability value is 73.483 (IQR = 62.6&#x02013;83.4) for helpful notes and 73.172 (IQR = 59.2&#x02013;87.9) for unhelpful notes, indicating that helpful notes are slightly easier to understand than unhelpful ones. A reading score of 70&#x02013;80 corresponds to a 7th-grade reading level, which means notes from both groups were easy enough for 12&#x02013;13-year-olds to process. However, the Mann&#x02013;Whitney <italic>U</italic>-test yields a <italic>z</italic>-value of &#x02212;0.827, suggesting no significant difference in readability between the two groups (<italic>p</italic> = 0.408). Thus, H1 is not supported.</p>
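<p>The <italic>z</italic>-values reported in this section can be reproduced with any statistics package (the study used SPSSAU). As a didactic sketch, the Mann&#x02013;Whitney <italic>U</italic> statistic and its normal-approximation <italic>z</italic>-value can be computed as follows; the tie correction for the variance is omitted for brevity, so results on heavily tied data would differ slightly from production software.</p>

```python
import math

def mann_whitney_u(x, y):
    """Mann-Whitney U and normal-approximation z (sketch, no tie correction)."""
    pooled = sorted(list(x) + list(y))
    # Assign average ranks to tied values.
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    n1, n2 = len(x), len(y)
    u1 = sum(ranks[v] for v in x) - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u1, (u1 - mu) / sigma
```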
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Readability of helpful and unhelpful notes.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-15-1478176-g0001.tif"/>
</fig>
</sec>
<sec>
<title>4.2 Unbiased language in the helpful group</title>
<p>For RQ2, the results of the three neutrality measures are shown in <xref ref-type="fig" rid="F2">Figure 2</xref>. Regarding analytical thinking, the median value for helpful notes stands at 86.153 (IQR = 62.1&#x02013;96.4), while for unhelpful notes it is 66.040 (IQR = 26.1&#x02013;89.5), as depicted in <xref ref-type="fig" rid="F2">Figure 2A</xref>. The observed difference is statistically significant (<italic>z</italic> = &#x02212;20.685, <italic>p</italic> &#x0003C; 0.001), indicating that helpful notes involve more analytical thinking than unhelpful notes. Furthermore, the two groups also differ in authenticity, which is supposed to reflect perceived honesty and genuineness. Helpful notes (Med = 13.332, IQR = 2.4&#x02013;46.6) are more authentic than unhelpful ones (Med = 10.181, IQR = 1.0&#x02013;50.4), with a statistically significant difference (<italic>z</italic> = &#x02212;3.976, <italic>p</italic> &#x0003C; 0.001). In terms of affect, notable differences are observed, as shown by the boxplot in <xref ref-type="fig" rid="F2">Figure 2C</xref>. Helpful notes (Med = 0.000, IQR = 0.0&#x02013;3.6) show less affect, while the unhelpful group (Med = 2.174, IQR = 0.0&#x02013;6.7) exhibits stronger sentiment and emotion (<italic>z</italic> = &#x02212;13.07, <italic>p</italic> &#x0003C; 0.001).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>(A&#x02013;C)</bold> Neutrality of helpful and unhelpful notes.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-15-1478176-g0002.tif"/>
</fig>
<p><xref ref-type="fig" rid="F3">Figure 3</xref> presents a closer examination of the sub-categories of affect. The two groups show no statistically significant differences in tone (<italic>p</italic> = 0.397), emo_pos (<italic>p</italic> = 0.063), emo_anx (<italic>p</italic> = 0.612), emo_anger (<italic>p</italic> = 0.068), and emo_sad (<italic>p</italic> = 0.725). In contrast, they differ in emo_neg (<italic>p</italic> &#x0003C; 0.001) and swear (<italic>p</italic> = 0.014), indicating that unhelpful notes contain more negative emotion and swear words. Overall, analytical thinking, authenticity, and affect are crucial predictors of system reliability within the context of misinformation debunking. H2 is supported.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>P-values for the sub-categories of affect between helpful and unhelpful notes.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-15-1478176-g0003.tif"/>
</fig>
</sec></sec>
<sec id="s5">
<title>5 Discussion</title>
<p>The study investigates the reliability of crowdsourced debunking in terms of readability and neutrality. The results indicate that both the helpful and unhelpful groups exhibit ease of comprehension, yet the former distinguishes itself through stronger logical thinking, enhanced authenticity, and diminished emotion relative to the latter. The user-endorsed helpful notes align with the note writing and voting guidelines established by Community Notes, underscoring the reliability of crowdsourced debunking in these two respects. The analysis and insights behind the results are elaborated below.</p>
<p>Firstly, the reliability of the platform is supported in that helpful notes are easily understood and more neutral than the unhelpful group. With respect to readability, the statistical analysis reveals no discernible difference in Flesch Reading Ease scores between helpful and unhelpful notes. Nevertheless, both are readable enough for 12&#x02013;13-year-olds to understand, in accordance with the official requirements, thereby still supporting the platform&#x00027;s reliability. This suggests, for one thing, that both groups adhere to the linguistic norms of Community Notes; for another, it may also be attributed to the plain language conventions of the Internet. In any case, this ensures that information is accessible to a wide population with different levels of language proficiency (Feinberg et al., <xref ref-type="bibr" rid="B21">2023</xref>).</p>
<p>In terms of neutrality, there is a disparity between the two groups. To begin with, helpful notes demonstrate a higher degree of analytical language than unhelpful notes, displaying greater logical coherence and a less narrative style. Notably, the LIWC research team analyzed test corpora of blogs, X, and the <italic>New York Times</italic>, reporting mean analytical thinking values of 38.70, 42.86, and 87.62, respectively (Boyd et al., <xref ref-type="bibr" rid="B7">2022</xref>). For helpful notes, the median value of analytical thinking stands at 86.153 (IQR = 62.1&#x02013;96.4). Although a direct statistical comparison is not feasible because means and medians are not directly comparable, this suggests that the level of analytical thinking in helpful notes approaches that of news writing and surpasses most social media discourse. It highlights that non-professionals engage in a slow and deliberate information processing route, thereby maximizing their efficacy in executing the debunking task (Stanovich, <xref ref-type="bibr" rid="B64">2009</xref>). In addition, helpful notes reflect users&#x00027; mental processes in a more unconscious and spontaneous manner; this self-representation aligns with the established criteria. Lastly, the relatively low affect value of helpful notes also indicates a restrained tone and emotional expression. On the whole, the helpful group performs well in terms of readability and neutrality, thereby justifying the platform&#x00027;s reliability.</p>
<p>Secondly, some measures pertaining to unhelpful notes indicate a discernible tendency among users to post malicious content. This is consistent with previous studies that raise significant concerns over dishonest and malicious attempts on the platform (Benjamin, <xref ref-type="bibr" rid="B5">2021</xref>). According to the definition proposed by LIWC, lower authenticity values for unhelpful notes indicate a greater degree of preparedness or social caution, implying guarded positions and malicious intents behind them. In addition, although both groups exhibit restraint in tone and in most types of emotion, with no significant differences, there is an exception: unhelpful notes employ a higher frequency of negative emotional language and swear expressions, in line with the observed negative correlation between emotion and analytical thinking (Clore and Huntsinger, <xref ref-type="bibr" rid="B15">2007</xref>). One plausible explanation for the prevalence of negative emotions is unconscious yet harmful behavior. Emotions encompass a subjective array of feelings, cognitive assessments of situations, and physiological arousal (Nabi and Oliver, <xref ref-type="bibr" rid="B45">2009</xref>). Jiang and Wilson (<xref ref-type="bibr" rid="B33">2018</xref>) identified that misinformation, particularly when infused with inflammatory content and a sensational writing style, affects the emotional markers in comments, such as extensive use of emoji and swear words. As a result, critically engaging with an abusive tweet might lead to a note being perceived as hateful. This aligns with the extant finding that notes are more negative toward misleading tweets than toward accurate ones (Pr&#x000F6;llochs, <xref ref-type="bibr" rid="B56">2022</xref>). Alternatively, this phenomenon may also be attributed to the deliberate use of negative emotional language to elicit strong public reactions or even to systematically target specific groups. Such behavior parallels the motivational factors behind malicious rating, as both stem from conflicting values or beliefs (Allen et al., <xref ref-type="bibr" rid="B3">2022</xref>). In this way, the abuse and weaponization of language are indeed significant issues on Community Notes.</p>
<p>Thirdly, the value ranges for measures within the unhelpful group are wide, and users seldom reach consensus on the helpfulness of notes, indicating the need to enhance the efficiency and management standards of the platform. Although unhelpful notes are fewer than helpful ones, their value ranges are broader across all measures, and numerous outliers are evident in the boxplot analyses. On the one hand, the reliability of the writing and ranking system is validated, as evidenced by the tendency for helpful notes to outperform unhelpful ones. On the other hand, it reveals that community-driven content is a mixed bag of varying quality. Users may lack a sufficiently clear understanding of the debunking mission or even harbor undisclosed intentions. Furthermore, from a broader perspective, the platform has been flooded with hundreds of thousands of notes since its pilot launch in the U.S. and subsequent global rollout. This highlights the advantages inherent in a crowdsourced approach over the professional one in terms of volume and velocity (Zhao and Naaman, <xref ref-type="bibr" rid="B79">2023a</xref>). However, fewer than 10,000 notes reached a consensus on helpfulness, with 7,705 classified as helpful; that is to say, most attempts from users failed. This supports the notion that Community Notes is too slow to react in the early stage of misinformation dissemination (Chuai et al., <xref ref-type="bibr" rid="B14">2023</xref>). Given that tweets typically reach half of their total impressions within &#x0007E;80 min (Pfeffer et al., <xref ref-type="bibr" rid="B53">2023</xref>), if notes cannot be rated helpful enough to become visible on X within a short time, their effectiveness may be hindered by the delay. These two phenomena partially support the skepticism regarding the effectiveness of crowdsourcing for dispelling rumors and raise concerns about its managerial competence.</p>
<p>The research contributes to the scant crowdsourced debunking literature by closely examining and comparing four linguistic and psychological measures of upvoted notes on Community Notes. Considering the observed coexistence of earnest contributions and malicious attempts on the platform, future studies could delve into the psychological factors shaping crowdsourced debunking, including users&#x00027; motivations for volunteering or gaming the system, and the potential for coordinated campaigns to ideologically or psychologically manipulate Community Notes. At a practical level, the platform can taxonomize and prioritize risks associated with crowdsourced debunking by evaluating factors such as likelihood and severity, and subsequently establish more specific and rigorous messaging guidelines and assessment models. One example is a guideline on showing respect to others: if users focused on the false tweets themselves instead of blaming or attacking their posters, the frequency of negative emotion and swear words in unhelpful notes could be expected to decrease. In addition, the study also demonstrates the potential of integrating the crowdsourced approach into a broader toolkit for mitigating misleading information. For one thing, the experience garnered through Community Notes can offer valuable practical insights to other online platforms, despite the current imperfections in its norms and structures. It is important to recognize that differences such as user groups and platform mechanisms should also be taken into account when generalizing these insights (Vraga and Bode, <xref ref-type="bibr" rid="B71">2018</xref>). For another, since crowdsourced debunking is part of infodemic management, it is necessary to explore its intersection with other efforts. For instance, classification models can identify AI-generated misinformation but exhibit reduced effectiveness when addressing human-generated misinformation (Zhou et al., <xref ref-type="bibr" rid="B82">2023</xref>). Such models can preemptively flag AI-generated content, thereby alleviating the burden on crowdsourced debunking efforts.</p>
<p>This study has several limitations that warrant investigation in future research. First, the study took as examples only notes that were ultimately voted as either helpful or not helpful; however, a large number of notes remain labeled as NEEDS_MORE_RATINGS. Meanwhile, during the 2-week voting and ranking period, notes upvoted as helpful are temporarily affixed to tweets on X and remain visible until they receive downvotes, which means some users may maliciously exploit this window to influence people&#x00027;s opinions. Constant exposure to debunking attempts of varying quality probably erodes receivers&#x00027; confidence in the platform, which in turn results in less positive reactions (Mourali and Drake, <xref ref-type="bibr" rid="B43">2022</xref>). Therefore, future studies can broaden the scope of the corpus to evaluate reliability and effectiveness at different stages. Second, while we conducted a linguistic and psycholinguistic assessment of the collected notes, actual audience responses to the notes on X, such as the perceived severity of the crisis, emotional reactions, and attitudes toward taking preventive actions, were not taken into account. Future studies examining user perceptions can help corroborate the findings of this study.</p></sec>
<sec id="s6">
<title>6 Conclusion</title>
<p>In order to assess the reliability of Community Notes in terms of readability and neutrality, the study collected notes voted as helpful or not helpful by users on Community Notes, spanning from its initial pilot phase to its global expansion. The non-parametric Mann&#x02013;Whitney <italic>U</italic>-test was applied to examine differences between the two groups based on measures of reading ease, analytical thinking, authenticity, and affect. Results reveal that both groups exhibit good readability, and helpful notes demonstrate greater logical coherence, authenticity, and emotional restraint in accordance with the provisions of the user manual, underscoring the reliability of Community Notes. Nevertheless, negative and abusive language, as well as the wide value ranges in the unhelpful group, imply management challenges facing Community Notes. Overall, the research enhances the understanding of crowd wisdom in the context of misinformation debunking and infodemic management. Future endeavors could explore the psychological motivations behind volunteering, gaming, or manipulating behaviors; investigate strategies to enhance crowdsourced debunking; and consider its intersection with professional efforts and infoveillance from broader perspectives.</p></sec>
</body>
<back>
<sec sec-type="data-availability" id="s7">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors upon request.</p>
</sec>
<sec sec-type="author-contributions" id="s8">
<title>Author contributions</title>
<p>MY: Writing &#x02013; original draft, Writing &#x02013; review &#x00026; editing. ST: Writing &#x02013; review &#x00026; editing. WZ: Writing &#x02013; review &#x00026; editing.</p>
</sec>
<sec sec-type="funding-information" id="s9">
<title>Funding</title>
<p>The author(s) declare financial support was received for the research, authorship, and/or publication of this article. The research was supported by National Social Science Fund Project: International Discourse Mechanism and Long-term Response Strategies for Major Emergent Events (21BYY086).</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<fn-group>
<fn id="fn0001"><p><sup>1</sup>The Birdwatch website is available at <ext-link ext-link-type="uri" xlink:href="https://communitynotes.x.com/guide/en/contributing/examples">https://communitynotes.x.com/guide/en/contributing/examples</ext-link>.</p></fn>
<fn id="fn0002"><p><sup>2</sup>The data is available at <ext-link ext-link-type="uri" xlink:href="https://communitynotes.x.com/guide/en/under-the-hood/download-data">https://communitynotes.x.com/guide/en/under-the-hood/download-data</ext-link>.</p></fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Adams</surname> <given-names>S.</given-names></name></person-group> (<year>2006</year>). <source>Under Construction: Reviewing and Producing Information Reliability on the Web</source>. Available at: <ext-link ext-link-type="uri" xlink:href="http://hdl.handle.net/1765/7841">http://hdl.handle.net/1765/7841</ext-link> (accessed April 2, 2024).</citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>A&#x000EF;meur</surname> <given-names>E.</given-names></name> <name><surname>Amri</surname> <given-names>S.</given-names></name> <name><surname>Brassard</surname> <given-names>G.</given-names></name></person-group> (<year>2023</year>). <article-title>Fake news, disinformation and misinformation in social media: a review</article-title>. <source>Soc. Netw. Anal. Mining</source> <volume>13</volume>:<fpage>30</fpage>. <pub-id pub-id-type="doi">10.1007/s13278-023-01028-5</pub-id><pub-id pub-id-type="pmid">36789378</pub-id></citation></ref>
<ref id="B3">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Allen</surname> <given-names>J.</given-names></name> <name><surname>Martel</surname> <given-names>C.</given-names></name> <name><surname>Rand</surname> <given-names>D. G.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;Birds of a feather don&#x00027;t fact-check each other: partisanship and the evaluation of news in Twitter&#x00027;s Birdwatch crowdsourced fact-checking program,&#x0201D;</article-title> in <source>Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems</source> (<publisher-loc>New York, NY</publisher-loc>).</citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Allen</surname> <given-names>M. R.</given-names></name> <name><surname>Desai</surname> <given-names>N.</given-names></name> <name><surname>Namazi</surname> <given-names>A.</given-names></name> <name><surname>Leas</surname> <given-names>E.</given-names></name> <name><surname>Dredze</surname> <given-names>M.</given-names></name> <name><surname>Smith</surname> <given-names>D. M.</given-names></name> <etal/></person-group>. (<year>2024</year>). <article-title>Characteristics of X (Formerly Twitter) community notes addressing COVID-19 vaccine misinformation</article-title>. <source>JAMA</source> <volume>331</volume>, <fpage>1670</fpage>&#x02013;<lpage>1672</lpage>. <pub-id pub-id-type="doi">10.1001/jama.2024.4800</pub-id><pub-id pub-id-type="pmid">38656757</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Benjamin</surname> <given-names>G.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Who watches the Birdwatchers? Sociotechnical vulnerabilities in Twitter&#x00027;s content contextualisation,&#x0201D;</article-title> in <source>International Workshop on Socio-Technical Aspects in Security</source> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>).</citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bhuiyan</surname> <given-names>M. M.</given-names></name> <name><surname>Zhang</surname> <given-names>A. X.</given-names></name> <name><surname>Sehat</surname> <given-names>C. M.</given-names></name> <name><surname>Mitra</surname> <given-names>T.</given-names></name></person-group> (<year>2020</year>). <article-title>Investigating differences in crowdsourced news credibility assessment: raters, tasks, and expert criteria</article-title>. <source>Proc. ACM Hum. Comp. Interact</source>. <volume>4</volume>, <fpage>1</fpage>&#x02013;<lpage>26</lpage>. <pub-id pub-id-type="doi">10.1145/3415164</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Boyd</surname> <given-names>R. L.</given-names></name> <name><surname>Ashokkumar</surname> <given-names>A.</given-names></name> <name><surname>Seraj</surname> <given-names>S.</given-names></name> <name><surname>Pennebaker</surname> <given-names>J. W.</given-names></name></person-group> (<year>2022</year>). <source>The Development and Psychometric Properties of LIWC-22</source>. <publisher-loc>Austin, TX</publisher-loc>: <publisher-name>University of Texas at Austin</publisher-name>.</citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cappella</surname> <given-names>J. N.</given-names></name> <name><surname>Maloney</surname> <given-names>E.</given-names></name> <name><surname>Ophir</surname> <given-names>Y.</given-names></name> <name><surname>Brennan</surname> <given-names>E.</given-names></name></person-group> (<year>2015</year>). <article-title>Interventions to correct misinformation about tobacco products</article-title>. <source>Tobacco Regul. Sci</source>. <volume>1</volume>:<fpage>186</fpage>. <pub-id pub-id-type="doi">10.18001/TRS.1.2.8</pub-id><pub-id pub-id-type="pmid">27135046</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Carrasco-Farr&#x000E9;</surname> <given-names>C.</given-names></name></person-group> (<year>2022</year>). <article-title>The fingerprints of misinformation: how deceptive content differs from reliable sources in terms of cognitive effort and appeal to emotions</article-title>. <source>Humanit. Soc. Sci. Commun</source>. <volume>9</volume>, <fpage>1</fpage>&#x02013;<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1057/s41599-022-01174-9</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cheung</surname> <given-names>A. K.</given-names></name></person-group> (<year>2009</year>). <article-title>Explicitation in consecutive interpreting from Chinese into English: a case study</article-title>. <source>China Transl. J</source>. <volume>5</volume>, <fpage>77</fpage>&#x02013;<lpage>81</lpage>.</citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cheung</surname> <given-names>A. K.</given-names></name></person-group> (<year>2012</year>). <article-title>The use of reported speech by court interpreters in Hong Kong</article-title>. <source>Interpreting</source> <volume>14</volume>, <fpage>73</fpage>&#x02013;<lpage>91</lpage>. <pub-id pub-id-type="doi">10.1075/intp.14.1.04che</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cheung</surname> <given-names>A. K.</given-names></name></person-group> (<year>2014</year>). <article-title>The use of reported speech and the perceived neutrality of court interpreters</article-title>. <source>Interpreting</source> <volume>16</volume>, <fpage>191</fpage>&#x02013;<lpage>208</lpage>. <pub-id pub-id-type="doi">10.1075/intp.16.2.03che</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chong</surname> <given-names>S. K.</given-names></name> <name><surname>Ali</surname> <given-names>S. H.</given-names></name> <name><surname>&#x000D0;o&#x000E0;n</surname> <given-names>L. N.</given-names></name> <name><surname>Yi</surname> <given-names>S. S.</given-names></name> <name><surname>Trinh-Shevrin</surname> <given-names>C.</given-names></name> <name><surname>Kwon</surname> <given-names>S. C.</given-names></name></person-group> (<year>2022</year>). <article-title>Social media use and misinformation among Asian Americans during COVID-19</article-title>. <source>Front. Public Health</source> <volume>9</volume>:<fpage>764681</fpage>. <pub-id pub-id-type="doi">10.3389/fpubh.2021.764681</pub-id><pub-id pub-id-type="pmid">35096736</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chuai</surname> <given-names>Y.</given-names></name> <name><surname>Tian</surname> <given-names>H.</given-names></name> <name><surname>Pr&#x000F6;llochs</surname> <given-names>N.</given-names></name> <name><surname>Lenzini</surname> <given-names>G.</given-names></name></person-group> (<year>2023</year>). <article-title>The roll-out of community notes did not reduce engagement with misinformation on Twitter</article-title>. <source>arXiv</source> [preprint]. <pub-id pub-id-type="doi">10.48550/arXiv.2307.07960</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clore</surname> <given-names>G. L.</given-names></name> <name><surname>Huntsinger</surname> <given-names>J. R.</given-names></name></person-group> (<year>2007</year>). <article-title>How emotions inform judgment and regulate thought</article-title>. <source>Trends Cogn. Sci.</source> <volume>11</volume>, <fpage>393</fpage>&#x02013;<lpage>399</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2007.08.005</pub-id><pub-id pub-id-type="pmid">17698405</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Daraz</surname> <given-names>L.</given-names></name> <name><surname>Morrow</surname> <given-names>A. S.</given-names></name> <name><surname>Ponce</surname> <given-names>O. J.</given-names></name> <name><surname>Farah</surname> <given-names>W.</given-names></name> <name><surname>Katabi</surname> <given-names>A.</given-names></name> <name><surname>Majzoub</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Readability of online health information: a meta-narrative systematic review</article-title>. <source>Am. J. Med. Qual</source>. <volume>33</volume>, <fpage>487</fpage>&#x02013;<lpage>492</lpage>. <pub-id pub-id-type="doi">10.1177/1062860617751639</pub-id><pub-id pub-id-type="pmid">29345143</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Drolsbach</surname> <given-names>C. P.</given-names></name> <name><surname>Pr&#x000F6;llochs</surname> <given-names>N.</given-names></name></person-group> (<year>2023</year>). <article-title>Diffusion of community fact-checked misinformation on Twitter</article-title>. <source>Proc. ACM Hum. Comp. Interact</source>. <volume>7</volume>, <fpage>1</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1145/3610058</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>DuBay</surname> <given-names>W. H.</given-names></name></person-group> (<year>2004</year>). <source>The Principles of Readability</source>. <publisher-loc>Costa Mesa, CA</publisher-loc>: <publisher-name>Impact Information</publisher-name>.</citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ecker</surname> <given-names>U. K.</given-names></name> <name><surname>Lewandowsky</surname> <given-names>S.</given-names></name> <name><surname>Cook</surname> <given-names>J.</given-names></name> <name><surname>Schmid</surname> <given-names>P.</given-names></name> <name><surname>Fazio</surname> <given-names>L. K.</given-names></name> <name><surname>Brashier</surname> <given-names>N.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>The psychological drivers of misinformation belief and its resistance to correction</article-title>. <source>Nat. Rev. Psychol</source>. <volume>1</volume>, <fpage>13</fpage>&#x02013;<lpage>29</lpage>. <pub-id pub-id-type="doi">10.1038/s44159-021-00006-y</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eysenbach</surname> <given-names>G.</given-names></name></person-group> (<year>2020</year>). <article-title>How to fight an infodemic: the four pillars of infodemic management</article-title>. <source>J. Med. Internet Res</source>. <volume>22</volume>:<fpage>e21820</fpage>. <pub-id pub-id-type="doi">10.2196/21820</pub-id><pub-id pub-id-type="pmid">32589589</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Feinberg</surname> <given-names>I.</given-names></name> <name><surname>O&#x00027;Connor</surname> <given-names>M. H.</given-names></name> <name><surname>Khader</surname> <given-names>S.</given-names></name> <name><surname>Nyman</surname> <given-names>A. L.</given-names></name> <name><surname>Eriksen</surname> <given-names>M. P.</given-names></name></person-group> (<year>2023</year>). <article-title>Creating understandable and actionable COVID-19 health messaging for refugee, immigrant, and migrant communities</article-title>. <source>Healthcare</source> <volume>11</volume>:<fpage>1098</fpage>. <pub-id pub-id-type="doi">10.3390/healthcare11081098</pub-id><pub-id pub-id-type="pmid">37107932</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ferrara</surname> <given-names>E.</given-names></name> <name><surname>Chang</surname> <given-names>H.</given-names></name> <name><surname>Chen</surname> <given-names>E.</given-names></name> <name><surname>Muric</surname> <given-names>G.</given-names></name> <name><surname>Patel</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>Characterizing social media manipulation in the 2020 US presidential election</article-title>. <source>First Monday</source> <volume>25</volume>. <pub-id pub-id-type="doi">10.5210/fm.v25i11.11431</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Flesch</surname> <given-names>R. F.</given-names></name></person-group> (<year>1949</year>). <source>The Art of Readable Writing</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Harper &#x00026; Row Publishers</publisher-name>.</citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fox</surname> <given-names>A. K.</given-names></name> <name><surname>Royne Stafford</surname> <given-names>M. B.</given-names></name></person-group> (<year>2021</year>). <article-title>Olympians on Twitter: a linguistic perspective of the role of authenticity, clout, and expertise in social media advertising</article-title>. <source>J. Curr. Iss. Res. Advert</source>. <volume>42</volume>, <fpage>294</fpage>&#x02013;<lpage>309</lpage>. <pub-id pub-id-type="doi">10.1080/10641734.2020.1763521</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guess</surname> <given-names>A. M.</given-names></name> <name><surname>Lerner</surname> <given-names>M.</given-names></name> <name><surname>Lyons</surname> <given-names>B.</given-names></name> <name><surname>Montgomery</surname> <given-names>J. M.</given-names></name> <name><surname>Nyhan</surname> <given-names>B.</given-names></name> <name><surname>Reifler</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>A digital media literacy intervention increases discernment between mainstream and false news in the United States and India</article-title>. <source>Proc. Natl. Acad. Sci</source>. <volume>117</volume>, <fpage>15536</fpage>&#x02013;<lpage>15545</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1920498117</pub-id><pub-id pub-id-type="pmid">32571950</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hamborg</surname> <given-names>F.</given-names></name> <name><surname>Donnay</surname> <given-names>K.</given-names></name> <name><surname>Gipp</surname> <given-names>B.</given-names></name></person-group> (<year>2019</year>). <article-title>Automated identification of media bias in news articles: an interdisciplinary literature review</article-title>. <source>Int. J. Digit. Libr</source>. <volume>20</volume>, <fpage>391</fpage>&#x02013;<lpage>415</lpage>. <pub-id pub-id-type="doi">10.1007/s00799-018-0261-y</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>He</surname> <given-names>B.</given-names></name> <name><surname>Hu</surname> <given-names>Y.</given-names></name> <name><surname>Lee</surname> <given-names>Y. C.</given-names></name> <name><surname>Oh</surname> <given-names>S.</given-names></name> <name><surname>Verma</surname> <given-names>G.</given-names></name> <name><surname>Kumar</surname> <given-names>S.</given-names></name></person-group> (<year>2023</year>). <article-title>A survey on the role of crowds in combating online misinformation: annotators, evaluators, and creators</article-title>. <source>arXiv</source> [preprint]. <pub-id pub-id-type="doi">10.1145/3694980</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>W.</given-names></name></person-group> (<year>2022</year>). <article-title>When a story contradicts: correcting health misinformation on social media through different message formats and mechanisms</article-title>. <source>Inf. Commun. Soc.</source> <volume>25</volume>, <fpage>1192</fpage>&#x02013;<lpage>1209</lpage>. <pub-id pub-id-type="doi">10.1080/1369118X.2020.1851390</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hube</surname> <given-names>C.</given-names></name> <name><surname>Fetahu</surname> <given-names>B.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Detecting biased statements in wikipedia,&#x0201D;</article-title> in <source>Companion Proceedings of the Web Conference</source> (<publisher-loc>Republic and Canton of Geneva</publisher-loc>).</citation>
</ref>
<ref id="B30">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hube</surname> <given-names>C.</given-names></name> <name><surname>Fetahu</surname> <given-names>B.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Neural based statement classification for biased language,&#x0201D;</article-title> in <source>Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining</source> (<publisher-loc>New York, NY</publisher-loc>).</citation>
</ref>
<ref id="B31">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Iyyer</surname> <given-names>M.</given-names></name> <name><surname>Enns</surname> <given-names>P.</given-names></name> <name><surname>Boyd-Graber</surname> <given-names>J.</given-names></name> <name><surname>Resnik</surname> <given-names>P.</given-names></name></person-group> (<year>2014</year>). <article-title>&#x0201C;Political ideology detection using recursive neural networks,&#x0201D;</article-title> in <source>Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics</source> (<publisher-loc>Baltimore, MD</publisher-loc>). <pub-id pub-id-type="doi">10.3115/v1/P14-1105</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jiang</surname> <given-names>L. C.</given-names></name> <name><surname>Sun</surname> <given-names>M.</given-names></name> <name><surname>Chu</surname> <given-names>T. H.</given-names></name> <name><surname>Chia</surname> <given-names>S. C.</given-names></name></person-group> (<year>2022</year>). <article-title>Inoculation works and health advocacy backfires: building resistance to COVID-19 vaccine misinformation in a low political trust context</article-title>. <source>Front. Psychol.</source> <volume>13</volume>:<fpage>976091</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2022.976091</pub-id><pub-id pub-id-type="pmid">36389491</pub-id></citation></ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jiang</surname> <given-names>S.</given-names></name> <name><surname>Wilson</surname> <given-names>C.</given-names></name></person-group> (<year>2018</year>). <article-title>Linguistic signals under misinformation and fact-checking: evidence from user comments on social media</article-title>. <source>Proc. ACM Hum. Comp. Interact</source>. <volume>2</volume>, <fpage>1</fpage>&#x02013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1145/3274351</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Klare</surname> <given-names>G. R.</given-names></name></person-group> (<year>1963</year>). <source>The Measurement of Readability</source>. <publisher-loc>Ames, IA</publisher-loc>: <publisher-name>Iowa State University Press</publisher-name>.</citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kreps</surname> <given-names>S.</given-names></name> <name><surname>McCain</surname> <given-names>R. M.</given-names></name> <name><surname>Brundage</surname> <given-names>M.</given-names></name></person-group> (<year>2022</year>). <article-title>All the news that&#x00027;s fit to fabricate: AI-generated text as a tool of media misinformation</article-title>. <source>J. Exp. Polit. Sci.</source> <volume>9</volume>, <fpage>104</fpage>&#x02013;<lpage>117</lpage>. <pub-id pub-id-type="doi">10.1017/XPS.2020.37</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lewandowsky</surname> <given-names>S.</given-names></name> <name><surname>Van Der Linden</surname> <given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>Countering misinformation and fake news through inoculation and prebunking</article-title>. <source>Eur. Rev. Soc. Psychol.</source> <volume>32</volume>, <fpage>348</fpage>&#x02013;<lpage>384</lpage>. <pub-id pub-id-type="doi">10.1080/10463283.2021.1876983</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>K.</given-names></name> <name><surname>Xiao</surname> <given-names>W.</given-names></name></person-group> (<year>2022</year>). <article-title>Who will help to strive against the &#x0201C;infodemic&#x0201D;? Reciprocity norms enforce the information sharing accuracy of the individuals</article-title>. <source>Front. Psychol</source>. <volume>13</volume>:<fpage>919321</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2022.919321</pub-id><pub-id pub-id-type="pmid">35846630</pub-id></citation></ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>R.</given-names></name> <name><surname>Cheung</surname> <given-names>A. K.</given-names></name> <name><surname>Liu</surname> <given-names>K.</given-names></name></person-group> (<year>2022</year>). <article-title>A corpus-based investigation of extra-textual, connective, and emphasizing additions in English-Chinese conference interpreting</article-title>. <source>Front. Psychol.</source> <volume>13</volume>:<fpage>847735</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2022.847735</pub-id><pub-id pub-id-type="pmid">35707653</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>K.</given-names></name> <name><surname>Cheung</surname> <given-names>A. K.</given-names></name></person-group> (<year>2023</year>). <source>Translation and Interpreting in the Age of COVID-19: Challenges and Opportunities</source>. <publisher-loc>Singapore</publisher-loc>: <publisher-name>Springer Nature Singapore</publisher-name>, <fpage>1</fpage>&#x02013;<lpage>10</lpage>.</citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maass</surname> <given-names>A.</given-names></name> <name><surname>Salvi</surname> <given-names>D.</given-names></name> <name><surname>Arcuri</surname> <given-names>L.</given-names></name> <name><surname>Semin</surname> <given-names>G. R.</given-names></name></person-group> (<year>1989</year>). <article-title>Language use in intergroup contexts: the linguistic intergroup bias</article-title>. <source>J. Pers. Soc. Psychol</source>. <volume>57</volume>:<fpage>981</fpage>. <pub-id pub-id-type="doi">10.1037/0022-3514.57.6.981</pub-id><pub-id pub-id-type="pmid">2614663</pub-id></citation></ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Martel</surname> <given-names>C.</given-names></name> <name><surname>Allen</surname> <given-names>J.</given-names></name> <name><surname>Pennycook</surname> <given-names>G.</given-names></name> <name><surname>Rand</surname> <given-names>D. G.</given-names></name></person-group> (<year>2024</year>). <article-title>Crowds can effectively identify misinformation at scale</article-title>. <source>Perspect. Psychol. Sci</source>. <volume>19</volume>, <fpage>477</fpage>&#x02013;<lpage>488</lpage>. <pub-id pub-id-type="doi">10.1177/17456916231190388</pub-id><pub-id pub-id-type="pmid">37594056</pub-id></citation></ref>
<ref id="B42">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Micallef</surname> <given-names>N.</given-names></name> <name><surname>He</surname> <given-names>B.</given-names></name> <name><surname>Kumar</surname> <given-names>S.</given-names></name> <name><surname>Ahamad</surname> <given-names>M.</given-names></name> <name><surname>Memon</surname> <given-names>N.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;The role of the crowd in countering misinformation: a case study of the COVID-19 infodemic,&#x0201D;</article-title> in <source>2020 IEEE international Conference on Big Data</source> (<publisher-loc>Atlanta, GA</publisher-loc>: <publisher-name>IEEE</publisher-name>).</citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mourali</surname> <given-names>M.</given-names></name> <name><surname>Drake</surname> <given-names>C.</given-names></name></person-group> (<year>2022</year>). <article-title>The challenge of debunking health misinformation in dynamic social media conversations: online randomized study of public masking during COVID-19</article-title>. <source>J. Med. Internet Res.</source> <volume>24</volume>:<fpage>e34831</fpage>. <pub-id pub-id-type="doi">10.2196/34831</pub-id><pub-id pub-id-type="pmid">35156933</pub-id></citation></ref>
<ref id="B44">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mujumdar</surname> <given-names>R.</given-names></name> <name><surname>Kumar</surname> <given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Hawkeye: a robust reputation system for community-based counter-misinformation,&#x0201D;</article-title> in <source>Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining</source> (<publisher-loc>New York, NY</publisher-loc>).</citation>
</ref>
<ref id="B45">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Nabi</surname> <given-names>R. L.</given-names></name> <name><surname>Oliver</surname> <given-names>M. B.</given-names></name></person-group> (<year>2009</year>). <source>The SAGE Handbook of Media Processes and Effects</source>. <publisher-loc>Thousand Oaks, CA</publisher-loc>: <publisher-name>Sage</publisher-name>.</citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Newman</surname> <given-names>G. E.</given-names></name> <name><surname>Dhar</surname> <given-names>R.</given-names></name></person-group> (<year>2014</year>). <article-title>Authenticity is contagious: brand essence and the original source of production</article-title>. <source>J. Market. Res</source>. <volume>51</volume>, <fpage>371</fpage>&#x02013;<lpage>386</lpage>. <pub-id pub-id-type="doi">10.1509/jmr.11.0022</pub-id></citation></ref>
<ref id="B47">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Niven</surname> <given-names>T.</given-names></name> <name><surname>Kao</surname> <given-names>H. Y.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;Measuring alignment to authoritarian state media as framing bias,&#x0201D;</article-title> in <source>Proceedings of the 3rd NLP4IF Workshop on NLP for Internet Freedom: Censorship, Disinformation, and Propaganda</source> (<publisher-loc>Barcelona</publisher-loc>).</citation>
</ref>
<ref id="B48">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Ovadya</surname> <given-names>A.</given-names></name></person-group> (<year>2022</year>). <source>Bridging-Based Ranking</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.belfercenter.org/publication/bridging-based-ranking">https://www.belfercenter.org/publication/bridging-based-ranking</ext-link> (accessed April 2, 2024).</citation>
</ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pennebaker</surname> <given-names>J. W.</given-names></name> <name><surname>King</surname> <given-names>L. A.</given-names></name></person-group> (<year>1999</year>). <article-title>Linguistic styles: language use as an individual difference</article-title>. <source>J. Pers. Soc. Psychol</source>. <volume>77</volume>:<fpage>1296</fpage>. <pub-id pub-id-type="doi">10.1037/0022-3514.77.6.1296</pub-id><pub-id pub-id-type="pmid">10626371</pub-id></citation></ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pennycook</surname> <given-names>G.</given-names></name> <name><surname>Rand</surname> <given-names>D. G.</given-names></name></person-group> (<year>2019</year>). <article-title>Fighting misinformation on social media using crowdsourced judgments of news source quality</article-title>. <source>Proc. Natl. Acad. Sci</source>. <volume>116</volume>, <fpage>2521</fpage>&#x02013;<lpage>2526</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1806781116</pub-id><pub-id pub-id-type="pmid">30692252</pub-id></citation></ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pennycook</surname> <given-names>G.</given-names></name> <name><surname>Rand</surname> <given-names>D. G.</given-names></name></person-group> (<year>2021</year>). <article-title>The psychology of fake news</article-title>. <source>Trends Cogn. Sci</source>. <volume>25</volume>, <fpage>388</fpage>&#x02013;<lpage>402</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2021.02.007</pub-id><pub-id pub-id-type="pmid">33736957</pub-id></citation></ref>
<ref id="B52">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Petty</surname> <given-names>R. E.</given-names></name> <name><surname>Cacioppo</surname> <given-names>J. T.</given-names></name></person-group> (<year>1986</year>). <source>The Elaboration Likelihood Model of Persuasion</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer</publisher-name>.</citation>
</ref>
<ref id="B53">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Pfeffer</surname> <given-names>J.</given-names></name> <name><surname>Matter</surname> <given-names>D.</given-names></name> <name><surname>Sargsyan</surname> <given-names>A.</given-names></name></person-group> (<year>2023</year>). <article-title>&#x0201C;The half-life of a tweet,&#x0201D;</article-title> in <source>Proceedings of the International AAAI Conference on Web and Social Media</source> (<publisher-loc>Washington, DC</publisher-loc>).</citation>
</ref>
<ref id="B54">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Pilarski</surname> <given-names>M.</given-names></name> <name><surname>Solovev</surname> <given-names>K. O.</given-names></name> <name><surname>Pr&#x000F6;llochs</surname> <given-names>N.</given-names></name></person-group> (<year>2024</year>). <article-title>&#x0201C;Community Notes vs. snoping: how the crowd selects fact-checking targets on social media,&#x0201D;</article-title> in <source>Proceedings of the International AAAI Conference on Web and Social Media</source> (<publisher-loc>Washington, DC</publisher-loc>).</citation>
</ref>
<ref id="B55">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Pinto</surname> <given-names>M. R.</given-names></name> <name><surname>de Lima</surname> <given-names>Y. O.</given-names></name> <name><surname>Barbosa</surname> <given-names>C. E.</given-names></name> <name><surname>de Souza</surname> <given-names>J. M.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Towards fact-checking through crowdsourcing,&#x0201D;</article-title> in <source>2019 IEEE 23rd International Conference on Computer Supported Cooperative Work in Design</source> (<publisher-loc>Porto</publisher-loc>: <publisher-name>IEEE</publisher-name>).</citation></ref>
<ref id="B56">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Pr&#x000F6;llochs</surname> <given-names>N.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;Community-based fact-checking on Twitter&#x00027;s Birdwatch platform,&#x0201D;</article-title> in <source>Proceedings of the International AAAI Conference on Web and Social Media</source> (<publisher-loc>Palo Alto, CA</publisher-loc>).</citation>
</ref>
<ref id="B57">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Rashkin</surname> <given-names>H.</given-names></name> <name><surname>Choi</surname> <given-names>E.</given-names></name> <name><surname>Jang</surname> <given-names>J. Y.</given-names></name> <name><surname>Volkova</surname> <given-names>S.</given-names></name> <name><surname>Choi</surname> <given-names>Y.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Truth of varying shades: analyzing language in fake news and political fact-checking,&#x0201D;</article-title> in <source>Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing</source> (<publisher-loc>Copenhagen</publisher-loc>).</citation>
</ref>
<ref id="B58">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Recasens</surname> <given-names>M.</given-names></name> <name><surname>Danescu-Niculescu-Mizil</surname> <given-names>C.</given-names></name> <name><surname>Jurafsky</surname> <given-names>D.</given-names></name></person-group> (<year>2013</year>). <article-title>&#x0201C;Linguistic models for analyzing and detecting biased language,&#x0201D;</article-title> in <source>Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics</source> (<publisher-loc>Sofia</publisher-loc>).</citation>
</ref>
<ref id="B59">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Saeed</surname> <given-names>M.</given-names></name> <name><surname>Traub</surname> <given-names>N.</given-names></name> <name><surname>Nicolas</surname> <given-names>M.</given-names></name> <name><surname>Demartini</surname> <given-names>G.</given-names></name> <name><surname>Papotti</surname> <given-names>P.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;Crowdsourced fact-checking at Twitter: how does the crowd compare with experts?,&#x0201D;</article-title> in <source>Proceedings of the 31st ACM International Conference on Information &#x00026; Knowledge Management</source> (<publisher-loc>New York, NY</publisher-loc>).</citation>
</ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schwarz</surname> <given-names>N.</given-names></name></person-group> (<year>1998</year>). <article-title>Accessible content and accessibility experiences: the interplay of declarative and experiential information in judgment</article-title>. <source>Person. Soc. Psychol. Rev.</source> <volume>2</volume>, <fpage>87</fpage>&#x02013;<lpage>99</lpage>. <pub-id pub-id-type="doi">10.1207/s15327957pspr0202_2</pub-id><pub-id pub-id-type="pmid">15647137</pub-id></citation></ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Simpson</surname> <given-names>K.</given-names></name></person-group> (<year>2022</year>). &#x0201C;<italic>Obama Never Said That&#x0201D;: Evaluating Fact-Checks for Topical Consistency and Quality</italic>. University of Washington ProQuest Dissertations &#x00026; Theses.</citation>
</ref>
<ref id="B62">
<citation citation-type="book"><person-group person-group-type="editor"><name><surname>Southwell</surname> <given-names>B. G.</given-names></name> <name><surname>Thorson</surname> <given-names>E. A.</given-names></name> <name><surname>Sheble</surname> <given-names>L.</given-names></name></person-group> (eds.). (<year>2018</year>). <article-title>&#x0201C;Introduction: misinformation among mass audiences as a focus for inquiry,&#x0201D;</article-title> in <source>Misinformation and Mass Audiences</source> (<publisher-loc>Austin, TX</publisher-loc>: <publisher-name>University of Texas Press</publisher-name>).</citation>
</ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Spinde</surname> <given-names>T.</given-names></name> <name><surname>Hinterreiter</surname> <given-names>S.</given-names></name> <name><surname>Haak</surname> <given-names>F.</given-names></name> <name><surname>Ruas</surname> <given-names>T.</given-names></name> <name><surname>Giese</surname> <given-names>H.</given-names></name> <name><surname>Meuschke</surname> <given-names>N.</given-names></name> <etal/></person-group>. (<year>2023</year>). <article-title>The media bias taxonomy: a systematic literature review on the forms and automated detection of media bias</article-title>. <source>arXiv</source> [preprint]. <pub-id pub-id-type="doi">10.48550/arXiv.2312.16148</pub-id></citation>
</ref>
<ref id="B64">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Stanovich</surname> <given-names>K. E.</given-names></name></person-group> (<year>2009</year>). <article-title>&#x0201C;Distinguishing the reflective, algorithmic, and autonomous minds: is it time for a tri-process theory?,&#x0201D;</article-title> in <source>Two Minds: Dual Processes and Beyond</source>, eds. J. S. B. T. Evans, and K. Frankish (<publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>), <fpage>55</fpage>&#x02013;<lpage>88</lpage>.</citation>
</ref>
<ref id="B65">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ta</surname> <given-names>V. P.</given-names></name> <name><surname>Boyd</surname> <given-names>R. L.</given-names></name> <name><surname>Seraj</surname> <given-names>S.</given-names></name> <name><surname>Keller</surname> <given-names>A.</given-names></name> <name><surname>Griffith</surname> <given-names>C.</given-names></name> <name><surname>Loggarakis</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>An inclusive, real-world investigation of persuasion in language and verbal behavior</article-title>. <source>J. Comp. Soc. Sci.</source> <volume>5</volume>, <fpage>883</fpage>&#x02013;<lpage>903</lpage>. <pub-id pub-id-type="doi">10.1007/s42001-021-00153-5</pub-id><pub-id pub-id-type="pmid">34869936</pub-id></citation></ref>
<ref id="B66">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tausczik</surname> <given-names>Y. R.</given-names></name> <name><surname>Pennebaker</surname> <given-names>J. W.</given-names></name></person-group> (<year>2010</year>). <article-title>The psychological meaning of words: LIWC and computerized text analysis methods</article-title>. <source>J. Lang. Soc. Psychol.</source> <volume>29</volume>, <fpage>24</fpage>&#x02013;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1177/0261927X09351676</pub-id></citation>
</ref>
<ref id="B67">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tay</surname> <given-names>L. Q.</given-names></name> <name><surname>Hurlstone</surname> <given-names>M. J.</given-names></name> <name><surname>Kurz</surname> <given-names>T.</given-names></name> <name><surname>Ecker</surname> <given-names>U. K.</given-names></name></person-group> (<year>2022</year>). <article-title>A comparison of prebunking and debunking interventions for implied versus explicit misinformation</article-title>. <source>Br. J. Psychol</source>. <volume>113</volume>, <fpage>591</fpage>&#x02013;<lpage>607</lpage>. <pub-id pub-id-type="doi">10.1111/bjop.12551</pub-id><pub-id pub-id-type="pmid">34967004</pub-id></citation></ref>
<ref id="B68">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Vallejo</surname> <given-names>G.</given-names></name> <name><surname>Baldwin</surname> <given-names>T.</given-names></name> <name><surname>Frermann</surname> <given-names>L.</given-names></name></person-group> (<year>2024</year>). <article-title>&#x0201C;Connecting the dots in news analysis: bridging the cross-disciplinary disparities in media bias and framing,&#x0201D;</article-title> in <source>Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science</source>.</citation>
</ref>
<ref id="B69">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van der Meer</surname> <given-names>T. G.</given-names></name> <name><surname>Jin</surname> <given-names>Y.</given-names></name></person-group> (<year>2020</year>). <article-title>Seeking formula for misinformation treatment in public health crises: the effects of corrective information type and source</article-title>. <source>Health Commun.</source> <volume>35</volume>, <fpage>560</fpage>&#x02013;<lpage>575</lpage>. <pub-id pub-id-type="doi">10.1080/10410236.2019.1573295</pub-id><pub-id pub-id-type="pmid">30761917</pub-id></citation></ref>
<ref id="B70">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Vo</surname> <given-names>N.</given-names></name> <name><surname>Lee</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Learning from fact-checkers: analysis and generation of fact-checking language,&#x0201D;</article-title> in <source>Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval</source>.</citation></ref>
<ref id="B71">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vraga</surname> <given-names>E. K.</given-names></name> <name><surname>Bode</surname> <given-names>L.</given-names></name></person-group> (<year>2018</year>). <article-title>I do not believe you: how providing a source corrects health misperceptions across social media platforms</article-title>. <source>Inf. Commun. Soc</source>. <volume>21</volume>, <fpage>1337</fpage>&#x02013;<lpage>1353</lpage>. <pub-id pub-id-type="doi">10.1080/1369118X.2017.1313883</pub-id></citation></ref>
<ref id="B72">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vraga</surname> <given-names>E. K.</given-names></name> <name><surname>Kim</surname> <given-names>S. C.</given-names></name> <name><surname>Cook</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>Testing logic-based and humor-based corrections for science, health, and political misinformation on social media</article-title>. <source>J. Broadcast. Electron. Media</source>. <volume>63</volume>, <fpage>393</fpage>&#x02013;<lpage>414</lpage>. <pub-id pub-id-type="doi">10.1080/08838151.2019.1653102</pub-id></citation>
</ref>
<ref id="B73">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Chao</surname> <given-names>F.</given-names></name> <name><surname>Yu</surname> <given-names>G.</given-names></name> <name><surname>Zhang</surname> <given-names>K.</given-names></name></person-group> (<year>2022</year>). <article-title>Factors influencing fake news rebuttal acceptance during the COVID-19 pandemic and the moderating effect of cognitive ability</article-title>. <source>Comput. Human Behav</source>. <volume>130</volume>:<fpage>107174</fpage>. <pub-id pub-id-type="doi">10.1016/j.chb.2021.107174</pub-id><pub-id pub-id-type="pmid">35002055</pub-id></citation></ref>
<ref id="B74">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wilson</surname> <given-names>T. D.</given-names></name> <name><surname>Lindsey</surname> <given-names>S.</given-names></name> <name><surname>Schooler</surname> <given-names>T. Y.</given-names></name></person-group> (<year>2000</year>). <article-title>A model of dual attitudes</article-title>. <source>Psychol. Rev</source>. <volume>107</volume>:<fpage>101</fpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.107.1.101</pub-id><pub-id pub-id-type="pmid">10687404</pub-id></citation></ref>
<ref id="B75">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wineburg</surname> <given-names>S.</given-names></name> <name><surname>McGrew</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>Lateral reading and the nature of expertise: READING less and learning more when evaluating digital information</article-title>. <source>Teach. Coll. Rec</source>. <volume>121</volume>, <fpage>1</fpage>&#x02013;<lpage>40</lpage>. <pub-id pub-id-type="doi">10.1177/016146811912101102</pub-id></citation>
</ref>
<ref id="B76">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wojcik</surname> <given-names>S.</given-names></name> <name><surname>Hilgard</surname> <given-names>S.</given-names></name> <name><surname>Judd</surname> <given-names>N.</given-names></name> <name><surname>Mocanu</surname> <given-names>D.</given-names></name> <name><surname>Ragain</surname> <given-names>S.</given-names></name> <name><surname>Hunzaker</surname> <given-names>M. B.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Birdwatch: crowd wisdom and bridging algorithms can inform understanding and reduce the spread of misinformation</article-title>. <source>arXiv</source> [preprint]. <pub-id pub-id-type="doi">10.48550/arXiv.2210.15723</pub-id></citation>
</ref>
<ref id="B77">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Woolley</surname> <given-names>A. W.</given-names></name> <name><surname>Chabris</surname> <given-names>C. F.</given-names></name> <name><surname>Pentland</surname> <given-names>A.</given-names></name> <name><surname>Hashmi</surname> <given-names>N.</given-names></name> <name><surname>Malone</surname> <given-names>T. W.</given-names></name></person-group> (<year>2010</year>). <article-title>Evidence for a collective intelligence factor in the performance of human groups</article-title>. <source>Science</source> <volume>330</volume>, <fpage>686</fpage>&#x02013;<lpage>688</lpage>. <pub-id pub-id-type="doi">10.1126/science.1193147</pub-id><pub-id pub-id-type="pmid">20929725</pub-id></citation></ref>
<ref id="B78">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ye</surname> <given-names>L.</given-names></name></person-group> (<year>2024</year>). <article-title>Wordless: an integrated corpus tool with multilingual support for the study of language, literature, and translation</article-title>. <source>SoftwareX</source> <volume>28</volume>:<fpage>101931</fpage>. <pub-id pub-id-type="doi">10.1016/j.softx.2024.101931</pub-id></citation>
</ref>
<ref id="B79">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>A.</given-names></name> <name><surname>Naaman</surname> <given-names>M.</given-names></name></person-group> (<year>2023a</year>). <article-title>Variety, velocity, veracity, and viability: evaluating the contributions of crowdsourced and professional fact-checking</article-title>. <source>SocArXiv</source>. <pub-id pub-id-type="doi">10.31235/osf.io/yfxd3</pub-id></citation></ref>
<ref id="B80">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>A.</given-names></name> <name><surname>Naaman</surname> <given-names>M.</given-names></name></person-group> (<year>2023b</year>). <article-title>Insights from a comparative study on the variety, velocity, veracity, and viability of crowdsourced and professional fact-checking services</article-title>. <source>J. Online Trust Saf.</source> <volume>2</volume>:<fpage>118</fpage>. <pub-id pub-id-type="doi">10.54501/jots.v2i1.118</pub-id></citation>
</ref>
<ref id="B81">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhong</surname> <given-names>W.</given-names></name> <name><surname>Yao</surname> <given-names>M.</given-names></name></person-group> (<year>2023</year>). <article-title>Emergency discourse guidance mechanism in international social media platforms: taking the micro-communication of vaccine-related tweets as an example</article-title>. <source>Inf. Sci.</source> <volume>1</volume>, <fpage>93</fpage>&#x02013;<lpage>99</lpage>. <pub-id pub-id-type="doi">10.13833/j.issn.1007-7634.2023.01.011</pub-id></citation>
</ref>
<ref id="B82">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhou</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Luo</surname> <given-names>Q.</given-names></name> <name><surname>Parker</surname> <given-names>A. G.</given-names></name> <name><surname>De Choudhury</surname> <given-names>M.</given-names></name></person-group> (<year>2023</year>). <article-title>&#x0201C;Synthetic lies: understanding AI-generated misinformation and evaluating algorithmic and human solutions,&#x0201D;</article-title> in <source>Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</source>.</citation>
</ref>
</ref-list>
</back>
</article>