<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3-mathml3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="review-article" dtd-version="1.3" xml:lang="en">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Hum. Dyn.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Human Dynamics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Hum. Dyn.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">2673-2726</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fhumd.2026.1791655</article-id>
<article-version article-version-type="Version of Record" vocab="NISO-RP-8-2008"/>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Mini Review</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>AI-generated visual disinformation and digital equity: an intersectional analysis of algorithmic vulnerabilities among marginalised communities</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Putri</surname>
<given-names>Vinanda Cinta Cendekia</given-names>
</name>
<xref ref-type="aff" rid="aff1"/>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/2987959"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; original draft" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing &#x2013; original draft</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Software" vocab-term-identifier="https://credit.niso.org/contributor-roles/software/">Software</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Formal analysis" vocab-term-identifier="https://credit.niso.org/contributor-roles/formal-analysis/">Formal analysis</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Data curation" vocab-term-identifier="https://credit.niso.org/contributor-roles/data-curation/">Data curation</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Conceptualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/conceptualization/">Conceptualization</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Methodology" vocab-term-identifier="https://credit.niso.org/contributor-roles/methodology/">Methodology</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &#x0026; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &#x0026; editing</role>
</contrib>
</contrib-group>
<aff id="aff1"><institution>Digital Communication Study Program, Faculty of Vocational Studies, Hasanuddin University</institution>, <city>Makassar</city>, <country country="ID">Indonesia</country></aff>
<author-notes>
<corresp id="c001"><label>&#x002A;</label>Correspondence: Vinanda Cinta Cendekia Putri, <email xlink:href="mailto:vinandaccp@unhas.ac.id">vinandaccp@unhas.ac.id</email></corresp>
</author-notes>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2026-02-26">
<day>26</day>
<month>02</month>
<year>2026</year>
</pub-date>
<pub-date publication-format="electronic" date-type="collection">
<year>2026</year>
</pub-date>
<volume>8</volume>
<elocation-id>1791655</elocation-id>
<history>
<date date-type="received">
<day>20</day>
<month>01</month>
<year>2026</year>
</date>
<date date-type="rev-recd">
<day>07</day>
<month>02</month>
<year>2026</year>
</date>
<date date-type="accepted">
<day>18</day>
<month>02</month>
<year>2026</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2026 Putri.</copyright-statement>
<copyright-year>2026</copyright-year>
<copyright-holder>Putri</copyright-holder>
<license>
<ali:license_ref start_date="2026-02-26">https://creativecommons.org/licenses/by/4.0/</ali:license_ref>
<license-p>This is an open-access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License (CC BY)</ext-link>. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>The proliferation of artificial intelligence (AI) technologies has fundamentally transformed the landscape of visual disinformation, creating novel challenges for digital equity and social justice. This mini review examines how AI-generated and AI-amplified visual content disproportionately impacts marginalised communities through intersecting vulnerabilities related to race, ethnicity, socioeconomic status, geographic location, and digital literacy. Drawing on intersectional theory and social identity frameworks, we synthesise recent empirical evidence demonstrating that algorithmic systems systematically disadvantage specific demographic groups through biased content generation, inequitable distribution mechanisms, and differential access to verification tools. Our analysis reveals that communities of colour, low-income populations, and individuals in the Global South face compounded risks from AI-driven disinformation ecosystems. We identify critical gaps in current interventions and propose equity-centred approaches to address these disparities, including algorithmic accountability frameworks, culturally responsive media literacy programs, and inclusive platform design. This review contributes to emerging scholarship at the intersection of AI ethics, communication studies, and social equity by highlighting how technological systems reproduce and amplify existing societal inequalities.</p>
</abstract>
<kwd-group>
<kwd>algorithmic bias</kwd>
<kwd>artificial intelligence</kwd>
<kwd>deepfakes</kwd>
<kwd>digital equity</kwd>
<kwd>generative AI</kwd>
<kwd>intersectionality</kwd>
<kwd>marginalised communities</kwd>
<kwd>media literacy</kwd>
</kwd-group>
<funding-group>
<funding-statement>The author(s) declared that financial support was not received for this work and/or its publication.</funding-statement>
</funding-group>
<counts>
<fig-count count="0"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="34"/>
<page-count count="6"/>
<word-count count="3945"/>
</counts>
<custom-meta-group>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Digital Impacts</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<label>1</label>
<title>Introduction</title>
<p>The rapid advancement of artificial intelligence technologies has ushered in an unprecedented era of synthetic media production, fundamentally altering the information ecosystem and raising critical concerns about digital equity. Generative AI systems now enable the creation of photorealistic images and videos that are virtually indistinguishable from authentic content (<xref ref-type="bibr" rid="ref23">Naffi, 2025</xref>; <xref ref-type="bibr" rid="ref24">Negreiro, 2025</xref>). The generative AI market is projected to grow at a 40% compound annual growth rate, expanding from $16 billion in 2024 to $85 billion by 2029 (<xref ref-type="bibr" rid="ref32">S&#x0026;P Global, 2025</xref>), while deepfake detection company Onfido reported a 3,000% increase in deepfake attempts in 2023 (<xref ref-type="bibr" rid="ref29">Perkins and Verevis, 2015</xref>). This technological revolution has profound implications for marginalised communities, who face disproportionate vulnerabilities to AI-generated visual disinformation due to intersecting structural inequalities (<xref ref-type="bibr" rid="ref25">Noble, 2018</xref>; <xref ref-type="bibr" rid="ref3">Benjamin, 2020</xref>).</p>
<p>Visual disinformation is a particularly pernicious form of false information because human cognitive systems process images more efficiently and with greater emotional resonance than text-based content. Research demonstrates that visual content receives 94% more views than text-only content and generates significantly higher engagement rates across social media platforms (<xref ref-type="bibr" rid="ref19">Koshy, 2019</xref>; <xref ref-type="bibr" rid="ref6">Connell, 2025</xref>). Posts with images receive 2.3 times more engagement on Facebook, while visual content is 40 times more likely to be shared across social media channels (<xref ref-type="bibr" rid="ref6">Connell, 2025</xref>; <xref ref-type="bibr" rid="ref19">Koshy, 2019</xref>). When combined with AI capabilities, this visual processing advantage creates amplified risks, as synthetically generated images can exploit cognitive biases while evading traditional detection mechanisms (<xref ref-type="bibr" rid="ref12">Hamiel, 2025</xref>).</p>
<p>The intersection of AI technologies and social inequities demands urgent scholarly attention. Communities already experiencing systemic marginalisation face compounded vulnerabilities in the AI-driven information ecosystem through multiple pathways: differential exposure to harmful content due to algorithmic targeting, limited access to verification tools and digital literacy resources, cultural and linguistic barriers to fact-checking infrastructure, and historical patterns of discriminatory representation in training data that perpetuate stereotypes through AI-generated content (<xref ref-type="bibr" rid="ref20">Lendvai and Gosztonyi, 2025</xref>; <xref ref-type="bibr" rid="ref28">Pasipamire and Muroyiwa, 2024</xref>). These intersecting vulnerabilities create what we term &#x2018;algorithmic injustice&#x2019; in the disinformation landscape.</p>
<p>This mini review synthesises emerging research at the intersection of AI technologies, visual disinformation, and social equity. We examine how algorithmic systems systematically disadvantage specific demographic groups and identify critical gaps in current interventions. Our analysis is grounded in intersectional theory (<xref ref-type="bibr" rid="ref8">Crenshaw, 1989</xref>; <xref ref-type="bibr" rid="ref13">Hill Collins, 2000</xref>), which recognises that individuals experience multiple, overlapping forms of oppression and privilege, and social identity theory (<xref ref-type="bibr" rid="ref33">Tajfel and Turner, 1979</xref>), which illuminates how group membership shapes information processing and trust. By applying these frameworks to the AI disinformation context, we advance understanding of how technological systems reproduce and amplify existing societal inequalities.</p>
<sec id="sec2">
<label>1.1</label>
<title>Literature review methodology</title>
<p>This mini review employs a narrative synthesis approach to examine the intersection of AI-generated visual disinformation and social equity. Literature was identified through systematic searches of academic databases (Scopus, Web of Science, Google Scholar) and specialised repositories (arXiv, SSRN) conducted between October and December 2025. Search terms included combinations of &#x201C;generative AI,&#x201D; &#x201C;deepfakes,&#x201D; &#x201C;visual disinformation,&#x201D; &#x201C;algorithmic bias,&#x201D; &#x201C;marginalized communities,&#x201D; &#x201C;digital equity,&#x201D; and &#x201C;intersectionality.&#x201D; The review prioritised peer-reviewed articles, institutional reports, and policy documents published between 2023 and 2025 to capture recent developments in generative AI technologies, with foundational theoretical works included for conceptual grounding.</p>
<p>Inclusion criteria focused on: (1) empirical studies or theoretical analyses addressing AI-generated visual content; (2) research examining differential impacts across social identities or communities; (3) scholarship on platform governance, algorithmic systems, or digital equity; and (4) English-language sources with accessible full text. The final corpus comprised approximately 50 sources spanning communication studies, critical algorithm studies, computer science, sociology, and policy research, with representation from North American, European, and Global South contexts. Thematic analysis identified four primary analytical domains: AI production mechanisms, intersectional vulnerability frameworks, platform algorithmic governance, and equity-centred interventions. The vulnerabilities matrix (<xref ref-type="table" rid="tab1">Table 1</xref>) represents a conceptual synthesis derived from patterns across the reviewed literature rather than original empirical data.</p>
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption>
<p>Intersectional vulnerabilities to AI-generated visual disinformation (Conceptual synthesis derived from reviewed literature).</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Dimension</th>
<th align="left" valign="top">Primary vulnerabilities</th>
<th align="left" valign="top">Algorithmic mechanisms</th>
<th align="left" valign="top">Equity interventions</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Race and ethnicity</td>
<td align="left" valign="top">Biased facial recognition; stereotypical AI outputs; discriminatory targeting</td>
<td align="left" valign="top">Training data bias; performance disparities; stereotype amplification</td>
<td align="left" valign="top">Diverse training datasets; equitable testing; culturally responsive moderation</td>
</tr>
<tr>
<td align="left" valign="top">Socioeconomic status</td>
<td align="left" valign="top">Limited verification access; mobile constraints; educational gaps</td>
<td align="left" valign="top">Digital divide in tool access; targeting of vulnerable populations</td>
<td align="left" valign="top">Free verification tools; mobile-optimised detection; community literacy</td>
</tr>
<tr>
<td align="left" valign="top">Geographic location</td>
<td align="left" valign="top">Infrastructure limits; linguistic barriers; limited fact-checking</td>
<td align="left" valign="top">Western-centric design; Global South underserved; localised targeting</td>
<td align="left" valign="top">Multilingual fact-checking; region-specific models; local moderators</td>
</tr>
<tr>
<td align="left" valign="top">Digital literacy</td>
<td align="left" valign="top">Reduced AI awareness; limited technical skills; social proof reliance</td>
<td align="left" valign="top">Viral amplification exploits heuristics; complex UX disadvantages novices</td>
<td align="left" valign="top">AI literacy curricula; simplified interfaces; peer education programs</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec id="sec3">
<label>2</label>
<title>AI and the production of visual disinformation</title>
<sec id="sec4">
<label>2.1</label>
<title>Generative AI technologies and synthetic media</title>
<p>Recent advances in generative AI have democratised access to sophisticated image manipulation and generation tools. These technologies operate through complex training processes on massive datasets, learning to generate novel visual content that mimics authentic photographs and videos. The global market for AI-generated deepfakes is projected to reach $79.1 million by the end of 2024, with a compound annual growth rate of 37.6% (<xref ref-type="bibr" rid="ref29">Perkins and Verevis, 2015</xref>). In 2023, deepfake fraud attempts accounted for 6.5% of total fraud incidents, marking a 2,137% increase over three years (<xref ref-type="bibr" rid="ref29">Perkins and Verevis, 2015</xref>). While these innovations offer legitimate creative applications, they simultaneously create unprecedented opportunities for malicious actors to produce convincing disinformation at scale (<xref ref-type="bibr" rid="ref5">Carr and K&#x00F6;hler, 2024</xref>).</p>
<p>Critically, the training data underlying these AI systems reflects and often amplifies existing societal biases. Studies of major image generation models reveal systematic underrepresentation of non-Western contexts, stereotypical portrayals of racial and ethnic minorities, and skewed gender representations (<xref ref-type="bibr" rid="ref4">Bonil et al., 2025</xref>; <xref ref-type="bibr" rid="ref20">Lendvai and Gosztonyi, 2025</xref>). When prompted with neutral descriptors, these systems disproportionately generate images aligned with dominant cultural stereotypes, effectively encoding discrimination into the visual output. These biased outputs become particularly harmful when deployed in disinformation campaigns targeting specific communities.</p>
</sec>
<sec id="sec5">
<label>2.2</label>
<title>Algorithmic amplification and distribution mechanisms</title>
<p>Beyond content generation, AI systems play a crucial role in amplifying and distributing visual disinformation through platform recommendation algorithms. Social media platforms employ sophisticated machine learning models to maximise engagement, often prioritising emotionally charged and controversial content regardless of veracity. Research demonstrates that false visual information spreads significantly faster than accurate content, with platform algorithms inadvertently accelerating this dissemination (<xref ref-type="bibr" rid="ref15">Hootsuite, 2025</xref>). Marginalised communities face heightened exposure to such content due to algorithmic targeting practices that exploit demographic characteristics and online behaviours (<xref ref-type="bibr" rid="ref2">Arora et al., 2023</xref>). Platforms&#x2019; optimisation for engagement creates feedback loops in which sensationalised disinformation receives preferential distribution to vulnerable populations, compounding informational inequities.</p>
</sec>
</sec>
<sec id="sec6">
<label>3</label>
<title>Intersectionality and differential vulnerabilities</title>
<sec id="sec7">
<label>3.1</label>
<title>Race, ethnicity, and cultural dimensions</title>
<p>Communities of colour experience distinctive vulnerabilities to AI-generated visual disinformation rooted in historical patterns of discrimination and contemporary algorithmic bias. Facial recognition systems, which often serve as components of verification tools, demonstrate significantly higher error rates for members of racial and ethnic minority groups, particularly individuals with darker skin tones (<xref ref-type="bibr" rid="ref11">Franklin et al., 2024</xref>). This &#x2018;technology performance gap&#x2019; means that minority communities cannot rely equally on AI-powered tools to detect synthetic media, leaving them with asymmetric protection against deepfakes and manipulated images.</p>
<p>Moreover, AI-generated disinformation frequently exploits racial and ethnic stereotypes, with documented cases of synthetic images depicting people of colour in criminal contexts or crises circulating widely during social movements and political campaigns (<xref ref-type="bibr" rid="ref17">Insikt Group, 2024</xref>; <xref ref-type="bibr" rid="ref5">Carr and K&#x00F6;hler, 2024</xref>). The cultural context of information evaluation further compounds these vulnerabilities. Communities with collectivist cultural orientations exhibit distinct trust heuristics and information-sharing patterns compared to individualist Western contexts. Yet, platform architectures and detection systems are predominantly designed based on Western user behaviours and values (<xref ref-type="bibr" rid="ref20">Lendvai and Gosztonyi, 2025</xref>).</p>
</sec>
<sec id="sec8">
<label>3.2</label>
<title>Socioeconomic status and digital access</title>
<p>Socioeconomic disparities create fundamental inequities in both exposure to and protection from AI-driven visual disinformation. Low-income populations typically access the internet through mobile devices with limited screen size and computational capacity, constraining their ability to scrutinise image authenticity through zooming, reverse image search, or metadata analysis (<xref ref-type="bibr" rid="ref14">Hollimon et al., 2025</xref>; <xref ref-type="bibr" rid="ref31">Raihan et al., 2024</xref>). These communities also face restricted access to premium fact-checking services, verification tools, and high-quality media literacy education.</p>
<p>Research examining information behaviours across socioeconomic strata reveals that individuals with lower educational attainment and income demonstrate reduced confidence in identifying manipulated images and greater reliance on social proof as a credibility heuristic (<xref ref-type="bibr" rid="ref26">Obed and Boateng, 2025</xref>). When AI-amplified disinformation reaches virality through engagement-optimising algorithms, these social proof signals become particularly misleading for economically marginalised populations. Furthermore, communities experiencing economic precarity often lack the temporal resources for extensive fact-checking, creating time-based vulnerabilities that algorithmic systems can exploit through rapid-fire dissemination of synthetic content.</p>
</sec>
<sec id="sec9">
<label>3.3</label>
<title>Geographic and linguistic barriers</title>
<p>The Global South faces compounded vulnerabilities to AI-generated visual disinformation, including infrastructure limitations, linguistic barriers, and cultural distance from dominant fact-checking ecosystems. Major fact-checking organisations and verification platforms predominantly operate in English and serve Western audiences, leaving non-English-speaking communities with minimal access to authoritative debunking resources (<xref ref-type="bibr" rid="ref20">Lendvai and Gosztonyi, 2025</xref>; <xref ref-type="bibr" rid="ref30">Poggi, 2025</xref>). As of 2024, 2.6 billion people remained offline globally, with internet penetration in developing countries significantly lower than in developed nations (<xref ref-type="bibr" rid="ref30">Poggi, 2025</xref>). AI-generated disinformation can be rapidly localised and culturally adapted to specific regions, whereas fact-checking responses lag significantly. Rural and remote communities additionally contend with limited bandwidth that restricts access to high-resolution image analysis tools while simultaneously experiencing targeted disinformation campaigns that exploit local socio-political tensions (<xref ref-type="bibr" rid="ref14">Hollimon et al., 2025</xref>). The geographic concentration of AI development in wealthy nations further ensures that detection tools and platform safeguards are optimised to protect users in developed markets rather than to address the distinctive vulnerabilities of Global South populations.</p>
</sec>
</sec>
<sec id="sec10">
<label>4</label>
<title>Platform algorithms and equity implications</title>
<sec id="sec11">
<label>4.1</label>
<title>Engagement optimisation and demographic targeting</title>
<p>Platform recommendation algorithms employ sophisticated demographic profiling to optimise content delivery, creating differential information environments across social groups. These systems analyse user interactions, demographic attributes, and inferred characteristics to predict engagement likelihood, often amplifying emotionally resonant content regardless of accuracy. Research demonstrates that algorithms disproportionately expose certain demographic groups to misleading visual content, particularly during politically charged periods or health crises (<xref ref-type="bibr" rid="ref21">Liu et al., 2025</xref>; <xref ref-type="bibr" rid="ref15">Hootsuite, 2025</xref>).</p>
<p>Critically, the engagement metrics that guide algorithmic distribution encode existing social inequalities. Content that reinforces dominant narratives and stereotypes typically generates higher engagement among majority populations, creating algorithmic feedback loops that marginalise counter-narratives and authentic representations from minority communities (<xref ref-type="bibr" rid="ref27">O'Neil, 2016</xref>). When AI-generated disinformation exploits these stereotypes, platform algorithms may inadvertently prioritise its distribution to the very communities most harmed by it. This creates &#x2018;algorithmic redlining,&#x2019; wherein marginalised groups receive inferior information quality through systematically biased content curation (<xref ref-type="bibr" rid="ref16">Humber, 2023</xref>).</p>
</sec>
<sec id="sec12">
<label>4.2</label>
<title>Moderation disparities and enforcement gaps</title>
<p>Content moderation systems, increasingly powered by AI, demonstrate systematic disparities in protecting different demographic groups from harmful disinformation. Automated moderation tools frequently misclassify content from minority communities due to biases in training data, leading to both over-moderation of legitimate speech and under-moderation of harmful disinformation (<xref ref-type="bibr" rid="ref4">Bonil et al., 2025</xref>; <xref ref-type="bibr" rid="ref11">Franklin et al., 2024</xref>). Analysis of platform enforcement reveals that AI-generated disinformation targeting marginalised communities receives slower removal and less comprehensive action compared to similar content affecting majority populations. These disparities reflect underlying inequities in how platforms perceive valuable users worthy of protection, with moderation resources concentrated on serving dominant demographic groups. The result is a two-tiered information ecosystem in which marginalised communities experience both heightened exposure to AI-driven disinformation and diminished platform accountability for addressing these harms.</p>
<p>The following matrix (<xref ref-type="table" rid="tab1">Table 1</xref>) presents a conceptual synthesis of vulnerability patterns identified across the reviewed literature. These categories are analytical constructs that organise insights from multiple empirical studies, theoretical frameworks, and policy analyses rather than representing original empirical findings. The matrix illustrates how different axes of marginalisation intersect to create compounded exposure to AI-generated visual disinformation.</p>
</sec>
</sec>
<sec id="sec13">
<label>5</label>
<title>Discussion and future directions</title>
<sec id="sec14">
<label>5.1</label>
<title>Toward equity-centred AI governance</title>
<p>Addressing the inequitable impacts of AI-generated visual disinformation requires a fundamental reconceptualisation of platform governance and algorithmic accountability. Current regulatory frameworks and industry self-regulation efforts predominantly centre on the experiences and needs of privileged users in developed nations, neglecting the distinctive vulnerabilities of marginalised communities (<xref ref-type="bibr" rid="ref20">Lendvai and Gosztonyi, 2025</xref>). An equity-centred approach demands that platform policies, detection systems, and moderation practices be designed explicitly to protect the most vulnerable populations rather than to optimise for the majority user experience.</p>
<p>This reorientation requires several critical interventions. First, mandatory algorithmic impact assessments should evaluate how AI systems affect different demographic groups, with particular attention to compounding disadvantages faced by individuals at the intersection of multiple marginalised identities. Second, platform accountability mechanisms must incorporate meaningful participation from affected communities in the design, deployment, and oversight of content moderation systems (<xref ref-type="bibr" rid="ref10">Eslami et al., 2025</xref>; <xref ref-type="bibr" rid="ref1">Alon-Barkat et al., 2025</xref>). Research demonstrates that communities possess the capacity to critique algorithmic impacts when provided with participatory governance opportunities, challenging the assumption that technical complexity deters meaningful engagement. Third, regulatory frameworks should mandate equitable resource allocation for protecting diverse populations, preventing the current pattern wherein moderation capacity concentrates on serving dominant user groups (<xref ref-type="bibr" rid="ref24">Negreiro, 2025</xref>).</p>
</sec>
<sec id="sec15">
<label>5.2</label>
<title>Culturally responsive media literacy</title>
<p>Traditional media literacy interventions demonstrate limited effectiveness for addressing AI-driven disinformation among marginalised communities, as these programs typically assume Western cultural contexts, high educational attainment, and extensive digital access (<xref ref-type="bibr" rid="ref22">Merod, 2025</xref>; <xref ref-type="bibr" rid="ref9">DeMio, 2024</xref>). Culturally responsive approaches recognise that information evaluation practices are fundamentally shaped by community values, historical experiences with institutional trust, and culturally specific communication norms.</p>
<p>Effective interventions must be co-designed with target communities, incorporating local knowledge systems and leveraging existing community networks for peer education (<xref ref-type="bibr" rid="ref18">Jackson et al., 2024</xref>). Programs should address AI-specific literacy needs, including understanding synthetic media generation, recognising algorithmic amplification patterns, and awareness of demographic targeting practices. Critically, such initiatives must confront the structural inequities that create differential vulnerabilities rather than placing responsibility solely on individual users to navigate hostile information environments (<xref ref-type="bibr" rid="ref23">Naffi, 2025</xref>).</p>
</sec>
<sec id="sec16">
<label>5.3</label>
<title>Research gaps and future scholarship</title>
<p>Despite growing recognition of equity concerns in AI systems, significant research gaps persist. First, empirical studies examining the specific mechanisms through which algorithmic systems disadvantage particular demographic groups remain limited, particularly for intersectional identities and Global South contexts (<xref ref-type="bibr" rid="ref28">Pasipamire and Muroyiwa, 2024</xref>; <xref ref-type="bibr" rid="ref30">Poggi, 2025</xref>). Second, longitudinal research tracking the evolution of vulnerabilities as AI technologies advance is urgently needed. Third, intervention effectiveness must be rigorously evaluated across diverse populations to identify which approaches successfully protect marginalised communities. Fourth, scholarship should examine how communities develop grassroots resistance strategies and vernacular expertise for navigating AI-driven disinformation ecosystems. Finally, critical attention to the political economy of AI development can illuminate how existing power structures shape whose interests algorithmic systems serve and whose harms receive acknowledgement and remediation (<xref ref-type="bibr" rid="ref7">Crawford, 2021</xref>; <xref ref-type="bibr" rid="ref34">Zuboff, 2019</xref>). Advancing digital equity in the age of AI-generated disinformation requires sustained interdisciplinary collaboration among computer scientists, communication scholars, social scientists, and, most importantly, the communities experiencing these algorithmic injustices.</p>
</sec>
</sec>
<sec sec-type="conclusions" id="sec17">
<label>6</label>
<title>Conclusion</title>
<p>The proliferation of AI-generated visual disinformation represents a critical threat to digital equity and social justice. This review has demonstrated how algorithmic systems systematically disadvantage marginalised communities by leveraging intersecting vulnerabilities related to race, ethnicity, socioeconomic status, geographic location, and digital literacy. The evidence synthesised here reveals that AI technologies do not merely reflect existing inequalities but actively reproduce and amplify them through biased training data, discriminatory performance disparities, inequitable platform governance, and differential access to protective resources. Addressing these challenges requires comprehensive interventions that span technical, educational, regulatory, and community-centred approaches. Only through sustained commitment to centring the experiences and needs of the most vulnerable populations can we build AI systems and information ecosystems that advance rather than undermine digital equity.</p>
</sec>
</body>
<back>
<sec sec-type="author-contributions" id="sec18">
<title>Author contributions</title>
<p>VP: Writing &#x2013; original draft, Software, Formal analysis, Data curation, Conceptualization, Methodology, Writing &#x2013; review &#x0026; editing.</p>
</sec>
<ack>
<title>Acknowledgments</title>
<p>The author thanks the Digital Communication Study Program, Faculty of Vocational Studies at Hasanuddin University, for institutional support.</p>
</ack>
<sec sec-type="COI-statement" id="sec19">
<title>Conflict of interest</title>
<p>The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="ai-statement" id="sec20">
<title>Generative AI statement</title>
<p>The author(s) declared that Generative AI was not used in the creation of this manuscript.</p>
</sec>
<sec sec-type="disclaimer" id="sec21">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Alon-Barkat</surname><given-names>S.</given-names></name> <name><surname>Busuioc</surname><given-names>M.</given-names></name> <name><surname>Schwoerer</surname><given-names>K.</given-names></name> <name><surname>Wei&#x00DF;m&#x00FC;ller</surname><given-names>K. S.</given-names></name></person-group> (<year>2025</year>). <article-title>Algorithmic discrimination in public service provision: understanding citizens&#x2019; attribution of responsibility for human versus algorithmic discriminatory outcomes</article-title>. <source>J. Public Adm. Res. Theory</source> <volume>35</volume>, <fpage>469</fpage>&#x2013;<lpage>488</lpage>. doi: <pub-id pub-id-type="doi">10.1093/jopart/muaf024</pub-id></mixed-citation></ref>
<ref id="ref2"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Arora</surname><given-names>A.</given-names></name> <name><surname>Barrett</surname><given-names>M.</given-names></name> <name><surname>Lee</surname><given-names>E.</given-names></name> <name><surname>Oborn</surname><given-names>E.</given-names></name> <name><surname>Prince</surname><given-names>K.</given-names></name></person-group> (<year>2023</year>). <article-title>Risk and the future of AI: algorithmic bias, data colonialism, and marginalization</article-title>. <source>Inf. Organ.</source> <volume>33</volume>:100478. doi: <pub-id pub-id-type="doi">10.1016/j.infoandorg.2023.100478</pub-id></mixed-citation></ref>
<ref id="ref3"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Benjamin</surname><given-names>R.</given-names></name></person-group> (<year>2020</year>). <source>Race after technology: Abolitionist tools for the new Jim code</source>. <publisher-loc>Cambridge, UK; Medford, MA</publisher-loc>: <publisher-name>Polity</publisher-name>.</mixed-citation></ref>
<ref id="ref4"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Bonil</surname><given-names>G.</given-names></name> <name><surname>Hashiguti</surname><given-names>S.</given-names></name> <name><surname>Silva</surname><given-names>J.</given-names></name> <name><surname>Gondim</surname><given-names>J.</given-names></name> <name><surname>Maia</surname><given-names>H.</given-names></name> <name><surname>Silva</surname><given-names>N.</given-names></name> <etal/></person-group>. (<year>2025</year>). <article-title>Yet another algorithmic bias: a discursive analysis of large language models reinforcing dominant discourses on gender and race</article-title>. arXiv [preprint]. Available online at: <ext-link xlink:href="https://arxiv.org/abs/2508.10304" ext-link-type="uri">https://arxiv.org/abs/2508.10304</ext-link></mixed-citation></ref>
<ref id="ref5"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Carr</surname><given-names>R.</given-names></name> <name><surname>K&#x00F6;hler</surname><given-names>P.</given-names></name></person-group> (<year>2024</year>). AI-pocalypse Now? Disinformation, AI, and the Super Election Year. Available online at: <ext-link xlink:href="https://securityconference.org/en/publications/analyses/ai-pocalypse-disinformation-super-election-year" ext-link-type="uri">https://securityconference.org/en/publications/analyses/ai-pocalypse-disinformation-super-election-year</ext-link> (Accessed June 23, 2025).</mixed-citation></ref>
<ref id="ref6"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Connell</surname><given-names>A.</given-names></name></person-group> (<year>2025</year>). 39 Latest Visual Content Statistics and Trends for 2026. Available online at: <ext-link xlink:href="https://bloggingwizard.com/visual-content-statistics/" ext-link-type="uri">https://bloggingwizard.com/visual-content-statistics/</ext-link>.</mixed-citation></ref>
<ref id="ref7"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Crawford</surname><given-names>K.</given-names></name></person-group> (<year>2021</year>). <source>The atlas of AI: Power, politics, and the planetary costs of artificial intelligence</source>. London: <publisher-name>Yale University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref8"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Crenshaw</surname><given-names>K.</given-names></name></person-group> (<year>1989</year>). <article-title>Demarginalizing the intersection of race and sex: a black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics</article-title>. <source>Univ. Chicago Legal Forum</source> <volume>1989</volume>, <fpage>139</fpage>&#x2013;<lpage>167</lpage>.</mixed-citation></ref>
<ref id="ref9"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>DeMio</surname><given-names>P. S.</given-names></name></person-group> (<year>2024</year>). <source>How states and districts can close the digital divide to increase college and career readiness</source>. Washington: <publisher-name>Center for American Progress</publisher-name>.</mixed-citation></ref>
<ref id="ref10"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Eslami</surname><given-names>M.</given-names></name> <name><surname>Fox</surname><given-names>S.</given-names></name> <name><surname>Shen</surname><given-names>H.</given-names></name> <name><surname>Fan</surname><given-names>B.</given-names></name> <name><surname>Lin</surname><given-names>Y.-R.</given-names></name> <name><surname>Farzan</surname><given-names>R.</given-names></name> <etal/></person-group>. (<year>2025</year>). &#x201C;<chapter-title>From margins to the Table: charting the potential for public participatory governance of algorithmic decision making</chapter-title>&#x201D; in <source>Proceedings of the 2025 ACM conference on fairness, accountability, and transparency</source>. doi: <pub-id pub-id-type="doi">10.1145/3715275.3732173</pub-id></mixed-citation></ref>
<ref id="ref11"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Franklin</surname><given-names>G.</given-names></name> <name><surname>Stephens</surname><given-names>R.</given-names></name> <name><surname>Piracha</surname><given-names>M.</given-names></name> <name><surname>Tiosano</surname><given-names>S.</given-names></name> <name><surname>Lehouillier</surname><given-names>F.</given-names></name> <name><surname>Koppel</surname><given-names>R.</given-names></name> <etal/></person-group>. (<year>2024</year>). <article-title>The sociodemographic biases in machine learning algorithms: a biomedical informatics perspective</article-title>. <source>Life</source> <volume>14</volume>, <fpage>1</fpage>&#x2013;<lpage>15</lpage>. doi: <pub-id pub-id-type="doi">10.3390/life14060652</pub-id>, <pub-id pub-id-type="pmid">38929638</pub-id></mixed-citation></ref>
<ref id="ref12"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Hamiel</surname><given-names>N.</given-names></name></person-group> (<year>2025</year>). <article-title>Deepfakes proved a different threat than expected. Here's how to defend against them</article-title>. Available online at: <ext-link xlink:href="https://www.weforum.org/stories/2025/01/deepfakes-different-threat-than-expected/" ext-link-type="uri">https://www.weforum.org/stories/2025/01/deepfakes-different-threat-than-expected/</ext-link> (Accessed June 23, 2025).</mixed-citation></ref>
<ref id="ref13"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Hill Collins</surname><given-names>P.</given-names></name></person-group> (<year>2000</year>). <source>Black feminist thought: Knowledge, consciousness, and the politics of empowerment</source>. <edition>Rev. 10th anniversary</edition> Edn. <publisher-loc>New York</publisher-loc>: <publisher-name>Routledge</publisher-name>.</mixed-citation></ref>
<ref id="ref14"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hollimon</surname><given-names>L. A.</given-names></name> <name><surname>Taylor</surname><given-names>K. V.</given-names></name> <name><surname>Fiegenbaum</surname><given-names>R.</given-names></name> <name><surname>Carrasco</surname><given-names>M.</given-names></name> <name><surname>Garchitorena Gomez</surname><given-names>L.</given-names></name> <name><surname>Chung</surname><given-names>D.</given-names></name> <etal/></person-group>. (<year>2025</year>). <article-title>Redefining and solving the digital divide and exclusion to improve healthcare: going beyond access to include availability, adequacy, acceptability, and affordability</article-title>. <source>Front. Digit. Health.</source> <volume>7</volume>:<fpage>1508686</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fdgth.2025.1508686</pub-id>, <pub-id pub-id-type="pmid">40330871</pub-id></mixed-citation></ref>
<ref id="ref15"><mixed-citation publication-type="book"><collab id="coll1">Hootsuite</collab> (<year>2025</year>). <source>Social media trends 2025</source>. Vancouver: <publisher-name>Hootsuite</publisher-name>.</mixed-citation></ref>
<ref id="ref16"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Humber</surname><given-names>N. J.</given-names></name></person-group> (<year>2023</year>). <article-title>A home for digital equity: algorithmic redlining and property technology</article-title>. <source>Calif. Law Rev.</source> <volume>111</volume>, <fpage>1421</fpage>&#x2013;<lpage>1484</lpage>. doi: <pub-id pub-id-type="doi">10.15779/Z38ZP3W18S</pub-id></mixed-citation></ref>
<ref id="ref17"><mixed-citation publication-type="other"><collab id="coll2">Insikt Group</collab> (<year>2024</year>). Targets, Objectives, and Emerging Tactics of Political Deepfakes. Recorded Future. Available online at: <ext-link xlink:href="https://www.recordedfuture.com/research/targets-objectives-emerging-tactics-political-deepfakes" ext-link-type="uri">https://www.recordedfuture.com/research/targets-objectives-emerging-tactics-political-deepfakes</ext-link></mixed-citation></ref>
<ref id="ref18"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Jackson</surname><given-names>J. K.</given-names></name> <name><surname>Starr</surname><given-names>J.</given-names></name> <name><surname>Weaver</surname><given-names>D.</given-names></name></person-group> (<year>2024</year>). A New Approach to Digital Equity: A Framework for States and Schools. Getting Smart. Available online at: <ext-link xlink:href="https://www.gettingsmart.com/2024/09/09/a-new-approach-to-digital-equity-a-framework-for-states-and-schools/" ext-link-type="uri">https://www.gettingsmart.com/2024/09/09/a-new-approach-to-digital-equity-a-framework-for-states-and-schools/</ext-link>.</mixed-citation></ref>
<ref id="ref19"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Koshy</surname><given-names>V.</given-names></name></person-group> (<year>2019</year>). Visual Content Marketing Statistics: 52 Must-Know Insights for 2024. Available online at: <ext-link xlink:href="https://www.sproutworth.com/visual-content-marketing-statistics/" ext-link-type="uri">https://www.sproutworth.com/visual-content-marketing-statistics/</ext-link>.</mixed-citation></ref>
<ref id="ref20"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Lendvai</surname><given-names>G. F.</given-names></name> <name><surname>Gosztonyi</surname><given-names>G.</given-names></name></person-group> (<year>2025</year>). <article-title>Algorithmic bias as a core legal dilemma in the age of artificial intelligence: conceptual basis and the current state of regulation</article-title>. <source>Laws</source> <volume>14</volume>, <fpage>1</fpage>&#x2013;<lpage>15</lpage>. doi: <pub-id pub-id-type="doi">10.3390/laws14030041</pub-id></mixed-citation></ref>
<ref id="ref21"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname><given-names>L.</given-names></name> <name><surname>Yingfei</surname><given-names>W.</given-names></name> <name><surname>Zhen</surname><given-names>F.</given-names></name> <name><surname>Shaohui</surname><given-names>W.</given-names></name></person-group> (<year>2025</year>). <article-title>The impact of verbal and visual content on consumer engagement in social media marketing</article-title>. <source>Prod. Oper. Manag.</source> <volume>34</volume>, <fpage>3416</fpage>&#x2013;<lpage>3437</lpage>. doi: <pub-id pub-id-type="doi">10.1177/10591478251349892</pub-id></mixed-citation></ref>
<ref id="ref22"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Merod</surname><given-names>A.</given-names></name></person-group> (<year>2025</year>). How K-12 leaders can tackle the &#x2018;digital use divide&#x2019;. Available online at: <ext-link xlink:href="https://www.k12dive.com/news/how-k-12-leaders-can-tackle-the-digital-use-divide/804672/" ext-link-type="uri">https://www.k12dive.com/news/how-k-12-leaders-can-tackle-the-digital-use-divide/804672/</ext-link> (Accessed November 11, 2025).</mixed-citation></ref>
<ref id="ref23"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Naffi</surname><given-names>N.</given-names></name></person-group> (<year>2025</year>). Deepfakes and the crisis of knowing. Available online at: <ext-link xlink:href="https://www.unesco.org/en/articles/deepfakes-and-crisis-knowing" ext-link-type="uri">https://www.unesco.org/en/articles/deepfakes-and-crisis-knowing</ext-link> (Accessed November 10, 2025).</mixed-citation></ref>
<ref id="ref24"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Negreiro</surname><given-names>M.</given-names></name></person-group> (<year>2025</year>). <source>Children and deepfakes</source>. Strasbourg, France: <collab id="coll3">European Parliamentary Research Service, European Parliament</collab>.</mixed-citation></ref>
<ref id="ref25"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Noble</surname><given-names>S. U.</given-names></name></person-group> (<year>2018</year>). <source>Algorithms of oppression: how search engines reinforce racism</source>. New York: <publisher-name>NYU Press</publisher-name>.</mixed-citation></ref>
<ref id="ref26"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Obed</surname><given-names>B.</given-names></name> <name><surname>Boateng</surname><given-names>B.</given-names></name></person-group> (<year>2025</year>). <article-title>Algorithmic bias in educational systems: examining the impact of AI-driven decision making in modern education</article-title>. <source>World J. Adv. Res. Rev.</source> <volume>25</volume>, <fpage>2012</fpage>&#x2013;<lpage>2017</lpage>. doi: <pub-id pub-id-type="doi">10.30574/wjarr.2025.25.1.0253</pub-id></mixed-citation></ref>
<ref id="ref27"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>O'Neil</surname><given-names>C.</given-names></name></person-group> (<year>2016</year>). <source>Weapons of math destruction: How big data increases inequality and threatens democracy</source>. <edition>First</edition> Edn. <publisher-loc>New York</publisher-loc>: <publisher-name>Crown</publisher-name>.</mixed-citation></ref>
<ref id="ref28"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pasipamire</surname><given-names>N.</given-names></name> <name><surname>Muroyiwa</surname><given-names>A.</given-names></name></person-group> (<year>2024</year>). <article-title>Navigating algorithm bias in AI: ensuring fairness and trust in Africa</article-title>. <source>Front. Res. Metr. Anal.</source> <volume>9</volume>:<fpage>1486600</fpage>. doi: <pub-id pub-id-type="doi">10.3389/frma.2024.1486600</pub-id>, <pub-id pub-id-type="pmid">39512269</pub-id></mixed-citation></ref>
<ref id="ref29"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Perkins</surname><given-names>C.</given-names></name> <name><surname>Verevis</surname><given-names>C.</given-names></name></person-group> (<year>2015</year>). <article-title>Transnational television remakes</article-title>. <source>Continuum</source> <volume>29</volume>, <fpage>677</fpage>&#x2013;<lpage>683</lpage>. doi: <pub-id pub-id-type="doi">10.1080/10304312.2015.1068729</pub-id></mixed-citation></ref>
<ref id="ref30"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Poggi</surname><given-names>A.</given-names></name></person-group> (<year>2025</year>). The Digital Divide: A Barrier to Social, Economic and Political Equity. Available online at: <ext-link xlink:href="https://www.ispionline.it/en/publication/the-digital-divide-a-barrier-to-social-economic-and-political-equity-204564" ext-link-type="uri">https://www.ispionline.it/en/publication/the-digital-divide-a-barrier-to-social-economic-and-political-equity-204564</ext-link></mixed-citation></ref>
<ref id="ref31"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Raihan</surname><given-names>M. M. H.</given-names></name> <name><surname>Subroto</surname><given-names>S.</given-names></name> <name><surname>Chowdhury</surname><given-names>N.</given-names></name> <name><surname>Koch</surname><given-names>K.</given-names></name> <name><surname>Ruttan</surname><given-names>E.</given-names></name> <name><surname>Turin</surname><given-names>T. C.</given-names></name></person-group> (<year>2024</year>). <article-title>Dimensions and barriers for digital (in)equity and digital divide: a systematic integrative review</article-title>. <source>Digital Transformation and Society</source> <volume>4</volume>, <fpage>111</fpage>&#x2013;<lpage>127</lpage>. doi: <pub-id pub-id-type="doi">10.1108/dts-04-2024-0054</pub-id></mixed-citation></ref>
<ref id="ref32"><mixed-citation publication-type="other"><collab id="coll4">S&#x0026;P Global</collab> (<year>2025</year>). Generative AI market revenue projected to grow at a 40% CAGR from 2024&#x2013;2029. Available online at: <ext-link xlink:href="https://www.spglobal.com/market-intelligence/en/news-insights/research/generative-ai-market-revenue-projected-to-grow-at-a-40-cagr-from-2024-2029" ext-link-type="uri">https://www.spglobal.com/market-intelligence/en/news-insights/research/generative-ai-market-revenue-projected-to-grow-at-a-40-cagr-from-2024-2029</ext-link></mixed-citation></ref>
<ref id="ref33"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Tajfel</surname><given-names>H.</given-names></name> <name><surname>Turner</surname><given-names>J. C.</given-names></name></person-group> (<year>1979</year>). &#x201C;<chapter-title>An integrative theory of intergroup conflict</chapter-title>&#x201D; in <source>The social psychology of intergroup relations</source>. eds. <person-group person-group-type="editor"><name><surname>Austin</surname><given-names>W. G.</given-names></name> <name><surname>Worchel</surname><given-names>S.</given-names></name></person-group> (<publisher-loc>Monterey, CA</publisher-loc>: <publisher-name>Brooks/Cole</publisher-name>), <fpage>33</fpage>&#x2013;<lpage>47</lpage>.</mixed-citation></ref>
<ref id="ref34"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Zuboff</surname><given-names>S.</given-names></name></person-group> (<year>2019</year>). <source>The age of surveillance capitalism: The fight for a human future at the new frontier of power</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>PublicAffairs</publisher-name>.</mixed-citation></ref>
</ref-list>
<fn-group>
<fn fn-type="custom" custom-type="edited-by" id="fn0001">
<p>Edited by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1046346/overview">Daisuke Akiba</ext-link>, The City University of New York, United States</p>
</fn>
<fn fn-type="custom" custom-type="reviewed-by" id="fn0002">
<p>Reviewed by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1494059/overview">Janneth Trejo-Quintana</ext-link>, National Autonomous University of Mexico, Mexico</p>
</fn>
</fn-group>
</back>
</article>