<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3-mathml3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="review-article" dtd-version="1.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Polit. Sci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Political Science</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Polit. Sci.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">2673-3145</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpos.2026.1626848</article-id>
<article-version article-version-type="Version of Record" vocab="NISO-RP-8-2008"/>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Review</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>The challenges of AI regulation: data protection, civil rights, and landmark cases</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Freitas</surname>
<given-names>Judite Gon&#x00E7;alves De</given-names>
</name>
<xref ref-type="aff" rid="aff1"/>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/479409"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; original draft" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing &#x2013; original draft</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &#x0026; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &#x0026; editing</role>
</contrib>
</contrib-group>
<aff id="aff1"><institution>Faculty of Humanities and Social Sciences, Portuguese Institute of International Relations, Fernando Pessoa University</institution>, <city>Porto</city>, <country country="pt">Portugal</country></aff>
<author-notes>
<corresp id="c001"><label>&#x002A;</label>Correspondence: Judite Gon&#x00E7;alves De Freitas, <email xlink:href="mailto:jfreitas@ufp.edu.pt">jfreitas@ufp.edu.pt</email></corresp>
</author-notes>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2026-03-25">
<day>25</day>
<month>03</month>
<year>2026</year>
</pub-date>
<pub-date publication-format="electronic" date-type="collection">
<year>2026</year>
</pub-date>
<volume>8</volume>
<elocation-id>1626848</elocation-id>
<history>
<date date-type="received">
<day>11</day>
<month>05</month>
<year>2025</year>
</date>
<date date-type="rev-recd">
<day>20</day>
<month>02</month>
<year>2026</year>
</date>
<date date-type="accepted">
<day>05</day>
<month>03</month>
<year>2026</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2026 Freitas.</copyright-statement>
<copyright-year>2026</copyright-year>
<copyright-holder>Freitas</copyright-holder>
<license>
<ali:license_ref start_date="2026-03-25">https://creativecommons.org/licenses/by/4.0/</ali:license_ref>
<license-p>This is an open-access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License (CC BY)</ext-link>. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>The increasing impact of artificial intelligence (AI) poses complex challenges to the protection of personal data and the defence of civil rights. The growth of the digital economy, based on the continuous recording and analysis of human behaviour, has increased concerns about privacy and individual autonomy. This study analyses how the main legal frameworks of the European Union (EU), specifically the General Data Protection Regulation and the Artificial Intelligence Act (AI Act), enhance data protection for individuals in the EU, restricting harmful practices and reinforcing ethical standards. Using a qualitative approach, the research combines exploratory analysis of European legislation with comparative analysis of two emblematic cases in different contexts, illustrating the restriction of civil liberties for the same structural reason, the control of information and collective behaviour: the Cambridge Analytica scandal and the rise of algorithmic censorship mechanisms in India under the pretext of countering foreign interference and disinformation. The aim is to understand how different political contexts and legal architectures respond to the growing tensions between technological innovation and the preservation of fundamental rights. The findings indicate that the Cambridge Analytica case revealed structural limitations in privacy protection systems, reinforcing the need for stronger accountability mechanisms. In contrast, the Indian case illustrates how the deployment of AI can facilitate state surveillance and information control, justified by constitutional articles, undermining freedom of expression and other civil liberties. The choice of these cases is deliberate, as it reveals the differences in political and institutional approaches to AI governance and the mechanisms developed to defend or violate fundamental freedoms.
The European Union, by conceiving the right to privacy as a fundamental principle, has created a pioneering global regulatory benchmark, reinforced by the adaptive work of the European Data Protection Board (EDPB). Ultimately, the study argues that only egalitarian, comprehensive and adaptable legal systems can ensure that the development of AI is aligned with democratic principles and respects individual freedoms in contexts of increasing automation.</p>
</abstract>
<kwd-group>
<kwd>AI regulation</kwd>
<kwd>civil rights</kwd>
<kwd>data protection</kwd>
<kwd>DPDPA</kwd>
<kwd>GDPR</kwd>
</kwd-group>
<funding-group>
<funding-statement>The author(s) declared that financial support was not received for this work and/or its publication.</funding-statement>
</funding-group>
<counts>
<fig-count count="0"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="70"/>
<page-count count="7"/>
<word-count count="6152"/>
</counts>
<custom-meta-group>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Politics of Technology</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<label>1</label>
<title>Introduction</title>
<p>The sustained innovation of artificial intelligence (AI) and its rising presence in modern society pose a multitude of challenges and opportunities with direct repercussions on civil and political rights (<xref ref-type="bibr" rid="ref67">Zuboff, 2019</xref>; <xref ref-type="bibr" rid="ref10">Bradshaw and Howard, 2018</xref>). While generative AI technologies promise to democratize access to and the production of data, they simultaneously increase legitimate concerns about unprecedented forms of surveillance enabled by the mass collection of personal data and the use of opaque, strategic decision-making systems that may compromise privacy, freedom of expression and individual autonomy (<xref ref-type="bibr" rid="ref50">Pasquale, 2015</xref>; <xref ref-type="bibr" rid="ref65">Wachter et al., 2017</xref>). These dynamics are particularly visible in the context of &#x201C;surveillance capitalism&#x201D; (<xref ref-type="bibr" rid="ref67">Zuboff, 2019</xref>), a model of accumulation grounded in extensive and asymmetric data extraction, in which personal information, including identity, image, location, sexual orientation, religious beliefs and everyday habits, is constantly captured, monetised and repurposed, reinforcing control mechanisms and restricting the exercise of fundamental rights (<xref ref-type="bibr" rid="ref67">Zuboff, 2019</xref>; <xref ref-type="bibr" rid="ref36">Isaak and Hanna, 2018</xref>).</p>
<p>Artificial intelligence is a hard regulatory problem: it operates across spatial, temporal, and complexity scales that surpass the conceptual and institutional capacities of traditional regulatory frameworks, further exacerbating these asymmetries and challenging the ability of existing governance structures to protect individual rights effectively. Against this backdrop, European institutions have progressively recognised data protection as a fundamental right through instruments such as the General Data Protection Regulation (GDPR), yet AI systems remain far from neutral in their use of data, often operating without robust ethical and transparency principles and thereby compromising fairness and impartiality (<xref ref-type="bibr" rid="ref24">European Union, 2016</xref>; <xref ref-type="bibr" rid="ref64">Veale and Zuiderveen, 2021</xref>).</p>
<p>This article examines how the development of AI can simultaneously function as an instrument for promoting or degrading fundamental rights, depending on the regulatory framework and the concrete practices through which these technologies are implemented (<xref ref-type="bibr" rid="ref30">Gorwa et al., 2020</xref>). It focuses on the European Union&#x2019;s legal regulation of the digital sphere, in particular the GDPR and the AI Act, exploring how these instruments seek to foster trustworthy and ethical AI systems while constraining harmful or exploitative uses of the technology (<xref ref-type="bibr" rid="ref19">European Commission, 2021b</xref>; <xref ref-type="bibr" rid="ref21">European Data Protection Board, 2024</xref>).</p>
<p>From this perspective, the central research question is: to what extent can the GDPR and the AI Act effectively protect civil rights and prevent the misuse of personal data in AI systems, especially when compared with regulatory environments where such protections are vulnerable to centralised regulatory power, as documented in debates on global variations in data protection regimes and digital authoritarianism (<xref ref-type="bibr" rid="ref35">Human Rights Watch, 2021b</xref>)?</p>
<p>To address this question, the study adopts a qualitative research design combining, first, exploratory analysis of the main principles, regulatory mechanisms and rights-based safeguards established in the GDPR and the AI Act, and second, a comparative case study of two emblematic events: the Cambridge Analytica data manipulation scandal and the implementation of AI-driven media censorship practices in India (<xref ref-type="bibr" rid="ref12">Cadwalladr and Graham-Harrison, 2018</xref>; <xref ref-type="bibr" rid="ref34">Human Rights Watch, 2021a</xref>). The Cambridge Analytica case reveals how legal safeguards can be challenged in real-world contexts where AI and data analytics shape democratic participation, while AI-based censorship in India offers a critical counterpoint that illustrates the vulnerabilities of less robust human-rights frameworks (<xref ref-type="bibr" rid="ref9004">Udupa et al., 2021</xref>). Taken together, these cases show that civil rights can be compromised through the indiscriminate and exploitative use of AI systems operating without sufficient transparency or ethical controls, underscoring the urgent need for governance arrangements capable of reconciling rapid technological innovation with the safeguarding of a just, democratic and rights-respecting digital future (<xref ref-type="bibr" rid="ref60">United Nations Human Rights Council, 2018</xref>; <xref ref-type="bibr" rid="ref22">European Parliament, 2019</xref>).</p>
<sec id="sec2">
<label>1.1</label>
<title>AI, surveillance, and social implications</title>
<p>The acceleration of artificial intelligence (AI) and digital surveillance technologies has profoundly reconfigured contemporary social, political, and legal dynamics. <xref ref-type="bibr" rid="ref67">Zuboff (2019)</xref>, in her seminal work <italic>The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power</italic>, proposes that a new model of capitalism, &#x201C;surveillance capitalism,&#x201D; emerged from the commercialization of behavioural data. This paradigm represents not merely a technological shift but a structural transformation in the dynamics of power, autonomy, and social control. Whereas industrial capitalism relied on physical and finite resources, surveillance capitalism is grounded in the extraction, processing, and economic use of personal information, generating what Zuboff defines as &#x201C;behavioural futures markets&#x201D; (<xref ref-type="bibr" rid="ref67">Zuboff, 2019</xref>, p. 150).</p>
<p>Through algorithmic profiling and behavioural targeting, multinational corporations such as Google, Meta, and Amazon collect both ordinary and sensitive personal data (as defined in Article 4 of the GDPR; <xref ref-type="bibr" rid="ref24">European Union, 2016</xref>), building predictive systems that shape and influence individual preferences and decision-making. This continuous monitoring subjects individuals to unprecedented forms of exposure, while reducing their capacity for autonomous action (<xref ref-type="bibr" rid="ref11">Brundage et al., 2022</xref>; <xref ref-type="bibr" rid="ref45">Meissner and Wulf, 2023</xref>). The proliferation of virtual assistants, facial recognition systems, and Internet of Things (IoT) devices further extends this dependency, expanding the risks of data breaches, surveillance opacity, and accountability gaps (<xref ref-type="bibr" rid="ref47">Narayanan et al., 2023</xref>; <xref ref-type="bibr" rid="ref2">Algorithm Watch, 2022</xref>). Behavioural data thus emerges as the raw material of a new regime of digital power, used to anticipate and manipulate conduct for commercial, political, or social control purposes. These developments place fundamental rights of privacy, freedom of expression, and political participation under increasing pressure, raising urgent concerns about the preservation of democratic legitimacy (<xref ref-type="bibr" rid="ref25">Fjeld et al., 2020</xref>; <xref ref-type="bibr" rid="ref23">European Parliament, 2022</xref>).</p>
<p>Within this evolving landscape, scholars such as <xref ref-type="bibr" rid="ref49">Pariser (2011)</xref> and <xref ref-type="bibr" rid="ref63">van Dijck et al. (2021)</xref> have shown how algorithmic filtering produces &#x201C;filter bubbles&#x201D; that narrow the horizon of public discourse and foster epistemic polarization. Such mechanisms, by privileging engagement and personalization, systematically restrict plural deliberation, thereby undermining the robust exchange of ideas that democracy requires (<xref ref-type="bibr" rid="ref48">Noble, 2018</xref>; <xref ref-type="bibr" rid="ref46">Napoli, 2021</xref>).</p>
<p>In parallel, <xref ref-type="bibr" rid="ref5">Bauman and Lyon (2012)</xref> introduce the concept of &#x201C;liquid surveillance&#x201D; to describe a networked, dispersed, and often voluntary form of data gathering that contrasts with the coercive models of disciplinary control described by <xref ref-type="bibr" rid="ref27">Foucault (1993)</xref>. Unlike traditional forms of surveillance that operated through centralised authority, liquid surveillance operates seamlessly through digital infrastructures framed as convenient, connected, and configurable. Such opacity disguises exclusionary patterns and discriminatory practices, as algorithmic systems reproduce social hierarchies while claiming neutrality (<xref ref-type="bibr" rid="ref6">Benjamin, 2019</xref>). Scholars such as <xref ref-type="bibr" rid="ref43">Lyon (2023)</xref> and <xref ref-type="bibr" rid="ref3">Amoore (2021)</xref> caution that these dynamics normalise the internalization of control, eroding critical consciousness and reinforcing conformity. The erosion of deliberative diversity and the algorithmic curation of information contribute to a broader crisis of democratic communication, in which access to information becomes contingent on the invisible architectures of platform capitalism.</p>
<p>From a broader philosophical perspective, several thinkers have emphasised the social implications of this algorithmic turn. <xref ref-type="bibr" rid="ref42">Lanier (2018)</xref> contends that the digital environment commodifies social interaction, eroding empathy and incentivizing performative self-presentation (<xref ref-type="bibr" rid="ref44">Matzner, 2022</xref>). Similarly, <xref ref-type="bibr" rid="ref31">Han (2015</xref>, <xref ref-type="bibr" rid="ref32">2022</xref>, <xref ref-type="bibr" rid="ref33">2023)</xref> argues that the culture of digital transparency and self-exposure undermines civic freedom by transferring mechanisms of control from the state to the market. Within this &#x201C;society of transparency,&#x201D; citizens increasingly become participants in their own surveillance, internalizing observation as a normative condition of social validation (<xref ref-type="bibr" rid="ref16">Couldry and Mej&#x00ED;as, 2021</xref>). This structural shift transforms the public sphere from a collective arena of democratic deliberation into a spectacle of individualised visibility, diluting politics into atomised expression.</p>
<p>Extending this analysis, <xref ref-type="bibr" rid="ref15">Couldry and Mej&#x00ED;as (2019</xref>, <xref ref-type="bibr" rid="ref16">2021)</xref> conceptualise the global data order as a form of &#x201C;data colonialism.&#x201D; They argue that data extraction reproduces colonial logics of appropriation, particularly in the Global South, where technological infrastructures remain largely controlled by transnational corporations (<xref ref-type="bibr" rid="ref9002">Birhane, 2023</xref>; <xref ref-type="bibr" rid="ref64">Veale and Zuiderveen, 2021</xref>). Behavioural data becomes a new resource frontier, mirroring historical resource extraction and perpetuating dependency structures under the guise of innovation. This process deepens geopolitical asymmetries, undermining digital autonomy and civic sovereignty in developing nations.</p>
<p>Contrasting with these critical frameworks, <xref ref-type="bibr" rid="ref61">Unwin (2017</xref>, <xref ref-type="bibr" rid="ref62">2021)</xref> proposes that AI and digital technologies can advance civic rights and democratic inclusion when governed by ethical and participatory principles. For Unwin, digital access and literacy constitute integral dimensions of modern citizenship, requiring states to develop equitable infrastructures and regulatory mechanisms that promote responsible digital participation. International frameworks similarly position AI ethics within a broader agenda of human-centred governance, emphasizing inclusivity, transparency, and accountability as essential to the continuity of democratic societies (<xref ref-type="bibr" rid="ref26">Floridi, 2021</xref>; <xref ref-type="bibr" rid="ref59">United Nations, 2022</xref>; <xref ref-type="bibr" rid="ref58">UNESCO, 2022</xref>).</p>
<p>In synthesis, the convergence of surveillance capitalism, data colonialism, and digital transparency delineates a complex ethical and political field. Contemporary scholarship (<xref ref-type="bibr" rid="ref8">Bietti, 2020</xref>; <xref ref-type="bibr" rid="ref20">European Commission, 2024</xref>) reinforces that AI governance must reconcile innovation with the protection of civil liberties, demanding mechanisms of public oversight to ensure the accountability of algorithmic systems. The transformation of surveillance into a diffuse, participatory, and self-reinforcing sociotechnical system represents one of the central challenges of the twenty-first century (<xref ref-type="bibr" rid="ref14">Christodoulou and Limniotis, 2024</xref>). Ensuring that automation and data-driven governance remain compatible with democratic norms requires a recalibration of power, law, and technological design.</p>
</sec>
</sec>
<sec id="sec3">
<label>2</label>
<title>Case studies</title>
<sec id="sec4">
<label>2.1</label>
<title>Regulatory and institutional framework</title>
<p>The identification and analysis of the principles, regulatory instruments, and fundamental rights safeguards in the GDPR and AI Act reveal complementary yet distinct approaches. The GDPR focuses on the protection of personal data, establishing core principles such as lawfulness, transparency, data minimisation, purpose limitation, and accountability, alongside robust data subject rights including access, rectification, objection, portability, and restriction of processing (<xref ref-type="bibr" rid="ref24">European Union, 2016</xref>). These safeguards primarily operate at the level of data processing, imposing obligations on any entity that collects or uses personal information, including scenarios where processing involves AI systems (<xref ref-type="bibr" rid="ref9">Bietti, 2025</xref>).</p>
<p>The AI Act (<xref ref-type="bibr" rid="ref20">European Commission, 2024</xref>) adopts a risk-based approach to regulating artificial intelligence systems, categorizing them into four levels of risk: unacceptable, high, limited, and minimal. Systems classified as unacceptable risk are prohibited outright due to their potential to cause significant harm or violate fundamental rights, including practices such as social scoring and subliminal manipulation. High-risk AI systems, which encompass applications with substantial implications for safety and fundamental rights, such as human identification, critical infrastructure management, and law enforcement, are subject to stringent requirements. These include comprehensive risk management, data governance standards, technical documentation, transparency obligations, human oversight, and post-market monitoring mechanisms. Limited-risk systems are subject to lighter regulatory obligations, primarily focused on transparency, such as informing users of AI interaction (e.g., chatbots), while minimal-risk AI systems face minimal to no mandatory requirements (<xref ref-type="bibr" rid="ref18">European Commission, 2021a</xref>).</p>
<p>Within the context of this study, the EU GDPR framework remains paramount as the primary legal instrument protecting fundamental rights through the control of personal data processing. Although the AI Act regulates the entire lifecycle of AI systems to prevent practices threatening these rights, it assumes a secondary role relative to the GDPR. The GDPR ensures rights-based safeguards directly tied to personal data privacy and governance, while the AI Act complements this by addressing the broader governance of AI system risks (<xref ref-type="bibr" rid="ref19">European Commission, 2021b</xref>; <xref ref-type="bibr" rid="ref56">Trail, 2025</xref>).</p>
<p>In India, under Prime Minister Narendra Modi (2014&#x2013;present), personal data protection legislation had little relevance until 2023. Regulatory oversight tends to be politically conditioned, granting the state a broad scope of discretionary authority (<xref ref-type="bibr" rid="ref40">Kaye, 2019</xref>; <xref ref-type="bibr" rid="ref66">Yadav and Yadav, 2025</xref>). State surveillance frequently operates in combination with private platforms, which collectively exercise control over information and public discourse (<xref ref-type="bibr" rid="ref10">Bradshaw and Howard, 2018</xref>; <xref ref-type="bibr" rid="ref57">Udupa and Pohjonen, 2019</xref>).</p>
<p>The Personal Data Protection Bill (<xref ref-type="bibr" rid="ref9003">Government of India, 2018</xref>) was never adopted; the legislative process instead culminated in the <xref ref-type="bibr" rid="ref17">Digital Personal Data Protection Act (2023)</xref>, the country&#x2019;s first legal framework for personal data. The DPDPA applies to digital personal data processed in India, including data collected online or digitised offline, and extends extraterritorially when targeting Indian residents. It defines &#x201C;personal data&#x201D; as any information relating to an identifiable individual and requires that processing be carried out only for lawful purposes based on free, specific, and informed consent, with exceptions for certain &#x201C;legitimate purposes,&#x201D; such as government services, emergencies, or employment-related processing. Data controllers are obligated to ensure data accuracy, security, purpose limitation, and erasure once the purpose has been fulfilled, while data principals (individuals) are granted rights including access, correction, erasure, grievance redressal, and the ability to appoint representatives.</p>
<p>Compared to the European Union&#x2019;s GDPR, the DPDPA applies only to digital personal data, excluding non-digital formats unless they are subsequently digitised, whereas the GDPR covers personal data regardless of format. India&#x2019;s DPDPA places strong emphasis on government exemptions, granting broad powers to the executive regarding national security, public order, and sovereignty. This means exempted government agencies can process personal data without adhering to fundamental safeguards such as obtaining consent, ensuring data security, or transparency obligations (<xref ref-type="bibr" rid="ref51">PRS India Legislative Research, 2023</xref>).</p>
</sec>
<sec id="sec5">
<label>2.2</label>
<title>Challenges to fundamental rights: data manipulation and media censorship</title>
<p>This section applies the theoretical and normative framework to two comparative case studies, employing qualitative methodology and exploratory analysis. The proposed structure is as follows: (i) Cambridge Analytica and data manipulation in electoral contexts, (ii) AI-based media censorship in India, and (iii) a cross-cutting comparison.</p>
<p>The right to information, understood as the free and pluralistic access to trustworthy data, was gravely threatened by the Cambridge Analytica scandal, which involved the undue collection of personal data from millions of Facebook users for manipulative political profiling, thus violating data protection regulations such as the GDPR and its precursor directives. Cambridge Analytica accessed data from millions of profiles via a quiz application, using it for microtargeting in elections such as the 2016 US election and the Brexit referendum. Techniques included aggregating behavioural data, such as likes and shares, to create voter profiles for personalised political messaging, often without explicit consent.</p>
<p>The creation of political campaigns aimed at manipulating voters constituted a reprehensible and unlawful act, for both ethical and legal reasons, namely due to the violation of data protection principles established in the GDPR, and violated fundamental democratic guarantees such as freedom to vote and electoral integrity. These political messages were tailored to influence electoral behaviour, particularly in key swing states such as Michigan, Wisconsin, and Pennsylvania (<xref ref-type="bibr" rid="ref12">Cadwalladr and Graham-Harrison, 2018</xref>). This violated fundamental rights to privacy and information both in the EU and globally. Although the GDPR provided safeguards such as data minimisation and purpose limitation (Articles 5 and 25), enforcement proved insufficient due to delayed fines, highlighting gaps in cross-border accountability mechanisms.</p>
<p>The Indian government has systematically manipulated the media through a combination of coercion, ownership control, and strategic propaganda. Since the Bharatiya Janata Party (BJP) came to power in 2014, it has employed financial and political pressure, including the allocation or withdrawal of government advertising, to incentivise media outlets to align with its ideological agenda and marginalise dissenting voices (<xref ref-type="bibr" rid="ref39">Jeffrey, 2020</xref>). Journalists and media houses critical of the government often face intimidation, legal harassment, and economic sanctions, leading to self-censorship (<xref ref-type="bibr" rid="ref41">Kumar and Ravi, 2021</xref>). Furthermore, prominent news channels have been co-opted to function as government mouthpieces, promoting a pro-government narrative while suppressing opposition views (<xref ref-type="bibr" rid="ref13">Chatterjee, 2019</xref>). The government also deploys extensive digital tactics such as fake social media accounts and AI-driven bots to amplify favourable content and discredit critics domestically and internationally (<xref ref-type="bibr" rid="ref54">Singh and Patel, 2023</xref>). Independent investigations reveal collusion between political actors and media executives to influence election coverage and public opinion (<xref ref-type="bibr" rid="ref52">Rao, 2024</xref>). Combined with opaque regulatory frameworks lacking transparency and limited judicial oversight, these practices have effectively transformed significant segments of the Indian media landscape into instruments of state propaganda, thereby undermining democratic discourse and pluralism (<xref ref-type="bibr" rid="ref7">Bhatia, 2020</xref>; <xref ref-type="bibr" rid="ref38">Iyer, 2023</xref>).</p>
<p>The Indian government used the revelations about Cambridge Analytica as a pretext to tighten information technology laws, most notably through India&#x2019;s <italic>Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules</italic> (<xref ref-type="bibr" rid="ref37">IT Rules, 2021</xref>) and the <xref ref-type="bibr" rid="ref17">Digital Personal Data Protection Act (2023)</xref>. The IT Rules impose stringent obligations on social media intermediaries, especially those with large user bases, requiring proactive AI-driven content monitoring and the identification of the first originator of information upon government or judicial orders. These rules broaden the government&#x2019;s power to restrict content deemed harmful to sovereignty, security, or public order, with fast timelines for content removal and mandatory cooperation from intermediaries. However, the rules rely heavily on automated filtering and government &#x201C;fact-check&#x201D; units, which often lack transparency and independent oversight, leading to over-censorship and suppression of dissent (<xref ref-type="bibr" rid="ref1">Access Now, 2024</xref>). The broad and subjective definitions of prohibited content create uncertainty and enable potential misuse, allowing the government to manipulate media narratives despite the regulatory framework&#x2019;s intent to safeguard against misinformation (<xref ref-type="bibr" rid="ref7">Bhatia, 2020</xref>).</p>
<p>Unlike the European GDPR, Indian law does not guarantee the right to journalistic information. The DPDP Act amended the Right to Information (RTI) Act of 2005, prohibiting the disclosure of almost all personal information, which makes it difficult for the press to scrutinise public officials. According to the RSF World Press Freedom Index (<xref ref-type="bibr" rid="ref53">RSF, 2025</xref>), India ranks 159th out of 180 countries in 2025, a slight improvement from 161st place in 2023. This improvement, however, reflects the decline of other countries rather than domestic progress: press freedom in India has continued to fall since <xref ref-type="bibr" rid="ref55">The Telecommunications Act (2023)</xref> and other government legislation restricting the right to information and freedom of the press (<xref ref-type="bibr" rid="ref28">Freedom House, 2025</xref>).</p>
<p>The comparison between the Cambridge Analytica scandal and media manipulation in India under the BJP highlights distinct yet converging threats to democratic integrity (transparency, accountability and legality) and the right to information (including freedom of expression and quality of information, inter alia). In the Cambridge Analytica case, the violation stemmed not only from unauthorised harvesting of personal data, but also from the distortion of citizens&#x2019; informational environment itself. By secretly extracting millions of Facebook profiles through a deceptive quiz app and deploying psychometric profiling to microtarget voters in the 2016 US election and the Brexit referendum, Cambridge Analytica undermined individuals&#x2019; ability to access balanced, truthful, and non-manipulative political information (<xref ref-type="bibr" rid="ref12">Cadwalladr and Graham-Harrison, 2018</xref>). The opaque nature of the targeted messaging created information asymmetries in which certain groups received tailored political narratives unknown to the broader public, compromising the transparency necessary for informed democratic participation (<xref ref-type="bibr" rid="ref4">Azgin and Kiralp, 2024</xref>; <xref ref-type="bibr" rid="ref9005">Malgieri and Santos, 2025</xref>).</p>
<p>In India, the erosion of the right to information is driven primarily by state power: the BJP&#x2019;s use of AI-driven bots, fake accounts, and co-opted media outlets amplifies pro-government messaging while systematically discrediting dissenting voices, reshaping public discourse through orchestrated digital propaganda (<xref ref-type="bibr" rid="ref54">Singh and Patel, 2023</xref>; <xref ref-type="bibr" rid="ref52">Rao, 2024</xref>). This manipulation is reinforced by the opaque and expansive 2021 IT Rules, which mandate algorithmic content surveillance and the ability to trace message originators, enabling censorship under vague &#x201C;public order&#x201D; justifications. Such mechanisms contribute to widespread self-censorship among journalists and platforms, further shrinking informational plurality (<xref ref-type="bibr" rid="ref39">Jeffrey, 2020</xref>; <xref ref-type="bibr" rid="ref7">Bhatia, 2020</xref>; <xref ref-type="bibr" rid="ref38">Iyer, 2023</xref>).</p>
<p>In both cases, information ecosystems are turned into instruments of political influence: Cambridge Analytica through covert behavioural assessment that simultaneously violated privacy and the right to receive impartial public information, and the Indian government through overt regulatory pressure and narrative control. Together, they illustrate how both private actors and state institutions can compromise free expression, electoral fairness, and democratic pluralism by shaping what citizens can know, and what they are prevented from knowing, during critical political moments (<xref ref-type="bibr" rid="ref41">Kumar and Ravi, 2021</xref>; <xref ref-type="bibr" rid="ref13">Chatterjee, 2019</xref>).</p>
<p>The normative and institutional framework of the two cases reveals differences between the European context and that of India, with direct implications for how algorithmic surveillance is deployed and contested. In the Cambridge Analytica scandal, the existence of the GDPR provides a relatively robust set of principles and obligations, including lawfulness of processing, purpose limitation, data minimisation and strong data subject rights, enforced by independent data protection authorities with investigative and sanctioning powers (<xref ref-type="bibr" rid="ref24">European Union, 2016</xref>; <xref ref-type="bibr" rid="ref64">Veale and Zuiderveen, 2021</xref>). By contrast, India&#x2019;s data protection rules are less comprehensive and more easily overridden (they permit data sharing between government agencies without explicit consent), which limits both regulatory predictability and the institutional capacity to oversee large-scale data processing and enforce effective sanctions. This asymmetry translates institutionally into clearer avenues for individual and collective redress, as well as regulatory investigations, in the European setting, whereas in India oversight tends to be politically conditioned, with broad room for executive discretion (<xref ref-type="bibr" rid="ref10">Bradshaw and Howard, 2018</xref>; <xref ref-type="bibr" rid="ref40">Kaye, 2019</xref>).</p>
</sec>
</sec>
<sec sec-type="conclusions" id="sec6">
<label>3</label>
<title>Conclusion</title>
<p>A comparative analysis of the Cambridge Analytica data manipulation scandal and India&#x2019;s AI-driven media censorship practices, coupled with broad government exemptions from data protection law, demonstrates how distinct configurations of technological power&#x2014;commercial exploitation and electoral manipulation in the former case and state-driven information control in the latter&#x2014;can produce converging threats to civil and political rights. Despite their different institutional, political and social contexts, both cases reveal a common pattern: algorithmic systems, when deployed without adequate transparency, accountability, oversight, or ethical constraints, become mechanisms for distorting the informational environment on which democratic participation and the exercise of civic freedoms depend.</p>
<p>Cambridge Analytica exemplifies a violation of the right to privacy and data protection, as well as individual autonomy and freedom, through psychometric manipulation and political microtargeting. In India, several civil and political rights are at stake, particularly freedom of expression and democratic political participation, due to the control of information flows and the regulatory pressure imposed by the <xref ref-type="bibr" rid="ref37">IT Rules (2021)</xref>, as well as the right to privacy when AI systems are used for citizen surveillance. In this context, the legal framework continues to balance privacy protections with a strong role of the state in regulating digital spaces. Whereas Cambridge Analytica subverted democratic processes through hidden manipulation, India&#x2019;s model embeds the restriction of information in the institutional apparatus of governance.</p>
<p>Both cases significantly compromise the right to information. Cambridge Analytica fragmented the public sphere through opaque, personalised political messaging that citizens could neither scrutinise nor contest, thereby undermining the public transparency necessary for democratic deliberation. In India, the suppression of dissent, the amplification of pro-government narratives, and the algorithmic filtering of content structurally reduce pluralism, resulting in an informational landscape shaped by coercion, surveillance, and regulatory opacity. In each instance, individuals are deprived not only of accurate and diverse information but also of the capacity to exercise informed judgment in political matters.</p>
<p>These dynamics extend directly to individual rights, particularly privacy, freedom of expression, autonomy, and political participation. Cambridge Analytica&#x2019;s data harvesting breached fundamental privacy guarantees and enabled psychological manipulation, while India&#x2019;s AI-supported censorship fosters self-censorship, chills public debate, and weakens the safeguards that protect citizens against arbitrary intrusion. By constraining both personal decision-making and collective democratic agency, the technological practices examined reveal how algorithmic systems can amplify existing power asymmetries and erode the structural preconditions of democratic life.</p>
<p>The cases also illuminate the pivotal role of legal and institutional mechanisms. The GDPR offers a dense framework of rights, principles, and enforcement tools capable, at least in theory, of constraining exploitative data practices. The Cambridge Analytica scandal exposed limitations in cross-border enforcement and the reactive nature of regulatory action, but it also affirmed the value of independent supervisory authorities, legally mandated transparency, and rights-based data governance. In contrast, India&#x2019;s regulatory environment&#x2014;marked by broad government exemptions under the DPDPA, executive discretion, and the absence of strong institutional oversight&#x2014;provides limited avenues for accountability and facilitates the misuse of AI for political control. These structural divergences highlight how the effectiveness of legal safeguards depends not only on legislative design but also on political will, institutional independence, and a culture of rights-based compliance.</p>
<p>Taken together, the findings underscore a central insight: AI systems are not inherently emancipatory or repressive, but their societal impact is determined by the governance regimes, power structures, and normative commitments that shape their deployment. Ensuring that AI contributes to democratic resilience rather than democratic decay requires regulatory architectures that combine robust data protection, transparent algorithmic oversight, meaningful redress mechanisms, and strong guarantees of media freedom and pluralism. Without such safeguards, the rapid expansion of AI risks entrenching new modalities of informational control capable of undermining the very rights and democratic principles that digital technologies once promised to enhance.</p>
<p>Contemporary pressure on civil rights stems from the convergence of data exploitation practices by private companies and state political control mechanisms, which increasingly restrict the right to information, freedom of expression, and media pluralism. The Cambridge Analytica and India cases serve as paradigmatic examples of this dual front of current democratic erosion.</p>
</sec>
</body>
<back>
<sec sec-type="author-contributions" id="sec7">
<title>Author contributions</title>
<p>JF: Writing &#x2013; original draft, Writing &#x2013; review &#x0026; editing.</p>
</sec>
<sec sec-type="COI-statement" id="sec8">
<title>Conflict of interest</title>
<p>The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="ai-statement" id="sec9">
<title>Generative AI statement</title>
<p>The author(s) declared that Generative AI was not used in the creation of this manuscript.</p>
<p>Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.</p>
</sec>
<sec sec-type="disclaimer" id="sec10">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1"><mixed-citation publication-type="other"><collab id="coll1">Access Now</collab> (<year>2024</year>). India must say no to Government fact-checking. Available online at: <ext-link xlink:href="https://www.accessnow.org/press-release/2023-fact-checking-amendment-it-rules-jan-2024" ext-link-type="uri">https://www.accessnow.org/press-release/2023-fact-checking-amendment-it-rules-jan-2024</ext-link> (Accessed December 4, 2025).</mixed-citation></ref>
<ref id="ref2"><mixed-citation publication-type="other"><collab id="coll2">Algorithm Watch</collab> (<year>2022</year>). <source>Automating Society Report 2022</source>. <publisher-loc>Berlin</publisher-loc>: <publisher-name>Algorithm Watch</publisher-name>. Available online at: <ext-link xlink:href="https://algorithmwatch.org/en/publications/" ext-link-type="uri">https://algorithmwatch.org/en/publications/</ext-link> (Accessed December 4, 2025).</mixed-citation></ref>
<ref id="ref3"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Amoore</surname><given-names>L.</given-names></name></person-group> (<year>2021</year>). <source>Cloud Ethics: Algorithms and the Attributes of Ourselves and Others</source>. <publisher-loc>Durham, NC</publisher-loc>: <publisher-name>Duke University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref4"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Azgin</surname><given-names>B.</given-names></name> <name><surname>Kiralp</surname><given-names>S.</given-names></name></person-group> (<year>2024</year>). <article-title>Surveillance, disinformation, and legislative measures in the 21st century: AI, social media, and the future of democracies</article-title>. <source>Soc. Sci.</source> <volume>13</volume>:<fpage>510</fpage>. doi: <pub-id pub-id-type="doi">10.3390/socsci13100510</pub-id></mixed-citation></ref>
<ref id="ref5"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Bauman</surname><given-names>Z.</given-names></name> <name><surname>Lyon</surname><given-names>D.</given-names></name></person-group> (<year>2012</year>). <source>Liquid Surveillance: A Conversation</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Polity Press</publisher-name>.</mixed-citation></ref>
<ref id="ref6"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Benjamin</surname><given-names>R.</given-names></name></person-group> (<year>2019</year>). <source>Race After Technology: Abolitionist Tools for the New Jim Code</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Polity Press</publisher-name>.</mixed-citation></ref>
<ref id="ref7"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bhatia</surname><given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>Media control and political power in India</article-title>. <source>J. South Asian Stud.</source> <volume>43</volume>, <fpage>112</fpage>&#x2013;<lpage>130</lpage>.</mixed-citation></ref>
<ref id="ref8"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Bietti</surname><given-names>E.</given-names></name></person-group> (<year>2020</year>). <chapter-title>From ethics washing to ethics bashing: a view on tech ethics from within moral philosophy</chapter-title>. In <conf-name>Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT&#x002A;)</conf-name> (pp. <fpage>210</fpage>&#x2013;<lpage>219</lpage>). <comment>Association for Computing Machinery</comment>. (Accessed December 9, 2025).</mixed-citation></ref>
<ref id="ref9"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bietti</surname><given-names>E.</given-names></name></person-group> (<year>2025</year>). <article-title>Data is infrastructure</article-title>. <source>Theor. Inq. Law</source> <volume>26</volume>, <fpage>55</fpage>&#x2013;<lpage>87</lpage>. doi: <pub-id pub-id-type="doi">10.1515/til-2025-0004</pub-id></mixed-citation></ref>
<ref id="ref9002"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Birhane</surname><given-names>A.</given-names></name></person-group> (<year>2023</year>). <article-title>How AI can distort human beliefs</article-title>. <source>Science</source> <volume>380</volume>, <fpage>1222</fpage>&#x2013;<lpage>1223</lpage>. doi: <pub-id pub-id-type="doi">10.1126/science.adg0520</pub-id></mixed-citation></ref>
<ref id="ref10"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bradshaw</surname><given-names>S.</given-names></name> <name><surname>Howard</surname><given-names>P. N.</given-names></name></person-group> (<year>2018</year>). <article-title>The global organization of social media disinformation campaigns</article-title>. <source>J. Int. Aff.</source> <volume>71</volume>, <fpage>23</fpage>&#x2013;<lpage>32</lpage>.</mixed-citation></ref>
<ref id="ref11"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Brundage</surname><given-names>M.</given-names></name> <name><surname>Avin</surname><given-names>S.</given-names></name> <name><surname>Wang</surname><given-names>J.</given-names></name> <name><surname>Belfield</surname><given-names>H.</given-names></name> <name><surname>Krueger</surname><given-names>G.</given-names></name> <name><surname>Hadfield</surname><given-names>G.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Toward trustworthy AI development: mechanisms for supporting verifiable claims</article-title>. <source>Sci. Eng. Ethics</source> <volume>28</volume>, <fpage>15</fpage>&#x2013;<lpage>25</lpage>. doi: <pub-id pub-id-type="doi">10.48550/arXiv.2004.07213</pub-id></mixed-citation></ref>
<ref id="ref12"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Cadwalladr</surname><given-names>C.</given-names></name> <name><surname>Graham-Harrison</surname><given-names>E.</given-names></name></person-group> (<year>2018</year>). Revealed: 50 million Facebook profiles harvested for Cambridge analytica in major data breach. The Guardian. Available online at: <ext-link xlink:href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election" ext-link-type="uri">https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election</ext-link> (Accessed December 19, 2025).</mixed-citation></ref>
<ref id="ref13"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chatterjee</surname><given-names>P.</given-names></name></person-group> (<year>2019</year>). <article-title>News channels as mouthpieces: the Indian case</article-title>. <source>Media Cult. Soc.</source> <volume>41</volume>, <fpage>678</fpage>&#x2013;<lpage>694</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0163443719831304</pub-id></mixed-citation></ref>
<ref id="ref14"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Christodoulou</surname><given-names>A.</given-names></name> <name><surname>Limniotis</surname><given-names>G.</given-names></name></person-group> (<year>2024</year>). <article-title>Artificial intelligence as a challenge for data protection law</article-title>. <source>Comput. Law Secur. Rev.</source> <volume>54</volume>:<fpage>105742</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.clsr.2024.105742</pub-id></mixed-citation></ref>
<ref id="ref15"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Couldry</surname><given-names>N.</given-names></name> <name><surname>Mej&#x00ED;as</surname><given-names>U. A.</given-names></name></person-group> (<year>2019</year>). <source>The Costs of Connection: How Data is Colonizing Human Life and Appropriating it for Capitalism</source>. <publisher-loc>Stanford, CA</publisher-loc>: <publisher-name>Stanford University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref16"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Couldry</surname><given-names>N.</given-names></name> <name><surname>Mej&#x00ED;as</surname><given-names>U. A.</given-names></name></person-group> (<year>2021</year>). <article-title>Data colonialism: rethinking big data&#x2019;s relation to the contemporary subject</article-title>. <source>Telev. New Media</source> <volume>22</volume>, <fpage>153</fpage>&#x2013;<lpage>167</lpage>. doi: <pub-id pub-id-type="doi">10.1177/1527476418796632</pub-id></mixed-citation></ref>
<ref id="ref17"><mixed-citation publication-type="other"><collab id="coll3">Digital Personal Data Protection Act</collab> (<year>2023</year>). The Gazette of India Extraordinary. Available online at: <ext-link xlink:href="https://egazette.gov.in/WriteReadData/2023/248045.pdf" ext-link-type="uri">https://egazette.gov.in/WriteReadData/2023/248045.pdf</ext-link> (Accessed December 9, 2025).</mixed-citation></ref>
<ref id="ref18"><mixed-citation publication-type="other"><collab id="coll4">European Commission</collab> (<year>2021a</year>). Europe&#x2019;s Digital decade: digital targets for 2030. Available online at: <ext-link xlink:href="https://digital-strategy.ec.europa.eu" ext-link-type="uri">https://digital-strategy.ec.europa.eu</ext-link> (Accessed December 9, 2025).</mixed-citation></ref>
<ref id="ref19"><mixed-citation publication-type="other"><collab id="coll5">European Commission</collab> (<year>2021b</year>). Proposal for a Regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts. COM (2021) 206 final. Available online at: <ext-link xlink:href="https://eur-lex.europa.eu" ext-link-type="uri">https://eur-lex.europa.eu</ext-link> (Accessed December 9, 2025).</mixed-citation></ref>
<ref id="ref20"><mixed-citation publication-type="other"><collab id="coll6">European Commission</collab> (<year>2024</year>). Artificial intelligence act (regulation 2024/1689). Available online at: <ext-link xlink:href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689" ext-link-type="uri">https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689</ext-link> (Accessed December 7, 2025).</mixed-citation></ref>
<ref id="ref21"><mixed-citation publication-type="other"><collab id="coll7">European Data Protection Board</collab> (<year>2024</year>). Statement 3/2024 on data protection authorities&#x2019; role in the artificial intelligence act framework. Available online at: <ext-link xlink:href="https://www.edpb.europa.eu/system/files/2024-07/edpb_statement_202403_dpasroleaiact_en.pdf" ext-link-type="uri">https://www.edpb.europa.eu/system/files/2024-07/edpb_statement_202403_dpasroleaiact_en.pdf</ext-link> (Accessed December 9, 2025).</mixed-citation></ref>
<ref id="ref22"><mixed-citation publication-type="other"><collab id="coll8">European Parliament</collab>. (<year>2019</year>). Report on a comprehensive European industrial policy on artificial intelligence and robotics, A8-0019/2019, 30 January 2019. Available online at: <ext-link xlink:href="https://www.europarl.europa.eu/doceo/document/A-8-2019-0019_EN.html" ext-link-type="uri">https://www.europarl.europa.eu/doceo/document/A-8-2019-0019_EN.html</ext-link> (Accessed December 5, 2025).</mixed-citation></ref>
<ref id="ref23"><mixed-citation publication-type="other"><collab id="coll9">European Parliament</collab>. (<year>2022</year>). Artificial intelligence in the digital age: balancing innovation and fundamental rights. European parliamentary research service. Available online at:<ext-link xlink:href="https://op.europa.eu/en/publication-detail/-/publication/8f622861-7533-11ed-9887-01aa75ed71a1/language-en" ext-link-type="uri">https://op.europa.eu/en/publication-detail/-/publication/8f622861-7533-11ed-9887-01aa75ed71a1/language-en</ext-link> (Accessed December 10, 2025).</mixed-citation></ref>
<ref id="ref24"><mixed-citation publication-type="other"><collab id="coll10">European Union</collab> (<year>2016</year>). Regulation (EU) 2016/679 of the European Parliament and of the council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (general data protection regulation), <fpage>1</fpage>&#x2013;<lpage>88</lpage>. Available online at: <ext-link xlink:href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679" ext-link-type="uri">https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679</ext-link> (Accessed May 4, 2025).</mixed-citation></ref>
<ref id="ref25"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Fjeld</surname><given-names>J.</given-names></name> <name><surname>Achten</surname><given-names>N.</given-names></name> <name><surname>Hilligoss</surname><given-names>H.</given-names></name> <name><surname>Nagy</surname><given-names>A.</given-names></name> <name><surname>Srikumar</surname><given-names>M.</given-names></name></person-group> (<year>2020</year>). <source>Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based AI Frameworks</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Berkman Klein Center for Internet &#x0026; Society, Harvard University</publisher-name>.</mixed-citation></ref>
<ref id="ref26"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Floridi</surname><given-names>L.</given-names></name></person-group> (<year>2021</year>). <source>The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref27"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Foucault</surname><given-names>M.</given-names></name></person-group> (<year>1993</year>). <source>Discipline and Punish: The Birth of the Prison</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Penguin</publisher-name>.</mixed-citation></ref>
<ref id="ref28"><mixed-citation publication-type="other"><collab id="coll11">Freedom House</collab> (<year>2025</year>). Annual global report on political rights and civil liberties. Available online at: <ext-link xlink:href="https://freedomhouse.org/country/india" ext-link-type="uri">https://freedomhouse.org/country/india</ext-link> (Accessed December 8, 2025).</mixed-citation></ref>
<ref id="ref30"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gorwa</surname><given-names>R.</given-names></name> <name><surname>Binns</surname><given-names>R.</given-names></name> <name><surname>Katzenbach</surname><given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>Algorithmic content moderation: technical and political challenges in the automation of platform governance</article-title>. <source>Big Data Soc.</source> <volume>7</volume>, <fpage>205395171989794</fpage>&#x2013;<lpage>205395171989715</lpage>. doi: <pub-id pub-id-type="doi">10.1177/2053951719897945</pub-id></mixed-citation></ref>
<ref id="ref9003"><mixed-citation publication-type="other"><collab id="coll12">Government of India</collab> (<year>2018</year>). Draft Personal Data Protection Bill, 2018. Ministry of Electronics and Information Technology. Available online at: <ext-link xlink:href="https://meity.gov.in/writereaddata/files/Personal_Data_Protection_Bill,2018.pdf" ext-link-type="uri">https://meity.gov.in/writereaddata/files/Personal_Data_Protection_Bill,2018.pdf</ext-link> (Accessed December 2, 2025).</mixed-citation></ref>
<ref id="ref31"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Han</surname><given-names>B.-C.</given-names></name></person-group> (<year>2015</year>). <source>The Transparency Society</source>. <publisher-loc>Stanford, CA</publisher-loc>: <publisher-name>Stanford University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref32"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Han</surname><given-names>B.-C.</given-names></name></person-group> (<year>2022</year>). <source>Psychopolitics: Neoliberalism and New Technologies of Power</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Verso Books</publisher-name>.</mixed-citation></ref>
<ref id="ref33"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Han</surname><given-names>B.-C.</given-names></name></person-group> (<year>2023</year>). <source>Vita Contemplativa: In Praise of Inactivity</source> (Trad. D. Steuer). <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Polity Press</publisher-name>.</mixed-citation></ref>
<ref id="ref34"><mixed-citation publication-type="other"><collab id="coll13">Human Rights Watch</collab> (<year>2021a</year>). World report 2021: freedom of expression and privacy. Available online at: <ext-link xlink:href="https://www.hrw.org/pt/world-report/2021" ext-link-type="uri">https://www.hrw.org/pt/world-report/2021</ext-link> (Accessed December 8, 2025).</mixed-citation></ref>
<ref id="ref35"><mixed-citation publication-type="other"><collab id="coll14">Human Rights Watch</collab> (<year>2021b</year>). India: digital authoritarianism and online repression. Available online at: <ext-link xlink:href="https://www.hrw.org/world-report/2021/country-chapters/india" ext-link-type="uri">https://www.hrw.org/world-report/2021/country-chapters/india</ext-link> (Accessed December 9, 2025).</mixed-citation></ref>
<ref id="ref36"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Isaak</surname><given-names>J.</given-names></name> <name><surname>Hanna</surname><given-names>M.</given-names></name></person-group> (<year>2018</year>). <article-title>User data privacy: Facebook, Cambridge analytica and data harvesting</article-title>. <source>Computer</source> <volume>51</volume>. doi: <pub-id pub-id-type="doi">10.1109/MC.2018.3191268</pub-id></mixed-citation></ref>
<ref id="ref37"><mixed-citation publication-type="other"><collab id="coll15">IT Rules</collab> (<year>2021</year>). The information technology (intermediary guidelines and digital media ethics code) rules, government of India. Available online at: <ext-link xlink:href="https://www.wipo.int/wipolex/en/legislation/details/21553" ext-link-type="uri">https://www.wipo.int/wipolex/en/legislation/details/21553</ext-link> (Accessed December 10, 2025).</mixed-citation></ref>
<ref id="ref38"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Iyer</surname><given-names>M.</given-names></name></person-group> (<year>2023</year>). <article-title>Legal frameworks and media freedom in India</article-title>. <source>Indian J. Law Dem.</source> <volume>9</volume>, <fpage>95</fpage>&#x2013;<lpage>110</lpage>.</mixed-citation></ref>
<ref id="ref39"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Jeffrey</surname><given-names>R.</given-names></name></person-group> (<year>2020</year>). <article-title>Media manipulation and political influence in Indian democracy</article-title>. <source>Asian J. Commun.</source> <volume>30</volume>, <fpage>293</fpage>&#x2013;<lpage>307</lpage>.</mixed-citation></ref>
<ref id="ref40"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Kaye</surname><given-names>D.</given-names></name></person-group> (<year>2019</year>). Report of the special rapporteur on the promotion and protection of the right to freedom of opinion and expression. United Nations. Available online at: <ext-link xlink:href="https://www.ohchr.org/en/special-procedures/sr-freedom-of-expression" ext-link-type="uri">https://www.ohchr.org/en/special-procedures/sr-freedom-of-expression</ext-link> (Accessed December 2, 2025).</mixed-citation></ref>
<ref id="ref41"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kumar</surname><given-names>A.</given-names></name> <name><surname>Ravi</surname><given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>Journalistic freedom under threat: case studies from India</article-title>. <source>Int. J. Press/Polit.</source> <volume>26</volume>, <fpage>456</fpage>&#x2013;<lpage>472</lpage>. doi: <pub-id pub-id-type="doi">10.1177/1940161220980975</pub-id></mixed-citation></ref>
<ref id="ref42"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Lanier</surname><given-names>J.</given-names></name></person-group> (<year>2018</year>). <source>Ten Arguments for Deleting Your Social Media Accounts Right Now</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Henry Holt</publisher-name>.</mixed-citation></ref>
<ref id="ref43"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Lyon</surname><given-names>D.</given-names></name></person-group> (<year>2023</year>). <source>Surveillance after Snowden</source>. <edition>2nd</edition> Ed. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Polity Press</publisher-name>.</mixed-citation></ref>
<ref id="ref9005"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Malgieri</surname><given-names>G.</given-names></name> <name><surname>Santos</surname><given-names>C.</given-names></name></person-group> (<year>2025</year>). <article-title>Assessing the (severity of) impacts on fundamental rights</article-title>. <source>Comput. Law Secur. Rev.</source> <volume>56</volume>:<fpage>106113</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.clsr.2025.106113</pub-id></mixed-citation></ref>
<ref id="ref44"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Matzner</surname><given-names>T.</given-names></name></person-group> (<year>2022</year>). <article-title>Digital infrastructures and the erosion of empathy</article-title>. <source>Media Cult. Soc.</source> <volume>44</volume>, <fpage>1321</fpage>&#x2013;<lpage>1339</lpage>.</mixed-citation></ref>
<ref id="ref45"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Meissner</surname><given-names>M.</given-names></name> <name><surname>Wulf</surname><given-names>V.</given-names></name></person-group> (<year>2023</year>). <article-title>Governance challenges in AI-driven organizations</article-title>. <source>Inf. Syst. J.</source> <volume>33</volume>, <fpage>914</fpage>&#x2013;<lpage>934</lpage>. doi: <pub-id pub-id-type="doi">10.1111/isj.12437</pub-id></mixed-citation></ref>
<ref id="ref46"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Napoli</surname><given-names>P. M.</given-names></name></person-group> (<year>2021</year>). <source>Social Media and the Public Interest: Media Regulation in the Disinformation Age</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Columbia University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref47"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Narayanan</surname><given-names>A.</given-names></name> <name><surname>Chen</surname><given-names>J.</given-names></name> <name><surname>Raji</surname><given-names>I. D.</given-names></name></person-group> (<year>2023</year>). <article-title>Algorithmic accountability: beyond auditing</article-title>. <source>Commun. ACM</source> <volume>66</volume>, <fpage>46</fpage>&#x2013;<lpage>52</lpage>. doi: <pub-id pub-id-type="doi">10.1145/3531147</pub-id></mixed-citation></ref>
<ref id="ref48"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Noble</surname><given-names>S. U.</given-names></name></person-group> (<year>2018</year>). <source>Algorithms of Oppression: How Search Engines Reinforce Racism</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>NYU Press</publisher-name>.</mixed-citation></ref>
<ref id="ref49"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Pariser</surname><given-names>E.</given-names></name></person-group> (<year>2011</year>). <source>The Filter Bubble: What the Internet is Hiding from You</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Penguin</publisher-name>.</mixed-citation></ref>
<ref id="ref50"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Pasquale</surname><given-names>F.</given-names></name></person-group> (<year>2015</year>). <source>The Black Box Society: The Secret Algorithms that Control Money and Information</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Harvard University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref51"><mixed-citation publication-type="other"><collab id="coll16">PRS India Legislative Research</collab> (<year>2023</year>). The Digital Personal Data Protection Bill. India. Available online at: <ext-link xlink:href="https://prsindia.org/billtrack/digital-personal-data-protection-bill-2023" ext-link-type="uri">https://prsindia.org/billtrack/digital-personal-data-protection-bill-2023</ext-link> (Accessed December 17, 2025).</mixed-citation></ref>
<ref id="ref52"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Rao</surname><given-names>V.</given-names></name></person-group> (<year>2024</year>). <article-title>Election coverage and media collusion in India</article-title>. <source>J. Polit. Media Stud.</source> <volume>15</volume>, <fpage>45</fpage>&#x2013;<lpage>62</lpage>.</mixed-citation></ref>
<ref id="ref53"><mixed-citation publication-type="other"><collab id="coll17">RSF</collab> (<year>2025</year>). India index 2025. Available online at: <ext-link xlink:href="https://rsf.org/en/world-press-freedom-index-2025-over-half-worlds-population-red-zones" ext-link-type="uri">https://rsf.org/en/world-press-freedom-index-2025-over-half-worlds-population-red-zones</ext-link> (Accessed December 8, 2025).</mixed-citation></ref>
<ref id="ref54"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Singh</surname><given-names>N.</given-names></name> <name><surname>Patel</surname><given-names>R.</given-names></name></person-group> (<year>2023</year>). <article-title>Digital propaganda and disinformation strategies in India</article-title>. <source>New Media Soc.</source> <volume>25</volume>, <fpage>1568</fpage>&#x2013;<lpage>1585</lpage>.</mixed-citation></ref>
<ref id="ref55"><mixed-citation publication-type="other"><collab id="coll18">The Telecommunications Act</collab> (<year>2023</year>). The Telecommunications Act (Act No. 44 of 2023), enacted by the Parliament of India, 24 December 2023. Available online at: <ext-link xlink:href="https://www.indiacode.nic.in/handle/123456789/20101" ext-link-type="uri">https://www.indiacode.nic.in/handle/123456789/20101</ext-link> (Accessed December 8, 2025).</mixed-citation></ref>
<ref id="ref56"><mixed-citation publication-type="other"><collab id="coll19">Trail</collab> (<year>2025</year>). The AI Governance Co-pilot [Software]. Available online at: <ext-link xlink:href="https://www.trail-ml.com/" ext-link-type="uri">https://www.trail-ml.com/</ext-link> (Accessed December 10, 2025).</mixed-citation></ref>
<ref id="ref9004"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Udupa</surname><given-names>S.</given-names></name> <name><surname>Hickok</surname><given-names>E.</given-names></name> <name><surname>Maronikolakis</surname><given-names>A.</given-names></name> <name><surname>Schuetze</surname><given-names>H.</given-names></name> <name><surname>Csuka</surname><given-names>L.</given-names></name> <name><surname>Wisiorek</surname><given-names>A.</given-names></name> <etal/></person-group> (<year>2021</year>). <article-title>AI, extreme speech and the challenges of online content moderation (Policy Brief)</article-title>. <source>AI4Dignity Project.</source> doi: <pub-id pub-id-type="doi">10.5282/ubm/epub.76087</pub-id></mixed-citation></ref>
<ref id="ref57"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Udupa</surname><given-names>S.</given-names></name> <name><surname>Pohjonen</surname><given-names>M.</given-names></name></person-group> (<year>2019</year>). <article-title>Extreme speech and global digital cultures &#x2014; introduction</article-title>. <source>Int. J. Commun.</source> <volume>13</volume>, <fpage>3049</fpage>&#x2013;<lpage>3067</lpage>.</mixed-citation></ref>
<ref id="ref58"><mixed-citation publication-type="other"><collab id="coll20">UNESCO</collab> (<year>2022</year>). Recommendation on the ethics of artificial intelligence. Paris: UNESCO. Available online at: <ext-link xlink:href="https://unesdoc.unesco.org/ark:/48223/pf0000381137" ext-link-type="uri">https://unesdoc.unesco.org/ark:/48223/pf0000381137</ext-link> (Accessed December 7, 2025).</mixed-citation></ref>
<ref id="ref59"><mixed-citation publication-type="book"><collab id="coll21">United Nations</collab> (<year>2022</year>). <source>AI Governance and Human Rights: A Framework for Global Cooperation</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>United Nations</publisher-name>. Available online at: <ext-link xlink:href="https://www.un.org/en/global-issues/artificial-intelligence" ext-link-type="uri">https://www.un.org/en/global-issues/artificial-intelligence</ext-link>.</mixed-citation></ref>
<ref id="ref60"><mixed-citation publication-type="other"><collab id="coll22">United Nations Human Rights Council</collab> (<year>2018</year>). Report of the special rapporteur on the promotion and protection of the right to freedom of opinion and expression (Report No. A/HRC/38/35). Office of the United Nations High Commissioner for Human Rights. Available online at: <ext-link xlink:href="https://undocs.org/A/HRC/38/35" ext-link-type="uri">https://undocs.org/A/HRC/38/35</ext-link> (Accessed December 10, 2025).</mixed-citation></ref>
<ref id="ref61"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Unwin</surname><given-names>T.</given-names></name></person-group> (<year>2017</year>). <source>Reclaiming Information and Communication Technologies for Development</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>. doi: <pub-id pub-id-type="doi">10.1093/oso/9780198795292.001.0001</pub-id></mixed-citation></ref>
<ref id="ref62"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Unwin</surname><given-names>T.</given-names></name></person-group> (<year>2021</year>). <article-title>Towards ethical AI for development: reflections on digital rights</article-title>. <source>Inf. Technol. Int. Dev.</source> <volume>17</volume>, <fpage>25</fpage>&#x2013;<lpage>39</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s13753-021-00348-8</pub-id></mixed-citation></ref>
<ref id="ref63"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Van Dijck</surname><given-names>J.</given-names></name> <name><surname>Poell</surname><given-names>T.</given-names></name> <name><surname>de Waal</surname><given-names>M.</given-names></name></person-group> (<year>2021</year>). <source>The Platform Society: Public Values in a Connective World</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref64"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Veale</surname><given-names>M.</given-names></name> <name><surname>Zuiderveen Borgesius</surname><given-names>F. J.</given-names></name></person-group> (<year>2021</year>). <article-title>Demystifying the draft EU artificial intelligence act</article-title>. <source>Comput. Law Rev. Int.</source> <volume>22</volume>, <fpage>97</fpage>&#x2013;<lpage>112</lpage>. doi: <pub-id pub-id-type="doi">10.9785/cri-2021-220402</pub-id></mixed-citation></ref>
<ref id="ref65"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wachter</surname><given-names>S.</given-names></name> <name><surname>Mittelstadt</surname><given-names>B.</given-names></name> <name><surname>Floridi</surname><given-names>L.</given-names></name></person-group> (<year>2017</year>). <article-title>Why a right to explanation of automated decision-making does not exist in the general data protection regulation</article-title>. <source>Int. Data Privacy Law</source> <volume>7</volume>, <fpage>76</fpage>&#x2013;<lpage>99</lpage>. doi: <pub-id pub-id-type="doi">10.1093/idpl/ipx005</pub-id></mixed-citation></ref>
<ref id="ref66"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Yadav</surname><given-names>P.</given-names></name> <name><surname>Yadav</surname><given-names>A. K.</given-names></name></person-group> (<year>2025</year>). <article-title>Issues and challenges in the protection of right to privacy in the era of artificial intelligence: an overview</article-title>. <source>Indian J. Integr. Res. Law</source> <volume>5</volume>, <fpage>1577</fpage>&#x2013;<lpage>1596</lpage>.</mixed-citation></ref>
<ref id="ref67"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Zuboff</surname><given-names>S.</given-names></name></person-group> (<year>2019</year>). <source>The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Public Affairs</publisher-name>.</mixed-citation></ref>
</ref-list>
<fn-group>
<fn fn-type="custom" custom-type="edited-by" id="fn0001"><p>Edited by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/2859256/overview">Pedro Tom&#x00E1;s Nevado-Batalla Moreno</ext-link>, University of Salamanca, Spain</p></fn>
<fn fn-type="custom" custom-type="reviewed-by" id="fn0002"><p>Reviewed by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/2709848/overview">Simant Shankar Bharti</ext-link>, VIZJA University, Poland</p><p><ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/50752/overview">Gregory Collet</ext-link>, Universit&#x00E9; libre de Bruxelles, Belgium</p></fn>
</fn-group>
</back>
</article>