<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3-mathml3.dtd">
<article article-type="research-article" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" dtd-version="1.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Commun.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Communication</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Commun.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">2297-900X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fcomm.2025.1620310</article-id><article-version article-version-type="Version of Record" vocab="NISO-RP-8-2008"/>
<article-categories>
<subj-group subj-group-type="heading"><subject>Original Research</subject></subj-group>
</article-categories>
<title-group>
<article-title>The right to game AI-systems: a speculative right for contestation</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>van den Boom</surname>
<given-names>Freyja</given-names>
</name>
<xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/2856632"/>
</contrib>
</contrib-group>
<aff id="aff1"><label>1</label><institution>Research Group Personal Rights &#x0026; Property Rights, Centre National de la Recherche Scientifique</institution>, <city>Paris</city>, <country country="fr">France</country></aff>
<aff id="aff2"><label>2</label><institution>Law Department, University of Antwerp</institution>, <city>Antwerpen</city>, <country country="be">Belgium</country></aff>
<author-notes><corresp id="c001"><label>&#x002A;</label>Correspondence: Freyja van den Boom, <email xlink:href="mailto:freyja.vandenboom@uantwerpen.be">freyja.vandenboom@uantwerpen.be</email></corresp></author-notes>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2026-01-20">
<day>20</day>
<month>01</month>
<year>2026</year>
</pub-date>
<pub-date publication-format="electronic" date-type="collection">
<year>2025</year>
</pub-date>
<volume>10</volume>
<elocation-id>1620310</elocation-id>
<history>
<date date-type="received">
<day>08</day>
<month>05</month>
<year>2025</year>
</date>
<date date-type="accepted">
<day>22</day>
<month>09</month>
<year>2025</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2026 van den Boom.</copyright-statement>
<copyright-year>2026</copyright-year>
<copyright-holder>van den Boom</copyright-holder>
<license><ali:license_ref start_date="2026-01-20">https://creativecommons.org/licenses/by/4.0/</ali:license_ref>
<license-p>This is an open-access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License (CC BY)</ext-link>. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>This paper proposes the &#x2018;right to game AI-systems&#x2019; as a speculative design artifact to challenge dominant narratives that position &#x2018;gaming&#x2019; as a threat to algorithmic integrity. We argue that in high-stakes domains like insurance, health, and welfare, gaming the system should be recognized as a legitimate and necessary act of agency, resistance, and contestation. Rooted in a critical reading of the GDPR and the EU AI Act, and employing Causal Layered Analysis (CLA) and speculative design, we reframe the &#x2018;right to game&#x2019; as a vital response to structural opacity and the unequal power dynamics inherent in AI governance. By connecting &#x2018;gaming&#x2019; to established concepts of contestability, ethical hacking, and playful exploration, this paper argues for a radical shift in perspective that empowers individuals to become active participants in, rather than passive subjects of, algorithmic decision-making.</p>
</abstract>
<kwd-group>
<kwd>algorithmic governance</kwd>
<kwd>contestability</kwd>
<kwd>transparency</kwd>
<kwd>accountability</kwd>
<kwd>speculative design</kwd>
<kwd>futures studies</kwd>
<kwd>socio-legal studies</kwd>
<kwd>GDPR</kwd>
</kwd-group><funding-group><funding-statement>The author(s) declare that no financial support was received for the research and/or publication of this article.</funding-statement></funding-group>
<counts>
<fig-count count="2"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="49"/>
<page-count count="9"/>
<word-count count="6852"/>
</counts>
<custom-meta-group>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Media Governance and the Public Sphere</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec id="sec1">
<label>1</label>
<title>Introduction: systems we&#x2019;re not allowed to game</title>
<p>Artificial Intelligence (AI) has rapidly evolved from a speculative technology into a ubiquitous, powerful force, fundamentally reshaping decision-making processes across critical sectors. From assessing creditworthiness and insurance premiums to allocating healthcare resources and determining eligibility for social welfare, AI-driven systems are now instrumental in governing people&#x2019;s lives (<xref ref-type="bibr" rid="ref27">O&#x2019;Neil, 2016</xref>). This pervasive integration means that automated decisions are routinely made about individuals, often with profound and life-altering consequences (<xref ref-type="bibr" rid="ref27">O&#x2019;Neil, 2016</xref>; <xref ref-type="bibr" rid="ref28">Pasquale, 2015</xref>).</p>
<p>Numerous real-world examples underscore the critical need for transparency and contestability in algorithmic governance. In the German case of <italic>Hesse v. Agentur f&#x00FC;r Arbeit</italic>, a court questioned the transparency and accountability of a risk scoring algorithm (the Austrian AMS system) used to assess job seekers, highlighting concerns about individuals being disadvantaged by an inscrutable system.<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref> The widely reported A-level grading scandal in the United Kingdom demonstrated how an opaque algorithm, initially presented as a neutral tool, disproportionately downgraded students from marginalised backgrounds based on factors seemingly unrelated to their individual performance, revealing a stark example of algorithmic bias and a lack of accountability for its discriminatory impact (<xref ref-type="bibr" rid="ref36">UK Parliament, House of Commons Education Committee, 2020</xref>). In the Netherlands, the SyRI (System Risk Indication) system, designed to detect welfare fraud using a range of personal data, was ultimately ruled unlawful by a court due to its violation of human rights, particularly the right to privacy, but also raising significant concerns about its potential for discrimination against low-income and migrant communities (<xref ref-type="bibr" rid="ref26">NJCM v The Dutch State, 2020</xref>; <xref ref-type="bibr" rid="ref37">Van Bekkum and Borgesius, 2021</xref>). The opacity of the algorithm made it impossible for affected individuals to understand why they were flagged as potential risks, denying them the opportunity to meaningfully contest the basis of the state&#x2019;s suspicion.</p>
<p>These cases are not isolated incidents; they are symptomatic of a broader phenomenon where the opacity of algorithmic decision-making systems denies individuals the opportunity to understand, scrutinize, and effectively contest the basis upon which they are being judged. This lack of transparency is particularly problematic because, as critical scholars have demonstrated, algorithmic systems frequently reproduce and amplify existing societal biases and structural inequalities (<xref ref-type="bibr" rid="ref13">Eubanks, 2018</xref>; <xref ref-type="bibr" rid="ref7">Couldry and Mejias, 2019</xref>). Trained on historical data that reflects past and present discrimination, and designed with objectives and logics that may implicitly favor dominant groups, these systems can inadvertently or intentionally entrench disadvantage and reinforce existing power structures and social hierarchies under the guise of technical neutrality and efficiency (<xref ref-type="bibr" rid="ref7">Couldry and Mejias, 2019</xref>).</p>
<p>The opacity of AI systems not only hinders individual understanding and contestation but also tends to obscure the political choices and value judgments embedded within algorithmic design and deployment. This can lead to outcomes that are not objective, data-driven inevitabilities but the result of deliberate design decisions, data selection, and power dynamics (<xref ref-type="bibr" rid="ref48">Winner, 1980</xref>). This makes it harder to identify the mechanisms through which inequality is reproduced and limits opportunities for people to take action (<xref ref-type="bibr" rid="ref24">Mittelstadt et al., 2016</xref>; <xref ref-type="bibr" rid="ref49">Zarsky, 2013</xref>).</p>
<p>To make this concrete, consider the following situations: your car insurance premium is determined by a telematics &#x201C;black box&#x201D; that monitors your every move; your treatment options are ranked by a health algorithm; your job opportunities are filtered by an automated hiring system. These are systems we are subject to, but are not allowed to understand, much less challenge in a meaningful way.<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref> In each of these cases, the system&#x2019;s logic may be protected as a corporate asset. This opacity denies individuals the opportunity to scrutinize and effectively contest the basis upon which they are being judged, often reproducing and amplifying existing societal biases and structural inequalities under a guise of technical neutrality (<xref ref-type="bibr" rid="ref24">Mittelstadt et al., 2016</xref>).</p>
<p>In this context, we ask how individuals can exercise meaningful control over their lives. This paper argues that addressing the harms of opaque algorithmic governance requires a fundamental shift in perspective. We propose a provocative concept: the right for individuals to &#x2018;game&#x2019; AI systems. This concept is introduced not in the sense of malicious exploitation, but as a speculative design artifact that challenges dominant narratives. We argue that in high-stakes domains, understanding how these systems work and adjusting one&#x2019;s behavior in response should be recognized as a legitimate act of agency and participatory sense-making.</p>
<p>This paper will proceed as follows. Section 2 will analyze the current regulatory landscape and use Causal Layered Analysis (CLA) to deconstruct the fear of gaming that underpins algorithmic secrecy. Section 3 will reconceptualize gaming as a powerful form of contestation, linking it to academic literature on contestable AI, ethical hacking, and playful exploration. Section 4 will operationalize this concept through a speculative design scenario: the <italic>&#x201C;Fair Play Insurance Dashboard&#x201D;</italic> to illustrate its practical benefits. Section 5 will then discuss the ethical parameters and implications of such a right. Finally, we conclude by advocating for a future where individuals are empowered as active algorithmic citizens, not as passive data subjects.</p>
</sec>
<sec id="sec2">
<label>2</label>
<title>The status quo: algorithmic secrecy and the fear of &#x2018;gaming&#x2019;</title>
<p>The prevailing approach to AI governance is characterized by a fundamental tension: a stated desire for transparency on one hand, and a deep-seated institutional and economic structure that fiercely protects algorithmic secrecy on the other. This section first examines the legislative landscape that enables this conflict and then uses Causal Layered Analysis (CLA) to uncover the deeper narratives that fuel the fear of gaming the system.</p>
<sec id="sec3">
<label>2.1</label>
<title>The regulatory paradox: the right to access vs. trade secrecy</title>
<p>The European Union&#x2019;s General Data Protection Regulation (GDPR) and the AI Act represent landmark efforts to regulate algorithmic decision-making. Articles 15 and 22 of the GDPR grant individuals the right to meaningful information about the logic involved in automated decision-making (<xref ref-type="bibr" rid="ref30">Regulation (EU), 2016</xref>; <xref ref-type="bibr" rid="ref22">Malgieri and Comand&#x00E9;, 2017</xref>).</p>
<p>Article 15, the right of access by the data subject, is particularly relevant.<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref> It grants individuals the right to obtain confirmation as to whether personal data concerning them is being processed, and if so, to access that data and specific information about the processing. Article 15(1)(h) states that this information includes <italic>&#x201C;meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.&#x201D;</italic> This provision seems to directly address the need for individuals to understand how automated decisions affecting them are made.</p>
<p>Furthermore, Article 22 grants data subjects the right &#x201C;not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.&#x201D; While this article provides a right against solely automated decisions in high-stakes scenarios, it implicitly underscores the need for transparency and human involvement where such decisions are permitted or used to inform human decisions.</p>
<p>However, these promises of transparency are significantly undermined by countervailing protections for corporate interests. Recital 63 of the GDPR explicitly states that the right of access &#x201C;<italic>should not adversely affect the rights or freedoms of others, including trade secrets or intellectual property</italic>&#x201D;.<xref ref-type="fn" rid="fn0004"><sup>4</sup></xref></p>
<p>In practice, this exception is often used to block any meaningful access (<xref ref-type="bibr" rid="ref45">Wachter et al., 2017</xref>). People trying to obtain greater algorithmic transparency under the GDPR have frequently encountered this trade secret defense from companies, highlighting the practical difficulties individuals face in exercising their rights when confronted with powerful corporate interests (<xref ref-type="bibr" rid="ref44">Veale et al., 2021</xref>).</p>
<p>When a driver asks their telematics insurer why their premium has increased, they are likely to receive vague categories of risk factors (driving style, route choices) rather than the specific weights and data points that constitute the meaningful information they need (<xref ref-type="bibr" rid="ref4">Bucher, 2017</xref>). The algorithms remain a black box. This creates a regulatory paradox where the right to transparency exists in theory but is often unenforceable in the face of corporate claims of confidentiality (<xref ref-type="bibr" rid="ref44">Veale et al., 2021</xref>).</p>
<p>The European Union&#x2019;s Artificial Intelligence Act (AI Act) represents the first comprehensive legal framework specifically designed to regulate AI systems.<xref ref-type="fn" rid="fn0005"><sup>5</sup></xref> Adopting a risk-based approach, the AI Act imposes varying levels of obligations on AI systems depending on their potential to cause harm. AI systems used in critical areas such as healthcare, education, employment, and law enforcement are classified as high-risk and are subject to stricter requirements.</p>
<p>The AI Act includes specific transparency obligations for high-risk AI systems under Article 13 and broader transparency obligations for certain AI systems under Article 52. For high-risk systems, providers must design and develop them to enable human oversight and provide accompanying information that is &#x201C;clear and adequate.&#x201D; This information is intended to help users understand the system&#x2019;s capabilities and limitations.</p>
<p>Article 52 imposes specific transparency obligations for AI systems intended to interact with individuals or generate content, requiring users to be informed that they are interacting with or exposed to AI.</p>
<p>While the AI Act reinforces the importance of transparency and accountability for high-risk AI, it does not fully resolve the problems we face with implementing the GDPR in a way that protects trade secrets (<xref ref-type="bibr" rid="ref9002">Oxford Law Blogs, 2025</xref>). While it builds upon GDPR principles, the AI Act&#x2019;s focus is more on the safety and fundamental rights risks posed by the AI system itself, rather than the data protection rights of individuals concerning automated decisions.</p>
<p>The Act aims to balance innovation with safety and rights, but the specific details of how transparency will be enforced and how it will be weighed against claims of confidentiality remain subject to ongoing debate and the development of implementing standards and guidelines.</p>
<p>The burden often remains on affected individuals or civil society organizations to identify problematic AI systems and advocate for greater transparency and accountability.</p>
<p>Furthermore, the regulatory landscape for AI transparency is fragmented. Even with the GDPR and AI Act, individuals seeking to understand how algorithmic decisions are made about them must navigate complex interactions between data protection law, sector-specific regulations (e.g., in healthcare or finance), and intellectual property law (<xref ref-type="bibr" rid="ref23">Mittelstadt, 2019</xref>).</p>
<p>The lack of a single, clear, and universally enforceable right to truly understand algorithmic logic significantly hinders the ability of individuals to exercise agency and contest decisions that impact their lives, contributing to the opacity that underpins the reproduction of inequality.<xref ref-type="fn" rid="fn0006"><sup>6</sup></xref></p>
</sec>
<sec id="sec4">
<label>2.2</label>
<title>Unpacking the fear of gaming: a causal layered analysis</title>
<p>To understand the persistent resistance to transparency, we must look deeper than legal texts. Causal Layered Analysis (CLA), a futures research method, helps deconstruct the narratives that sustain the status quo (<xref ref-type="bibr" rid="ref16">Inayatullah, 1998</xref>). CLA is a poststructuralist futures research method that moves beyond conventional, surface-level analyses of issues to uncover the deeper causes, worldviews, and metaphors that shape our understanding and limit possibilities for change (<xref ref-type="bibr" rid="ref16">Inayatullah, 1998</xref>). By applying CLA to the debate surrounding algorithmic transparency and the fear of granting access, we can peel back the layers and reveal the often-hidden power dynamics and ideological commitments that maintain the status quo of algorithmic opacity (<xref ref-type="fig" rid="fig1">Figure 1</xref>).</p>
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>CLA applied.</p>
</caption>
<graphic xlink:href="fcomm-10-1620310-g001.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">A pyramid graphic illustrating four hierarchical levels of perceptions about algorithms. The top level, "Litany," includes: "People should not game the system," "Transparency enables cheating," and "Algorithms must remain secret." The second level, "Systemic Cause," lists: "Data extraction business models," "Competition and IP law prioritizing corporate interests," and "Lack of robust enforcement mechanisms." The third level, "Worldview," states: "Algorithms are neutral and objective," "Individuals are bad actors who will misuse transparency," and "Corporations deserve protection; individuals do not." The base level, "Myth/Metaphor," features: "The system is a black box," "The algorithm is a wise judge," and "Gaming is deception, not strategy."</alt-text>
</graphic>
</fig>
<p>CLA operates on four distinct, yet interconnected layers:</p>
<list list-type="simple">
<list-item><p>(a) Litany: immediate concerns and the fear of gaming</p></list-item>
</list>
<p>At the surface level, the litany of the algorithmic transparency debate is dominated by immediate concerns about the potential negative consequences of granting individuals access to the inner workings of AI systems. The narrative frequently presented in media, policy discussions, and by corporate actors focuses on the risk that users will game the system if they understand its logic (<xref ref-type="bibr" rid="ref39">Van den Boom, 2020</xref>). Examples cited include individuals manipulating credit scoring algorithms to improve their rating without genuine financial responsibility, drivers altering behavior only when telematics systems are active, or students finding loopholes in educational assessment AI.</p>
<p>This layer emphasizes threats to algorithmic integrity, system security, and the potential for widespread manipulation or exploitation. The proposed solutions at this level often involve technical safeguards to prevent gaming, legal penalties for misuse, or simply maintaining secrecy to make gaming impossible. This litany, while containing elements of valid concern about system security, tends to frame the issue as a problem of malicious individual behavior that must be controlled.</p>
<list list-type="simple">
<list-item><p>(b) Systemic causes: structures of power and competition</p></list-item>
</list>
<p>Moving beneath the surface litany, the systemic layer reveals the underlying social, economic, and legal structures that give rise to and sustain the fear-of-gaming narrative. A primary systemic cause is the economic incentive for companies to protect their AI algorithms as valuable trade secrets and intellectual property. In a highly competitive market, the specific design, training data, and operational parameters of a performant algorithm can represent a significant competitive advantage. Transparency is perceived as a direct threat to this advantage, potentially allowing competitors to replicate successful models without incurring the same research and development costs (<xref ref-type="bibr" rid="ref39">Van den Boom, 2020</xref>). This economic structure creates a powerful, vested interest in maintaining algorithmic opacity, and the legal frameworks surrounding intellectual property often provide robust mechanisms for doing so, creating a direct conflict with data protection and transparency rights.</p>
<p>Furthermore, the concentration of power within large technology companies and institutions that develop and deploy AI systems is a key systemic factor. This concentration allows these actors to shape the discourse around AI, influencing policy and public perception.</p>
<p>The fear of gaming can be strategically amplified by those in power to justify maintaining control and limiting external scrutiny. The complex technical nature of advanced AI also acts as a systemic barrier, creating an information asymmetry between those who build and deploy AI and those who are subjected to its decisions, reinforcing existing power inequalities. The fragmented and often weakly enforced regulatory landscape for AI accountability also contributes to this layer by failing to create sufficient systemic pressure for meaningful transparency.</p>
<list list-type="simple">
<list-item><p>(c) Worldview: beliefs in control and market primacy</p></list-item>
</list>
<p>The systemic causes are, in turn, supported by deeper worldviews and paradigms. A dominant worldview underpinning the fear-of-gaming narrative is a strong belief in control and order imposed from the top down. From this perspective, systems are entities to be managed and protected by experts and authorities, with users as passive recipients or potential disruptors who must be controlled or contained. Granting individuals the power that comes with understanding a system&#x2019;s inner workings is therefore seen as inherently risky and undesirable, as it might lead to unpredictable outcomes outside of intended control.</p>
<p>Another powerful worldview at play is the prioritization of market dynamics and corporate interests over individual rights and democratic accountability. This perspective holds that the pursuit of economic efficiency, innovation (often narrowly defined by market advantage), and corporate profitability are paramount. Within this worldview, the protection of trade secrets is seen as essential for market functioning and innovation, and concerns about individual transparency rights are secondary or viewed as obstacles to progress. This worldview often assumes that market competition will eventually lead to optimal outcomes, including trustworthy AI, without the need for extensive external regulation or mandatory transparency that might impede corporate strategies. This perspective often downplays or fails to adequately account for how market forces can exacerbate, rather than mitigate, inequality and social harms when left unchecked (<xref ref-type="bibr" rid="ref47">Whittaker, 2021</xref>).</p>
<list list-type="simple">
<list-item><p>(d) Myth/metaphor: underlying beliefs about human nature and AI</p></list-item>
</list>
<p>At the deepest layer, the worldviews are sustained by powerful, often unconscious, myths and metaphors. The fear of gaming taps into deep-seated cultural myths about human nature, often portraying individuals as fundamentally self-interested actors prone to cheating and exploiting systems for personal gain. This myth justifies the need for external control and surveillance, reinforcing the idea that individuals cannot be trusted with power or knowledge about the systems that govern them.<xref ref-type="fn" rid="fn0007"><sup>7</sup></xref></p>
<p>At the same time, there are powerful myths surrounding AI itself. AI is often portrayed metaphorically as objective, a neutral judge capable of making perfectly rational decisions free from human bias (<xref ref-type="bibr" rid="ref14">Gillespie, 2014</xref>). We also encounter the narrative that AI systems are too hard to understand even for experts, so we must simply accept rather than scrutinize these systems.<xref ref-type="fn" rid="fn0008"><sup>8</sup></xref> These myths contribute to a sense of technological determinism, of an inevitability best left in the hands of a select few experts (<xref ref-type="bibr" rid="ref25">Morozov, 2013</xref>; <xref ref-type="bibr" rid="ref3">Broussard, 2019</xref>; <xref ref-type="bibr" rid="ref9">De Liban, 2024</xref>). The combination of the myth of the untrustworthy individual and the myth of the inscrutable or infallible algorithm enables the narrative that denying transparency is necessary and appropriate (<xref ref-type="bibr" rid="ref32">Schellmann, 2024</xref>; <xref ref-type="bibr" rid="ref9004">Kroll et al., 2017</xref>). The algorithm becomes a benevolent, necessary authority that must be protected from the potentially malicious actions of individuals seeking to exploit its secrets (<xref ref-type="bibr" rid="ref18">Kak and West, 2023</xref>).</p>
<p>Using Causal Layered Analysis reveals that refusing transparency and access rights is not a simple response to security risks. It is a narrative deeply embedded in structures of power, competition, and worldviews that prioritise corporate control over individual agency (<xref ref-type="bibr" rid="ref40">Van den Boom, 2022</xref>). Challenging this requires more than demanding transparency; it requires a fundamental reframing of what it means to interact with an algorithmic system (<xref ref-type="bibr" rid="ref31">Rudin, 2019</xref>).</p>
</sec>
</sec>
<sec id="sec5">
<label>3</label>
<title>Reimagining the right to gaming systems as a right of contestation</title>
<p>We deliberately use the provocative term <italic>gaming</italic> to challenge its negative connotation and reframe it as a legitimate and necessary form of contestation in an algorithmic society. This section builds a theoretical foundation for this re-appropriation, connecting it to established legal and interdisciplinary debates on contestability (<xref ref-type="bibr" rid="ref46">Wachter et al., 2020</xref>), civic resistance (<xref ref-type="bibr" rid="ref6">Cohen, 2019</xref>), play as critique (<xref ref-type="bibr" rid="ref34">Sicart, 2014</xref>), and ethical hacking (<xref ref-type="bibr" rid="ref2">Bellaby, 2023</xref>).</p>
<sec id="sec6">
<label>3.1</label>
<title>From illegitimate cheating to a right of contestation</title>
<p>In everyday use, <italic>gaming the system</italic> implies deception or manipulation. We acknowledge this cultural framing. However, in the context of opaque, high-stakes AI systems, often shielded from scrutiny by trade secrets or proprietary protections, gaming becomes a rational and necessary response to structural power imbalances (<xref ref-type="bibr" rid="ref18">Kak and West, 2023</xref>; <xref ref-type="bibr" rid="ref12">Edwards and Veale, 2018</xref>). When official channels for redress or explanation fail, users are left with few options but to experiment, test, or subvert the system to understand or challenge it.</p>
<p>We argue that <italic>gaming</italic> in this context takes on multiple democratic functions:</p>
<list list-type="bullet">
<list-item><p>Agency: Reclaiming control in systems where individuals are typically positioned as passive subjects of computation, subject to automated decisions without recourse (<xref ref-type="bibr" rid="ref6">Cohen, 2019</xref>).</p></list-item>
<list-item><p>Resistance: Pushing back against dominant narratives that position algorithmic outcomes as objective or inevitable (<xref ref-type="bibr" rid="ref13">Eubanks, 2018</xref>).</p></list-item>
<list-item><p>Participatory sense-making: Engaging with algorithmic systems not just to interpret their outputs but to actively make sense of how they construct subjects and realities (<xref ref-type="bibr" rid="ref20">Lindley et al., 2020</xref>).</p></list-item>
<list-item><p>Behavioral self-modification: Using knowledge gained from gaming to adapt one&#x2019;s behavior and achieve fairer outcomes within algorithmic systems (<xref ref-type="bibr" rid="ref1">Ananny and Crawford, 2018</xref>).</p></list-item>
</list>
<p>This reframing aligns with the growing literature on <italic>contestable AI</italic>, which seeks to provide procedural mechanisms for users to intervene in automated decision-making (<xref ref-type="bibr" rid="ref46">Wachter et al., 2020</xref>). Our concept of the <italic>right to game AI systems</italic> is a speculative legal proposition that extends beyond explanation rights, advocating for user-driven practices of resistance, redress, and reappropriation.</p>
</sec>
<sec id="sec7">
<label>3.2</label>
<title>Playing with the trouble</title>
<p>This re-appropriation finds further grounding in the legal recognition of <italic>ethical hacking, red teaming, and adversarial testing</italic> as legitimate modes of system critique (<xref ref-type="bibr" rid="ref9003">European Union, 2024</xref>; <xref ref-type="bibr" rid="ref43">Veale, 2020</xref>). However, these practices are usually reserved for technical experts under institutional oversight. Our speculative intervention envisions a future in which such practices are <italic>democratized.</italic> Here, the individuals most impacted by algorithmic systems (welfare recipients, insured drivers) are empowered to act as <italic>civic auditors,</italic> drawing on their lived experience to test, probe, and challenge the logic and effects of these systems (<xref ref-type="bibr" rid="ref41">Van den Boom, 2023</xref>).</p>
<p>In this framing, gaming shifts from a self-interested or deceptive practice into a distributed method of algorithmic accountability. It functions as a form of <italic>grassroots red teaming</italic>, allowing users to stress-test decision systems and demand more robust, just, and transparent design. Unpacking these socio-technical infrastructures requires interventions from multiple, distributed vantage points (<xref ref-type="bibr" rid="ref8">Crawford and Joler, 2018</xref>).</p>
</sec>
<sec id="sec8">
<label>3.3</label>
<title>Why gaming and not just contesting?</title>
<p>While the notion of <italic>contestability</italic> has gained traction in AI governance discourse, it often remains tethered to formal legal procedures or institutional processes (<xref ref-type="bibr" rid="ref46">Wachter et al., 2020</xref>). We retain the term <italic>gaming</italic> because of its speculative, insurgent potential. <italic>Contesting</italic> suggests recourse within an existing system; <italic>gaming</italic> implies tactics deployed precisely when those systems are inaccessible, incomplete, or untrustworthy.<xref ref-type="fn" rid="fn0009"><sup>9</sup></xref></p>
<p>The <italic>Right to Game AI Systems</italic> is thus presented not as a formal legal right in the traditional sense, but as a speculative legal artifact, a tool to provoke discussion about what rights might be needed when we are governed by inscrutable, non-negotiable infrastructures. It reflects the critical legal insight that law itself is often structured to deny access or recognition to certain subjects, and that resistance must often come from outside its formal channels (<xref ref-type="bibr" rid="ref10">Delacroix and Wagner, 2021</xref>).</p>
</sec>
</sec>
<sec id="sec9">
<label>4</label>
<title>A speculative artifact: the right to game AI-systems in practice</title>
<p>Causal Layered Analysis (CLA) helps us break down the deeper stories and systems that support the lack of transparency in algorithmic technologies. Speculative design builds on this by offering a way to go beyond critique (<xref ref-type="bibr" rid="ref41">Van den Boom, 2023</xref>). It allows us to imagine and explore futures where people are not simply governed by algorithms but have agency and control over them. This section introduces a speculative scenario that puts into practice the idea of a right to game AI systems.</p>
<p>Speculative design is not about solving current issues or creating market products. Rather, it provides a way to ask &#x201C;what if?&#x201D; about technological and societal direction, allowing us to envision futures beyond existing legal, social, or technological frameworks. It challenges dominant worldviews and opens doors to imagining alternatives (<xref ref-type="bibr" rid="ref11">Dunne and Raby, 2013</xref>; <xref ref-type="bibr" rid="ref21">Lindley and Green, 2021</xref>). Within AI governance, it enables us to question the passive roles often assigned to users and imagine futures where power is redistributed toward individuals (<xref ref-type="bibr" rid="ref20">Lindley et al., 2020</xref>).</p>
<p>The <italic>Right to Game AI-Systems</italic> is a speculative legal artifact, a fictional, provocative tool. It is not a legally enforceable right, but a means to rethink how people might contest or engage with algorithmic systems. By imagining individuals who can test, deceive, or resist algorithmic decisions, the artifact aims to surface assumptions embedded in current governance frameworks and invite alternative models of fairness, accountability, and agency (<xref ref-type="bibr" rid="ref20">Lindley et al., 2020</xref>).</p>
<p>Speculative design turns abstract values like transparency and resistance into tangible experiences that can be imagined, lived through, and discussed. It concretizes intangible concepts in narrative or material form.<xref ref-type="fn" rid="fn0010"><sup>10</sup></xref> By envisioning a future where individuals have the right to game AI systems, we can explore both positive outcomes and unintended consequences and anticipate ethical or legal challenges (<xref ref-type="bibr" rid="ref29">Pschetz et al., 2017</xref>; <xref ref-type="bibr" rid="ref35">Tallyn et al., 2018</xref>).</p>
<p>More broadly, this speculative approach reshapes our view of human&#x2013;AI relations. Rather than focusing solely on protection from harm, it invites consideration of how people might actively shape, resist, or subvert AI systems. The <italic>Right to Game</italic> framework prompts reflection on who holds power in algorithmic systems and under what conditions such power can be contested (<xref ref-type="bibr" rid="ref20">Lindley et al., 2020</xref>).</p>
<p>Consider the example of a driver whose insurance premium is set by a telematics black box. Instead of passive acceptance, we imagine a speculative tool called <italic>The Fair Play Insurance Dashboard</italic>. Although fictional, this interface makes algorithmic decisions visible, contestable, and even gameable by the user.<xref ref-type="fn" rid="fn0011"><sup>11</sup></xref></p>
<p>Our driver logs into her insurance portal and is greeted by the Fair Play Dashboard. This tool goes beyond simply displaying her premium, empowering her with four key features: a radical transparency module, a logic and weights explorer, a gaming simulator, and bias and fairness alerts.</p>
<table-wrap position="anchor" id="tab1">
<table frame="hsides" rules="groups">
<tbody>
<tr>
<td align="left" valign="top" colspan="2">The Fair Play Insurance Dashboard</td>
</tr>
<tr>
<td align="left" valign="top"><bold>Radical transparency module:</bold><break/>This section lists every data point the insurer&#x2019;s algorithm uses: every trip, start/end times, speed, acceleration/braking patterns, routes taken, and even contextual data like weather and traffic density.<break/>It also lists non-driving data points that might be used, such as the car&#x2019;s model, age, and color.</td>
<td align="left" valign="top"><bold>Logic and weights explorer:</bold><break/>This is the core of the dashboard. It displays the key factors influencing the driver risk score and, crucially, their relative weights.<break/>For example:<break/><italic>Hard Braking Events: 35%</italic><break/><italic>Driving Between 11&#x202F;p.m. - 5&#x202F;a.m.: 25%</italic><break/><italic>Exceeding Speed Limit: 20%</italic><break/><italic>Driving in High-Risk Zones: 15%</italic><break/><italic>Total Mileage: 5%</italic></td>
</tr>
<tr>
<td align="left" valign="top"><bold>The gaming simulator:</bold><break/>This is an interactive tool where drivers can play with the model, using sliders to adjust variables and see the immediate impact on their simulated premium.<break/>For example,<break/><italic>&#x201C;What if I had made 50% fewer hard-braking maneuvers last month?&#x201D;</italic><break/>The simulator shows a premium drop of 15%.<break/><italic>&#x201C;What if I avoided all driving after 11&#x202F;p.m.?&#x201D;</italic> The simulator shows a premium drop of 20%.</td>
<td align="left" valign="top"><bold>Bias and fairness alerts:</bold><break/><italic>&#x201C;Our model flags the &#x2018;North Industrial&#x2019; zone as high-risk, which increases your premium. We recognize this may disproportionately affect residents or workers in this area.</italic><break/><italic>You have the right to request a review of this factor.&#x201D;</italic><break/>This feature turns the driver from a passive subject into an active participant in ensuring the system&#x2019;s fairness.</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>These features allow her to actively &#x2018;play&#x2019; with her premium, interactively exploring how her driving behavior shapes it.</p>
<p>The dashboard is intended to spark discussion rather than to serve as a practical aid. By letting drivers experiment with different scenarios without altering their actual behavior, it shows how they could &#x201C;win&#x201D; by lowering their premiums.</p>
<p>This approach highlights the fact that insurers themselves may not fully understand or have access to the algorithms they use. Research shows that insurers set premiums using algorithms that lack transparency about the factors influencing risk scores. Because risk-scoring algorithms are outsourced, insurers may no longer know whether there is a clear link between driving behavior, risk scores, and the premiums drivers pay. This ambiguity can lead to unfair discrimination, as drivers may be unaware of the underlying data and logic that determine their rates (<xref ref-type="bibr" rid="ref38">Van Bekkum et al., 2025</xref>). The Fair Play Dashboard aims to provoke people to challenge how we allow algorithmic decision-making into insurance and other high-impact areas of our lives.</p>
</sec>
<sec id="sec10">
<label>5</label>
<title>The parameters and implications of a right to game</title>
<p>Having illustrated a beneficial vision of gaming an AI system, we now turn to the ethical framework needed to guide its implementation. The right to game AI systems should not be absolute: allowing people to challenge systems through creative practices carries serious risks and negative consequences. In insurance, for example, when people know that a company investigates only claims above a certain threshold, they may abuse this knowledge by inflating claims to just below that threshold instead of filing for the actual (lower) amount. The right we propose must therefore be balanced against societal interests and designed to minimize harm.</p>
<p>The following conditions, presented here as a decision flow, can serve as a guide for determining when this right should apply (<xref ref-type="fig" rid="fig2">Figure 2</xref>).</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Decision flow for applying the right to game.</p>
</caption>
<graphic xlink:href="fcomm-10-1620310-g002.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Flowchart evaluating the justification for algorithmic decision-making. It starts by asking if the decision is high-stakes or significant. If yes, it checks if the individual is subject to scoring. If both are yes, it asks if the trade secret claim serves innovation or obscures inequality. Obscuring leads to "The right to game applies." Innovation leads to no further action. If the decision is not significant, it checks for public interest in fairness. If there is none, limited access is justifiable.</alt-text>
</graphic>
</fig>
<p>This framework helps to set the boundaries of the right. It is not a call against the protection of trade secrets, but an argument that when an algorithm functions as a gatekeeper to individuals&#x2019; important interests, the balance must tip in favor of radical transparency and individual agency.<xref ref-type="fn" rid="fn0012"><sup>12</sup></xref></p>
<p>The Fair Play Dashboard scenario meets all these criteria: insurance is high-stakes, drivers are being scored, there is a public interest in fair pricing, and the insurer&#x2019;s claim of secrecy risks obscuring biases against certain drivers or neighborhoods.</p>
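<p>Read as a procedure, the decision flow of <xref ref-type="fig" rid="fig2">Figure 2</xref> can be sketched in a few lines. The predicate names and outcome labels below are hypothetical glosses of the flowchart&#x2019;s branches, not part of the framework itself.</p>

```python
# A minimal sketch of the Figure 2 decision flow. Predicate names and
# outcome strings are hypothetical; the branching mirrors the flowchart.

def right_to_game_applies(high_stakes: bool,
                          individual_scored: bool,
                          secrecy_obscures_inequality: bool,
                          public_interest_in_fairness: bool = False) -> str:
    if not high_stakes:
        # Low-stakes decisions: limited access is justifiable
        # unless a public interest in fairness warrants review.
        if public_interest_in_fairness:
            return "further review needed"
        return "limited access justifiable"
    if not individual_scored:
        return "limited access justifiable"
    if secrecy_obscures_inequality:
        return "the right to game applies"
    # Trade secret claim genuinely serves innovation.
    return "trade secret claim stands"

# The Fair Play Dashboard scenario: high-stakes, drivers are scored,
# and secrecy risks obscuring bias against certain neighborhoods.
print(right_to_game_applies(True, True, True))  # → the right to game applies
```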
</sec>
<sec id="sec11">
<label>6</label>
<title>Conclusion: toward algorithmic citizenship</title>
<p>This paper has argued for a fundamental reframing of our interaction with AI systems. We have challenged the narrative that positions gaming the system as a threat, re-imagining it as a legitimate and necessary right of contestation. By connecting this provocative concept to established academic literature and illustrating its potential through a speculative design scenario, we have shown how empowering individuals to understand and engage with algorithmic logic can lead to fairer, more equitable, and more effective outcomes.</p>
<p>The fear of individuals gaming AI systems they are subject to is a narrative that serves to protect existing power structures. By reframing this, we advocate for a future where individuals transition from being passive data subjects to becoming active stakeholders, enabled to make well-informed decisions (<xref ref-type="bibr" rid="ref15">Gillespie et al., 2014</xref>). This requires a fundamental shift in how we develop and regulate AI, prioritizing transparency, agency, and accountability. Embracing this openness is essential for fostering trust and ensuring that the future shaped by AI is one where power is more equitably distributed and fundamental human rights are protected.</p>
<p>In other words, speculative scenarios make the benefits of the right to game tangible. The scenario transforms the relationship between the driver and her insurer from one of opaque judgment to one of transparent negotiation. It can also help companies shift from merely avoiding punishment to actively pursuing improvement, because there are clear rules that they and others can follow. Instead of accepting given narratives, speculative design can raise awareness and improve stakeholder engagement, leading to better outcomes for both the individual (lower premiums, safer driving) and society (fewer accidents, fairer pricing; <xref ref-type="bibr" rid="ref15">Gillespie et al., 2014</xref>).</p>
</sec>
</body>
<back>
<sec sec-type="data-availability" id="sec12">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec sec-type="author-contributions" id="sec13">
<title>Author contributions</title>
<p>FB: Writing &#x2013; original draft, Writing &#x2013; review &#x0026; editing.</p>
</sec>

<sec sec-type="COI-statement" id="sec15">
<title>Conflict of interest</title>
<p>The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="ai-statement" id="sec16">
<title>Generative AI statement</title>
<p>The author declares that generative AI was used in the creation of this manuscript, for conceptualization and redrafting.</p>
<p>Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.</p>
</sec>
<sec sec-type="disclaimer" id="sec17">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec><ref-list>
<title>References</title>
<ref id="ref9001"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Amerirad</surname><given-names>B.</given-names></name> <name><surname>Cattaneo</surname><given-names>M.</given-names></name> <name><surname>Kenett</surname><given-names>R. S.</given-names></name> <name><surname>Luciano</surname><given-names>E.</given-names></name></person-group> (<year>2023</year>). <article-title>Adversarial Artificial Intelligence in Insurance: From an Example to Some Potential Remedies</article-title>. <source>Risks</source>, <volume>11</volume>, <fpage>20</fpage>. doi: <pub-id pub-id-type="doi">10.3390/risks11010020</pub-id></mixed-citation></ref>
<ref id="ref1"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ananny</surname><given-names>M.</given-names></name> <name><surname>Crawford</surname><given-names>K.</given-names></name></person-group> (<year>2018</year>). <article-title>Seeing without knowing: limitations of the transparency ideal</article-title>. <source>New Media Soc.</source> <volume>20</volume>, <fpage>973</fpage>&#x2013;<lpage>989</lpage>. doi: <pub-id pub-id-type="doi">10.1177/1461444816676645</pub-id></mixed-citation></ref>
<ref id="ref2"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Bellaby</surname><given-names>R. W.</given-names></name></person-group> (<year>2023</year>). <source>Hacks, hackers, and political hacking</source>. <publisher-loc>Bristol</publisher-loc>: <publisher-name>Bristol University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref3"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Broussard</surname><given-names>M.</given-names></name></person-group> (<year>2019</year>). <source>Artificial unintelligence: How computers misunderstand the world</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</mixed-citation></ref>
<ref id="ref4"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bucher</surname><given-names>T.</given-names></name></person-group> (<year>2017</year>). <article-title>The algorithmic imaginary: exploring the ordinary effects of Facebook algorithms</article-title>. <source>Inf. Commun. Soc.</source> <volume>20</volume>, <fpage>30</fpage>&#x2013;<lpage>44</lpage>. doi: <pub-id pub-id-type="doi">10.1080/1369118X.2016.1154086</pub-id></mixed-citation></ref>
<ref id="ref5"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Busuioc</surname><given-names>M.</given-names></name> <name><surname>Curtin</surname><given-names>D.</given-names></name> <name><surname>Almada</surname><given-names>M.</given-names></name></person-group> (<year>2023</year>). <article-title>Reclaiming transparency: contesting the logics of secrecy within the AI act</article-title>. <source>Eur. Law Open</source> <volume>2</volume>, <fpage>79</fpage>&#x2013;<lpage>105</lpage>. doi: <pub-id pub-id-type="doi">10.1017/elo.2022.47</pub-id></mixed-citation></ref>
<ref id="ref6"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Cohen</surname><given-names>J. E.</given-names></name></person-group> (<year>2019</year>). <source>Between truth and power: The legal constructions of informational capitalism</source>. <publisher-loc>Oxford, UK</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref7"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Couldry</surname><given-names>N.</given-names></name> <name><surname>Mejias</surname><given-names>U. A.</given-names></name></person-group> (<year>2019</year>). <source>The costs of connection: How data is colonizing human life and appropriating it for capitalism</source>. <publisher-loc>Redwood City, CA</publisher-loc>: <publisher-name>Stanford University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref8"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Crawford</surname><given-names>K.</given-names></name> <name><surname>Joler</surname><given-names>V.</given-names></name></person-group> (<year>2018</year>). <italic>Anatomy of an AI system</italic>.</mixed-citation></ref>
<ref id="ref10"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Delacroix</surname><given-names>S.</given-names></name> <name><surname>Wagner</surname><given-names>B.</given-names></name></person-group> (<year>2021</year>). <article-title>Constructing a mutually supportive interface between AI and human values</article-title>. <source>Nat. Mach. Intell.</source> <volume>3</volume>, <fpage>103</fpage>&#x2013;<lpage>105</lpage>. doi: <pub-id pub-id-type="doi">10.2139/ssrn.3404179</pub-id></mixed-citation></ref>
<ref id="ref9"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>De Liban</surname><given-names>K.</given-names></name></person-group> (<year>2024</year>). <italic>Inescapable AI: The ways AI decides how low-income people work, live, learn, and survive</italic>. Techtonic Justice.</mixed-citation></ref>
<ref id="ref11"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Dunne</surname><given-names>A.</given-names></name> <name><surname>Raby</surname><given-names>F.</given-names></name></person-group> (<year>2013</year>). <source>Speculative everything: Design, fiction, and social dreaming</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</mixed-citation></ref>
<ref id="ref12"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Edwards</surname><given-names>L.</given-names></name> <name><surname>Veale</surname><given-names>M.</given-names></name></person-group> (<year>2018</year>). <article-title>Enslaving the algorithm</article-title>. <source>IEEE Secur. Privacy</source> <volume>16</volume>, <fpage>46</fpage>&#x2013;<lpage>54</lpage>. doi: <pub-id pub-id-type="doi">10.1109/MSP.2018.2701152</pub-id></mixed-citation></ref>
<ref id="ref13"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Eubanks</surname><given-names>V.</given-names></name></person-group> (<year>2018</year>). <source>Automating inequality: How high-tech tools profile, exclude, and punish the poor</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>St. Martin&#x2019;s Press</publisher-name>.</mixed-citation></ref>
<ref id="ref9003"><mixed-citation publication-type="journal">European Union (<year>2024</year>). <article-title>Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending certain Union acts and repealing Commission Decision 2022/C 227/04. Official Journal of the European Union, L 2024/1689</article-title>.</mixed-citation></ref>
<ref id="ref15"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Gillespie</surname><given-names>T.</given-names></name> <name><surname>Boczkowski</surname><given-names>P. J.</given-names></name> <name><surname>Foot</surname><given-names>K. A.</given-names></name></person-group> (<year>2014</year>). <source>Media technologies</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</mixed-citation></ref>
<ref id="ref14"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Gillespie</surname><given-names>T.</given-names></name></person-group> (<year>2014</year>). &#x201C;<article-title>The relevance of algorithms</article-title>&#x201D; in <source>Media technologies: Essays on communication, materiality, and society</source>. eds. <person-group person-group-type="editor"><name><surname>Gillespie</surname><given-names>T.</given-names></name> <name><surname>Boczkowski</surname><given-names>P. J.</given-names></name> <name><surname>Foot</surname><given-names>K. A.</given-names></name></person-group> (<publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>), <fpage>167</fpage>&#x2013;<lpage>194</lpage>.</mixed-citation></ref>
<ref id="ref16"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Inayatullah</surname><given-names>S.</given-names></name></person-group> (<year>1998</year>). <article-title>Causal layered analysis: Poststructuralism as method</article-title>. <source>Futures</source> <volume>30</volume>, <fpage>815</fpage>&#x2013;<lpage>829</lpage>. doi: <pub-id pub-id-type="doi">10.1016/S0016-3287(98)00086-X</pub-id></mixed-citation></ref>
<ref id="ref18"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Kak</surname><given-names>A.</given-names></name> <name><surname>West</surname><given-names>S. M.</given-names></name></person-group> (<year>2023</year>). <italic>Landscape: Confronting tech power</italic>. AI Now Institute.</mixed-citation></ref>
<ref id="ref9004"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kroll</surname><given-names>J. A.</given-names></name> <name><surname>Huey</surname><given-names>J.</given-names></name> <name><surname>Barocas</surname><given-names>S.</given-names></name> <name><surname>Felten</surname><given-names>E. W.</given-names></name> <name><surname>Reidenberg</surname><given-names>J. R.</given-names></name> <name><surname>Robinson</surname><given-names>D. G.</given-names></name> <etal/></person-group> (<year>2017</year>). <article-title>Accountable Algorithms</article-title>. <source>University of Pennsylvania Law Review</source>, <volume>165</volume>, <fpage>633</fpage>&#x2013;<lpage>705</lpage>.</mixed-citation></ref>
<ref id="ref20"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Lindley</surname><given-names>J.</given-names></name> <name><surname>Akmal</surname><given-names>H.</given-names></name> <name><surname>Coulton</surname><given-names>P.</given-names></name></person-group> (<year>2020</year>). <article-title>Design research and object-oriented ontology</article-title>. <source>Open Philos.</source> <volume>3</volume>, <fpage>11</fpage>&#x2013;<lpage>41</lpage>. doi: <pub-id pub-id-type="doi">10.1515/opphil-2020-0002</pub-id></mixed-citation></ref>
<ref id="ref21"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Lindley</surname><given-names>J.</given-names></name> <name><surname>Green</surname><given-names>D. P.</given-names></name></person-group> (<year>2021</year>). <article-title>The ultimate measure of success for speculative design is to disappear completely</article-title>. <source>Interact. Design Architect.</source> <volume>51</volume>, <fpage>32</fpage>&#x2013;<lpage>51</lpage>. doi: <pub-id pub-id-type="doi">10.55612/s-5002-051-002</pub-id></mixed-citation></ref>
<ref id="ref22"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Malgieri</surname><given-names>G.</given-names></name> <name><surname>Comand&#x00E9;</surname><given-names>G.</given-names></name></person-group> (<year>2017</year>). <article-title>Why a right to legibility of automated decision-making exists in the general data protection regulation</article-title>. <source>Int. Data Privacy Law</source> <volume>7</volume>, <fpage>243</fpage>&#x2013;<lpage>265</lpage>. doi: <pub-id pub-id-type="doi">10.1093/idpl/ipx019</pub-id></mixed-citation></ref>
<ref id="ref24"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mittelstadt</surname><given-names>B.</given-names></name> <name><surname>Allo</surname><given-names>P.</given-names></name> <name><surname>Taddeo</surname><given-names>M.</given-names></name> <name><surname>Wachter</surname><given-names>S.</given-names></name> <name><surname>Floridi</surname><given-names>L.</given-names></name></person-group> (<year>2016</year>). <article-title>The ethics of algorithms: mapping the debate</article-title>. <source>Big Data Soc.</source> <volume>3</volume>, <fpage>1</fpage>&#x2013;<lpage>21</lpage>. doi: <pub-id pub-id-type="doi">10.1177/2053951716679679</pub-id>, <pub-id pub-id-type="pmid">40991152</pub-id></mixed-citation></ref>
<ref id="ref23"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Mittelstadt</surname><given-names>B. D.</given-names></name></person-group> (<year>2019</year>). <italic>Principles for regulating medical AI</italic>. Life Sciences, Society and Policy, No. 15.</mixed-citation></ref>
<ref id="ref25"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Morozov</surname><given-names>E.</given-names></name></person-group> (<year>2013</year>). <source>To save everything, click here: The folly of technological solutionism</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Public Affairs</publisher-name>.</mixed-citation></ref>
<ref id="ref26"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll1">NJCM v The Dutch State</collab></person-group>. (<year>2020</year>). The Hague District Court, ECLI:NL:RBDHA:2020:1878.</mixed-citation></ref>
<ref id="ref27"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>O&#x2019;Neil</surname><given-names>C.</given-names></name></person-group> (<year>2016</year>). <source>Weapons of math destruction: How big data increases inequality and threatens democracy</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Crown Publishing Group</publisher-name>.</mixed-citation></ref>
<ref id="ref9002"><mixed-citation publication-type="book">Oxford Law Blog. (<year>2025</year>, July 23). <source>Secrecy without oversight: How trade secrets could potentially undermine the AI Act&#x2019;s transparency mandate</source>. <publisher-name>Oxford Law Blog</publisher-name>.</mixed-citation></ref>
<ref id="ref28"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Pasquale</surname><given-names>F.</given-names></name></person-group> (<year>2015</year>). <source>The black box society: The secret algorithms that control money and information</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Harvard University Press</publisher-name>.</mixed-citation></ref>
<ref id="ref29"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Pschetz</surname><given-names>L.</given-names></name> <name><surname>Gianni</surname><given-names>R.</given-names></name> <name><surname>Tallyn</surname><given-names>E.</given-names></name> <name><surname>Speed</surname><given-names>C.</given-names></name></person-group> (<year>2017</year>). <italic>Bitbarista: exploring perceptions of data transactions in the internet of things</italic>. Proceedings of the 2017 CHI conference on human factors in computing systems, pp. 2964&#x2013;2975.</mixed-citation></ref>
<ref id="ref30"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll2">Regulation (EU)</collab></person-group>. (<year>2016</year>). <italic>Regulation (EU) 2016/679 (General Data Protection Regulation) [2016] OJ L 119/1</italic>.</mixed-citation></ref>
<ref id="ref31"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Rudin</surname><given-names>C.</given-names></name></person-group> (<year>2019</year>). <article-title>Stop explaining black box machine learning models for high-stakes decisions and use interpretable models instead</article-title>. <source>Nat. Mach. Intell.</source> <volume>1</volume>, <fpage>206</fpage>&#x2013;<lpage>215</lpage>. doi: <pub-id pub-id-type="doi">10.1038/s42256-019-0048-x</pub-id>, <pub-id pub-id-type="pmid">35603010</pub-id></mixed-citation></ref>
<ref id="ref32"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Schellmann</surname><given-names>H.</given-names></name></person-group> (<year>2024</year>). <italic>The algorithm</italic>. Hachette Books.</mixed-citation></ref>
<ref id="ref34"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Sicart</surname><given-names>M.</given-names></name></person-group> (<year>2014</year>). <source>Play matters</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</mixed-citation></ref>
<ref id="ref35"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tallyn</surname><given-names>E.</given-names></name> <name><surname>Pschetz</surname><given-names>L.</given-names></name> <name><surname>Gianni</surname><given-names>R.</given-names></name> <name><surname>Speed</surname><given-names>C.</given-names></name></person-group> (<year>2018</year>). <article-title>Exploring machine autonomy and provenance data in coffee consumption: a field study of Bitbarista</article-title>. <source>Proc. ACM Hum. Comput. Interact.</source> <volume>2</volume>, <fpage>1</fpage>&#x2013;<lpage>25</lpage>. doi: <pub-id pub-id-type="doi">10.1145/3274439</pub-id></mixed-citation></ref>
<ref id="ref36"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll3">UK Parliament, House of Commons Education Committee</collab></person-group>. (<year>2020</year>). The impact of COVID-19 on education and children&#x2019;s services. UK Parliament, House of Commons Education Committee.</mixed-citation></ref>
<ref id="ref37"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Van Bekkum</surname><given-names>M.</given-names></name> <name><surname>Borgesius</surname><given-names>F. Z.</given-names></name></person-group> (<year>2021</year>). <article-title>Digital welfare fraud detection and the Dutch SyRI judgment</article-title>. <source>Eur. J. Soc. Secur.</source> <volume>23</volume>, <fpage>323</fpage>&#x2013;<lpage>340</lpage>. doi: <pub-id pub-id-type="doi">10.1177/13882627211031257</pub-id></mixed-citation></ref>
<ref id="ref38"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Van Bekkum</surname><given-names>M.</given-names></name> <name><surname>Zuiderveen Borgesius</surname><given-names>F.</given-names></name> <name><surname>Heskes</surname><given-names>T.</given-names></name></person-group> (<year>2025</year>). <article-title>AI, insurance, discrimination and unfair differentiation: an overview and research agenda</article-title>. <source>Law Innov. Technol.</source> <volume>17</volume>, <fpage>177</fpage>&#x2013;<lpage>204</lpage>. doi: <pub-id pub-id-type="doi">10.1080/17579961.2025.2469348</pub-id></mixed-citation></ref>
<ref id="ref39"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Van den Boom</surname><given-names>F.</given-names></name></person-group> (<year>2020</year>). <article-title>Vehicle data controls: balancing interests under the trade secrets directive</article-title>. <source>Int. J. Technol. Policy Law</source> <volume>3</volume>:<fpage>11</fpage>.</mixed-citation></ref>
<ref id="ref41"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Van den Boom</surname><given-names>F.</given-names></name></person-group> (<year>2023</year>). <italic>The state of glitch: a speculative design provocation for inclusive AI futures</italic>. Morals &#x0026; Machines. Nomos Publishing House.</mixed-citation></ref>
<ref id="ref40"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Van den Boom</surname><given-names>F.</given-names></name></person-group> (<year>2022</year>). &#x201C;<article-title>Driven by digital innovations: regulating in-vehicle data access and use</article-title>&#x201D; in <source>Informational rights and informational wrongs: A Tapestry for Our Times</source>. eds. <person-group person-group-type="editor"><name><surname>Borghi</surname><given-names>M.</given-names></name> <name><surname>Brownsword</surname><given-names>R.</given-names></name></person-group> (<publisher-loc>Abingdon</publisher-loc>: <publisher-name>Routledge</publisher-name>).</mixed-citation></ref>
<ref id="ref43"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Veale</surname><given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>A critical take on the policy recommendations of the EU high-level expert group on artificial intelligence</article-title>. <source>Eur. J. Risk Regul.</source> <volume>11</volume>, <fpage>1</fpage>&#x2013;<lpage>14</lpage>. doi: <pub-id pub-id-type="doi">10.1017/err.2019.65</pub-id></mixed-citation></ref>
<ref id="ref44"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Veale</surname><given-names>M.</given-names></name> <name><surname>Binns</surname><given-names>R.</given-names></name> <name><surname>Edwards</surname><given-names>L.</given-names></name></person-group> (<year>2021</year>). <article-title>Algorithms that remember: model inversion attacks and data protection law</article-title>. <source>Phil. Trans. R. Soc. A</source> <volume>379</volume>:<fpage>20180083</fpage>. doi: <pub-id pub-id-type="doi">10.1098/rsta.2018.0083</pub-id></mixed-citation></ref>
<ref id="ref45"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wachter</surname><given-names>S.</given-names></name> <name><surname>Mittelstadt</surname><given-names>B.</given-names></name> <name><surname>Floridi</surname><given-names>L.</given-names></name></person-group> (<year>2017</year>). <article-title>Why a right to explanation of automated decision-making does not exist in the general data protection regulation</article-title>. <source>Int. Data Privacy Law</source> <volume>7</volume>, <fpage>76</fpage>&#x2013;<lpage>99</lpage>. doi: <pub-id pub-id-type="doi">10.1093/idpl/ipx005</pub-id></mixed-citation></ref>
<ref id="ref46"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wachter</surname><given-names>S.</given-names></name> <name><surname>Mittelstadt</surname><given-names>B.</given-names></name> <name><surname>Russell</surname><given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>Why fairness cannot be automated: bridging the gap between EU non-discrimination law and AI</article-title>. <source>Comput. Law Secur. Rev.</source> <volume>36</volume>:<fpage>105373</fpage>. doi: <pub-id pub-id-type="doi">10.2139/ssrn.3547922</pub-id></mixed-citation></ref>
<ref id="ref47"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Whittaker</surname><given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>The steep cost of capture</article-title>. <source>Interactions</source> <volume>28</volume>, <fpage>50</fpage>&#x2013;<lpage>55</lpage>. doi: <pub-id pub-id-type="doi">10.1145/3488666</pub-id></mixed-citation></ref>
<ref id="ref48"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Winner</surname><given-names>L.</given-names></name></person-group> (<year>1980</year>). <article-title>Do artifacts have politics?</article-title> <source>Daedalus</source> <volume>109</volume>, <fpage>121</fpage>&#x2013;<lpage>136</lpage>.</mixed-citation></ref>
<ref id="ref49"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zarsky</surname><given-names>T.</given-names></name></person-group> (<year>2013</year>). <article-title>The trouble with algorithms</article-title>. <source>Univ. California Hastings Law J.</source> <volume>61</volume>, <fpage>57</fpage>&#x2013;<lpage>116</lpage>.</mixed-citation></ref>
</ref-list>
<fn-group>
<fn id="fn0013" fn-type="custom" custom-type="edited-by"><p>Edited by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/2716168/overview">Jesse Josua Benjamin</ext-link>, Eindhoven University of Technology, Netherlands</p></fn>
<fn id="fn0014" fn-type="custom" custom-type="reviewed-by"><p>Reviewed by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/686656/overview">Simon David Hirsbrunner</ext-link>, University of T&#x00FC;bingen, Germany</p><p><ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/816334/overview">Denisa Reshef Kera</ext-link>, Tel Aviv University, Israel</p></fn>
</fn-group>
<fn-group>
<fn id="fn0001"><label>1</label><p>Hesse v Agentur f&#x00FC;r Arbeit Sozialgericht Gie&#x00DF;en, Judgment of 20 February 2020, S 14 AS 101/19.</p></fn>
<fn id="fn0002"><label>2</label><p>See <italic>Dun &#x0026; Bradstreet Austria</italic>, Case C-203/22, Judgment of 27 February 2025, Court of Justice of the European Union.</p></fn>
<fn id="fn0003"><label>3</label><p>We argued for the need for a broad interpretation of access rights and trade secrets (<xref ref-type="bibr" rid="ref39">Van den Boom, 2020</xref>).</p></fn>
<fn id="fn0004"><label>4</label><p>Recital 63 GDPR.</p></fn>
<fn id="fn0005"><label>5</label><p>Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (Artificial Intelligence Act).</p></fn>
<fn id="fn0006"><label>6</label><p>This is despite the CJEU having recently confirmed a right to explanation of automated decision-making; the balancing of interests must still be carried out on a case-by-case basis. <italic>Dun &#x0026; Bradstreet Austria</italic>, Case C-203/22, para. 75 (CJEU, 2025). See <ext-link xlink:href="https://curia.europa.eu/juris/document/document.jsf?text=&#x0026;docid=295841&#x0026;pageIndex=0&#x0026;doclang=en&#x0026;mode=lst&#x0026;dir=&#x0026;occ=first&#x0026;part=1&#x0026;cid=1555350" ext-link-type="uri">https://curia.europa.eu/juris/document/document.jsf?text=&#x0026;docid=295841&#x0026;pageIndex=0&#x0026;doclang=en&#x0026;mode=lst&#x0026;dir=&#x0026;occ=first&#x0026;part=1&#x0026;cid=1555350</ext-link>.</p></fn>
<fn id="fn0007"><label>7</label><p>For an analysis of the arguments, see <xref ref-type="bibr" rid="ref5">Busuioc et al. (2023)</xref>, who found that <italic>[&#x2026;] the effectiveness of secrecy as an antidote for gaming is far from uncontested.</italic> Busuioc, M., Curtin, D., and Almada, M. (2023). Reclaiming transparency: contesting the logics of secrecy within the AI Act. <italic>European Law Open</italic> 2, 79&#x2013;105; see also <xref ref-type="bibr" rid="ref31">Rudin (2019)</xref>.</p></fn>
<fn id="fn0008"><label>8</label><p>This understanding that systems are too complex to be understood by ordinary people was also refuted by the CJEU in <italic>Dun &#x0026; Bradstreet Austria</italic>, Case C-203/22 (CJEU 2025).</p></fn>
<fn id="fn0009"><label>9</label><p><xref ref-type="bibr" rid="ref6">Cohen (2019)</xref>; Eubanks.</p></fn>
<fn id="fn0010"><label>10</label><p>Dunne &#x0026; Raby.</p></fn>
<fn id="fn0011"><label>11</label><p>See for other examples, Bitbarista, which was designed to provoke reflection on autonomy and data norms (<xref ref-type="bibr" rid="ref35">Tallyn et al., 2018</xref>).</p></fn>
<fn id="fn0012"><label>12</label><p>On whether the public interest should outweigh secrecy, see Oxford Law Blogs (2025, July 23). <italic>Secrecy without oversight: how trade secrets could potentially undermine the AI Act&#x2019;s transparency mandate</italic>.</p></fn>
</fn-group>
</back>
</article>