<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3-mathml3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Polit. Sci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Political Science</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Polit. Sci.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">2673-3145</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpos.2025.1625394</article-id>
<article-version article-version-type="Version of Record" vocab="NISO-RP-8-2008"/>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Original Research</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Approaching the integration of large language models in the parliamentary workspace</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes"><name><surname>von Lucke</surname> <given-names>J&#x00F6;rn</given-names></name><xref ref-type="aff" rid="aff1"/><xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/2920170"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="conceptualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/conceptualization/">Conceptualization</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="investigation" vocab-term-identifier="https://credit.niso.org/contributor-roles/investigation/">Investigation</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="methodology" vocab-term-identifier="https://credit.niso.org/contributor-roles/methodology/">Methodology</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="visualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/visualization/">Visualization</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; original draft" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing &#x2013; original draft</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &#x0026; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &#x0026; editing</role>
</contrib>
</contrib-group>
<aff id="aff1"><institution>The Open Government Institute, Zeppelin University</institution>, <city>Friedrichshafen</city>, <country country="de">Germany</country></aff>
<author-notes>
<corresp id="c001"><label>&#x002A;</label>Correspondence: J&#x00F6;rn von Lucke, <email xlink:href="mailto:joern.vonlucke@zu.de">joern.vonlucke@zu.de</email></corresp>
</author-notes>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2026-02-11">
<day>11</day>
<month>02</month>
<year>2026</year>
</pub-date>
<pub-date publication-format="electronic" date-type="collection">
<year>2025</year>
</pub-date>
<volume>7</volume>
<elocation-id>1625394</elocation-id>
<history>
<date date-type="received">
<day>08</day>
<month>05</month>
<year>2025</year>
</date>
<date date-type="rev-recd">
<day>14</day>
<month>12</month>
<year>2025</year>
</date>
<date date-type="accepted">
<day>19</day>
<month>12</month>
<year>2025</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2026 von Lucke.</copyright-statement>
<copyright-year>2026</copyright-year>
<copyright-holder>von Lucke</copyright-holder>
<license>
<ali:license_ref start_date="2026-02-11">https://creativecommons.org/licenses/by/4.0/</ali:license_ref>
<license-p>This is an open-access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License (CC BY)</ext-link>. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>The integration of large language models (LLMs) into parliamentary workspaces offers transformative potential but also significant challenges. This contribution provides a structured analysis of LLM adoption in legislative environments, examining theoretical foundations, identifying implementation options, and assessing broader implications. It situates LLMs within the wider AI landscape in parliamentary processes, exploring their unique characteristics and applicability. A comprehensive SWOT analysis evaluates strengths, weaknesses, opportunities, and threats, highlighting potential efficiency gains in legislative drafting and policy analysis, alongside risks such as bias, misinformation, political manipulation, and ethical and regulatory challenges. Key integration options are examined, including external LLM service providers, in-house deployment, and training models on parliamentary data for enhanced relevance and accuracy. Performance benchmarks tailored to parliamentary needs are also discussed. The paper emphasizes the importance of LLM literacy and capacity building among MPs, parliamentary staff, and scientific services for responsible and effective use. Finally, it reflects on the implications of LLM use in parliaments and offers recommendations for actions, stressing the need for a balanced approach that leverages technological innovation while ensuring democratic integrity, transparency, and accountability. By providing a comprehensive overview, this contribution advances the discourse on AI in parliament and offers strategic recommendations for integrating LLMs into parliamentary operations effectively and responsibly.</p>
</abstract>
<kwd-group>
<kwd>generative AI</kwd>
<kwd>large language model</kwd>
<kwd>options</kwd>
<kwd>parliament</kwd>
<kwd>SWOT analysis</kwd>
</kwd-group>
<funding-group>
<funding-statement>The author(s) declared that financial support was received for this work and/or its publication. This open access publication is funded by the Zeppelin University Open Access Publication Fund.</funding-statement>
</funding-group>
<counts>
<fig-count count="0"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="76"/>
<page-count count="14"/>
<word-count count="12230"/>
</counts>
<custom-meta-group>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Politics of Technology</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<label>1</label>
<title>Introduction</title>
<p>In 2023, Zeppelin University hosted a workshop on AI use in parliaments. International experts from academia and legislatures discussed potential applications of large language models (LLMs) in parliamentary contexts. The event yielded valuable insights but also raised major concerns. Twenty-one participants debated whether, under what conditions, and how LLMs should be implemented, as well as who bears responsibility. Discussions covered methods to detect LLM use and risks such as dependencies and lock-in effects. Key concerns included lack of transparency, limited knowledge on training and improving LLMs, and difficulties in assessing accuracy, completeness, and quality of outputs. The need for new skill sets and training attracted strong interest. Broader implications for democratic governance and the state&#x2019;s role were also debated. Given the many actors seeking to influence legislation, integrating LLMs into parliamentary processes constitutes a clear high-risk area for AI use.</p>
<p>Drawing on workshop discussions, a graphical recording, ongoing literature reviews, and ChatGPT-based exercises, a SWOT analysis of LLM use in parliamentary work was conducted. The analysis identifies key strengths, weaknesses, opportunities, and threats from a public sector informatics perspective. Findings provide a basis for examining implementation options, potential benefits and critical challenges, particularly as institutional knowledge remains limited, and most parliaments are only beginning to explore these technologies.</p>
<p>The research paper therefore addresses the following research questions:</p>
<list list-type="bullet">
<list-item>
<p>RQ1: Which strengths, weaknesses, opportunities, and threats are expected with the introduction of LLMs in the parliamentary workspace?</p>
</list-item>
<list-item>
<p>RQ2: What are the options for the integration of an LLM in a parliament?</p>
</list-item>
</list>
<p>Drawing on insights from workshop discussions and a literature review (sections 2 and 3), this study systematically examines the integration of LLMs in parliamentary contexts. It begins with a SWOT analysis of their use in parliamentary work environments and complements this with action-oriented recommendations (section 4). Section 5 provides a detailed examination of current integration options for LLMs in parliaments. Section 6 offers broader reflections, including considerations on the European Union&#x2019;s (EU) AI Act. Finally, section 7 synthesizes key findings, addresses limitations, and outlines open questions for future research.</p>
</sec>
<sec id="sec2">
<label>2</label>
<title>Background on artificial intelligence, generative AI and large language models</title>
<sec id="sec3">
<label>2.1</label>
<title>Artificial intelligence and generative artificial intelligence</title>
<p>Recent advances in artificial intelligence (AI) and generative AI (genAI) challenge established interaction patterns and create new opportunities for parliaments. The term &#x201C;artificial intelligence,&#x201D; introduced in 1956, reflected the belief that machines could simulate &#x201C;every aspect of learning or any other feature of intelligence&#x201D; (<xref ref-type="bibr" rid="ref45">McCarthy et al., 1956</xref>). Today, AI is neither a single technology nor a collection of niche applications (<xref ref-type="bibr" rid="ref64">Stanford University, 2021</xref>; <xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>: 2). Rather, the term stands for a bundle of different technologies, learning methods, system architectures, algorithms and approaches that replicate human cognitive capabilities for specific tasks. Autonomous systems, machine learning, deep learning, neural networks, pattern recognition, natural language processing, real-time translations, chatbots, and robots all fall under this umbrella (<xref ref-type="bibr" rid="ref2">Aldoseri et al., 2023</xref>: 1&#x2013;2). Users of AI methods give up guarantees of correctness and completeness, but AI proves valuable when problems are too complex for efficient computation, require extensive domain knowledge, or cannot be formally described (<xref ref-type="bibr" rid="ref60">Schmid et al., 2023</xref>). The capabilities provided by AI are intended to support or automate human activities and processes. Pattern and text recognition, speech and speaker recognition, image and spatial recognition, and face and gesture recognition enable a broad spectrum of applications. AI-based systems for text, sound, speech, image, spatial and video generation as well as programming expand this range further. 
All of this leads to new types of systems, applications, and processes for AI-based perception, notification, recommendation, prediction, prevention, decision-making and situational awareness in real time (<xref ref-type="bibr" rid="ref13">Etscheid et al., 2020</xref>: 9&#x2013;12). Interest in these developments has grown across media, government reports (<xref ref-type="bibr" rid="ref3">Bundesregierung, 2018</xref>; <xref ref-type="bibr" rid="ref14">European Commission, 2020</xref>; <xref ref-type="bibr" rid="ref16">Executive Office of the President, 2019</xref>; <xref ref-type="bibr" rid="ref28">His Majesty&#x2019;s Government, 2018</xref>; <xref ref-type="bibr" rid="ref72">White House, 2022</xref>), parliaments (<xref ref-type="bibr" rid="ref8">Committee on Artificial Intelligence, 2023</xref>; <xref ref-type="bibr" rid="ref10">Council of Europe, 2021</xref>; <xref ref-type="bibr" rid="ref29">House of Lords, 2024</xref>), and expert reviewers (<xref ref-type="bibr" rid="ref55">Perlman, 2024</xref>).</p>
<p>GenAI is characterized by its capability to generate new artifacts on the foundation of what has been learned. It does not rely on chance, but on recognized and learned patterns to generate synthetic data. It can autonomously create data, texts, processes, programs, sounds, sound sequences, voice contributions, images, and videos (<xref ref-type="bibr" rid="ref61">Sengar et al., 2025</xref>). Based on this ability, genAI can also offer translation services, make predictions and forecasts, and devise implementation plans. Data generation creates synthetic data for modeling, scenario planning, and optimizing policy formulation. Text generation accelerates reporting and the drafting of speeches and other legal-technical documents. The optimization of existing processes and the generation of new ones enhances procedural workflows in parliaments for increased efficiency and effectiveness. Thanks to generative programming, genAI also has the potential to automate planning and software engineering tasks. Speech and sound generation can improve communication but also entails risks of producing falsified messages and audio deepfakes that discredit individuals, parties, and parliamentary groups. Image and video generation create visual content for data visualization but can also produce forged photos and lip-synchronized deepfake videos intended to cause confusion in parliaments, administrations, the press, and the public. Translation services semi-automatically convert text between languages. Predictive services, understood as probability-based estimates derived from patterns and data, generate projections and forecasts of future developments. These tools may, for instance, assist lawmakers in formulating policies based on accurate forecasts (<xref ref-type="bibr" rid="ref66">von Lucke, 2024</xref>: 10&#x2013;11). 
Some studies examine the socio-technical context, emphasizing the creative potential of generative models (<xref ref-type="bibr" rid="ref44">Mazzone and Elgammal, 2019</xref>; <xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>). Others question the suitability and quality of LLMs for legislation, expressing concern about strong Anglocentrism due to the dominance of English texts in pretraining corpora and about the risk of unreflective use of LLMs by users such as MPs and staff (<xref ref-type="bibr" rid="ref29">House of Lords, 2024</xref>).</p>
</sec>
<sec id="sec4">
<label>2.2</label>
<title>Large language models (LLMs)</title>
<p>A language model is a probabilistic model of a natural language, capable of assigning probabilities to a series of words based on the text corpora it was trained on, which can include one or multiple languages (<xref ref-type="bibr" rid="ref37">Jurafsky and Martin, 2021</xref>). LLMs combine large training datasets, feed-forward neural networks, and transformers (<xref ref-type="bibr" rid="ref65">Vaswani et al., 2017</xref>). Building on concepts of natural language processing and natural language generation, these forms of generative AI produce novel types of text and have surpassed models based on recurrent neural networks as well as purely statistical approaches. LLMs can produce answers in natural, easily understandable language, be embedded in chat environments as chatbots, and create summaries and analyses that make extensive text documents more accessible. LLMs require an infrastructure that includes hardware or cloud platforms for their development, training, and operation, such as high-performance computation optimized for AI and deep learning (<xref ref-type="bibr" rid="ref5">Chang and Pflugfelder, 2024</xref>). Moreover, they need dedicated applications or interfaces to enable user access and interaction. These applications serve as mediators between users and an underlying model, managing input formatting, access rights, data handling, and response presentation.</p>
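<p>To make the notion of a probabilistic language model concrete, the following minimal sketch estimates conditional word probabilities from a toy two-sentence corpus and scores a full word sequence. It is illustrative only: the corpus, the sentence markers, and the bigram counting are invented stand-ins, and real LLMs use transformer networks rather than bigram statistics.</p>
<preformat>
```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count adjacent word pairs and turn them into conditional probabilities."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["BOS"] + sentence.lower().split() + ["EOS"]  # sentence markers
        for prev, cur in zip(tokens, tokens[1:]):
            counts[prev][cur] += 1
    return {prev: {word: n / sum(followers.values()) for word, n in followers.items()}
            for prev, followers in counts.items()}

def sequence_probability(model, sentence):
    """Probability the model assigns to a whole word sequence."""
    tokens = ["BOS"] + sentence.lower().split() + ["EOS"]
    prob = 1.0
    for prev, cur in zip(tokens, tokens[1:]):
        prob *= model.get(prev, {}).get(cur, 0.0)
    return prob

# Toy corpus with invented parliamentary phrasing.
corpus = ["the committee approves the draft",
          "the committee debates the draft law"]
model = train_bigram_model(corpus)
print(sequence_probability(model, "the committee approves the draft"))  # prints 0.0625
```
</preformat>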
<p>Since the release of ChatGPT, built on the LLM GPT-3.5 architecture by OpenAI in November 2022, there has been increased focus on researching the potential applications, deployment strategies, and the associated risks and challenges of LLMs (<xref ref-type="bibr" rid="ref1">Albrecht, 2023</xref>). The rapid development of many LLMs presents several opportunities. LLMs can automate many tasks currently performed by humans. They process texts and generate new, comprehensible content, demonstrating remarkable linguistic agility. Due to their training, they draw on a vast amount of data and information to generate answers and solve problems. When prompted correctly, they can teach themselves to think before speaking (<xref ref-type="bibr" rid="ref74">Zeligman et al., 2024</xref>) and exhibit impressive creativity. Additionally, LLMs can assist with analyses, develop programs to meet specific requirements, generate new processes, and serve as an idea machine (<xref ref-type="bibr" rid="ref12">Di Fede et al., 2022</xref>), a brainstorming partner, or a devil&#x2019;s advocate. Notable examples of LLMs include cloud models with proprietary licenses (closed source) like OpenAI&#x2019;s GPT models (GPT-3.5 and GPT-4, used in ChatGPT and Microsoft&#x2019;s Copilot), Google&#x2019;s PaLM and Gemini, Baidu&#x2019;s ERNIE 3.0 Titan, Anthropic&#x2019;s Claude, Aleph Alpha&#x2019;s Luminous, as well as open-source models that can be run locally without restrictions like Meta AI&#x2019;s Llama 2, Mistral&#x2019;s Mistral 7B, Starling 7B, or BigScience&#x2019;s Bloom (<xref ref-type="bibr" rid="ref4">Chang and Pflugfelder, 2023</xref>: 26&#x2013;30; <xref ref-type="bibr" rid="ref19">Fitsilis et al., 2024b</xref>: 45&#x2013;46). These models are available via the Internet. 
They are often the subject of studies exploring topics such as their effects on scientific education and research (<xref ref-type="bibr" rid="ref9">Cooper, 2023</xref>; <xref ref-type="bibr" rid="ref54">Peres et al., 2023</xref>) or their policy implications (<xref ref-type="bibr" rid="ref26">Geertsema et al., 2023</xref>; <xref ref-type="bibr" rid="ref36">Jungherr, 2023</xref>). Some models can also be downloaded and operated on local servers. The lack of transparency about what happens inside the LLM &#x201C;black box&#x201D; and the quality of outcomes also play an important role (<xref ref-type="bibr" rid="ref4">Chang and Pflugfelder, 2023</xref>: 8&#x2013;14; <xref ref-type="bibr" rid="ref25">Floridi and Chiriatti, 2020</xref>; <xref ref-type="bibr" rid="ref53">Pedreschi et al., 2019</xref>; <xref ref-type="bibr" rid="ref62">Shumailov et al., 2023</xref>; <xref ref-type="bibr" rid="ref76">Zhang et al., 2022</xref>). LLMs frequently misinterpret complex socio-cultural contexts and may produce factually incorrect or fabricated information (<xref ref-type="bibr" rid="ref29">House of Lords, 2024</xref>: 9&#x2013;10; <xref ref-type="bibr" rid="ref49">Mittelstadt et al., 2023</xref>). Their reliance on biased training data can reinforce existing stereotypes (<xref ref-type="bibr" rid="ref4">Chang and Pflugfelder, 2023</xref>: 11), while uncertainty about data provenance undermines trust in generated outputs (<xref ref-type="bibr" rid="ref62">Shumailov et al., 2023</xref>: 13).</p>
</sec>
<sec id="sec5">
<label>2.3</label>
<title>Technical improvements: retrieval-augmented generation systems and privacy proxies</title>
<p>Currently, two technical approaches are available for significantly enhancing LLMs. Retrieval-augmented generation (RAG) is particularly effective when LLMs are prone to hallucinations (plausible-sounding but incorrect outputs), when query-based models fail to deliver context-specific answers, or when the accuracy of generative AI is insufficient. In a RAG-based LLM system, a query model first retrieves relevant information from existing data sources. The generative model then synthesizes this information, producing a coherent and contextual response (<xref ref-type="bibr" rid="ref5">Chang and Pflugfelder, 2024</xref>; <xref ref-type="bibr" rid="ref7">Cohesity, 2024</xref>). RAG-based LLMs are already being evaluated for practical applications, particularly in areas that require interaction with complex legal corpora (<xref ref-type="bibr" rid="ref43">Mamalis et al., 2024</xref>).</p>
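<p>This retrieve-then-generate pattern can be sketched as follows. The document snippets, the keyword-overlap scoring, and the prompt format are illustrative assumptions; a production RAG system would use vector embeddings for retrieval and pass the assembled prompt to an actual LLM in place of the placeholder noted in the comments.</p>
<preformat>
```python
import re
from collections import Counter

# Invented snippets standing in for a parliamentary document database.
DOCUMENTS = [
    "Printed paper 19/001: draft law on data protection in public administration.",
    "Plenary protocol: debate on the budget for digital infrastructure.",
    "Committee report: oversight findings on AI procurement in ministries.",
]

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query (a stand-in for vector search)."""
    query_counts = Counter(tokenize(query))
    def score(doc):
        doc_counts = Counter(tokenize(doc))
        return sum(min(n, doc_counts[word]) for word, n in query_counts.items())
    ranked = sorted(documents, key=score, reverse=True)
    return [doc for doc in ranked if score(doc) > 0][:k]

def answer(query, documents):
    """RAG pattern: retrieval first, then generation grounded in the retrieved text."""
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\nQuestion: {query}"
    return prompt  # a real system would return llm_generate(prompt); stubbed here

print(retrieve("debate on the budget", DOCUMENTS, k=1))
```
</preformat>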
<p>A second approach addresses the challenge of balancing LLMs&#x2019; powerful capabilities with strict data privacy and security requirements. A privacy proxy becomes essential when queries transmitted to an LLM or its service provider include sensitive, confidential, or personal data that must not be disclosed to third parties. First, a privacy proxy ensures that such data is anonymized or made unrecognizable before being processed by the LLM, thereby safeguarding individual privacy and protecting organizational confidentiality. Second, it can facilitate compliance with the General Data Protection Regulation (GDPR) in the European Union by filtering out or masking personal data, ensuring that the LLM&#x2019;s operations align with legal requirements and do not violate data protection laws. Third, directly exposing raw data to an LLM introduces a risk if the model is compromised or misused; a privacy proxy mitigates this risk by pre-processing the data to remove unnecessary details, ensuring that only the minimum required information is exposed to the model and its service provider. This enhances users&#x2019; trust while enabling the effective use of LLM capabilities.</p>
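<p>A minimal sketch of the masking step such a proxy performs is shown below, assuming two simple regular-expression rules for e-mail addresses and phone numbers. The patterns and placeholder tokens are illustrative; a real privacy proxy would combine curated rule sets with named-entity recognition and would forward only the sanitized query to the LLM service.</p>
<preformat>
```python
import re

# Illustrative masking rules; real deployments need far more comprehensive rule sets.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # e-mail addresses
    (re.compile(r"\+?\d[\d /-]{7,}\d"), "[PHONE]"),        # phone-like digit runs
]

def sanitize(query: str) -> str:
    """Replace personal identifiers with placeholders before the query leaves the network."""
    for pattern, placeholder in PATTERNS:
        query = pattern.sub(placeholder, query)
    return query

def proxied_llm_call(query: str) -> str:
    """Privacy-proxy pattern: only the sanitized query reaches the LLM provider."""
    safe_query = sanitize(query)
    return safe_query  # a real proxy would return llm_api_call(safe_query); stubbed here

print(sanitize("Summarize the petition from jane.doe@example.org, tel. +49 30 1234567."))
```
</preformat>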
</sec>
</sec>
<sec sec-type="materials|methods" id="sec6">
<label>3</label>
<title>Materials and methods</title>
<sec id="sec7">
<label>3.1</label>
<title>Methods</title>
<p>The study uses a multi-stage qualitative research design that combines a theoretical grounding with explorative elements. It begins with an interdisciplinary two-day workshop involving 21 participants within a research initiative on AI and LLM use in parliamentary contexts. Participants included software developers, consultants, parliamentary practitioners, and researchers (<xref ref-type="bibr" rid="ref75">Zeppelin University, 2023</xref>; <xref ref-type="bibr" rid="ref22">Fitsilis et al., 2023a</xref>). Invitations were sent to experts in AI and parliamentary informatics or IT and legislative processes who hold responsibilities within parliaments or actively work on these topics. The workshop discussions and the associated graphic protocol were systematically analyzed and complemented by a continuous literature review based on the documents discussed in the prior and following sections.</p>
<p>Additionally, a structured classification of documented examples of use cases involving LLMs in parliaments worldwide (<xref ref-type="bibr" rid="ref18">Fitsilis and De Almeida, 2024</xref>; <xref ref-type="bibr" rid="ref34">Inter-Parliamentary Union, 2024d</xref>) was conducted. Structured interactions with a generative AI system (ChatGPT) also served as explorative tools to identify potential perspectives and application scenarios (<xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>).</p>
<p>The core methodological element is a SWOT analysis (Strengths, Weaknesses, Opportunities, Threats; <xref ref-type="bibr" rid="ref39">Kaplan and Norton, 2008</xref>: 49&#x2013;53), which systematically evaluates the strengths, weaknesses, opportunities, and threats of LLMs in parliamentary processes. This analysis is approached from the perspectives of administrative informatics and business informatics, while also integrating insights from related disciplines such as political science, administrative sciences, and parliamentary research. The author consolidated workshop insights and literature findings into a structured SWOT framework. Workshop participants contributed through expert presentations, discussion and debate, but did not directly draft the SWOT categories. Graphic recordings and notes served as primary sources for categorization. The process was exploratory, aiming to capture diverse perspectives rather than produce empirical validation. Based on these findings, a TOWS analysis (<xref ref-type="bibr" rid="ref71">Weihrich, 1982</xref>: 54&#x2013;66) is performed to derive actionable recommendations for the further deployment of LLMs in parliamentary settings.</p>
<p>The discussion synthesizes general findings, followed by a review of the EU AI Act in relation to parliamentary contexts. The review reveals that parliaments are explicitly or effectively excluded from the Act&#x2019;s scope so far, a position critically examined considering the insights gained.</p>
<p>The study is conceptually positioned ahead of large-scale implementations and adopts a theory-based, reflexive approach. Its aim is to provide a robust foundation for future technical, organizational, and normative design processes, particularly regarding the secure design, deployment and operation of LLMs in parliamentary institutions.</p>
</sec>
<sec id="sec8">
<label>3.2</label>
<title>AI in parliament</title>
<p>Parliaments in democracies are characterized by representation, plenary debates, legislative voting, and oversight functions. Paper- and file-based procedures ensure the permanent preservation of speeches, drafts, resolutions, and laws. Modern information and communication technologies (ICT) support records management, legislative workflows, and diverse parliamentary processes.</p>
<p>AI and genAI could potentially be utilized in parliamentary settings. Researchers are actively exploring the implications of AI and genAI for parliamentary operations, assessing the potential, opportunities, risks, and challenges posed by these technologies (<xref ref-type="bibr" rid="ref20">Fitsilis and von Lucke, 2023</xref>; <xref ref-type="bibr" rid="ref22">Fitsilis et al., 2023a</xref>, <xref ref-type="bibr" rid="ref24">b</xref>, <xref ref-type="bibr" rid="ref21">2024a</xref>; <xref ref-type="bibr" rid="ref68">von Lucke and Fitsilis, 2023</xref>; <xref ref-type="bibr" rid="ref67">von Lucke and Etscheid, 2020</xref>). Through collaborative efforts with scientists, administrative staff, and decision-makers from parliaments, 210 proposals for the use of AI in parliaments were generated and evaluated (<xref ref-type="bibr" rid="ref69">von Lucke et al., 2023</xref>). It became evident that ParlTech (<xref ref-type="bibr" rid="ref42">Koryzis et al., 2021</xref>) and the use of AI in parliaments are not just technical issues, even though several (generative) AI models and technologies are already suitable for use. Their adoption also presents challenges in ethical, legal, and administrative areas, along with practical limitations that intersect with fundamental democratic principles of society. Security risks in the government context have also been addressed, precisely because most AI tools have neither been tested nor classified as secure (<xref ref-type="bibr" rid="ref41">Kipker, 2025</xref>). Still, several national security agencies and intelligence services have developed an international guideline for the secure development of AI systems (<xref ref-type="bibr" rid="ref51">National Cyber Security Centre et al., 2023</xref>).</p>
<p>A late 2022 study identified 39 AI-based solutions across nine parliaments (<xref ref-type="bibr" rid="ref18">Fitsilis and De Almeida, 2024</xref>). More recently, the <xref ref-type="bibr" rid="ref34">Inter-Parliamentary Union (2024d)</xref> listed over 60 parliamentary AI use cases as of December 2024. Its AI Cloverleaf Model (<xref ref-type="bibr" rid="ref38">Kamprath, 2025</xref>) includes 28 use cases for text processing and real estate management, drawn from more than 180 identified by the IPU.</p>
<p>Globally, debate continues on whether and how to regulate AI. In the European Union, this discussion spanned years before the AI Act took effect in August 2024, establishing a comprehensive framework for AI, its applications, and generated content. The Act applies directly across all member states, setting rules for AI use in the public sector and democratic processes (<xref ref-type="bibr" rid="ref15">European Union, 2024</xref>). Other governments and supranational bodies are developing similar frameworks. Numerous soft law initiatives, such as recommendations, strategy papers, declarations, and guidelines (<xref ref-type="bibr" rid="ref8">Committee on Artificial Intelligence, 2023</xref>; <xref ref-type="bibr" rid="ref14">European Commission, 2020</xref>; <xref ref-type="bibr" rid="ref21">Fitsilis et al., 2024a</xref>), address ethical principles for AI or critique these standards (<xref ref-type="bibr" rid="ref19">Fitsilis et al., 2024b</xref>; <xref ref-type="bibr" rid="ref27">High-Level Expert Group on Artificial Intelligence, 2019</xref>; <xref ref-type="bibr" rid="ref35">Jobin et al., 2019</xref>; <xref ref-type="bibr" rid="ref48">Mittelstadt, 2019</xref>; <xref ref-type="bibr" rid="ref52">Nguyen et al., 2023</xref>; <xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>).</p>
<p>Implementing AI in a parliamentary environment requires decisions on AI governance (responsibility, democratic control, organization; <xref ref-type="bibr" rid="ref38">Kamprath, 2025</xref>; <xref ref-type="bibr" rid="ref18">Fitsilis and De Almeida, 2024</xref>) and AI management (practical activities, lifecycle management, quality management, risk management). Parliamentary AI use should be understood within the broader trend of AI adoption in the public sector. As core democratic institutions, parliaments act both as users and standard-setters. They use AI for legislative analysis, transparency, and participation, while ensuring ethical, secure, and accountable use across government.</p>
</sec>
<sec id="sec9">
<label>3.3</label>
<title>LLMs in parliament</title>
<p>LLMs may reshape key dimensions of parliamentary work. Their use in legislative contexts opens new prospects for enhancing debates, drafting legislation, and conducting oversight. LLM-assisted debates, LLM-supported lawmaking, and LLM-based scrutiny mechanisms suggest a fundamental shift in parliamentary workflows. Integrating such models into routine operations marks not just technical augmentation, but a broader transformation in how legislative bodies function, deliberate, and govern. Understanding these shifts is essential to evaluating both the potential and the limitations of AI-assisted democratic processes. Ultimately, this also touches on our fundamental understanding of democracy and representative governance.</p>
<p>Given the potential of LLMs, key questions arise about whether, when, to what extent, and under what conditions Members of Parliament (MPs), parliamentary staff, and research services may have access to AI-based tools for text generation, text and data analysis, as well as image and video generation as part of their duties. Even if access is prohibited by the parliament administration, MPs can at least access such tools available online independently and use them on their own responsibility. However, frustration may grow when professional access is denied while private use demonstrates the clear benefits of existing LLM services. Hence, LLMs should be recognized as essential instruments in the parliamentary workspace. Early and proactive measures are also needed to prevent harm and unauthorized data disclosure to third parties.</p>
<p><xref ref-type="bibr" rid="ref18">Fitsilis and De Almeida (2024)</xref> found in 2022 that 30 out of 39 AI-based systems (approximately 77%) implemented by early adopter parliaments utilize natural language processing algorithms. These systems were primarily applied to speech-to-text conversion, conversational agents, and text summarization. The use of LLMs in the parliamentary workspace, in particular GPT-3 (generative pre-trained transformer), was pioneered by the Finnish Parliament (Eduskunta) and its Committee for the Future during an April 2021 experimental &#x201C;hearing&#x201D; on innovation and the United Nations Sustainable Development Goals. In this instance, members of parliament engaged in a conversational experience with AI-generated personalities, marking the first documented case of direct interaction between a parliamentary body and an LLM within a formal parliamentary process (<xref ref-type="bibr" rid="ref18">Fitsilis and De Almeida, 2024</xref>: 151&#x2013;152, 166&#x2013;167; <xref ref-type="bibr" rid="ref17">Fitsilis, 2021</xref>).</p>
<p>Since the release of ChatGPT by OpenAI in November 2022, interest in using LLMs in parliaments worldwide has grown rapidly. Some pioneers began experimenting with cloud-based models or hosted systems in collaboration with IT partners. The 2023 workshop held at Zeppelin University provided an early platform to exchange experiences, insights, concerns, and risks. Due to the sensitivity of parliamentary tasks and legislative issues, participants proceeded cautiously. Speakers presented projects and early prototypes, including examples from the European Union and Greece (<xref ref-type="bibr" rid="ref30">Hypernetica, 2023</xref>; <xref ref-type="bibr" rid="ref43">Mamalis et al., 2024</xref>; <xref ref-type="bibr" rid="ref46">Melides, 2023</xref>; <xref ref-type="bibr" rid="ref50">Moschopoulos, 2023</xref>). In February 2024, Parla<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref> was introduced in the Berlin State Parliament as an AI assistant for handling written enquiries and red numbers. Based on ChatGPT and a RAG pipeline, it searches the parliamentary database, identifies relevant documents, and summarizes key content for responses. In March 2024, a manual on the use of generative AI in the U.S. Congress was published (<xref ref-type="bibr" rid="ref73">White House, 2024</xref>), after the U.S. House of Representatives had limited the use of LLMs to ChatGPT Plus, the paid version with enhanced privacy features, for research and evaluation purposes only (<xref ref-type="bibr" rid="ref40">Kelly, 2023</xref>).</p>
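<p>The retrieval-augmented approach described for Parla can be illustrated with a minimal, self-contained sketch. All document identifiers, contents, and the keyword-overlap retriever below are hypothetical stand-ins; Parla&#x2019;s actual database, retrieval method, and model integration are not publicly documented.</p>

```python
import re
from collections import Counter

# Toy stand-in for a parliamentary document database (hypothetical content).
DOCUMENTS = {
    "doc-017": "Written enquiry on school renovation funding in Berlin districts.",
    "doc-042": "Committee report on digital infrastructure and broadband rollout.",
    "doc-058": "Response to a written enquiry concerning public transport subsidies.",
}

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words counts, a crude proxy for embedding similarity."""
    return Counter(re.findall(r"[a-zäöüß]+", text.lower()))

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (stand-in for vector search)."""
    q = tokenize(query)
    scores = {
        doc_id: sum(min(count, tokenize(text)[word]) for word, count in q.items())
        for doc_id, text in DOCUMENTS.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [d for d in ranked[:top_k] if scores[d]]

def build_prompt(query: str) -> str:
    """Assemble the grounded prompt that would be sent to the hosted model."""
    context = "\n".join(f"[{d}] {DOCUMENTS[d]}" for d in retrieve(query))
    return f"Answer using only the documents below.\n{context}\nQuestion: {query}"

prompt = build_prompt("Which written enquiry deals with school renovation?")
```

<p>In a production system, the keyword scoring would be replaced by embedding-based vector search and the assembled prompt passed to the hosted model; the underlying pattern of grounding answers in retrieved parliamentary documents remains the same.</p>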
<p>Since 2019, the Centre for Innovation in Parliament of the Inter-Parliamentary Union (IPU) in Geneva has systematically tracked AI innovations in parliaments worldwide. In April 2024, it published its first issue brief on the use of generative AI in parliamentary settings (<xref ref-type="bibr" rid="ref31">Inter-Parliamentary Union, 2024a</xref>). This brief outlines key expectations and recommendations: anticipating rapid technological change, encouraging parliaments to understand both the impact and risks of generative AI, fostering a robust culture of digital transformation, maintaining human oversight, recognizing technological limitations, and strengthening capacity building through collaboration.</p>
<p>In October 2024, the 149th IPU Assembly unanimously adopted the resolution &#x201C;The Impact of Artificial Intelligence on Democracy, Human Rights and the Rule of Law,&#x201D; which includes specific references to generative AI, particularly concerning the generation of textual and visual content (<xref ref-type="bibr" rid="ref32">Inter-Parliamentary Union, 2024b</xref>). The IPU guidelines for AI in parliaments, released in December 2024, reflect this focus: the term &#x201C;generative AI&#x201D; appears 43 times, compared to only three mentions of &#x201C;LLM&#x201D; (<xref ref-type="bibr" rid="ref33">Inter-Parliamentary Union, 2024c</xref>). These AI guidelines build upon a collection of use cases, although only three specific implementations of LLMs are cited, all from Italy. They include the automatic summarization of legislative documents in the Chamber of Deputies, machine translation in the Senate, and an AI-supported search engine for the Senate (<xref ref-type="bibr" rid="ref34">Inter-Parliamentary Union, 2024d</xref>; <xref ref-type="bibr" rid="ref6">Citino, 2024</xref>). However, most use cases consist of prototypes or conceptual ideas rather than production-ready systems. For the law-making process, the use of LLMs and their major dependencies was examined in a 2024 European Commission study (<xref ref-type="bibr" rid="ref19">Fitsilis et al., 2024b</xref>).</p>
<p>Meanwhile, in Germany, the Social Democratic Party (SPD) was the first party to publish a political guideline for the use of generative AI in parliamentary work, presented in July 2024 (<xref ref-type="bibr" rid="ref63">SPD Bundestagsfraktion, 2024</xref>). In October 2025, German Chancellor Merz (CDU) said he had &#x2018;tried out&#x2019; AI for the German government&#x2019;s legislative texts (<xref ref-type="bibr" rid="ref11">Der Spiegel, 2025</xref>). This shows that LLMs are already being used in party and parliamentary work, even where they have not yet been formally approved.</p>
</sec>
</sec>
<sec id="sec10">
<label>4</label>
<title>Results: SWOT-analysis and TOWS-analysis for LLMs in the parliamentary workspace</title>
<p>Based on the discussion in the academic workshop on AI and LLM use in parliaments (July 2023: <xref ref-type="bibr" rid="ref22">Fitsilis et al., 2023a</xref>; <xref ref-type="bibr" rid="ref75">Zeppelin University, 2023</xref>) and subsequent prompt analysis (<xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>), a consolidated picture emerges that can be converted into a technology-oriented SWOT analysis. While LLM technologies offer notable strengths and opportunities, their weaknesses and threats must be considered (<xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>: 9&#x2013;12) before exploring integration options. The author synthesized workshop insights and literature findings into a structured SWOT framework. <xref ref-type="table" rid="tab1">Table 1</xref> presents a balanced evaluation addressing the first research question RQ1. It provides a comprehensive view of the benefits and challenges associated with incorporating these high-risk technologies into legislative workflows and representative institutions.</p>
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption>
<p>SWOT-analysis for LLMs in parliament.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Strengths</th>
<th align="left" valign="top">Weaknesses</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">
<list list-type="bullet">
<list-item>
<p>Analytical Capabilities</p>
</list-item>
<list-item>
<p>Generative AI Skills</p>
</list-item>
<list-item>
<p>Enhanced Communication</p>
</list-item>
<list-item>
<p>Continuous Learning</p>
</list-item>
<list-item>
<p>Adaptability and Multilingual Skills</p>
</list-item>
<list-item>
<p>Scalability and Consistency</p>
</list-item>
<list-item>
<p>Rapid Response</p>
</list-item>
<list-item>
<p>Creativity</p>
</list-item>
</list>
</td>
<td align="left" valign="top">
<list list-type="bullet">
<list-item>
<p>Dependence on Technology</p>
</list-item>
<list-item>
<p>Dependence on Training Data</p>
</list-item>
<list-item>
<p>Technical Limitations</p>
</list-item>
<list-item>
<p>Hallucinations</p>
</list-item>
<list-item>
<p>Lack of Sensitivity to False Information, Irony and Sarcasm</p>
</list-item>
<list-item>
<p>Limited Critical Thinking</p>
</list-item>
<list-item>
<p>Confidentiality and Security Risks</p>
</list-item>
</list>
</td>
</tr>
<tr>
<td align="left" valign="top">Opportunities</td>
<td align="left" valign="top">Threats</td>
</tr>
<tr>
<td align="left" valign="top">
<list list-type="bullet">
<list-item>
<p>Text Creation and Drafting Support</p>
</list-item>
<list-item>
<p>Scheduling</p>
</list-item>
<list-item>
<p>Transcription Services</p>
</list-item>
<list-item>
<p>Translation Services</p>
</list-item>
<list-item>
<p>Policy Analysis and Briefing</p>
</list-item>
<list-item>
<p>Speechwriting Assistance</p>
</list-item>
<list-item>
<p>Letter and Email Assistance</p>
</list-item>
<list-item>
<p>Legislative Research Assistance</p>
</list-item>
<list-item>
<p>Constituent Engagement</p>
</list-item>
<list-item>
<p>Training and Education for Staff</p>
</list-item>
</list>
</td>
<td align="left" valign="top">
<list list-type="bullet">
<list-item>
<p>Reduced Human Interaction</p>
</list-item>
<list-item>
<p>Lack of Human Judgment</p>
</list-item>
<list-item>
<p>Bias and Ethical Concerns</p>
</list-item>
<list-item>
<p>Unintended Legal Consequences</p>
</list-item>
<list-item>
<p>Authenticity and Accountability</p>
</list-item>
<list-item>
<p>Erosion of Human Skills</p>
</list-item>
<list-item>
<p>Impact on Human Employment</p>
</list-item>
<list-item>
<p>Quality of Literacy</p>
</list-item>
<list-item>
<p>Manipulation and Misuse</p>
</list-item>
<list-item>
<p>Unintended Political Influence</p>
</list-item>
</list>
</td>
</tr>
</tbody>
</table>
</table-wrap>
<sec id="sec11">
<label>4.1</label>
<title>Strengths</title>
<p>In combination with RAG and privacy proxies, LLMs unlock new opportunities for parliaments, MPs, parliamentary staff, and research services by leveraging analytical, generative, and communicative AI capabilities. They can process vast data sets, identify patterns, and generate insights that human analysts might overlook, ensuring legislative proposals are supported by comprehensive data analysis for more informed policymaking. As assistants, LLMs could be used to draft policy documents, speeches, and reports, and pose challenging questions for public consultations. They can also be used to translate or adapt texts to specific styles or conversational tones (<xref ref-type="bibr" rid="ref1">Albrecht, 2023</xref>: 35&#x2013;38). Additionally, they enhance parliamentary oversight by generating detailed, thought-provoking content. Combined with RAGs, LLMs help articulate complex ideas clearly, concisely, and persuasively, making information accessible to MPs, the media, and the public. LLMs improve communication by drafting speeches, preparing press releases, and crafting responses to inquiries. One key goal is to strengthen communication and foster trust and transparency between parliament and the public.</p>
<p>With Internet access and enabled training capabilities, which are not always granted for security reasons, LLMs can continuously adapt to new data and respond to evolving political landscapes. They can address a wide range of political topics and meet the demands of daily political processes, generating documents in a desired style and providing near real-time feedback. Moreover, LLMs are capable of handling multiple topics and tasks simultaneously. Their multilingual capabilities ensure effective interaction and task interpretation in dialogs across diverse linguistic groups (<xref ref-type="bibr" rid="ref1">Albrecht, 2023</xref>: 38). They offer scalability, managing large workloads with consistency and accuracy while maintaining professionalism and respect in every interaction. LLMs deliver rapid responses to prompts and requests, generating professional content such as complex texts, images, and videos. Although not truly intelligent and lacking awareness, these language models draw on vast amounts of data and information, enabling them, especially when combined with RAGs, to provide accurate and timely responses (<xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>: 16&#x2013;17; <xref ref-type="bibr" rid="ref66">von Lucke, 2024</xref>: 10&#x2013;11).</p>
</sec>
<sec id="sec12">
<label>4.2</label>
<title>Weaknesses</title>
<p>IT managers in parliaments express concerns about reliance on available LLMs and their training data, as well as existing technical limitations. Key issues include generating incorrect outputs (hallucinations), limited contextual understanding, poor handling of false information, irony, and sarcasm, and a lack of critical reflection. This stems from the fact that LLMs are not truly intelligent, and their outputs are not always correct or complete. They function as probabilistic language models with broad knowledge, rapidly generating characters, words, data, and answers (<xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>: 19). Ultimately, humans must decide how to use technologies like LLMs, for better or worse. Their use in parliament raises numerous issues, including ethical ones, particularly regarding deployment motives, data protection, algorithmic fairness, and potential impacts on staff, MPs, the public, and human rights (<xref ref-type="bibr" rid="ref66">von Lucke, 2024</xref>: 13).</p>
<p>LLMs function as black boxes (<xref ref-type="bibr" rid="ref1">Albrecht, 2023</xref>: 43), lacking transparency regarding their operations, training energy consumption, and data sources. Users have limited insight into how LLMs can be trained, improved, or influenced by document uploads. Crafting precise prompts and evaluating response quality remain major challenges. LLMs occasionally produce false outputs and artifacts: linguistically fluent but factually incorrect content (<xref ref-type="bibr" rid="ref1">Albrecht, 2023</xref>: 39&#x2013;42; <xref ref-type="bibr" rid="ref63">SPD Bundestagsfraktion, 2024</xref>: 2&#x2013;3). While some describe this as a technical limitation, other observers call it lying or a structural flaw. For parliaments, where accuracy is a high priority, robust methods for detecting and managing errors in LLM outputs are essential. This includes verification mechanisms, RAG frameworks, and clear error management protocols. Since LLMs treat learned content as accurate, regardless of its veracity, they are vulnerable to disinformation, fake news, propaganda, irony, and sarcasm (<xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>: 17). Politicians and their teams must therefore develop skills to recognize, adjust to, and counter misleading content. Without critical reflection, the use of LLMs risks devolving into AI-driven argumentation devoid of truth and, worse, the spread of fake news, misinformation, targeted manipulation, and propaganda.</p>
<p>Like most digital systems on the Internet, current LLMs provide only limited guarantees of confidentiality, data protection, and IT security. These vulnerabilities pose significant risks, including cyberattacks, misuse, and unauthorized access. Particularly concerning are LLM service providers in jurisdictions where political or military interests may enable surveillance of parliamentary queries and uploaded documents, especially where legal frameworks explicitly permit such access. These risks raise serious ethical and security concerns. However, some technical limitations of LLMs are expected to be mitigated in the coming years through advances in RAG and the integration of privacy proxies.</p>
</sec>
<sec id="sec13">
<label>4.3</label>
<title>Opportunities</title>
<p>LLMs offer a broad spectrum of innovative applications in parliamentary contexts, which vary according to user groups such as MPs, parliamentary staff, and research services. These applications include the generation of high-quality, accessible text, the creation of summaries, facilitating brainstorming sessions, and the production of diverse forms of creative content. In particular, LLMs can substantially support the drafting of bills, policy proposals, and administrative documents by producing coherent, well-structured, and legally sound texts. This contributes to improving the quality of legislative output, minimizing inconsistencies, and reducing potential conflicts. Their strength in ensuring precision and clarity makes LLMs especially valuable for political speeches and legislative drafting, where accuracy is of critical importance for subsequent political decisions. Furthermore, LLMs can enhance administrative efficiency by automating routine tasks such as scheduling, calendar management, reminders, and meeting coordination. During meetings, they support agenda setting, minute taking, and real-time transcription, thereby ensuring precise documentation of discussions. In addition, many LLMs offer integrated translation and interpretation services, enabling multilingual communication within parliamentary environments. This supports inclusivity by enabling all members, regardless of language proficiency, to actively engage in discussions and fully understand relevant documents. Collectively, these capabilities strengthen deliberative processes and contribute to more informed decision-making processes within parliaments (<xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>: 7).</p>
<p>LLMs offer significant potential in automating policy analysis and generating concise, insightful briefings. They can detect trends, compare policies across jurisdictions in easily understandable tables, and provide evidence-based recommendations. This supports legislators in making informed decisions and developing effective policies. By analyzing relevant data, LLMs can produce clear reports, summaries, and visualizations that assist MPs in forming opinions and decisions. Their ability to anticipate legal challenges and suggest proactive solutions makes them valuable tools in the policy-making process.</p>
<p>LLMs can substantially support speechwriting, letter drafting, and email correspondence, particularly when combined with privacy proxies. Drafts can be adapted to the speaker&#x2019;s style and audience expectations, integrating key messages, rhetorical strategies, and relevant data to enhance public communication. This ensures effective engagement with constituents, stakeholders, and officials while maintaining high standards and message accuracy. Moreover, paired with RAGs and privacy proxies, LLMs can efficiently manage complex legislative research by analyzing legal texts, case studies, laws, and precedents to deliver relevant insights.</p>
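<p>The privacy proxies mentioned above can be sketched as a redaction layer that replaces personal data with placeholders before a prompt leaves the parliamentary network and restores the originals in the returned answer. The regex rules and sample text below are purely illustrative; a real deployment would rely on named-entity recognition and institution-specific policy lists.</p>

```python
import re

# Illustrative redaction rules; a real proxy would use NER and policy lists.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def redact(prompt: str) -> tuple[str, dict[str, str]]:
    """Swap personal data for placeholders before the prompt leaves the house."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            prompt = prompt.replace(match, placeholder)
    return prompt, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert the original data into the LLM's answer inside the trusted zone."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

safe, mapping = redact("Reply to max.mustermann@example.org, tel. +49 30 1234567.")
```

<p>The external LLM then only ever sees the redacted prompt (here: &#x201C;Reply to [EMAIL_0], tel. [PHONE_0].&#x201D;), while the mapping needed to restore the original data never leaves the trusted zone.</p>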
<p>For MPs and their staff, LLMs offer numerous applications in constituency engagement. They support analysis of public opinion, issue tracking, monitoring developments, and predictive analytics. LLMs enhance communication by managing channels, responding to inquiries, and collecting feedback. Through sentiment and trend analysis of constituent concerns and social media, LLMs enable legislators to better address public needs and views. They can also monitor emerging issues and key developments. Furthermore, LLMs can facilitate staff training by generating educational materials, conducting interactive training sessions, and providing ongoing support, ensuring that staff gain essential skills and knowledge.</p>
</sec>
<sec id="sec14">
<label>4.4</label>
<title>Threats</title>
<p>The integration of LLMs in parliaments poses several potential risks that require thorough assessment (<xref ref-type="bibr" rid="ref31">Inter-Parliamentary Union, 2024a</xref>: 2&#x2013;3). These challenges highlight the need for appropriate regulations to mitigate associated risks. Within the European Union, the European AI Act offers a foundational framework for such regulatory efforts. Additionally, tailored parliamentary guidelines have been developed (<xref ref-type="bibr" rid="ref24">Fitsilis et al., 2023b</xref>, <xref ref-type="bibr" rid="ref21">2024a</xref>) to address these issues. Key risks include diminished human interaction, reduced human judgment, bias, ethical concerns, unintended legal consequences, challenges to authenticity and accountability, skill erosion, employment effects, digital literacy gaps, potential misuse, and unintended political influence.</p>
<p>As LLMs begin to assume tasks traditionally performed by humans, concerns arise about reduced human interaction and the erosion of human judgment. The latter remains essential, particularly for crafting nuanced policies and legislation that reflect diverse perspectives and societal complexities. Machine-generated texts often lack the experience and contextual depth needed for such policymaking, risking flawed analyses and misaligned decisions. Additionally, LLMs trained on biased or incomplete data may reinforce or amplify existing biases, leading to discriminatory policy suggestions and legislative texts. This raises ethical issues and potential legal liabilities (<xref ref-type="bibr" rid="ref29">House of Lords, 2024</xref>: 48&#x2013;49). Reliance on LLMs for drafting legal documents entails risks of errors and ambiguity in legal language, potentially resulting in loopholes, misinterpretations, or court challenges. Likewise, their use in translation and interpretation may introduce inaccuracies, failing to capture precise meanings and cultural nuances, thereby undermining effective communication and decision-making.</p>
<p>Concerns also arise around authenticity and accountability when LLMs generate content for controversial or high-impact policies without clear human oversight. This lack of transparency may erode trust in parliament and its representatives, who are expected to assume responsibility. Long-term risks include skill erosion among MPs&#x2019; staff, as overreliance on LLMs for tasks like interpreting legal texts and drafting legislation could weaken critical thinking and communication skills essential for effective governance. Potential human job losses and knowledge losses also raise concern, as a decline in literacy quality can undermine the overall effectiveness of parliamentary work. Additionally, socio-political risks emerge if LLMs concentrate decision-making power in fewer individuals or entities, potentially threatening democratic processes and representative governance.</p>
<p>LLMs can assist in identifying key, controversial, or potentially harmful positions within debates, thereby revealing weaknesses in underlying arguments. This capability may enable opposition actors to mount more forceful critiques, necessitating greater preparedness for government representatives. Moreover, the risk of deliberate political manipulation by third parties should not be underestimated (<xref ref-type="bibr" rid="ref29">House of Lords, 2024</xref>: 41&#x2013;43). In political contexts, where robust arguments, well-founded proposals, and sound decisions are vital, there is concern that actors may intentionally exploit or corrupt LLM systems and their training data (see Russian PRAVDA network: <xref ref-type="bibr" rid="ref58">Sadeghi and Blachez, 2025</xref>). Detecting and preventing such manipulation and disinformation is essential to preserving the integrity of the parliamentary process (<xref ref-type="bibr" rid="ref29">House of Lords, 2024</xref>: 41&#x2013;42, 75&#x2013;76). There is also concern over the misuse of text-to-audio and text-to-video generators for deliberate falsification, such as falsely attributing statements to individuals. These deepfake technologies can produce fake but convincing audio and video content, eroding trust and accountability.</p>
<p>Security vulnerabilities are critical, particularly regarding data protection and privacy. When LLMs process inputs containing real personal data, the risk of breaches and misuse of sensitive information increases. Copyright concerns also require attention to ensure that LLMs are not trained on protected material without proper authorization (<xref ref-type="bibr" rid="ref29">House of Lords, 2024</xref>: 66&#x2013;67, 78&#x2013;79; <xref ref-type="bibr" rid="ref63">SPD Bundestagsfraktion, 2024</xref>: 3).</p>
<p>Another major concern is the potential misuse of LLMs by system-crashers seeking to undermine and destabilize democratic governance. LLMs could provide strategies to circumvent democratic structures, weaken state institutions, dismantle bureaucracy, and accelerate administrative erosion, all without adhering to legal frameworks. This scenario raises profound questions about the legitimacy and methodology of deploying LLMs as instruments in a digitally enabled AI coup d&#x2019;&#x00E9;tat (<xref ref-type="bibr" rid="ref59">Salvaggio, 2025</xref>), as well as long-term implications for state integrity and the future need to strengthen democratic resilience.</p>
<p>Addressing all these risks demands strong regulatory frameworks, ethical guidelines, and proactive safeguards against misuse, bias, security flaws, and privacy breaches linked to LLMs in parliamentary settings (<xref ref-type="bibr" rid="ref63">SPD Bundestagsfraktion, 2024</xref>: 1). Comprehensive risk assessment and targeted mitigation measures are essential to minimize potential harm while maximizing the benefits of LLMs in parliamentary operations.</p>
</sec>
<sec id="sec15">
<label>4.5</label>
<title>First recommendations for action based on a TOWS-analysis</title>
<p>The following recommendations derive from a strategic TOWS analysis (<xref ref-type="bibr" rid="ref71">Weihrich, 1982</xref>: 54&#x2013;66), which pairs strengths with opportunities, strengths with threats, weaknesses with opportunities, and weaknesses with threats. They aim to guide IT governance bodies and parliamentary decision-makers while supporting IT staff and users in selecting appropriate actions.</p>
<p>For the strengths-opportunities strategy, capitalizing on opportunities is crucial. Structured integration can systematically identify priority applications and ensure systematic training for staff and policymakers. Institutionalizing LLM-assisted workflows through formal guidelines can improve efficiency and transparency of legislative drafting, communication, and research. Combining LLMs with RAG and privacy proxies will improve precision, scalability, and accountability. Implementing LLM-driven automation for administrative tasks like scheduling, transcription, and summarization will streamline operations, enabling staff to focus on higher-level strategic work. These measures should be embedded in a robust change management framework.</p>
<p>The strengths-threats strategy focuses on mitigating risks by establishing oversight mechanisms to ensure human supervision of critical outputs, including legal texts and public statements. Mandatory AI literacy programs and continuous training for MPs and staff will prevent skill erosion and foster effective human-AI collaboration. Defining quality and security standards aligned with recognized frameworks is essential for validating LLM-generated content. Developing internal LLM systems or using trusted infrastructures can reduce dependence on insecure external providers and mitigate geopolitical risks. Implementing secure, ethical-by-design architectures is crucial for countering misinformation, deepfakes, and attempts at political destabilization.</p>
<p>The weaknesses-opportunities strategy addresses weaknesses by embedding quality control layers and fact-checking routines into LLM workflows to minimize errors in legal and policy contexts. Leveraging LLMs for internal knowledge management and staff training can close digital literacy gaps and strengthen institutional competence. These measures ensure LLMs remain supportive tools rather than sole decision-makers, promoting responsible use by MPs, parliamentary staff, and the scientific services.</p>
<p>Finally, the weaknesses-threats strategy focuses on minimizing vulnerabilities through regular audits, risk assessments, and compliance with frameworks such as the EU AI Act. Establishing an AI ethics council within parliament, with a clear mandate for oversight and guidance, will safeguard responsible innovation, fairness, and ethical deployment. Without such measures, generative AI and LLM-driven system-crashers could unduly influence democratic processes in legislative, parliamentary and ministerial contexts.</p>
</sec>
</sec>
<sec id="sec16">
<label>5</label>
<title>Results: options for the integration of LLMs in the parliamentary workspace</title>
<p>A study identified the most relevant fields of application for LLMs in parliament (<xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>), suitable for use cases across different user groups such as MPs, parliamentary staff, and the scientific service. LLMs are particularly well-suited for tasks including drafting legislation, recording plenary minutes, translating documents, generating agendas, responding to inquiries, and preparing presentations, analyses, and reports. Additionally, LLMs could assist in creating speeches, letters, and emails. Other potential applications include legislative research and analysis, constituent engagement, policy drafting and analysis, committee support, fact-checking and misinformation detection, procedural guidance, public opinion analysis, training and onboarding, knowledge dissemination, issue tracking, and monitoring (<xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>: 6&#x2013;9). These examples illustrate the breadth of possible use cases and highlight the transformative potential of LLMs for parliamentary work. Their integration could significantly improve efficiency, reduce manual workload, and enhance the quality of information processing.</p>
<p>Today, LLMs are no longer merely experimental technologies confined to labs. They are available and increasingly seen as promising tools for parliamentary innovation and daily use. However, despite growing interest among stakeholders, concrete implementation strategies remain underdeveloped. Bridging this gap requires a clear roadmap that aligns technological opportunities with institutional requirements and legal constraints. Transitioning from experimentation to operational use requires adherence to established standards for secure AI development and IT governance. These include principles covering secure design, secure development, secure deployment, as well as secure operation and maintenance, as outlined by the UK National Cyber Security Centre and others (<xref ref-type="bibr" rid="ref51">National Cyber Security Centre, 2023</xref>).</p>
<p>Reflecting a parliament&#x2019;s digital strategy, resources, and IT governance, there are several options for integrating LLMs into the parliamentary workspace. Knowledge of these options and their consequences helps decision-makers choose the right path. This section examines the existing options for integrating LLMs, addressing the second research question about the core features and capabilities of LLMs relevant to parliamentary functions. Key factors include choosing between on-premises and cloud-based server architectures (outsourcing options), selecting suitable service providers, and assessing the scope and adaptability of available LLM services. Additional aspects involve training modalities (custom vs. general-purpose), developing benchmarks for evaluating effectiveness, bias mitigation, and alignment with democratic principles, and fostering LLM literacy among staff and MPs through structured capacity-building.</p>
<sec id="sec17">
<label>5.1</label>
<title>LLMs in the parliamentary workspace</title>
<p>Regarding ownership, various make-or-buy options and technical implementation paths exist. Performance depends on specific requirements and differences in cybersecurity levels, national security needs, data privacy, neural network architecture, model accuracy, training data quality, supported official languages, linguistic diversity, and explainability of AI systems.</p>
<p>For the central make-or-buy decision regarding LLMs and their application, five strategic options can be distinguished (<xref ref-type="bibr" rid="ref4">Chang and Pflugfelder, 2023</xref>: 15&#x2013;17). The first option, buying an end-to-end application without LLM controllability, means purchasing a complete application with a non-controllable LLM, offering no access to the model itself. The second option involves buying an application with limited LLM control, allowing some transparency and parameterization. The third option is to develop a custom application and integrate a controllable, externally procured LLM via APIs. The fourth option is to develop the application and fine-tune an existing LLM (commercial or open source) to align with specific parliamentary needs. The fifth option, the most resource-intensive path, entails building both the application and the LLM from scratch (<xref ref-type="bibr" rid="ref4">Chang and Pflugfelder, 2023</xref>: 15&#x2013;17).</p>
<p>In addition to the make-or-buy strategy, five server deployment options exist for hosting LLMs in parliamentary environments. These include using public cloud-based LLMs, private cloud-based LLMs tailored for parliamentary use, and fully internal LLMs hosted within parliamentary infrastructure. A hybrid approach combining internal and external solutions is also possible. Finally, parliaments may opt to tolerate the use of externally accessed, unregulated LLMs (called BYO LLMs or Dark LLMs).</p>
<p>Public cloud-based LLM services with proprietary licenses, such as those provided by OpenAI, Microsoft, or Google, are accessible via the public Internet and through API interfaces. Their use is governed by contractual agreements that define the rights and obligations of both the service provider and the user. Private cloud-based LLM services, by contrast, are accessible only internally. Alternatively, on-premises solutions allow LLMs to be implemented and operated on in-house servers or high-performance notebooks within the parliamentary IT infrastructure, ensuring data protection and security. This is particularly suited to open-source LLMs, which can run locally on the intranet without restrictions; local parameter adjustments and confidential training with proprietary documents are possible. The demand for scalable, trustworthy LLMs has grown, driven by sectors such as legal tech and tax tech, where training data, prompts, and results must remain secure and private. This model could enable parliamentary groups to utilize their own dedicated LLMs. Hybrid solutions, which combine public or private cloud-based and on-premises deployments to balance flexibility and security, would be another option (<xref ref-type="bibr" rid="ref47">Microsoft, 2024</xref>: 47&#x2013;49). &#x201C;Bring Your Own LLM&#x201D; is an undesirable but nevertheless realistic option. It arises when a parliament lacks access to an LLM or cannot provide one due to cost, data protection, security, or other constraints. In such cases, it may be difficult to prevent parliamentary staff from using publicly available LLMs through the Internet or mobile devices for parliamentary tasks.</p>
</sec>
<sec id="sec18">
<label>5.2</label>
<title>LLM service providers</title>
<p>The responsibilities of LLM service providers in parliamentary contexts go beyond granting access to reliable and secure language models. They must ensure full compliance with legal, ethical, and technical standards. This includes obligations regarding data protection, transparency about model training methods and limitations, continuous monitoring to prevent misuse, and the provision of robust documentation and interfaces. These measures are essential for enabling accountable and trustworthy integration of LLMs into parliamentary workflows.</p>
<p>Several major LLM providers operate globally, including OpenAI, xAI, Google, Anthropic, Meta, Baidu, Alibaba, DeepSeek, and Mistral AI, each offering models and servers with distinct strengths and limitations. All are continuously improving their LLMs, raising a strategic question for parliaments: Should they commit to a single service provider, or adopt a platform-based approach that ensures flexibility? A platform solution would need to make LLMs and tools from different providers accessible and comparable through a standardized interface, ideally allowing easy switching between models, providers, servers, and solutions. Given the rapid pace of development, solution developers face significant uncertainty in selecting and testing LLMs and assessing their suitability for specific parliamentary applications in the cloud or on premises. Nonetheless, adopting a modular, open architecture would help parliaments meet a key requirement: avoiding vendor lock-in by ensuring that any LLM service or provider can be replaced at any time.</p>
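<p>Such a provider-neutral abstraction layer can be sketched as follows. All names here (the gateway, the provider classes, the complete method) are illustrative assumptions and do not correspond to any real vendor API; a production gateway would add authentication, logging, and auditing.</p>

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Provider-neutral interface; hypothetical, not a real vendor API."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class ProviderA(LLMProvider):
    # Stand-in for one vendor's service behind the common interface.
    def complete(self, prompt: str) -> str:
        return "[provider-a] " + prompt


class ProviderB(LLMProvider):
    # Stand-in for a second, interchangeable vendor.
    def complete(self, prompt: str) -> str:
        return "[provider-b] " + prompt


class LLMGateway:
    """Single entry point for all parliamentary applications.

    Because applications call only the gateway, replacing the active
    provider is a single configuration change (no vendor lock-in).
    """

    def __init__(self, providers, active):
        self.providers = providers
        self.active = active

    def switch(self, name):
        if name not in self.providers:
            raise KeyError("unknown provider: " + name)
        self.active = name

    def complete(self, prompt):
        return self.providers[self.active].complete(prompt)


gateway = LLMGateway({"a": ProviderA(), "b": ProviderB()}, active="a")
print(gateway.complete("Summarize the committee report"))
gateway.switch("b")  # one call swaps the vendor for every application
print(gateway.complete("Summarize the committee report"))
```

<p>The design choice is deliberate: applications depend only on the abstract interface, so benchmark results can later drive the choice of the active provider without code changes.</p>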
<p>Open API interfaces also allow the seamless integration of LLM functionalities into existing IT systems, including document management systems, customer relationship management systems, and workflow tools. From an information management perspective, a crucial question is which LLM service providers and services can be deemed trustworthy and appropriate in a parliamentary context, and which should be excluded due to national security or cybersecurity risks.</p>
<p>An answer to this challenge is provided by Rode&#x2019;s Ampel (traffic-light) principle (<xref ref-type="bibr" rid="ref57">Rode, 2025</xref>), which introduces red, yellow, and green zones for LLMs. This hybrid approach seeks to maintain digital sovereignty while leveraging the innovative potential of modern LLMs. The red zone denotes the use of powerful, cloud-based LLMs for complex tasks on a pay-per-use basis. However, such systems should only be applied to general inquiries that do not involve personal data or sensitive information, as foreign security agencies could potentially gain access. The yellow zone relies on external partners who provide privacy- and security-compliant cloud-based open-source models. These models can be fine-tuned by parliamentary institutions and are also billed according to usage. The green zone provides the highest level of security for the processing of personal or highly sensitive data. It is, therefore, implemented on in-house servers within the parliament&#x2019;s own infrastructure. Key factors in determining the appropriate zone include data protection and compliance requirements, cost efficiency, customization needs, scalability, deployment, and transparency (<xref ref-type="bibr" rid="ref57">Rode, 2025</xref>; <xref ref-type="bibr" rid="ref23">Fitsilis et al., 2026</xref>).</p>
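<p>The zoning logic can be illustrated with a minimal routing sketch. The input flags and the sensitivity labels are assumptions chosen for illustration; an actual classifier would follow the parliament&#x2019;s data protection and compliance rules rather than two hand-set fields.</p>

```python
def classify_zone(contains_personal_data, sensitivity):
    """Route a request to a red, yellow, or green zone.

    Illustrative rule following the traffic-light zoning idea:
      green  = in-house servers, required for personal or highly sensitive data
      yellow = privacy- and security-compliant external open-source hosting
      red    = public cloud LLMs, general non-sensitive inquiries only
    The flags and thresholds are assumptions, not part of the cited principle.
    """
    if contains_personal_data or sensitivity == "high":
        return "green"
    if sensitivity == "medium":
        return "yellow"
    return "red"


print(classify_zone(False, "low"))     # general inquiry, public cloud acceptable
print(classify_zone(False, "medium"))  # vetted external open-source hosting
print(classify_zone(True, "low"))      # personal data forces in-house hosting
```
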
</sec>
<sec id="sec19">
<label>5.3</label>
<title>LLM services</title>
<p>LLM providers must offer reliable LLM services tailored to parliamentary needs, including chatbots, virtual assistants, translation, multilingual support, data analysis and forecasting, information management, training, professional development, process design, and coding. These services may be components of a broader LLM application. While reflecting on potential LLM use cases, it is important to note that language models process language but do not necessarily produce exclusively text-based outputs. In addition to generating code or BPMN process models, LLMs can also produce images and other visualizations (text-to-image), audio (text-to-audio), and video (text-to-video). These services enable new forms of visual output based on text input and training data. Such capabilities support MPs in creating visual representations for arguments, sketches, memes, or photorealistic images of construction projects. Visual content often proves more persuasive than text, as the brain processes it more rapidly. Further inputs for LLMs may include speech (via microphone or audio files), as well as images and videos, significantly enhancing usability.</p>
<p>In this context, parliaments must identify which prompt types (text, image, audio, video) and AI agents (specialized services that build on the reasoning, understanding, and generative capabilities of an LLM to perform autonomous or semi-autonomous tasks) are most effective for their intended use. Developers face the challenge of designing, testing, and selecting prompts and AI agents best suited to parliamentary workflows. Where best practices exist, a platform for knowledge sharing and transfer would be highly valuable. Such a portal should provide suitable prompts and AI agents for tasks such as document summarization, translation, meeting transcriptions, public engagement, training, onboarding, and the preparation of documents and speeches.</p>
<p>For text-to-image LLMs, the platform should provide prompt suggestions and templates to translate ideas, collections, and designs into clear, compelling images, figures, artworks, and other visual concepts.</p>
<p>For text-to-video or image-to-video LLMs, prompt suggestions enabling object visualization through short films would be valuable. This allows projects to be viewed from multiple angles, illustrating their three-dimensional integration into natural environments and fostering lasting engagement.</p>
</sec>
<sec id="sec20">
<label>5.4</label>
<title>LLM training with parliamentary data</title>
<p><xref ref-type="bibr" rid="ref19">Fitsilis et al. (2024b</xref>: 18) have shown that several well-known LLMs (BART, BERT, GPT 3.5, LLaMA, Longformer, RoBERTa) have been fine-tuned for legal language. They anticipate further advances through improved training methods and qualitatively curated corpora. Together with a Greek team from the University of Macedonia, they have also developed an LLM-based assistant that interacts with legal resources, with the aim of creating legal assistants for parliamentary and governance applications (<xref ref-type="bibr" rid="ref43">Mamalis et al., 2024</xref>). To effectively train LLMs for such contexts, essential datasets should comprise official documents in the parliament&#x2019;s working languages: laws, ordinances, questions, reports, and ratified minutes, all of which can be used without violating data protection or copyright regulations (<xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>: 17). These datasets provide essential content and context for LLMs to comprehend and generate outputs relevant to parliamentary procedures and governance. Freely available data and documents from parliamentary information systems, legislation portals, and websites of ministries and agencies are also valuable. However, caution is necessary. As shown in the case of the well-funded Moscow-based global &#x2018;news&#x2019; network PRAVDA, which according to <xref ref-type="bibr" rid="ref58">Sadeghi and Blachez (2025)</xref> has infected Western AI tools with Russian propaganda, external servers and their training input cannot be blindly trusted.</p>
<p>Training takes time but is essential for compiling and applying these datasets effectively. LLMs tailored for parliamentary use can generate responses referencing official documents, including historical records from past legislative periods and earlier versions of laws. However, copyrighted material, personal data, or sensitive documents must not be used in publicly accessible LLMs. For internal systems, access to such data can be appropriate. Equally important is identifying and correcting biases in models, training data, and source documents to ensure fair and unbiased outputs.</p>
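<p>A corpus pre-filter along these lines might look as follows. The metadata flags used here are hypothetical; an actual pipeline would derive them from document management metadata and legal review rather than hand-set fields.</p>

```python
def select_training_documents(documents, deployment):
    """Filter a candidate corpus before fine-tuning.

    Sketch under assumptions: each document is a dict with hypothetical
    flags ("copyrighted", "personal_data", "sensitive"). Restricted
    material is excluded from publicly accessible models but may remain
    available to internal systems, as described above.
    """
    selected = []
    for doc in documents:
        restricted = (doc.get("copyrighted")
                      or doc.get("personal_data")
                      or doc.get("sensitive"))
        if restricted and deployment == "public":
            continue  # must not reach publicly accessible LLMs
        selected.append(doc)
    return selected


corpus = [
    {"title": "Ratified minutes 2023", "copyrighted": False},
    {"title": "Constituent letters", "personal_data": True},
    {"title": "Ordinance draft", "sensitive": True},
]
print([d["title"] for d in select_training_documents(corpus, "public")])
print([d["title"] for d in select_training_documents(corpus, "internal")])
```
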
</sec>
<sec id="sec21">
<label>5.5</label>
<title>Need for LLM benchmarks</title>
<p>Establishing recognized benchmarks for LLMs and their services to reliably assess response quality is highly desirable. Such benchmarks would offer a standardized framework for evaluating LLM performance and effectiveness across various applications and contexts. Currently, several benchmarks exist for LLMs, such as the TIMETOACT LLM Benchmark<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref> and the HuggingFace Open-LLM-Leaderboard<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref>.</p>
<p>However, developing benchmarks tailored specifically for parliamentary use is a crucial area for future research. This will require standardized methods that yield reliable evaluations of response behavior. Future LLM benchmarks for parliaments would enable the evaluation of freely available LLMs against the specific needs of parliamentary settings. Benchmarking services could also assess the quality of internal LLMs trained and fine-tuned on parliamentary documents. Such comparisons could highlight differences in cybersecurity, national security requirements, support for official languages, linguistic diversity, or explainable AI capabilities. Recommendations based on proven benchmarks would guide parliamentary bodies and IT teams in selecting suitable LLMs. Comparison frameworks could establish minimum standards and identify leading solutions. Such initiatives would benefit not only national parliaments but also subnational legislatures and local councils, which face similar challenges in navigating the diverse LLM landscape (<xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>: 20).</p>
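<p>A minimal benchmark harness can illustrate the idea: any model exposed as a callable from prompt to answer is scored against reference answers. Exact-match scoring is a stand-in assumption; parliamentary benchmarks would need richer metrics for security, official-language support, and explainability.</p>

```python
def run_benchmark(model, tasks):
    """Score a callable model (prompt -> answer) against reference tasks.

    A sketch: exact-match accuracy stands in for the richer evaluation
    criteria a real parliamentary benchmark would require.
    """
    if not tasks:
        raise ValueError("benchmark needs at least one task")
    correct = sum(1 for t in tasks
                  if model(t["prompt"]).strip() == t["expected"])
    return correct / len(tasks)


# Hypothetical reference tasks; a real suite would use parliamentary material.
tasks = [
    {"prompt": "capital of France", "expected": "Paris"},
    {"prompt": "capital of Greece", "expected": "Athens"},
]


def toy_model(prompt):
    # Stands in for an LLM call; deliberately answers "Paris" to everything.
    return "Paris"


print(run_benchmark(toy_model, tasks))
```

<p>Because the harness treats the model as an opaque callable, the same task suite can compare cloud-based, private cloud, and in-house LLMs on equal terms.</p>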
</sec>
<sec id="sec22">
<label>5.6</label>
<title>Need for LLM literacy and capacity building of parliamentary staff</title>
<p>The successful integration of LLMs in parliaments also depends on adherence to established AI guidelines (<xref ref-type="bibr" rid="ref24">Fitsilis et al., 2023b</xref>, <xref ref-type="bibr" rid="ref21">2024a</xref>; <xref ref-type="bibr" rid="ref56">Popvox Foundation, 2024</xref>; <xref ref-type="bibr" rid="ref63">SPD Bundestagsfraktion, 2024</xref>). Early investment in LLM literacy and staff training is essential. Key initial steps include forming expert teams, launching training programs, and promoting knowledge exchange (<xref ref-type="bibr" rid="ref21">Fitsilis et al., 2024a</xref>: 74&#x2013;76). Training must cover both technical foundations and practical applications of LLMs in parliamentary workflows, while strictly aligning with legal and regulatory frameworks. Hands-on training is vital to help staff to understand LLM functions relevant to their tasks and to support safe, bounded experimentation.</p>
<p>Data protection and ethical considerations must be addressed from the outset. Training programs require regular updates to reflect technological advances and reduce uncertainty. Embedding LLM deployment within broader change management efforts is also crucial. Demonstrating how LLMs and prompting can enhance efficiency and effectiveness will help win the support of staff councils and other stakeholders.</p>
</sec>
</sec>
<sec sec-type="discussion" id="sec23">
<label>6</label>
<title>Discussion</title>
<sec id="sec24">
<label>6.1</label>
<title>Overarching reflections</title>
<p>Reflecting on the SWOT and TOWS analyses reveals a range of strengths, weaknesses, opportunities, and threats. What does the integration of LLMs mean for parliamentary workspaces? Their introduction promises substantial efficiency gains and improved output quality. Concrete steps depend on initial conditions, available financial resources, qualified personnel, infrastructure, political demand, LLM models, training data, confidentiality, sovereignty, and national security.</p>
<p>However, generalizations remain difficult, as effectiveness varies with factors such as make-or-buy decisions, LLM versions, and specific use cases. Parliaments differ and operate under strict confidentiality, making common technical approaches rare. Nevertheless, the Centre for Innovation in Parliament of the Inter-Parliamentary Union (IPU) brings together parliaments worldwide to share experiences and foster collaborative learning.</p>
<p>LLMs can be either highly beneficial or problematic. At this early stage of transformation, it is unclear which implementations will ultimately succeed or fail. The global emergence of ChatGPT since late 2022 signals disruptive changes ahead, driven by experimentation, breakthroughs, and innovative solutions. The SWOT analysis provides critical insights for LLM adoption in parliaments, underscoring the need for cautious, well-managed integration to ensure operational readiness. To safeguard institutional autonomy, it is essential to maintain technical and contractual flexibility, particularly the ability to switch between LLM providers.</p>
<p>To support strategic decisions, LLM-specific benchmarks for parliamentary use should be established. These benchmarks would enable precise differentiation and evaluation of models and providers based on criteria such as quality, ethical principles, privacy, security, operational reliability, governance, oversight, interpretability, and democratic alignment. They would also provide a structured framework for determining when and under what conditions LLMs should be deployed.</p>
<p>Currently, only a few parliaments officially use LLMs. However, growing demand from MPs, parliamentary groups, political parties, administrative staff, and scientific services underscores the need for scalable LLM solutions and custom software tailored to the specific requirements of parliamentary work and trained on differentiated, context-sensitive datasets. Developing such capacities will be a strategic priority in the coming years. Parliaments must initiate an AI and LLM development process that enables intensive AI use, establishes robust AI governance, and fosters inter-institutional cooperation. The process should reflect the first recommendations for action outlined in section 4.5. The Five Point Framework for the strategic integration of AI in parliaments (<xref ref-type="bibr" rid="ref23">Fitsilis et al., 2026</xref>) provides a solid foundation through its comprehensive guidance across five dimensions: strategy, prioritization, training, implementation, and governance. This framework incorporates Rode&#x2019;s Ampel (traffic-light) principle, with red, yellow, and green zones for LLMs in parliamentary contexts (<xref ref-type="bibr" rid="ref57">Rode, 2025</xref>).</p>
</sec>
<sec id="sec25">
<label>6.2</label>
<title>Reflections on the European AI act and LLMs in parliaments</title>
<p>Given these weaknesses and threats, it is unsurprising that regulatory measures are being considered by several nations and the EU. Developing trustworthy AI-based systems for parliamentary applications requires careful attention to ethical, legal, social, and safety dimensions. The EU AI Act establishes a framework for regulating and prohibiting AI across Europe. It sets harmonized rules for placing AI systems on the market and for their deployment and use in the Union, prohibits certain practices, and defines obligations for high-risk AI systems (EU AI Act 2024, Art. 1). Requirements include enhanced transparency, technical documentation, compliance with EU copyright law, and publication of training data summaries. A fundamental rights impact assessment is mandatory, with stricter duties for systemic-risk applications. Foundation models must mitigate systemic risks, undergo adversarial testing, and report serious incidents to the European Commission. Cybersecurity remains a key priority.</p>
<p>With respect to software vendors and parliamentary contexts, the AI Act (Annex III, No 8(b): Administration of justice and democratic processes) classifies as high-risk only &#x201C;AI systems intended to be used for influencing the outcome of an election or referendum or the voting behavior of natural persons in the exercise of their vote in elections or referenda.&#x201D; AI use within parliament is not explicitly regulated. While elections and voter manipulation are assessed as high-risk, the absence of provisions addressing direct AI influence on political decisions by parliaments or state representatives is notable.</p>
<p>The reasons for this omission remain unclear. It may have been intentional to avoid constraints on AI-based parliamentary work. Alternatively, it could reflect the limited relevance of generative AI and LLMs at the time, reluctance to impose strict regulations, or insufficient consultation. Given the vulnerabilities and threats identified, regulatory approaches to AI use in parliaments are likely to evolve to prevent actions or omissions that could undermine democratic governance.</p>
<p>Ultimately, regulating LLM use in parliamentary workspaces rests with each parliament and its elected representatives. As guardians of democratic principles, parliamentarians should establish comprehensive frameworks addressing ethical, legal, and societal dimensions, balancing innovation with institutional protection. Achieving this requires appropriate organizational and technical measures (<xref ref-type="bibr" rid="ref24">Fitsilis et al., 2023b</xref>, <xref ref-type="bibr" rid="ref21">2024a</xref>).</p>
</sec>
</sec>
<sec id="sec26">
<label>7</label>
<title>Conclusion and outlook</title>
<p>This contribution explored the opportunities and challenges of integrating LLMs into parliamentary contexts. LLMs offer significant potential: advanced analysis, generative capabilities, multilingual support, scalability, and rapid response can streamline legislative processes, enhance constituent engagement, and reduce administrative burdens. Key applications include text creation, scheduling, transcription, translation, policy analysis, speechwriting, correspondence, legislative research, and staff training. These functions promise improved decision-making and accessibility. However, risks remain. Technical limitations, hallucinations, weak critical reasoning, and vulnerability to false information, irony, and sarcasm pose challenges. Ethical concerns, biases, security risks, and unintended legal consequences further complicate adoption. Reduced human interaction and lack of judgment underline the need for cautious integration. To balance benefits and risks, parliaments must establish ethical frameworks, ensure transparency, and maintain human oversight.</p>
<p>Building on a SWOT and a TOWS analysis, this study provides actionable recommendations and identifies implementation options, including choices between on-premises and cloud architectures, provider selection, and scope assessments. Additional priorities include LLM literacy, tailored benchmarks, and guidelines to ensure responsible deployment.</p>
<p>Given global diversity in parliamentary structures and IT approaches, the Five-Point Framework for AI integration (<xref ref-type="bibr" rid="ref23">Fitsilis et al., 2026</xref>) offers strategic guidance for adapting solutions to institutional conditions. Rode&#x2019;s traffic-light model (<xref ref-type="bibr" rid="ref57">Rode, 2025</xref>) complements this by defining risk-based zones for LLM use. Adoption should proceed gradually, with reflective evaluation (<xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>).</p>
<p>To conclude, LLMs belong in parliaments, but with restrictions. Due to uncertainty regarding truthfulness, outputs should serve only as drafts for human review. This approach safeguards integrity while leveraging LLM benefits (<xref ref-type="bibr" rid="ref70">von Lucke and Frank, 2025</xref>:18; <xref ref-type="bibr" rid="ref31">Inter-Parliamentary Union, 2024a</xref>: 3).</p>
<p>A key limitation of this study is the absence of empirical validation, as findings rely on expert judgment rather than quantitative evidence, and categorization relied on the author&#x2019;s interpretation. The exploratory nature of SWOT and its context-specific insights restrict generalizability. Moreover, results represent a snapshot in time and lack verification through real-world performance data. This analysis reflects the state of technology in November 2025, 3&#x202F;years after ChatGPT&#x2019;s release. Future developments, driven by rapid progress and competition, will likely address current weaknesses and enable more tailored parliamentary solutions. Results may differ as technology evolves.</p>
<p>Open questions remain: Which technical features should parliamentary LLMs incorporate, both now and as technological progress accelerates? How can benchmarks and risk assessment frameworks be developed, established, and dynamically adapted? Will LLM-generated agendas improve meeting quality? Can speeches become clearer and laws more precise? Will LLMs accelerate bureaucracy reduction and strengthen democratic resilience? What societal and democratic impacts arise from integrating LLMs into parliamentary decision-making processes? How should parliaments deal with these changes in an optimistic scenario with strong governance, transparency, and training programs; in a pessimistic scenario with uncontrolled adoption; and in a transformative scenario with hybrid human-AI governance? Can LLMs support compromise proposals in negotiations? Or will politicians reject this technology, feeling disconnected from the process? Linguistic differences in text generation also require deeper study.</p>
<p>Much work lies ahead. Responsible, transparent, and human-centered integration will determine whether LLMs become a tool for efficiency and democratic resilience or a source of new risks.</p>
</sec>
</body>
<back>
<sec sec-type="data-availability" id="sec27">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.</p>
</sec>
<sec sec-type="author-contributions" id="sec28">
<title>Author contributions</title>
<p>JvL: Conceptualization, Investigation, Methodology, Visualization, Writing &#x2013; original draft, Writing &#x2013; review &#x0026; editing.</p>
</sec>
<sec sec-type="COI-statement" id="sec29">
<title>Conflict of interest</title>
<p>The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="ai-statement" id="sec30">
<title>Generative AI statement</title>
<p>The author(s) declared that Generative AI was used in the creation of this manuscript. During the preparation of this work the author used DeepL to translate drafts from German to English. ChatGPT 3.5 was used for brainstorming. ChatGPT 4o and CoPilot were used for language editing. After using these services, the author reviewed and edited the machine-generated outputs and takes full responsibility for the content of the publication.</p>
<p>Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.</p>
</sec>
<sec sec-type="disclaimer" id="sec31">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Albrecht</surname> <given-names>S.</given-names></name></person-group> (<year>2023</year>). <source>ChatGPT und andere Computermodelle zur Sprachverarbeitung &#x2013; Grundlagen, Anwendungspotenziale und m&#x00F6;gliche Auswirkungen. TAB-Hintergrundpapier Nr. 26</source>. <publisher-loc>Berlin</publisher-loc>: <publisher-name>B&#x00FC;ro f&#x00FC;r Technikfolgenabsch&#x00E4;tzung beim Deutschen Bundestag</publisher-name>.</mixed-citation></ref>
<ref id="ref2"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Aldoseri</surname> <given-names>A.</given-names></name> <name><surname>Al-Khalifa</surname> <given-names>A. N.</given-names></name> <name><surname>Hamouda</surname> <given-names>A. M. S.</given-names></name></person-group> (<year>2023</year>). <article-title>Re-think data strategy and integration for artificial intelligence - concepts, opportunities and challenges</article-title>. <source>Appl. Sci.</source> <volume>13</volume>:<fpage>7082</fpage>. doi: <pub-id pub-id-type="doi">10.3390/app13127082</pub-id></mixed-citation></ref>
<ref id="ref3"><mixed-citation publication-type="book"><person-group person-group-type="author"><collab id="coll1">Bundesregierung</collab></person-group> (<year>2018</year>). <source>Strategie K&#x00FC;nstliche Intelligenz der Bundesregierung</source>. <publisher-loc>Berlin</publisher-loc>.</mixed-citation></ref>
<ref id="ref4"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Chang</surname> <given-names>P. Y. C.</given-names></name> <name><surname>Pflugfelder</surname> <given-names>B.</given-names></name></person-group> (<year>2023</year>). <source>A guide for large language model - make-or-buy strategies - business and technical insights</source>. M&#x00FC;nchen: <publisher-name>appliedAI Initiative GmbH</publisher-name>.</mixed-citation></ref>
<ref id="ref5"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Chang</surname> <given-names>P. Y. C.</given-names></name> <name><surname>Pflugfelder</surname> <given-names>B.</given-names></name></person-group> (<year>2024</year>). <source>Retrieval-augmented generation realized - Strategic &#x0026; Technical Insights for industrial applications</source>. M&#x00FC;nchen: <publisher-name>appliedAI Initiative GmbH</publisher-name>.</mixed-citation></ref>
<ref id="ref6"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Citino</surname> <given-names>Y. M.</given-names></name></person-group> (<year>2024</year>). <article-title>Leveraging automated technologies for law-making in Italy - generative AI and constitutional challenges</article-title>. <source>Parliament. Aff.</source> <volume>78</volume>:<fpage>gsae040</fpage>. doi: <pub-id pub-id-type="doi">10.1093/pa/gsae040</pub-id></mixed-citation></ref>
<ref id="ref7"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll2">Cohesity</collab></person-group> (<year>2024</year>). Retrieval augmented generation (RAG). Available online at: <ext-link xlink:href="https://www.cohesity.com/de/glossary/retrieval-augmented-generation-rag/" ext-link-type="uri">https://www.cohesity.com/de/glossary/retrieval-augmented-generation-rag/</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref8"><mixed-citation publication-type="book"><person-group person-group-type="author"><collab id="coll3">Committee on Artificial Intelligence</collab></person-group> (<year>2023</year>). <source>Consolidated working draft of the framework convention on artificial intelligence, human rights, democracy and the rule of law</source>. <publisher-loc>Strasbourg</publisher-loc>: <publisher-name>Committee on Artificial Intelligence</publisher-name>.</mixed-citation></ref>
<ref id="ref9"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Cooper</surname> <given-names>G.</given-names></name></person-group> (<year>2023</year>). <article-title>Examining science education in ChatGPT: an exploratory study of generative artificial intelligence</article-title>. <source>J. Sci. Educ. Technol.</source> <volume>32</volume>, <fpage>444</fpage>&#x2013;<lpage>452</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10956-023-10039-y</pub-id></mixed-citation></ref>
<ref id="ref10"><mixed-citation publication-type="book"><person-group person-group-type="author"><collab id="coll4">Council of Europe</collab></person-group> (<year>2021</year>). <source>Artificial intelligence, human rights, democracy, and the rule of law &#x2013; A primer</source>. <publisher-loc>Strasbourg</publisher-loc>: <publisher-name>Council of Europe and The Alan Turing Institute</publisher-name>.</mixed-citation></ref>
<ref id="ref11"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll5">Der Spiegel</collab></person-group>. (<year>2025</year>). <source>Formulierungshilfe bei Regierungsarbeit - Merz sagt, er habe KI f&#x00FC;r Gesetzestexte der Bundesregierung &#x201C;ausprobiert&#x201D;</source>, <publisher-name>Der Spiegel</publisher-name>, <publisher-loc>Hamburg</publisher-loc>. Available online at: <ext-link xlink:href="https://www.spiegel.de/politik/deutschland/friedrich-merz-sagt-er-habe-ki-fuer-gesetzestexte-der-bundesregierung-ausprobiert-a-1611ebfe-92c0-41f0-b1f8-4ae95a2507c1" ext-link-type="uri">https://www.spiegel.de/politik/deutschland/friedrich-merz-sagt-er-habe-ki-fuer-gesetzestexte-der-bundesregierung-ausprobiert-a-1611ebfe-92c0-41f0-b1f8-4ae95a2507c1</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref12"><mixed-citation publication-type="confproc"><person-group person-group-type="author"><name><surname>Di Fede</surname> <given-names>G.</given-names></name> <name><surname>Rocchesso</surname> <given-names>D.</given-names></name> <name><surname>Dow</surname> <given-names>S. P.</given-names></name> <name><surname>Andolina</surname> <given-names>S.</given-names></name></person-group> (<year>2022</year>). <source>The idea machine: LLM-based expansion, rewriting, combination, and suggestion of ideas</source>. In <conf-name>ACM International Conference Proceedings Series</conf-name>. <publisher-name>Association for Computing Machinery</publisher-name>, <publisher-loc>New York</publisher-loc>, pp. <fpage>623</fpage>&#x2013;<lpage>627</lpage>.</mixed-citation></ref>
<ref id="ref13"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Etscheid</surname> <given-names>J.</given-names></name> <name><surname>von Lucke</surname> <given-names>J.</given-names></name> <name><surname>Stroh</surname> <given-names>F.</given-names></name></person-group> (<year>2020</year>). <source>K&#x00FC;nstliche Intelligenz in der &#x00F6;ffentlichen Verwaltung</source>. <publisher-loc>Stuttgart</publisher-loc>: <publisher-name>Digitalakademie@BW &#x0026; Fraunhofer IAO</publisher-name>.</mixed-citation></ref>
<ref id="ref14"><mixed-citation publication-type="book"><person-group person-group-type="author"><collab id="coll6">European Commission</collab></person-group> (<year>2020</year>). <source>White paper on artificial intelligence &#x2013; A European approach to excellence and trust</source>. <publisher-loc>Brussels</publisher-loc>: <publisher-name>European Commission</publisher-name>.</mixed-citation></ref>
<ref id="ref15"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll7">European Union</collab></person-group> (<year>2024</year>). Regulation (EU) 2024/1689 of the European Parliament and of the council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending regulations (EC) no 300/2008, (EU) no 167/2013, (EU) no 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (artificial intelligence act). Available online at: <ext-link xlink:href="https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689" ext-link-type="uri">https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref16"><mixed-citation publication-type="book"><person-group person-group-type="author"><collab id="coll8">Executive Office of the President</collab></person-group> (<year>2019</year>). <source>Maintaining American leadership in artificial intelligence. Executive order 13859 of February 11, 2019. Federal Register Vol. 84, no. 31</source>. <publisher-loc>Washington D.C.</publisher-loc>: <publisher-name>Executive Office of the President</publisher-name>, <fpage>3967</fpage>.</mixed-citation></ref>
<ref id="ref17"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Fitsilis</surname> <given-names>F.</given-names></name></person-group> (<year>2021</year>). <article-title>Artificial intelligence (AI) in parliaments &#x2013; preliminary analysis of the Eduskunta experiment</article-title>. <source>J. Legis. Stud.</source> <volume>27</volume>, <fpage>621</fpage>&#x2013;<lpage>633</lpage>. doi: <pub-id pub-id-type="doi">10.1080/13572334.2021.1976947</pub-id></mixed-citation></ref>
<ref id="ref18"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Fitsilis</surname> <given-names>F.</given-names></name> <name><surname>De Almeida</surname> <given-names>P. G. R.</given-names></name></person-group> (<year>2024</year>). &#x201C;<article-title>Artificial intelligence and its regulation in representative institutions</article-title>&#x201D; in <source>Research handbook on public management and AI</source>. eds. <person-group person-group-type="editor"><name><surname>Charalabidis</surname> <given-names>Y.</given-names></name> <name><surname>Medaglia</surname> <given-names>R.</given-names></name> <name><surname>van Noordt</surname> <given-names>C.</given-names></name></person-group> (Cheltenham: <publisher-name>Edward Elgar</publisher-name>), <fpage>151</fpage>&#x2013;<lpage>166</lpage>.</mixed-citation></ref>
<ref id="ref19"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Fitsilis</surname> <given-names>F.</given-names></name> <name><surname>Mikros</surname> <given-names>G.</given-names></name> <name><surname>Leventis</surname> <given-names>S.</given-names></name></person-group> (<year>2024b</year>). <source>Overview of smart functionalities in drafting legislation in LEOS. Augmented LEOS &#x2013; Final report</source>. <publisher-loc>Brussels</publisher-loc>: <publisher-name>European Commission</publisher-name>.</mixed-citation></ref>
<ref id="ref20"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Fitsilis</surname> <given-names>F.</given-names></name> <name><surname>von Lucke</surname> <given-names>J.</given-names></name></person-group> (<year>2023</year>). <article-title>Beyond contemporary parliamentary practice - unfolding the institutional potential of artificial intelligence</article-title>. <source>Parliament. J. Parliaments Commonwealth</source> <volume>104</volume>, <fpage>58</fpage>&#x2013;<lpage>59</lpage>. Available online at: <ext-link xlink:href="https://www.cpahq.org/media/nljdjbr0/parl2023iss1finalonlinesinglereduced.pdf" ext-link-type="uri">https://www.cpahq.org/media/nljdjbr0/parl2023iss1finalonlinesinglereduced.pdf</ext-link></mixed-citation></ref>
<ref id="ref21"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Fitsilis</surname> <given-names>F.</given-names></name> <name><surname>von Lucke</surname> <given-names>J.</given-names></name> <name><surname>De Vrieze</surname> <given-names>F.</given-names></name></person-group> (Eds.) (<year>2024a</year>). <source>Guidelines for AI in parliaments</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Westminster Foundation for Democracy</publisher-name>.</mixed-citation></ref>
<ref id="ref22"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Fitsilis</surname> <given-names>F.</given-names></name> <name><surname>von Lucke</surname> <given-names>J.</given-names></name> <name><surname>Frank</surname> <given-names>S.</given-names></name></person-group> (<year>2023a</year>). <article-title>A comprehensive research workshop on artificial intelligence in parliaments</article-title>. <source>Int. J. Parl. Stud.</source> <volume>3</volume>, <fpage>316</fpage>&#x2013;<lpage>324</lpage>. doi: <pub-id pub-id-type="doi">10.1163/26668912-bja10074</pub-id></mixed-citation></ref>
<ref id="ref23"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Fitsilis</surname> <given-names>F.</given-names></name> <name><surname>von Lucke</surname> <given-names>J.</given-names></name> <name><surname>Mikros</surname> <given-names>G.</given-names></name></person-group> (<year>2026</year>). <source>On the strategic integration of artificial intelligence in parliaments: A five-point framework</source>. IRIS 2026 proceedings. <publisher-loc>Bern</publisher-loc>: <publisher-name>Editions Weblaw</publisher-name>. In print.</mixed-citation></ref>
<ref id="ref24"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Fitsilis</surname> <given-names>F.</given-names></name> <name><surname>von Lucke</surname> <given-names>J.</given-names></name> <name><surname>Mikros</surname> <given-names>G.</given-names></name> <name><surname>Ruckert</surname> <given-names>J.</given-names></name> <name><surname>de Alberto Oliveira Lima</surname> <given-names>J.</given-names></name> <name><surname>Hershowitz</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2023b</year>) <source>Guidelines on the introduction and use of artificial intelligence in the parliamentary workspace. Version 1</source>. London: Figshare. doi: <pub-id pub-id-type="doi">10.6084/m9.figshare.22687414</pub-id></mixed-citation></ref>
<ref id="ref25"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Floridi</surname> <given-names>L.</given-names></name> <name><surname>Chiriatti</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>GPT-3: its nature, scope, limits, and consequences</article-title>. <source>Minds &#x0026; Mach.</source> <volume>30</volume>, <fpage>681</fpage>&#x2013;<lpage>694</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s11023-020-09548-1</pub-id></mixed-citation></ref>
<ref id="ref26"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Geertsema</surname> <given-names>P. G.</given-names></name> <name><surname>Bifet</surname> <given-names>A.</given-names></name> <name><surname>Green</surname> <given-names>R.</given-names></name></person-group> (<year>2023</year>). <source>ChatGPT and large language models: What are the implications for policy makers?</source></mixed-citation></ref>
<ref id="ref27"><mixed-citation publication-type="book"><person-group person-group-type="author"><collab id="coll9">High-Level Expert Group on Artificial Intelligence</collab></person-group> (<year>2019</year>). <source>Ethics guidelines for trustworthy AI</source>. <publisher-loc>Brussels</publisher-loc>: <publisher-name>High-Level Expert Group on Artificial Intelligence</publisher-name>.</mixed-citation></ref>
<ref id="ref28"><mixed-citation publication-type="book"><person-group person-group-type="author"><collab id="coll10">His Majesty&#x2019;s Government</collab></person-group> (<year>2018</year>). <source>Industrial strategy: Artificial intelligence sector deal</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Crown Copyright</publisher-name>.</mixed-citation></ref>
<ref id="ref29"><mixed-citation publication-type="book"><person-group person-group-type="author"><collab id="coll11">House of Lords</collab></person-group> (<year>2024</year>). <source>Large language models and generative AI</source>. <publisher-loc>London</publisher-loc>: <publisher-name>HL Paper 54</publisher-name>.</mixed-citation></ref>
<ref id="ref30"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll12">Hypernetica</collab></person-group> (<year>2023</year>). <source>Software Services for Parliament</source>. Athens and Friedrichshafen: <publisher-name>Zeppelin University</publisher-name>. Available online at: <ext-link xlink:href="https://www.zu.de/institute/togi/assets/pdf/ai-parliament-2023/SL-230703-Hypernetica-Chatbot-Presentation-2023-Q3.pdf" ext-link-type="uri">https://www.zu.de/institute/togi/assets/pdf/ai-parliament-2023/SL-230703-Hypernetica-Chatbot-Presentation-2023-Q3.pdf</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref31"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll13">Inter-Parliamentary Union</collab></person-group> (<year>2024a</year>). Using generative AI in parliaments. Available online at: <ext-link xlink:href="https://www.ipu.org/file/19126/download" ext-link-type="uri">https://www.ipu.org/file/19126/download</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref32"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll14">Inter-Parliamentary Union</collab></person-group> (<year>2024b</year>). The impact of artificial intelligence on democracy, human rights and the rule of law, resolution by the 149th IPU assembly. Available online at: <ext-link xlink:href="https://www.ipu.org/file/20061/download" ext-link-type="uri">https://www.ipu.org/file/20061/download</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref33"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll15">Inter-Parliamentary Union</collab></person-group> (<year>2024c</year>). Guidelines for AI in parliaments. Available online at: <ext-link xlink:href="https://www.ipu.org/file/20632/download" ext-link-type="uri">https://www.ipu.org/file/20632/download</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref34"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll16">Inter-Parliamentary Union</collab></person-group> (<year>2024d</year>). Use cases for AI in parliaments. Available online at: <ext-link xlink:href="https://www.ipu.org/file/20635/download" ext-link-type="uri">https://www.ipu.org/file/20635/download</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref35"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Jobin</surname> <given-names>A.</given-names></name> <name><surname>Ienca</surname> <given-names>M.</given-names></name> <name><surname>Vayena</surname> <given-names>E.</given-names></name></person-group> (<year>2019</year>). <article-title>The global landscape of AI ethics guidelines</article-title>. <source>Nat. Mach. Intell.</source> <volume>1</volume>, <fpage>389</fpage>&#x2013;<lpage>399</lpage>. doi: <pub-id pub-id-type="doi">10.1038/s42256-019-0088-2</pub-id></mixed-citation></ref>
<ref id="ref36"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Jungherr</surname> <given-names>A.</given-names></name></person-group> (<year>2023</year>). <article-title>Artificial intelligence and democracy: a conceptual framework</article-title>. <source>Soc. Media Soc.</source> <volume>9</volume>, <fpage>1</fpage>&#x2013;<lpage>14</lpage>. doi: <pub-id pub-id-type="doi">10.1177/20563051231186353</pub-id>, <pub-id pub-id-type="pmid">41405032</pub-id></mixed-citation></ref>
<ref id="ref37"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Jurafsky</surname> <given-names>D.</given-names></name> <name><surname>Martin</surname> <given-names>J. H.</given-names></name></person-group> (<year>2021</year>). <source>N-gram language models - speech and language processing</source>. <edition>3rd</edition> Edn. <publisher-loc>Stanford</publisher-loc>: <publisher-name>Stanford University</publisher-name>.</mixed-citation></ref>
<ref id="ref38"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Kamprath</surname> <given-names>M.</given-names></name></person-group> (<year>2025</year>). <source>The AI cloverleaf model &#x2013; Mapping AI use cases in parliaments</source>. <publisher-loc>Geneva</publisher-loc>: <publisher-name>Inter-Parliamentary Union</publisher-name>.</mixed-citation></ref>
<ref id="ref39"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Kaplan</surname> <given-names>R. S.</given-names></name> <name><surname>Norton</surname> <given-names>D. P.</given-names></name></person-group> (<year>2008</year>). &#x201C;<article-title>Identifying strengths, weaknesses, opportunities, and threats (SWOT)</article-title>&#x201D; in <source>The execution premium - linking strategy to operations for competitive advantage</source>. eds. <person-group person-group-type="editor"><name><surname>Kaplan</surname> <given-names>R. S.</given-names></name> <name><surname>Norton</surname> <given-names>D. P.</given-names></name></person-group> (<publisher-loc>Boston</publisher-loc>: <publisher-name>Harvard Business Press</publisher-name>), <fpage>49</fpage>&#x2013;<lpage>53</lpage>.</mixed-citation></ref>
<ref id="ref40"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Kelly</surname> <given-names>M.</given-names></name></person-group> (<year>2023</year>). <source>House restricts congressional use of ChatGPT - no other chatbots are currently authorized for use in the House</source>. <publisher-loc>Washington DC</publisher-loc>: <publisher-name>The Verge, Vox Media LLC</publisher-name>. Available online at: <ext-link xlink:href="https://www.theverge.com/2023/6/26/23774286/chatgpt-sam-altman-congress-house-ai-chatbots" ext-link-type="uri">https://www.theverge.com/2023/6/26/23774286/chatgpt-sam-altman-congress-house-ai-chatbots</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref41"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Kipker</surname> <given-names>D.</given-names></name></person-group> (<year>2025</year>). Schatten-KI macht Deutschland angreifbar, Tagesspiegel Background, Berlin. Available online at: <ext-link xlink:href="https://background.tagesspiegel.de/it-und-cybersicherheit/briefing/schatten-ki-macht-deutschland-angreifbar" ext-link-type="uri">https://background.tagesspiegel.de/it-und-cybersicherheit/briefing/schatten-ki-macht-deutschland-angreifbar</ext-link> (Accessed November 14, 2025)</mixed-citation></ref>
<ref id="ref42"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Koryzis</surname> <given-names>D.</given-names></name> <name><surname>Dalas</surname> <given-names>A.</given-names></name> <name><surname>Spiliotopoulos</surname> <given-names>D.</given-names></name> <name><surname>Fitsilis</surname> <given-names>F.</given-names></name></person-group> (<year>2021</year>). <article-title>Parltech: transformation framework for the digital parliament</article-title>. <source>Big Data Cogn. Comput.</source> <volume>5</volume>:<fpage>15</fpage>. doi: <pub-id pub-id-type="doi">10.3390/bdcc5010015</pub-id></mixed-citation></ref>
<ref id="ref43"><mixed-citation publication-type="confproc"><person-group person-group-type="author"><name><surname>Mamalis</surname> <given-names>M.E.</given-names></name> <name><surname>Kalampokis</surname> <given-names>E.</given-names></name> <name><surname>Fitsilis</surname> <given-names>F.</given-names></name> <name><surname>Theodorakopoulos</surname> <given-names>G.</given-names></name> <name><surname>Tarabanis</surname> <given-names>K</given-names></name></person-group> (<year>2024</year>). <article-title>A large language model based legal assistant for governance applications</article-title>. In: <conf-name>Electronic Government: 23rd IFIP WG 8.5 International Conference Proceedings, Lecture Notes in Computer Science</conf-name>, <volume>14841</volume>, pp. <fpage>286</fpage>&#x2013;<lpage>301</lpage>.</mixed-citation></ref>
<ref id="ref44"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mazzone</surname> <given-names>M.</given-names></name> <name><surname>Elgammal</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>Art, creativity, and the potential of artificial intelligence</article-title>. <source>Art</source> <volume>8</volume>:<fpage>26</fpage>. doi: <pub-id pub-id-type="doi">10.3390/arts8010026</pub-id></mixed-citation></ref>
<ref id="ref45"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>McCarthy</surname> <given-names>J.</given-names></name> <name><surname>Minsky</surname> <given-names>M. L.</given-names></name> <name><surname>Rochester</surname> <given-names>N.</given-names></name> <name><surname>Shannon</surname> <given-names>C. E.</given-names></name></person-group> (<year>1956</year>). <article-title>A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955</article-title>. <source>AI Mag.</source> <volume>27</volume>, <fpage>12</fpage>&#x2013;<lpage>14</lpage>.</mixed-citation></ref>
<ref id="ref46"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Melides</surname> <given-names>A.</given-names></name></person-group> (<year>2023</year>). <source>Transforming parliaments with AI: The role of EDIHs and citizen participation</source>. Athens and Friedrichshafen: <publisher-name>Zeppelin University</publisher-name>. Available online at: <ext-link xlink:href="https://www.zu.de/institute/togi/assets/pdf/ai-parliament-2023/AM-230704-GR-digiGOV-innoHUB-AI-in-Parliaments.pptx.pdf" ext-link-type="uri">https://www.zu.de/institute/togi/assets/pdf/ai-parliament-2023/AM-230704-GR-digiGOV-innoHUB-AI-in-Parliaments.pptx.pdf</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref47"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll17">Microsoft</collab></person-group> (<year>2024</year>). <source>K&#x00FC;nstliche Intelligenz in der &#x00F6;ffentlichen Verwaltung &#x2013; Eine nutzenfokussierte Orientierung</source>. Whitepaper. <publisher-loc>Munich</publisher-loc>: <publisher-name>Microsoft Deutschland</publisher-name>.</mixed-citation></ref>
<ref id="ref48"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mittelstadt</surname> <given-names>B.</given-names></name></person-group> (<year>2019</year>). <article-title>Principles alone cannot guarantee ethical AI</article-title>. <source>Nat. Mach. Intell.</source> <volume>1</volume>, <fpage>501</fpage>&#x2013;<lpage>507</lpage>. doi: <pub-id pub-id-type="doi">10.1038/s42256-019-0114-4</pub-id>, <pub-id pub-id-type="pmid">41407778</pub-id></mixed-citation></ref>
<ref id="ref49"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mittelstadt</surname> <given-names>B.</given-names></name> <name><surname>Wachter</surname> <given-names>S.</given-names></name> <name><surname>Russell</surname> <given-names>C.</given-names></name></person-group> (<year>2023</year>). <article-title>To protect science, we must use LLMs as zero-shot translators</article-title>. <source>Nat. Hum. Behav.</source> <volume>7</volume>, <fpage>1830</fpage>&#x2013;<lpage>1832</lpage>. doi: <pub-id pub-id-type="doi">10.1038/s41562-023-01744-0</pub-id>, <pub-id pub-id-type="pmid">37985912</pub-id></mixed-citation></ref>
<ref id="ref50"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Moschopoulos</surname> <given-names>C.</given-names></name></person-group> (<year>2023</year>). <source>AI applications in European Parliament administration - current status</source>. Strasbourg and Friedrichshafen: <publisher-name>Zeppelin University</publisher-name>. Available online at: <ext-link xlink:href="https://www.zu.de/institute/togi/assets/pdf/ai-parliament-2023/CM-230704-AI-applications-in-EP.03.07.2023.pdf" ext-link-type="uri">https://www.zu.de/institute/togi/assets/pdf/ai-parliament-2023/CM-230704-AI-applications-in-EP.03.07.2023.pdf</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref51"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll18">National Cyber Security Centre</collab></person-group> (<year>2023</year>). Guidelines for secure AI system development. Available online at: <ext-link xlink:href="https://www.ncsc.gov.uk/files/Guidelines-for-secure-AI-system-development.pdf?sc_src=email_3765039&#x0026;sc_lid=361294739&#x0026;sc_uid=ynhmtE2zh2&#x0026;sc_llid=116" ext-link-type="uri">https://www.ncsc.gov.uk/files/Guidelines-for-secure-AI-system-development.pdf?sc_src=email_3765039&#x0026;sc_lid=361294739&#x0026;sc_uid=ynhmtE2zh2&#x0026;sc_llid=116</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref52"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Nguyen</surname> <given-names>A.</given-names></name> <name><surname>Ngo</surname> <given-names>H. N.</given-names></name> <name><surname>Hong</surname> <given-names>Y.</given-names></name> <name><surname>Dang</surname> <given-names>B.</given-names></name> <name><surname>Nguyen</surname> <given-names>B.-P. T.</given-names></name></person-group> (<year>2023</year>). <article-title>Ethical principles for artificial intelligence in education</article-title>. <source>Educ. Inf. Technol.</source> <volume>28</volume>, <fpage>4221</fpage>&#x2013;<lpage>4241</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10639-022-11316-w</pub-id>, <pub-id pub-id-type="pmid">36254344</pub-id></mixed-citation></ref>
<ref id="ref53"><mixed-citation publication-type="confproc"><person-group person-group-type="author"><name><surname>Pedreschi</surname> <given-names>D.</given-names></name> <name><surname>Giannotti</surname> <given-names>F.</given-names></name> <name><surname>Guidotti</surname> <given-names>R.</given-names></name> <name><surname>Monreale</surname> <given-names>A.</given-names></name> <name><surname>Ruggieri</surname> <given-names>S.</given-names></name> <name><surname>Turini</surname> <given-names>F.</given-names></name></person-group> (<year>2019</year>). <article-title>Meaningful explanations of black box AI decision systems</article-title>. <conf-name>Proceedings of the AAAI Conference on Artificial Intelligence</conf-name>, <volume>33</volume>: pp. <fpage>9780</fpage>&#x2013;<lpage>9784</lpage>.</mixed-citation></ref>
<ref id="ref54"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Peres</surname> <given-names>R.</given-names></name> <name><surname>Schreier</surname> <given-names>M.</given-names></name> <name><surname>Schweidel</surname> <given-names>D.</given-names></name> <name><surname>Sorescu</surname> <given-names>A.</given-names></name></person-group> (<year>2023</year>). <article-title>On ChatGPT and beyond: how generative artificial intelligence may affect research, teaching, and practice</article-title>. <source>Int. J. Res. Mark.</source> <volume>40</volume>, <fpage>269</fpage>&#x2013;<lpage>275</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.ijresmar.2023.03.001</pub-id></mixed-citation></ref>
<ref id="ref55"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Perlman</surname> <given-names>A.</given-names></name></person-group> (<year>2024</year>). <source>The legal ethics of generative AI</source>. Suffolk Law Review. <publisher-loc>Boston</publisher-loc>: <publisher-name>Suffolk University</publisher-name>, Legal Studies Research Paper Series, Research Paper 24&#x2013;17, pp. <fpage>1</fpage>&#x2013;<lpage>19</lpage>. Available online at: <ext-link xlink:href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4735389" ext-link-type="uri">https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4735389</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref56"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll19">Popvox Foundation</collab></person-group> (<year>2024</year>). The modern intern - 3 steps for working in congress with GenAI. Available online at: <ext-link xlink:href="https://static1.squarespace.com/static/60450e1de0fb2a6f5771b1be/t/65ef47b9d15f07697a42eaab/1710180282939/The_Modern_Intern.pdf" ext-link-type="uri">https://static1.squarespace.com/static/60450e1de0fb2a6f5771b1be/t/65ef47b9d15f07697a42eaab/1710180282939/The_Modern_Intern.pdf</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref57"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Rode</surname> <given-names>T.</given-names></name></person-group> (<year>2025</year>). <source>Sichere KI f&#x00FC;r Kommunen - Das Ampelkonzept f&#x00FC;r eine souver&#x00E4;ne KI-Zukunft - Offen, lokal &#x0026; flexibel</source>. <publisher-loc>Nettetal</publisher-loc>: <publisher-name>Stadt Nettetal</publisher-name>.</mixed-citation></ref>
<ref id="ref58"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Sadeghi</surname> <given-names>M.</given-names></name> <name><surname>Blachez</surname> <given-names>I.</given-names></name></person-group> (<year>2025</year>). <source>A well-funded Moscow-based global &#x2018;news&#x2019; network has infected Western artificial intelligence tools worldwide with Russian propaganda</source>. New York: <publisher-name>Newsguard</publisher-name>. Available online at: <ext-link xlink:href="https://www.newsguardrealitycheck.com/p/a-well-funded-moscow-based-global" ext-link-type="uri">https://www.newsguardrealitycheck.com/p/a-well-funded-moscow-based-global</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref59"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Salvaggio</surname> <given-names>E.</given-names></name></person-group> (<year>2025</year>). Anatomy of an AI coup, Tech Policy Press. Available online at: <ext-link xlink:href="https://www.techpolicy.press/anatomy-of-an-ai-coup" ext-link-type="uri">https://www.techpolicy.press/anatomy-of-an-ai-coup</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref60"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Schmid</surname> <given-names>U.</given-names></name> <name><surname>Slany</surname> <given-names>E.</given-names></name> <name><surname>Scheele</surname> <given-names>S.</given-names></name></person-group> (<year>2023</year>). Understanding the why and how of trustworthy AI, Fraunhofer IIS, Fraunhofer Group. Available online at: <ext-link xlink:href="https://websites.fraunhofer.de/smart-sensing-insights/trustworthy-ai/" ext-link-type="uri">https://websites.fraunhofer.de/smart-sensing-insights/trustworthy-ai/</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref61"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sengar</surname> <given-names>S. S.</given-names></name> <name><surname>Hasan</surname> <given-names>A. B.</given-names></name> <name><surname>Kumar</surname> <given-names>S.</given-names></name> <name><surname>Carroll</surname> <given-names>F.</given-names></name></person-group> (<year>2025</year>). <article-title>Generative artificial intelligence - a systematic review and applications</article-title>. <source>Multimed. Tools Appl.</source> <volume>84</volume>, <fpage>23661</fpage>&#x2013;<lpage>23700</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s11042-024-20016-1</pub-id></mixed-citation></ref>
<ref id="ref62"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Shumailov</surname> <given-names>I.</given-names></name> <name><surname>Shumaylov</surname> <given-names>Z.</given-names></name> <name><surname>Zhao</surname> <given-names>Y.</given-names></name> <name><surname>Gal</surname> <given-names>Y.</given-names></name> <name><surname>Papernot</surname> <given-names>N.</given-names></name> <name><surname>Anderson</surname> <given-names>R.</given-names></name></person-group> (<year>2023</year>). <article-title>The curse of recursion: training on generated data makes models forget</article-title>. <source>ArXiv</source>. doi: <pub-id pub-id-type="doi">10.48550/arXiv.2305.17493</pub-id></mixed-citation></ref>
<ref id="ref63"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll20">SPD Bundestagsfraktion</collab></person-group> (<year>2024</year>). Leitlinien zur Nutzung von generativer KI in der parlamentarischen Arbeit, Positionspapier der SPD Bundestagsfraktion, Berlin. Available online at: <ext-link xlink:href="https://www.spdfraktion.de/system/files/documents/position-ki-leitlinien.pdf" ext-link-type="uri">https://www.spdfraktion.de/system/files/documents/position-ki-leitlinien.pdf</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref64"><mixed-citation publication-type="book"><person-group person-group-type="author"><collab id="coll21">Stanford University</collab></person-group> (<year>2021</year>). <source>Artificial intelligence index report 2021</source>. <publisher-loc>Stanford</publisher-loc>: <publisher-name>Stanford University</publisher-name>.</mixed-citation></ref>
<ref id="ref65"><mixed-citation publication-type="confproc"><person-group person-group-type="author"><name><surname>Vaswani</surname> <given-names>A.</given-names></name> <name><surname>Shazeer</surname> <given-names>N.</given-names></name> <name><surname>Parmar</surname> <given-names>N.</given-names></name> <name><surname>Uszkoreit</surname> <given-names>J.</given-names></name> <name><surname>Jones</surname> <given-names>L.</given-names></name> <name><surname>Gomez</surname> <given-names>A. N.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Attention is all you need</article-title>. <conf-name>31st Conference on Neural Information Processing Systems (NIPS 2017)</conf-name>, <publisher-loc>Long Beach, CA, USA</publisher-loc>. doi: <pub-id pub-id-type="doi">10.48550/arXiv.1706.03762</pub-id></mixed-citation></ref>
<ref id="ref66"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>von Lucke</surname> <given-names>J.</given-names></name></person-group> (<year>2024</year>). <article-title>KI-Technologien und ihre Auswirkungen auf die Verwaltungsarbeit</article-title>. <source>PDV News</source> <volume>2024</volume>, <fpage>6</fpage>&#x2013;<lpage>11</lpage>.</mixed-citation></ref>
<ref id="ref67"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>von Lucke</surname> <given-names>J.</given-names></name> <name><surname>Etscheid</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>How artificial intelligence approaches could change public administration and justice</article-title>. <source>Jusletter IT</source>. doi: <pub-id pub-id-type="doi">10.38023/fafb2543-2746-4061-b898-5ee71c91cef8</pub-id></mixed-citation></ref>
<ref id="ref68"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>von Lucke</surname> <given-names>J.</given-names></name> <name><surname>Fitsilis</surname> <given-names>F.</given-names></name></person-group> (<year>2023</year>). &#x201C;<article-title>Using artificial intelligence in parliament - the Hellenic case</article-title>&#x201D; in <source>Electronic government. EGOV 2023. Lecture notes in computer science</source>. ed. <person-group person-group-type="editor"><name><surname>Lindgren</surname> <given-names>I.</given-names></name></person-group>, vol. <volume>14130</volume> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>174</fpage>&#x2013;<lpage>191</lpage>.</mixed-citation></ref>
<ref id="ref69"><mixed-citation publication-type="confproc"><person-group person-group-type="author"><name><surname>von Lucke</surname> <given-names>J.</given-names></name> <name><surname>Fitsilis</surname> <given-names>F.</given-names></name> <name><surname>Etscheid</surname> <given-names>J.</given-names></name></person-group> (<year>2023</year>). <article-title>Research and development agenda for the use of AI in parliaments</article-title>, in <conf-name>DGO '23: Proceedings of the 24th Annual International Conference on Digital Government Research</conf-name>, eds. <person-group person-group-type="editor"><name><surname>Duenas Cid</surname> <given-names>D.</given-names></name> <name><surname>Sabatini</surname> <given-names>N.</given-names></name> <name><surname>Hagen</surname> <given-names>L.</given-names></name> <name><surname>Liao</surname> <given-names>H.-C.</given-names></name></person-group> (<publisher-loc>New York</publisher-loc>: <publisher-name>Association for Computing Machinery (ACM)</publisher-name>), <fpage>423</fpage>&#x2013;<lpage>433</lpage>.</mixed-citation></ref>
<ref id="ref70"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>von Lucke</surname> <given-names>J.</given-names></name> <name><surname>Frank</surname> <given-names>S.</given-names></name></person-group> (<year>2025</year>). <article-title>A few thoughts on the use of ChatGPT, GPT 3.5, GPT-4 and LLMs in parliaments - reflecting on the results of experimenting with LLMs in the parliamentarian context</article-title>. <source>Digital Govern. Res. Pract.</source> <volume>6</volume>, <fpage>1</fpage>&#x2013;<lpage>21</lpage>. doi: <pub-id pub-id-type="doi">10.1145/3665333</pub-id> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref71"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Weihrich</surname> <given-names>H.</given-names></name></person-group> (<year>1982</year>). <article-title>The TOWS matrix - a tool for situational analysis</article-title>. <source>Long Range Plan.</source> <volume>15</volume>, <fpage>54</fpage>&#x2013;<lpage>66</lpage>. doi: <pub-id pub-id-type="doi">10.1016/0024-6301(82)90120-0</pub-id></mixed-citation></ref>
<ref id="ref72"><mixed-citation publication-type="book"><person-group person-group-type="author"><collab id="coll22">White House</collab></person-group> (<year>2022</year>). <source>Blueprint for an AI bill of rights</source>. <publisher-loc>Washington D.C.</publisher-loc>: <publisher-name>White House</publisher-name>.</mixed-citation></ref>
<ref id="ref73"><mixed-citation publication-type="book"><person-group person-group-type="author"><collab id="coll23">White House</collab></person-group> (<year>2024</year>). <source>Advancing governance, innovation, and risk Management for Agency use of artificial intelligence, memorandum M-24-10 for the heads of executive departments and agencies</source>. <publisher-loc>Washington D.C.</publisher-loc>: <publisher-name>White House</publisher-name>.</mixed-citation></ref>
<ref id="ref74"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zelikman</surname> <given-names>E.</given-names></name> <name><surname>Harik</surname> <given-names>G.</given-names></name> <name><surname>Shao</surname> <given-names>Y.</given-names></name> <name><surname>Jayasiri</surname> <given-names>V.</given-names></name> <name><surname>Haber</surname> <given-names>N.</given-names></name> <name><surname>Goodman</surname> <given-names>N. D.</given-names></name></person-group> (<year>2024</year>). <article-title>Quiet-STaR: language models can teach themselves to think before speaking</article-title>. <source>ArXiv</source>. doi: <pub-id pub-id-type="doi">10.48550/arXiv.2403.09629</pub-id></mixed-citation></ref>
<ref id="ref75"><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll24">Zeppelin University</collab></person-group> (<year>2023</year>). International research workshop AI in parliaments, July 3&#x2013;4, 2023. Available online at: <ext-link xlink:href="https://www.zu.de/institute/togi/ai-parliament-2023.php" ext-link-type="uri">https://www.zu.de/institute/togi/ai-parliament-2023.php</ext-link> (Accessed November 13, 2025)</mixed-citation></ref>
<ref id="ref76"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>S.</given-names></name> <name><surname>Roller</surname> <given-names>S.</given-names></name> <name><surname>Goyal</surname> <given-names>N.</given-names></name> <name><surname>Artetxe</surname> <given-names>M.</given-names></name> <name><surname>Chen</surname> <given-names>M.</given-names></name> <name><surname>Chen</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>OPT: open pre-trained transformer language models</article-title>. <source>ArXiv</source>. doi: <pub-id pub-id-type="doi">10.48550/arXiv.2205.01068</pub-id></mixed-citation></ref>
</ref-list>
<fn-group>
<fn fn-type="custom" custom-type="edited-by" id="fn0004">
<p>Edited by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1351962/overview">Noella Edelmann</ext-link>, University for Continuing Education Krems, Austria</p>
</fn>
<fn fn-type="custom" custom-type="reviewed-by" id="fn0005">
<p>Reviewed by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1376257/overview">Sandeep Singh Sengar</ext-link>, Cardiff Metropolitan University, United Kingdom</p>
<p><ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/3254616/overview">Zhamilia Klycheva</ext-link>, TransResearch Consortium, United States</p>
</fn>
</fn-group>
<fn-group>
<fn id="fn0001">
<label>1</label>
<p>
<ext-link xlink:href="https://www.parla.berlin" ext-link-type="uri">https://www.parla.berlin</ext-link>
</p>
</fn>
<fn id="fn0002">
<label>2</label>
<p>
<ext-link xlink:href="https://www.timetoact-group.at/en/insights/llm-benchmarks" ext-link-type="uri">https://www.timetoact-group.at/en/insights/llm-benchmarks</ext-link>
</p>
</fn>
<fn id="fn0003">
<label>3</label>
<p>
<ext-link xlink:href="https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard" ext-link-type="uri">https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard</ext-link>
</p>
</fn>
</fn-group>
</back>
</article>