<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="brief-report">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Res. Metr. Anal.</journal-id>
<journal-title>Frontiers in Research Metrics and Analytics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Res. Metr. Anal.</abbrev-journal-title>
<issn pub-type="epub">2504-0537</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/frma.2024.1486600</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Metrics and Analytics</subject>
<subj-group>
<subject>Perspective</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Navigating algorithm bias in AI: ensuring fairness and trust in Africa</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Pasipamire</surname> <given-names>Notice</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2826693/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/data-curation/"/>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/funding-acquisition/"/>
<role content-type="https://credit.niso.org/contributor-roles/investigation/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/project-administration/"/>
<role content-type="https://credit.niso.org/contributor-roles/resources/"/>
<role content-type="https://credit.niso.org/contributor-roles/software/"/>
<role content-type="https://credit.niso.org/contributor-roles/supervision/"/>
<role content-type="https://credit.niso.org/contributor-roles/validation/"/>
<role content-type="https://credit.niso.org/contributor-roles/visualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Muroyiwa</surname> <given-names>Abton</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2871696/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/investigation/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Library and Information Science, National University of Science and Technology</institution>, <addr-line>Bulawayo</addr-line>, <country>Zimbabwe</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Languages and Arts, Nyatsime College</institution>, <addr-line>Chitungwiza</addr-line>, <country>Zimbabwe</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Patrick Ngulube, University of South Africa, South Africa</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Amogelang Molaudzi, University of Limpopo, South Africa</p>
<p>Mashilo Modiba, University of South Africa, South Africa</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Notice Pasipamire <email>npasipamire&#x00040;gmail.com</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>24</day>
<month>10</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>9</volume>
<elocation-id>1486600</elocation-id>
<history>
<date date-type="received">
<day>26</day>
<month>08</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>14</day>
<month>10</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2024 Pasipamire and Muroyiwa.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Pasipamire and Muroyiwa</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>This article presents a perspective on the impact of algorithmic bias on information fairness and trust in artificial intelligence (AI) systems within the African context. The authors&#x00027; personal experiences and observations, combined with relevant literature, formed the basis of this article. The authors demonstrate why algorithmic bias poses a substantial challenge in Africa, particularly regarding fairness and the integrity of AI applications. This perspective underscores the urgent need to address biases that compromise the fairness of information dissemination and undermine public trust. The authors advocate for the implementation of strategies that promote inclusivity, enhance cultural sensitivity, and actively engage local communities in the development of AI systems. By prioritizing ethical practices and transparency, stakeholders can mitigate the risks associated with bias, thereby fostering trust and ensuring equitable access to technology. Additionally, the article explores the potential consequences of inaction, including exacerbated social disparities, diminished confidence in public institutions, and economic stagnation. Ultimately, this work argues for a collaborative approach to AI that positions Africa as a leader in responsible development, ensuring that technology serves as a catalyst for sustainable development and social justice.</p></abstract>
<kwd-group>
<kwd>Africa</kwd>
<kwd>algorithmic bias</kwd>
<kwd>Artificial Intelligence (AI)</kwd>
<kwd>AI trust</kwd>
<kwd>information fairness</kwd>
</kwd-group>
<counts>
<fig-count count="0"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="41"/>
<page-count count="7"/>
<word-count count="5783"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Scholarly Communication</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>Algorithm bias significantly impacts information fairness and trust, which are vital for the successful acceptance of AI technologies (Deloitte, <xref ref-type="bibr" rid="B13">2024</xref>). Recent years have seen a significant focus on bias and fairness in AI (Shams et al., <xref ref-type="bibr" rid="B36">2023</xref>) as generative AI and large language models process vast amounts of data which raises concerns about privacy, discrimination, data security, and copyright infringement (Deloitte, <xref ref-type="bibr" rid="B13">2024</xref>). In Africa, the deployment of AI systems has sparked critical discussions about algorithmic biases and their implications for information fairness and ethics. Key concerns include the lack of diverse datasets, implicit biases in algorithms, insufficient transparency in AI systems, and limited access to technology. Additionally, issues surrounding data privacy, ethical considerations in AI deployment, community engagement, capacity building, partnerships, and regulatory frameworks are paramount (Buolamwini and Gebru, <xref ref-type="bibr" rid="B10">2018</xref>; Obermeyer et al., <xref ref-type="bibr" rid="B34">2019</xref>; Jobin and Ienca, <xref ref-type="bibr" rid="B25">2019</xref>).</p>
<p>Several definitions of algorithmic bias exist, but they all point to unfair outcomes. AI bias occurs when an algorithm&#x00027;s output becomes prejudiced due to false assumptions based on the data fed into it (Silberg and Manyika, <xref ref-type="bibr" rid="B38">2019</xref>). Ferrara (<xref ref-type="bibr" rid="B18">2023</xref>) defines bias as a systematic error in decision-making processes that leads to unfair outcomes. Ntoutsi et al. (<xref ref-type="bibr" rid="B33">2020</xref>) defined algorithmic bias as the inclination or prejudice of a decision made by an AI system for or against one person or group, especially in a way considered to be unfair. Algorithmic bias manifests as systematic and unfair discrimination when algorithms are employed to make decisions or disseminate information. This bias can take various forms, such as racial or gender bias, with profound consequences for individuals and communities. In the context of AI, bias can stem from diverse sources, including data collection, algorithm design, policy decisions, and human interpretation (Ferrara, <xref ref-type="bibr" rid="B18">2023</xref>). Bias in AI can lead to unfair and incorrect decisions, undermining both fairness and trust. Bias mitigation is a crucial aspect of developing fair AI models, aimed at reducing or eliminating biases that can skew outcomes and perpetuate discrimination (Alvarez et al., <xref ref-type="bibr" rid="B6">2024</xref>).</p>
<p>Without careful consideration of fairness and the implementation of safeguards, AI tools risk becoming instruments of discrimination, perpetuating existing injustices (Tibebu, <xref ref-type="bibr" rid="B40">2024</xref>). Fairness in AI entails the absence of bias or discrimination, ensuring no favoritism is shown toward individuals or groups based on protected characteristics such as race, gender, age, or religion (Ananny and Crawford, <xref ref-type="bibr" rid="B7">2016</xref>; Dwork et al., <xref ref-type="bibr" rid="B15">2012</xref>). The literature proposes several types of fairness, including group fairness, individual fairness, and counterfactual fairness (Ferrara, <xref ref-type="bibr" rid="B18">2023</xref>). AI and trust share an inseparable relationship (Fancher et al., <xref ref-type="bibr" rid="B17">2024</xref>). Equally, trust is essential for successful human-agent interactions and significantly influences the future adoption of AI systems (Omrani et al., <xref ref-type="bibr" rid="B35">2022</xref>). Trust is the expectation that digital technologies and services and the organizations providing them will protect all stakeholders&#x00027; interests and uphold societal values (Dobrygowski, <xref ref-type="bibr" rid="B14">2023</xref>). Rwanda&#x00027;s National AI Policy states that trust is critical to public confidence and acceptance of AI (Nshimiyimana, <xref ref-type="bibr" rid="B32">2023</xref>).</p>
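<p>To make the group-fairness notion above concrete, the following minimal sketch computes a demographic parity gap: the difference in favorable-outcome rates between two groups. The data and function names are illustrative assumptions, not drawn from any cited study.</p>

```python
# Illustrative sketch (hypothetical data): demographic parity measures the gap
# in favorable-outcome rates between demographic groups. A gap near 0 suggests
# group fairness on this one metric; it does not rule out other forms of bias.

def positive_rate(decisions):
    """Share of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in favorable-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan-approval decisions (1 = approved, 0 = denied)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # approval rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # approval rate 0.375

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")  # 0.375
```

<p>A gap near zero satisfies only this one group-fairness criterion; individual and counterfactual fairness require different tests, and a system can pass one notion while failing another.</p>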
<p>In developing countries, algorithmic bias can exacerbate existing inequalities and impede progress toward social and economic development goals. Researchers have documented biases in AI systems against various demographics, including ethnicity, social groups, cultural backgrounds, age, and gender (Mehrabi et al., <xref ref-type="bibr" rid="B28">2021</xref>; Ntoutsi et al., <xref ref-type="bibr" rid="B33">2020</xref>). While AI systems themselves are not consciously biased, their decisions are influenced by the data they learn from and the algorithms they employ (Ferrer et al., <xref ref-type="bibr" rid="B19">2021</xref>; Hellstr&#x000F6;m et al., <xref ref-type="bibr" rid="B22">2020</xref>). It is crucial to recognize that these inherent biases significantly impact information fairness and trust, particularly in developing countries.</p>
<p>The purpose of this article is to contribute to the discourse on algorithmic bias and its impact on fairness and trust, with a focus on Africa. This paper argues that addressing algorithmic bias is essential for ensuring information fairness and fostering trust in AI systems in Africa, both of which are critical for successful implementation. Inclusive practices that engage local communities are needed, as such engagement is vital for promoting equitable technological development. To this end, the following questions guided this perspective article:</p>
<list list-type="bullet">
<list-item><p>What is the current state of AI adoption and algorithmic bias in Africa?</p>
</list-item>
<list-item><p>How does algorithmic bias impact information fairness and trust in AI systems within the African context?</p></list-item>
<list-item><p>What empirical evidence illustrates the effects of algorithmic bias on fairness and trust in African AI applications?</p></list-item>
<list-item><p>What strategies can mitigate algorithmic bias and promote inclusive AI development in Africa?</p></list-item>
</list></sec>
<sec sec-type="methods" id="s2">
<title>Methodology</title>
<p>This perspective article is based on the authors&#x00027; personal insights and opinions, drawing from their experiences and observations related to algorithmic bias in AI systems within the African context.</p>
<sec>
<title>Literature review</title>
<p>A review of existing literature on algorithmic bias, AI fairness, and trust was conducted to contextualize the author&#x00027;s reflections and provide supporting evidence.</p></sec>
<sec>
<title>Data sources</title>
<p>The authors&#x00027; personal experiences and observations, combined with relevant literature, formed the basis of this article.</p>
<sec>
<title>Limitations</title>
<p>This article&#x00027;s limitations include:</p>
<list list-type="bullet">
<list-item><p>Subjective nature of personal insights and opinions.</p></list-item>
<list-item><p>Limited generalizability due to focus on African context.</p></list-item>
</list></sec></sec>
<sec id="s3">
<title>Current state of AI technology in Africa</title>
<p>Interest in AI has surged across the continent, driven by advancements in large language models like ChatGPT; Africa is currently home to over 2,400 AI companies, 40% of which were founded in the last 5 years (Deloitte, <xref ref-type="bibr" rid="B13">2024</xref>). AI has found wide application across key sectors such as banking, e-commerce, health, agriculture, energy, education, and industrial manufacturing. African governments have put the technology to varied uses: Zambia has used AI to fight electoral disinformation and misinformation; Libya has deployed autonomous weapon systems; Zimbabwe has strengthened its surveillance systems using biometrics; and Kenya, Ghana, and Togo, among others, have used AI to develop micro-lending apps, distribute social funds, and support other initiatives (Center of Intellectual Property and Technology Law [CIPIT], <xref ref-type="bibr" rid="B11">2023</xref>). South Africa has used the technology to understand the retention of health workers in the public sector, while Kenya is home to various e-health start-ups. Ghana uses deep learning to automate radiology, and Egypt uses AI for triage and tele-nursing services. Despite the active uptake of AI systems, their use has been seen to undermine human rights and to further marginalize disadvantaged groups in society (Akello, <xref ref-type="bibr" rid="B4">2022</xref>). Many African nations lack comprehensive national strategies, institutions, and regulatory frameworks to manage AI technologies effectively (Deloitte, <xref ref-type="bibr" rid="B13">2024</xref>). Notable early adopters, such as Egypt, Rwanda, Ghana, Senegal, Tunisia, and Nigeria, have made strides by developing or initiating national AI strategies. For instance, Egypt launched its National AI Strategy in 2021 and established a National Council for Artificial Intelligence.</p>
<p>The adoption of AI technologies in Africa faces challenges, including shortages of technical skills and structured data, uncertainty, gaps in government policy, ethical concerns, and user attitudes (Ade-Ibijola and Okonkwo, <xref ref-type="bibr" rid="B2">2023</xref>). Access to digital tools is hindered by insufficient infrastructure, disproportionately affecting certain groups (Deloitte, <xref ref-type="bibr" rid="B13">2024</xref>). Although internet penetration increased from 9.6% in 2010 to 33% in 2021, it remains significantly lower than in developed countries like the U.S., where it stands at 92% (Getao, <xref ref-type="bibr" rid="B20">2024</xref>). A significant portion of Africa&#x00027;s population remains unconnected, limiting contributions to global AI models and leading to less accurate representations of local users. Due to low internet connectivity, the lack of mobile phones, and the analog nature of business and transactions, critical data necessary for predictive models is lacking in Africa (Center of Intellectual Property and Technology Law [CIPIT], <xref ref-type="bibr" rid="B11">2023</xref>).</p>
<p>Bias in AI can exacerbate existing social divisions in a continent characterized by diverse cultures and communities (Getao, <xref ref-type="bibr" rid="B20">2024</xref>). The lack of a culture of sharing ideas online, rooted in historically unequal access to digital technology, complicates the situation. The National Artificial Intelligence Policy for the Republic of Rwanda recognizes the challenge of data sharing and emphasizes the importance of organizing workshops and training sessions for senior management of public departments and private companies to showcase the benefits of data sharing (Ministry of ICT Innovation, <xref ref-type="bibr" rid="B29">2023</xref>). Africa is rich in data, but it has not been aggregated (Center of Intellectual Property and Technology Law [CIPIT], <xref ref-type="bibr" rid="B11">2023</xref>), as many Africans primarily consume content rather than contribute to it (Getao, <xref ref-type="bibr" rid="B20">2024</xref>). This creates a significant data deficit for AI development, compounded by high capital and operational costs (Deloitte, <xref ref-type="bibr" rid="B13">2024</xref>). As Getao notes, &#x0201C;It costs money to go online,&#x0201D; which excludes many users from the digital landscape and increases their vulnerability to misinformation. The high cost of mobile internet data or home-based broadband connections limits the market size and uptake of services (Center of Intellectual Property and Technology Law [CIPIT], <xref ref-type="bibr" rid="B11">2023</xref>). In 2018, only 45% of Sub-Saharan Africans had mobile phones, and many devices were older models unable to support high-tech apps (Besaw and Filitz, <xref ref-type="bibr" rid="B9">2019</xref>).</p>
<p>Consequently, much of the data used for training AI models originates from the Global North, resulting in an overrepresentation of these populations&#x00027; demographics, preferences, and behaviors (Coutts, <xref ref-type="bibr" rid="B12">2024</xref>). Africa often finds itself relegated to a role of a data mine, where personal information and cultural knowledge are extracted to fuel AI models in the North (Tibebu, <xref ref-type="bibr" rid="B40">2024</xref>). Unfortunately, the economic benefits generated from this data rarely return to the communities from which it was sourced, perpetuating a cycle of economic dependency and stripping Africa of agency in the burgeoning AI-powered economy (Tibebu, <xref ref-type="bibr" rid="B40">2024</xref>).</p>
<sec>
<title>Algorithmic bias: authors&#x00027; perspective on fairness and trust concerns in African context</title>
<p>In Africa, the concepts of fairness and trust are shaped by social and historical contexts. Historical mistrust of foreign technologies due to past exploitation influences perceptions of AI. Through mechanisms of unequal exchange, the global economy perpetuates a framework dominated by the West, siphoning Africa&#x00027;s wealth in the form of minerals and labor (Aseka, <xref ref-type="bibr" rid="B8">1993</xref>). The question of imperialist exploitation and technological abuse increasingly occupies a central place in African discourse, and algorithmic bias has aggravated the issue. Senegalese expert Seydina Moussa Ndiaye warns that the biggest threat from AI is colonization, suggesting that large multinationals may impose their solutions throughout the continent, leaving little room for local innovation (UN News, <xref ref-type="bibr" rid="B41">2024</xref>). Sabelo Mhlambi has called for a &#x0201C;decolonization&#x0201D; of AI (Kohnert, <xref ref-type="bibr" rid="B27">2022</xref>). Democratization of AI will level the playing field in terms of systems development and skills acquisition.</p>
<p>The perception of fairness and trust is often shaped by cultural norms, dictating what is considered equitable in various contexts. AI development and use in Africa has not been sensitive to African cultural values, beliefs, and ethical principles. For instance, the African concepts of personality contradict the notion that AI could ever be a &#x0201C;person&#x0201D;. AI systems that are viewed as impersonal or devoid of spiritual significance might be met with resistance, as people may prefer solutions that align with their cultural and spiritual values. Many African cultures emphasize social equity, community cohesion, and collective wellbeing, impacting how AI solutions are perceived. In cultures where traditional authority figures (like elders or community leaders) play a crucial role, there may be a reluctance to embrace AI technologies perceived as lacking human oversight. The philosophy of Ubuntu provides a framework for considering how AI should be developed, emphasizing that without careful handling, &#x0201C;through our technology and scientific developments we can easily destroy each other and the world&#x0201D; (Jahnke, <xref ref-type="bibr" rid="B24">2021</xref>). Befittingly, Eke et al. (<xref ref-type="bibr" rid="B16">2023</xref>) are concerned that African values, beliefs and ethical principles are currently lacking in global discussions on AI ethics and guidelines.</p>
<p>Economic inequality may lead to skepticism about technologies that seem to benefit only certain groups. The Western colonial and neo-colonial interventions have fostered economic mechanisms detrimental to Africa&#x00027;s environment and development (Aseka, <xref ref-type="bibr" rid="B8">1993</xref>). The power of AI, combined with advances in technology, could be harnessed; however, communities excluded from technological advancements may develop mistrust toward new technologies, viewing them as tools benefiting the affluent.</p>
<p>AI algorithms are increasingly weaponized against unsuspecting users, posing threats rather than delivering benefits. The rise of spyware collecting personal data without consent raises significant privacy and security concerns. Such misuse of AI tools can infringe on individual rights and be leveraged for illegal purposes, necessitating ethical and accountable deployment to ensure favorable outcomes. A lack of regulation can breed distrust in data usage and management.</p></sec>
<sec>
<title>Empirical evidence on algorithm bias on information fairness and trust in Africa</title>
<p>Empirical literature indicates the existence of biases inherent in AI algorithms, which must be addressed to avoid perpetuating discrimination and exacerbating inequalities (Shihas, <xref ref-type="bibr" rid="B37">2024</xref>; Akello, <xref ref-type="bibr" rid="B4">2022</xref>; Kelly and Mirpourian, <xref ref-type="bibr" rid="B26">2021</xref>; Gwagwa et al., <xref ref-type="bibr" rid="B21">2020</xref>; Buolamwini and Gebru, <xref ref-type="bibr" rid="B10">2018</xref>). Trust cannot flourish in an environment reliant on flawed AI (Fancher et al., <xref ref-type="bibr" rid="B17">2024</xref>). Notably, most algorithms are trained on biased data, compromising their effectiveness and leading to results that perpetuate discrimination. These biases stem from flawed training data, leading to discriminatory outcomes in critical sectors such as finance, healthcare, and law enforcement (Agbo, <xref ref-type="bibr" rid="B3">2024</xref>). Buolamwini and Gebru (<xref ref-type="bibr" rid="B10">2018</xref>) examined the accuracy of commercial facial recognition APIs across genders and skin tones. Their benchmark of 1,270 images of parliamentarians from Rwanda, Senegal, South Africa, Iceland, Finland, and Sweden revealed significant disparities. Dark-skinned women were misclassified at substantially higher rates than light-skinned men, who received the most accurate results. These findings highlight facial recognition technology&#x00027;s discriminatory outcomes, disproportionately affecting marginalized groups. However, the study&#x00027;s reliance on a limited dataset may underestimate the issue&#x00027;s true extent. This limitation underscores the need for more comprehensive research to fully capture the scope of facial recognition bias.</p>
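<p>An intersectional accuracy audit of the kind Buolamwini and Gebru conducted can be sketched in a few lines: compute each group&#x00027;s error rate and report the spread. The figures below are hypothetical placeholders for illustration, not the study&#x00027;s actual results.</p>

```python
# Hypothetical audit sketch (not the actual Gender Shades figures): compare
# classification error rates across intersectional groups to surface disparity.

def error_rate(predictions, labels):
    """Fraction of predictions that disagree with ground-truth labels."""
    errors = sum(1 for p, y in zip(predictions, labels) if p != y)
    return errors / len(labels)

# Hypothetical per-group results: (model predictions, ground truth)
groups = {
    "lighter-skinned men":  ([1, 1, 1, 1, 1, 1, 1, 0], [1] * 8),  # 1 of 8 wrong
    "darker-skinned women": ([1, 0, 0, 1, 0, 1, 0, 0], [1] * 8),  # 5 of 8 wrong
}

rates = {name: error_rate(p, y) for name, (p, y) in groups.items()}
disparity = max(rates.values()) - min(rates.values())
print(rates)
print("error-rate disparity:", disparity)  # 0.625 - 0.125 = 0.5
```

<p>Reporting the per-group rates alongside the aggregate, rather than a single overall accuracy, is what makes disparities of this kind visible in the first place.</p>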
<p>The integration of AI systems in various sectors poses significant risks, including personal data misuse, inaccuracies in AI outputs, and systemic biases, which can erode trust. Research has shown that algorithmic biases can perpetuate existing inequalities, particularly in financial access and hiring practices. For instance, studies have found that loan repayment prediction algorithms exhibit gender bias, resulting in lower approval rates for female borrowers (Akello, <xref ref-type="bibr" rid="B4">2022</xref>; Kelly and Mirpourian, <xref ref-type="bibr" rid="B26">2021</xref>; Gwagwa et al., <xref ref-type="bibr" rid="B21">2020</xref>). In Kenya&#x00027;s fintech sector, digital lending apps rely on automated analysis of micro-behavioral data, such as browsing history and social media information, leading to biased outcomes (Akello, <xref ref-type="bibr" rid="B4">2022</xref>). This disproportionately affects women with limited internet and mobile access, resulting in unfair credit scores due to inadequate digital footprints. Similarly, hiring algorithms in India have been found to discriminate against candidates from marginalized communities, perpetuating workplace exclusion (Shihas, <xref ref-type="bibr" rid="B37">2024</xref>). The intersectionality of biases in AI systems compounds these issues, as overlapping forms of discrimination (race, gender, socioeconomic status) can exacerbate disadvantage.</p>
<p>Algorithm bias can have devastating consequences, particularly in Africa where access to digital technologies is uneven and regulatory frameworks are weak (Singh, <xref ref-type="bibr" rid="B39">2022</xref>). This can lead to discriminatory outcomes, such as service denial, which undermines trust in AI technologies. For instance, predictive policing algorithms in South Africa have been found to target low-income communities, increasing surveillance and harassment of innocent individuals (Singh, <xref ref-type="bibr" rid="B39">2022</xref>). Moreover, AI tools can be used to target perceived &#x0201C;enemies of the state,&#x0201D; as seen with the COMPAS software, which discriminated against African-American populations in recidivism predictions (Institut Montaigne, <xref ref-type="bibr" rid="B23">2020</xref>). This highlights the urgent need for ethical guidelines and regulations to prevent potential misuse and harm. Biased algorithms can lead to unfair outcomes for marginalized communities. For instance, Obermeyer et al. (<xref ref-type="bibr" rid="B34">2019</xref>) revealed that a healthcare algorithm was biased against Black patients, resulting in poorer health outcomes. Similarly, studies have shown that facial recognition systems exhibit higher error rates for darker-skinned individuals (Buolamwini and Gebru, <xref ref-type="bibr" rid="B10">2018</xref>).</p>
<p>Algorithm bias in Africa manifests significantly through the underrepresentation of diverse voices in training datasets, resulting in skewed outcomes that reinforce existing power dynamics. A striking example is the finding by Algorithm Watch Africa (<xref ref-type="bibr" rid="B5">2021</xref>) that recruitment algorithms often favor candidates from privileged backgrounds, perpetuating job market inequalities. Language barriers further exacerbate these challenges in Africa&#x00027;s diverse socio-economic landscape. The dominance of languages like English, Chinese, and French on search engines and social media limits access to information for speakers of local languages. This threatens linguistic diversity and marginalizes African voices in an increasingly AI-dependent society (Tibebu, <xref ref-type="bibr" rid="B40">2024</xref>). The lack of diversity in AI datasets is a critical concern. Over-reliance on Western data leads to biases, as evidenced by Buolamwini and Gebru&#x00027;s (<xref ref-type="bibr" rid="B10">2018</xref>) study on facial recognition systems. These systems exhibited higher error rates for darker-skinned individuals, with profound implications for people of African descent.</p>
<p>The rapid growth of AI systems has raised significant concerns about data privacy and surveillance (Deloitte, <xref ref-type="bibr" rid="B13">2024</xref>). With AI&#x00027;s endless need for data, there&#x00027;s a risk of collecting vast amounts of personal and sensitive information without a clear purpose, leading to unethical and potentially illegal practices. This is particularly worrying in Africa, where governments are increasingly using AI-powered surveillance technologies to monitor citizens, often with biometric and facial recognition capabilities (Munoriyarwa and Mare, <xref ref-type="bibr" rid="B30">2022</xref>). In countries like Zimbabwe, Tanzania, Angola, and Mozambique, surveillance technologies are being deployed to track citizens&#x00027; activities, create profiles, and locate them (Akello, <xref ref-type="bibr" rid="B4">2022</xref>). For instance, Huawei&#x00027;s Safe City project in Nairobi has installed 1,800 HD cameras and 200 HD traffic surveillance systems, raising concerns about mass surveillance (Akello, <xref ref-type="bibr" rid="B4">2022</xref>). The COVID-19 pandemic accelerated the use of surveillance technologies, with global monitoring efforts tracking the spread and severity of the disease (Center of Intellectual Property and Technology Law [CIPIT], <xref ref-type="bibr" rid="B11">2023</xref>). However, research has shown that AI algorithms can predict sensitive information from seemingly innocuous data (Acquisti et al., <xref ref-type="bibr" rid="B1">2015</xref>). This means that even if citizens aren&#x00027;t actively being targeted, their personal data can still be compromised. The implications are far-reaching, especially in regions with limited data protection laws.</p></sec></sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>Empirical literature points to the existence of biases inherent in AI algorithms, which need to be addressed to avoid perpetuating discrimination against certain groups and exacerbating existing inequalities. Transparency in AI systems is crucial, as many algorithms are complex and opaque, making it difficult for users to understand how decisions are made. This lack of transparency can breed mistrust and skepticism about AI technologies, especially on a continent like Africa, where concerns about data privacy and security are prevalent (Ade-Ibijola and Okonkwo, <xref ref-type="bibr" rid="B2">2023</xref>). Establishing independent auditing bodies to assess AI systems for bias and fairness would strengthen accountability. Addressing transparency issues is key to building trust and dispelling skepticism across the continent. For example, Rwanda&#x00027;s National AI Policy for Responsible AI Adoption emphasizes the importance of responsible AI adoption, ensuring fairness, transparency, and accountability in AI systems (Nshimiyimana, <xref ref-type="bibr" rid="B32">2023</xref>).</p>
<p>The ethical implications of AI technologies in Africa must be carefully considered, as there are growing concerns about their use in surveillance and law enforcement, which could infringe on people&#x00027;s rights to privacy and freedom. In African countries with weak legal frameworks, there is a risk that AI technologies could be used for authoritarian purposes, eroding democratic norms. Ethical frameworks that prioritize human rights and social justice therefore need to be adopted in AI design.</p>
<p>There is an urgent need to address issues of data bias and representativeness in the development of AI technologies in Africa. Literature indicates that many AI algorithms are trained on biased datasets, perpetuating stereotypes and reinforcing existing power dynamics. To mitigate these sources of bias, various approaches have been proposed, including dataset augmentation, bias-aware algorithms, and user feedback mechanisms (Ferrara, <xref ref-type="bibr" rid="B18">2023</xref>). Dataset augmentation adds diverse data to training sets to enhance representativeness and mitigate bias. Bias-aware algorithms are crafted to account for various biases and reduce their influence on system outputs. User feedback mechanisms collect input from users to identify and rectify biases within the system (Ferrara, <xref ref-type="bibr" rid="B18">2023</xref>).</p>
<p>Furthermore, there is a need for greater diversity and inclusion in the field of AI in Africa. Sabelo Mhlambi points out that community involvement is essential in building AI systems (Jahnke, <xref ref-type="bibr" rid="B24">2021</xref>); this means involving local leaders and representatives in the development and implementation of AI systems to ensure they reflect community values and needs. Women and minority groups are often underrepresented in the tech industry, leading to biases in the design and implementation of AI technologies. Encouraging diverse voices and perspectives in AI development can help prevent bias and ensure that AI systems are fair and equitable for all users. As Tibebu (<xref ref-type="bibr" rid="B40">2024</xref>) states, empowering Africa to develop AI rooted in its cultural experiences and knowledge systems is essential for decolonizing technological knowledge production. This approach offers a pathway to creating alternative narratives and perspectives that enrich the global AI landscape.</p>
<p>Additionally, the regulatory frameworks surrounding AI in Africa need strengthening to ensure ethical standards are upheld. Marginalized groups within Africa are particularly vulnerable to the misuse of algorithms in sensitive domains like predictive policing or social welfare allocation. Many countries in Africa lack comprehensive laws governing AI technologies, leaving them exposed to potential abuses. Policymakers must collaborate with industry stakeholders and civil society organizations to develop clear guidelines promoting fairness, transparency, and accountability in AI use, and should support interdisciplinary research that examines the cultural implications of AI across different regions, particularly in Africa.</p>
<p>There is also an urgent need for collaboration and partnerships between African countries and international organizations, which will be key in addressing AI bias and promoting information fairness and ethics. African countries must prioritize the inclusion of local data, enhance digital resource access, and foster collaboration among all AI ecosystem stakeholders (Coutts, <xref ref-type="bibr" rid="B12">2024</xref>). Societal progress is best achieved through collaboration and mutual support. Collaboration with cultural experts and community representatives is essential, as their insights can help ensure AI is sensitive to cultural nuances and avoids perpetuating biases (Naliaka, <xref ref-type="bibr" rid="B31">2024</xref>). By sharing best practices and lessons learned, African countries can develop common standards for responsible AI use, while international organizations can provide technical assistance and capacity-building support to strengthen regulatory frameworks.</p>
<p>Promoting awareness and education about AI bias and ethics is essential in Africa. Many people are unaware of the potential risks and pitfalls of AI technologies, leading to unintended consequences and harms. Workshops and forums should be conducted to educate communities about AI technologies and their implications; Rwanda&#x00027;s National AI Policy notes that educating the public through workshops can assist in the adoption of AI (Ministry of ICT and Innovation, <xref ref-type="bibr" rid="B29">2023</xref>). Cultural norms surrounding education and technology literacy also play a role: in communities where there is limited understanding of AI, misconceptions can lead to fear and mistrust. By raising awareness and promoting digital literacy, policymakers can empower individuals and communities to make informed decisions about AI technologies in their daily lives.</p>
<p>Lastly, fostering a culture of accountability and responsibility among AI developers and practitioners is crucial. Developers, companies, and governments must ensure that AI systems are designed and used fairly and transparently (Ferrara, <xref ref-type="bibr" rid="B18">2023</xref>). Policies that mandate the inclusion of diverse cultural perspectives in AI development processes will be vital for ensuring fairness and ethics in AI use in Africa. Companies and organizations that develop and deploy AI technologies should be held accountable for any harms or biases resulting from their products. By encouraging a culture of transparency and accountability, stakeholders can work together to promote trust and confidence in AI technologies in Africa.</p>
<sec sec-type="conclusions" id="s5">
<title>Conclusions</title>
<p>With increased community engagement and transparency, trust in AI systems is likely to grow. As local populations see their values and needs reflected in AI applications, acceptance will rise, leading to greater utilization of technology in various sectors. By prioritizing information fairness and addressing algorithm bias, AI can contribute to more equitable economic growth. Technologies tailored to local contexts can empower underserved communities, providing access to resources, education, and opportunities that were previously out of reach. AI systems that respect and incorporate cultural norms can help preserve local traditions while also fostering innovation. For instance, AI could be used to support indigenous languages or promote traditional crafts, creating a blend of modern technology and cultural heritage. By addressing biases and ensuring inclusivity, AI can empower marginalized groups, providing them with tools to advocate for their rights and interests. AI can play a pivotal role in achieving the United Nations Sustainable Development Goals by improving access to education, healthcare, and economic opportunities.</p></sec>
<sec id="s6">
<title>Recommendations</title>
<list list-type="bullet">
<list-item><p>AI developers, researchers, and policymakers should collaborate to create more inclusive and equitable algorithms</p>
</list-item>
<list-item><p>Governments across Africa should collaborate to create a pan-African AI regulatory framework</p></list-item>
<list-item><p>Data sets used to train algorithms should be diversified to ensure underrepresented groups are included</p>
</list-item>
<list-item><p>African languages must be considered in the design and training of algorithms</p>
</list-item>
<list-item><p>A culture of accountability and responsibility must be fostered amongst AI developers and practitioners</p>
</list-item>
<list-item><p>Companies and organizations that develop and deploy AI technologies should be held accountable for any harms or biases that result from their products</p>
</list-item>
<list-item><p>Stakeholders should work together to promote trust and confidence in AI technologies in Africa, and</p>
</list-item>
<list-item><p>African communities must be engaged to understand their information needs and preferences.</p>
</list-item>
</list></sec>
</body>
<back>
<sec sec-type="data-availability" id="s7">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.</p>
</sec>
<sec sec-type="author-contributions" id="s8">
<title>Author contributions</title>
<p>NP: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing &#x02013; original draft, Writing &#x02013; review &#x00026; editing. AM: Investigation, Writing &#x02013; review &#x00026; editing.</p>
</sec>
<sec sec-type="funding-information" id="s9">
<title>Funding</title>
<p>The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Acquisti</surname> <given-names>A.</given-names></name> <name><surname>Brandimarte</surname> <given-names>L.</given-names></name> <name><surname>Loewenstein</surname> <given-names>G.</given-names></name></person-group> (<year>2015</year>). <article-title>Privacy and human behavior in the age of information</article-title>. <source>Science</source> <volume>347</volume>, <fpage>509</fpage>&#x02013;<lpage>514</lpage>. <pub-id pub-id-type="doi">10.1126/science.aaa1465</pub-id><pub-id pub-id-type="pmid">25635091</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ade-Ibijola</surname> <given-names>A.</given-names></name> <name><surname>Okonkwo</surname> <given-names>C.</given-names></name></person-group> (<year>2023</year>). <article-title>&#x0201C;Artificial intelligence in Africa: emerging challenges,&#x0201D;</article-title> in <source>Responsible AI in Africa. Social and Cultural Studies of Robots and AI</source>, eds. D. O. Eke, K. Wakunuma, and S. Akintoye (Cham: Palgrave Macmillan), 101&#x02013;117. <pub-id pub-id-type="doi">10.1007/978-3-031-08215-3_5</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Agbo</surname> <given-names>K.</given-names></name></person-group> (<year>2024</year>). &#x0201C;Africans&#x00027; Contributions to AI Can Reduce Bias,&#x0201D; in <italic>THISDAYLIVE</italic>. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.thisdaylive.com/index.php/2024/06/24/africans-contributions-to-ai-can-reduce-bias/">https://www.thisdaylive.com/index.php/2024/06/24/africans-contributions-to-ai-can-reduce-bias/</ext-link> (accessed July 17, 2024).</citation>
</ref>
<ref id="B4">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Akello</surname> <given-names>J.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;Artificial intelligence in Kenya. Policy brief,&#x0201D;</article-title> in <source>Paradigm Initiative</source>, ed. E. Nabenyo. Available at: <ext-link ext-link-type="uri" xlink:href="https://paradigmhq.org/wp-content/uploads/2022/02/Artificial-Inteligence-in-Kenya-1.pdf">https://paradigmhq.org/wp-content/uploads/2022/02/Artificial-Inteligence-in-Kenya-1.pdf</ext-link> (accessed July 15, 2024).<pub-id pub-id-type="pmid">33433753</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="web"><person-group person-group-type="author"><collab>Algorithm Watch Africa</collab></person-group> (<year>2021</year>). <source>Algorithms and Discrimination in Africa: Challenges and Opportunities</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://algorithmwatch.org/en/fellows-investigate-discrimination-in-financial-sector/">https://algorithmwatch.org/en/fellows-investigate-discrimination-in-financial-sector/</ext-link> (accessed July 13, 2024).</citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alvarez</surname> <given-names>J. M.</given-names></name> <name><surname>Colmenarejo</surname> <given-names>A. B.</given-names></name> <name><surname>Elobaid</surname> <given-names>A.</given-names></name> <name><surname>Fabbrizzi</surname> <given-names>S.</given-names></name> <name><surname>Fahimi</surname> <given-names>M.</given-names></name> <name><surname>Ferrara</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2024</year>). <article-title>Policy advice and best practices on bias and fairness in AI</article-title>. <source>Ethics and Inform. Technol.</source> <volume>26</volume>:<fpage>2</fpage>. <pub-id pub-id-type="doi">10.1007/s10676-024-09746-w</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ananny</surname> <given-names>M.</given-names></name> <name><surname>Crawford</surname> <given-names>K.</given-names></name></person-group> (<year>2016</year>). <article-title>Seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability</article-title>. <source>New Media Soc.</source> <volume>20</volume>, <fpage>973</fpage>&#x02013;<lpage>989</lpage>. <pub-id pub-id-type="doi">10.1177/1461444816676645</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aseka</surname> <given-names>E. M.</given-names></name></person-group> (<year>1993</year>). <article-title>Historical roots of underdevelopment and environmental degradation in Africa</article-title>. <source>Transafrican J. Hist.</source> <volume>22</volume>, <fpage>193</fpage>&#x02013;<lpage>205</lpage>.</citation>
</ref>
<ref id="B9">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Besaw</surname> <given-names>C.</given-names></name> <name><surname>Filitz</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;AI in Africa is a double-edged sword,&#x0201D;</article-title> in <source>AI &#x00026; Global Governance. United Nations University &#x02013; Centre for Policy Research.</source> Available at: <ext-link ext-link-type="uri" xlink:href="https://ourworld.unu.edu/en/ai-in-africa-is-a-double-edged-sword">https://ourworld.unu.edu/en/ai-in-africa-is-a-double-edged-sword</ext-link> (accessed July 18, 2024).</citation>
</ref>
<ref id="B10">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Buolamwini</surname> <given-names>J.</given-names></name> <name><surname>Gebru</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Gender shades: intersectional accuracy disparities in commercial gender classification,&#x0201D;</article-title> in <source>Proceedings of the 1st Conference on Fairness, Accountability and Transparency. Proceedings of Machine Learning Research</source> (<publisher-loc>MLResearch Press</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>15</lpage>. Available at: <ext-link ext-link-type="uri" xlink:href="https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf">https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf</ext-link></citation>
</ref>
<ref id="B11">
<citation citation-type="web"><person-group person-group-type="author"><collab>Center of Intellectual Property and Technology Law [CIPIT]</collab></person-group> (<year>2023</year>). <source>The State of AI in Africa - A Policy Brief</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://cipit.strathmore.edu/wp-content/uploads/2023/09/The-State-of-AI-in-Africa-A-Policy-Brief110923-1.pdf">https://cipit.strathmore.edu/wp-content/uploads/2023/09/The-State-of-AI-in-Africa-A-Policy-Brief110923-1.pdf</ext-link> (accessed July 17, 2024).</citation>
</ref>
<ref id="B12">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Coutts</surname> <given-names>L.</given-names></name></person-group> (<year>2024</year>). <article-title>&#x0201C;Empowering Africa with AI: overcoming data deficits and bias for inclusive growth,&#x0201D;</article-title> in <source>Publications and Media at Good Governance Africa</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.linkedin.com/pulse/empowering-africa-ai-overcoming-data-deficits-bias-inclusive-coutts-bxsyf/">https://www.linkedin.com/pulse/empowering-africa-ai-overcoming-data-deficits-bias-inclusive-coutts-bxsyf/</ext-link> (accessed July 18, 2024).</citation>
</ref>
<ref id="B13">
<citation citation-type="web"><person-group person-group-type="author"><collab>Deloitte</collab></person-group> (<year>2024</year>). <article-title>&#x0201C;AI for inclusive development in Africa &#x02013; Part I: Governance,&#x0201D;</article-title> in <source>Deloitte_ai-adoption-africa-2024</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.deloitte.com/content/dam/Deloitte/fpc/Documents/secteurs/technologies-medias-et-telecommunications/deloitte_ai-adoption-africa-2024.pdf">https://www.deloitte.com/content/dam/Deloitte/fpc/Documents/secteurs/technologies-medias-et-telecommunications/deloitte_ai-adoption-africa-2024.pdf</ext-link> (accessed July 21, 2024).</citation>
</ref>
<ref id="B14">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Dobrygowski</surname> <given-names>D.</given-names></name></person-group> (<year>2023</year>). <article-title>&#x0201C;Companies need to prove they can be trusted with technology,&#x0201D;</article-title> in <source>Harvard Business Review</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://hbr.org/2023/07/companies-need-to-prove-they-can-be-trusted-with-technology">https://hbr.org/2023/07/companies-need-to-prove-they-can-be-trusted-with-technology</ext-link> (accessed July 20, 2024).</citation>
</ref>
<ref id="B15">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Dwork</surname> <given-names>C.</given-names></name> <name><surname>Hardt</surname> <given-names>M.</given-names></name> <name><surname>Pitassi</surname> <given-names>T.</given-names></name> <name><surname>Reingold</surname> <given-names>O.</given-names></name> <name><surname>Zemel</surname> <given-names>R.</given-names></name></person-group> (<year>2012</year>). <article-title>&#x0201C;Fairness through awareness,&#x0201D;</article-title> in <source>Proceedings of the 3rd Innovations in Theoretical Computer Science Conference</source> (<publisher-loc>Cambridge, MA</publisher-loc>), 214&#x02013;226. <pub-id pub-id-type="doi">10.1145/2090236.2090255</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="book"><person-group person-group-type="editor"><name><surname>Eke</surname> <given-names>D. O.</given-names></name> <name><surname>Wakunuma</surname> <given-names>K.</given-names></name> <name><surname>Akintoye</surname> <given-names>S.</given-names></name></person-group> (eds.). (<year>2023</year>). <article-title>&#x0201C;Introducing responsible AI in Africa,&#x0201D;</article-title> in <source>Responsible AI in Africa. Social and Cultural Studies of Robots and AI</source> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Palgrave Macmillan</publisher-name>). 1&#x02013;11. <pub-id pub-id-type="doi">10.1007/978-3-031-08215-3_1</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Fancher</surname> <given-names>D.</given-names></name> <name><surname>Ammanath</surname> <given-names>B.</given-names></name> <name><surname>Holdowsky</surname> <given-names>J.</given-names></name> <name><surname>Buckley</surname> <given-names>N.</given-names></name></person-group> (<year>2024</year>). <article-title>&#x0201C;AI model bias can damage trust more than you may know. But it doesn&#x00027;t have to,&#x0201D;</article-title> in <source>Deloitte Insights</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://www2.deloitte.com/xe/en/insights/focus/cognitive-technologies/ai-model-bias.html">https://www2.deloitte.com/xe/en/insights/focus/cognitive-technologies/ai-model-bias.html</ext-link> (accessed July 17, 2024).</citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ferrara</surname> <given-names>E.</given-names></name></person-group> (<year>2023</year>). <article-title>Fairness and bias in artificial intelligence: a brief survey of sources, impacts, and mitigation strategies</article-title>. <source>Sci</source> <volume>6</volume>:<fpage>3</fpage>. <pub-id pub-id-type="doi">10.3390/sci6010003</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ferrer</surname> <given-names>X.</given-names></name> <name><surname>Van Nuenen</surname> <given-names>T.</given-names></name> <name><surname>Such</surname> <given-names>J. M.</given-names></name> <name><surname>Cote</surname> <given-names>M.</given-names></name> <name><surname>Criado</surname> <given-names>N.</given-names></name></person-group> (<year>2021</year>). <article-title>Bias and discrimination in AI: a cross-disciplinary perspective</article-title>. <source>IEEE Technol. Soc. Magaz</source>. <volume>40</volume>, <fpage>72</fpage>&#x02013;<lpage>80</lpage>. <pub-id pub-id-type="doi">10.1109/mts.2021.3056293</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Getao</surname> <given-names>K.</given-names></name></person-group> (<year>2024</year>). <article-title>&#x0201C;Lack of data makes AI more biased in African countries,&#x0201D;</article-title> in <source>Munich Cyber Security Conference Proceedings</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://therecord.media/lack-of-data-makes-ai-more-biased-in-africa">https://therecord.media/lack-of-data-makes-ai-more-biased-in-africa</ext-link> (accessed July 17, 2024).</citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gwagwa</surname> <given-names>A.</given-names></name> <name><surname>Kraemer-Mbula</surname> <given-names>E.</given-names></name> <name><surname>Rizk</surname> <given-names>N.</given-names></name> <name><surname>Rutenberg</surname> <given-names>I.</given-names></name> <name><surname>De Beer</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>Artificial intelligence (AI) deployments in Africa: benefits, challenges, and policy dimensions</article-title>. <source>African J. Inform. Commun.</source> <volume>26</volume>:<fpage>7</fpage>. <pub-id pub-id-type="doi">10.23962/10539/30361</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hellstr&#x000F6;m</surname> <given-names>T.</given-names></name> <name><surname>Dignum</surname> <given-names>V.</given-names></name> <name><surname>Bensch</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>Bias in machine learning &#x02013; what is it good for?</article-title> <source>arXiv [Preprint]</source>. arXiv:2004.00686. <pub-id pub-id-type="doi">10.48550/arxiv.2004.00686</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="web"><person-group person-group-type="author"><collab>Institut Montaigne</collab></person-group> (<year>2020</year>). <source>Algorithms: Please Mind the Bias! Report.</source> Available at: <ext-link ext-link-type="uri" xlink:href="https://www.institutmontaigne.org/ressources/pdfs/publications/algorithms-please-mind-bias.pdf">https://www.institutmontaigne.org/ressources/pdfs/publications/algorithms-please-mind-bias.pdf</ext-link> (accessed July 17, 2024).</citation>
</ref>
<ref id="B24">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Jahnke</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Can an ancient African philosophy save us from AI bias?,&#x0201D;</article-title> in <source>BU Today | Boston University</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.bu.edu/articles/2021/can-an-ancient-african-philosophy-save-us-from-ai-bias/">https://www.bu.edu/articles/2021/can-an-ancient-african-philosophy-save-us-from-ai-bias/</ext-link> (accessed September 19, 2024).</citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jobin</surname> <given-names>A.</given-names></name> <name><surname>Ienca</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). <article-title>The global landscape of AI ethics guidelines</article-title>. <source>Nat. Mach. Intellig.</source> <volume>1</volume>, <fpage>389</fpage>&#x02013;<lpage>399</lpage>. <pub-id pub-id-type="doi">10.1038/s42256-019-0088-2</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Kelly</surname> <given-names>S.</given-names></name> <name><surname>Mirpourian</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Algorithmic bias, financial inclusion, and gender a primer on opening up new credit to women in emerging economies,&#x0201D;</article-title> in <source>Women&#x00027;s World Banking</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.womensworldbanking.org/wp-content/uploads/2021/02/2021_Algorithmic_Bias_Report.pdf">https://www.womensworldbanking.org/wp-content/uploads/2021/02/2021_Algorithmic_Bias_Report.pdf</ext-link> (accessed July 17, 2024).</citation>
</ref>
<ref id="B27">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Kohnert</surname> <given-names>D.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;Machine ethics and African identities: perspectives of artificial intelligence in Africa,&#x0201D;</article-title> in <source>GIGA Institute for African Affairs</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://mpra.ub.uni-muenchen.de/113799/">https://mpra.ub.uni-muenchen.de/113799/</ext-link> (accessed July 17, 2024).</citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mehrabi</surname> <given-names>N.</given-names></name> <name><surname>Morstatter</surname> <given-names>F.</given-names></name> <name><surname>Saxena</surname> <given-names>N.</given-names></name> <name><surname>Lerman</surname> <given-names>K.</given-names></name> <name><surname>Galstyan</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>A survey on bias and fairness in machine learning</article-title>. <source>ACM Comput. Surv</source>. <volume>54</volume>, <fpage>1</fpage>&#x02013;<lpage>35</lpage>. <pub-id pub-id-type="doi">10.1145/3457607</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="web"><person-group person-group-type="author"><collab>Ministry of ICT and Innovation</collab></person-group> (<year>2023</year>). <source>The National Artificial Intelligence Policy and Innovation.</source> Available at: <ext-link ext-link-type="uri" xlink:href="https://www.minict.gov.rw/index.php?eID=dumpFile&#x00026;t=f&#x00026;f=67550&#x00026;token=6195a53203e197efa47592f40ff4aaf24579640e">https://www.minict.gov.rw/index.php?eID=dumpFile&#x00026;t=f&#x00026;f=67550&#x00026;token=6195a53203e197efa47592f40ff4aaf24579640e</ext-link> (accessed July 17, 2024).</citation>
</ref>
<ref id="B30">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Munoriyarwa</surname> <given-names>A.</given-names></name> <name><surname>Mare</surname> <given-names>A.</given-names></name></person-group> (<year>2022</year>). <source>Digital Surveillance in Southern Africa: Policies, Politics and Practices</source> (<publisher-loc>London</publisher-loc>: <publisher-name>Palgrave Macmillan</publisher-name>).</citation>
</ref>
<ref id="B31">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Naliaka</surname> <given-names>F.</given-names></name></person-group> (<year>2024</year>). <article-title>&#x0201C;AI for Africa by Africans: How cultural diversity can be attained in AI globalization,&#x0201D;</article-title> in <source>Citizen Digital</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.citizen.digital/tech/ai-for-africa-by-africans-how-cultural-diversity-can-be-attained-in-ai-globalization-n339103">https://www.citizen.digital/tech/ai-for-africa-by-africans-how-cultural-diversity-can-be-attained-in-ai-globalization-n339103</ext-link> (accessed July 17, 2024).</citation>
</ref>
<ref id="B32">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Nshimiyimana</surname> <given-names>J. C.</given-names></name></person-group> (<year>2023</year>). <source>Rwanda&#x00027;s National AI Policy: A Blueprint for Responsible AI Leadership</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.linkedin.com/pulse/rwandas-national-ai-policy-blueprint-responsible-nshimiyimana/">https://www.linkedin.com/pulse/rwandas-national-ai-policy-blueprint-responsible-nshimiyimana/</ext-link> (accessed July 17, 2024).</citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ntoutsi</surname> <given-names>E.</given-names></name> <name><surname>Fafalios</surname> <given-names>P.</given-names></name> <name><surname>Gadiraju</surname> <given-names>U.</given-names></name> <name><surname>Iosifidis</surname> <given-names>V.</given-names></name> <name><surname>Nejdl</surname> <given-names>W.</given-names></name> <name><surname>Vidal</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Bias in data-driven artificial intelligence systems&#x02014;An introductory survey</article-title>. <source>Wiley Interdisc. Rev. Data Mining Knowl. Discov.</source> <volume>10</volume>, <fpage>1</fpage>&#x02013;<lpage>4</lpage>. <pub-id pub-id-type="doi">10.1002/widm.1356</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Obermeyer</surname> <given-names>Z.</given-names></name> <name><surname>Powers</surname> <given-names>B.</given-names></name> <name><surname>Vogeli</surname> <given-names>C.</given-names></name> <name><surname>Mullainathan</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>Dissecting racial bias in an algorithm used to manage the health of populations</article-title>. <source>Science</source> <volume>366</volume>, <fpage>447</fpage>&#x02013;<lpage>453</lpage>. <pub-id pub-id-type="doi">10.1126/science.aax2342</pub-id><pub-id pub-id-type="pmid">31649194</pub-id></citation></ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Omrani</surname> <given-names>N.</given-names></name> <name><surname>Rivieccio</surname> <given-names>G.</given-names></name> <name><surname>Fiore</surname> <given-names>U.</given-names></name> <name><surname>Schiavone</surname> <given-names>F.</given-names></name> <name><surname>Agreda</surname> <given-names>S. G.</given-names></name></person-group> (<year>2022</year>). <article-title>To trust or not to trust? An assessment of trust in AI-based systems: concerns, ethics and contexts</article-title>. <source>Technol. Forecast. Social Change</source> <volume>181</volume>:<fpage>121763</fpage>. <pub-id pub-id-type="doi">10.1016/j.techfore.2022.121763</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shams</surname> <given-names>R. A.</given-names></name> <name><surname>Zowghi</surname> <given-names>D.</given-names></name> <name><surname>Bano</surname> <given-names>M.</given-names></name></person-group> (<year>2023</year>). <article-title>AI and the quest for diversity and inclusion: a systematic literature review</article-title>. <source>AI Ethics</source>. <pub-id pub-id-type="doi">10.1007/s43681-023-00362-w</pub-id></citation></ref>
<ref id="B37">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Shihas</surname> <given-names>H.</given-names></name></person-group> (<year>2024</year>). <article-title>&#x0201C;Unequal access, biased algorithms: gender divide in India&#x00027;s AI landscape,&#x0201D;</article-title> in <source>Maktoob Media</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://maktoobmedia.com/more/science-technology/unequal-access-biased-algorithms-gender-divide-in-indias-ai-landscape/">https://maktoobmedia.com/more/science-technology/unequal-access-biased-algorithms-gender-divide-in-indias-ai-landscape/</ext-link> (accessed July 17, 2024).</citation>
</ref>
<ref id="B38">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Silberg</surname> <given-names>J.</given-names></name> <name><surname>Manyika</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <source>Tackling Bias in Artificial Intelligence (and in Humans)</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans">https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans</ext-link> (accessed July 17, 2024).</citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Singh</surname> <given-names>D.</given-names></name></person-group> (<year>2022</year>). <article-title>Policing by design: artificial intelligence, predictive policing and human rights in South Africa</article-title>. <source>Just Africa</source> <volume>7</volume>, <fpage>41</fpage>&#x02013;<lpage>52</lpage>. <pub-id pub-id-type="doi">10.37284/eajit.7.1.2141</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Tibebu</surname> <given-names>H.</given-names></name></person-group> (<year>2024</year>). <source>Why Africa Must Demand a Fair Share in AI Development and Governance</source>. Austin, TX: Tech Policy Press. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.techpolicy.press/why-africa-must-demand-a-fair-share-in-ai-development-and-governance/">https://www.techpolicy.press/why-africa-must-demand-a-fair-share-in-ai-development-and-governance/</ext-link> (accessed July 17, 2024).</citation>
</ref>
<ref id="B41">
<citation citation-type="web"><person-group person-group-type="author"><collab>UN News</collab></person-group> (<year>2024</year>). <article-title>&#x0201C;Interview: AI expert warns of &#x02018;digital colonization&#x02019; in Africa,&#x0201D;</article-title> in <source>Africa Renewal</source>. Available at: <ext-link ext-link-type="uri" xlink:href="https://www.un.org/africarenewal/magazine/january-2024/interview-ai-expert-warns-digital-colonization-africa">https://www.un.org/africarenewal/magazine/january-2024/interview-ai-expert-warns-digital-colonization-africa</ext-link> (accessed July 17, 2024).</citation>
</ref>
</ref-list>
</back>
</article>