AUTHOR=Ahmad Amar, Vallès Yvonne, Idaghdour Youssef
TITLE=Bias in AI systems: integrating formal and socio-technical approaches
JOURNAL=Frontiers in Big Data
VOLUME=Volume 8 - 2025
YEAR=2025
URL=https://www.frontiersin.org/journals/big-data/articles/10.3389/fdata.2025.1686452
DOI=10.3389/fdata.2025.1686452
ISSN=2624-909X
ABSTRACT=Artificial Intelligence (AI) systems are increasingly embedded in high-stakes decision-making across domains such as healthcare, finance, criminal justice, and employment. Evidence has accumulated showing that these systems can reproduce and amplify structural inequities, raising ethical, social, and technical concerns. In this review, formal mathematical definitions of bias are integrated with socio-technical perspectives to examine its origins, manifestations, and impacts. Bias is categorized into four interrelated families: historical/representational, selection/measurement, algorithmic/optimization, and feedback/emergent. Its operation is illustrated through case studies in facial recognition, large language models, credit scoring, healthcare, employment, and criminal justice. Current mitigation strategies are critically evaluated, including dataset diversification, fairness-aware modeling, post-deployment auditing, regulatory frameworks, and participatory design. An integrated framework is proposed in which statistical diagnostics are coupled with governance mechanisms to enable bias mitigation across the entire AI lifecycle. By bridging technical precision with sociological insight, guidance is offered for the development of AI systems that are equitable, accountable, and responsive to the needs of diverse populations.