AUTHOR=Fajardo-Ramos Diana Carolina, Chiappe Andrés, Mella-Norambuena Javier TITLE=Human-in-the-loop assessment with AI: implications for teacher education in Ibero-American universities JOURNAL=Frontiers in Education VOLUME=10 YEAR=2025 URL=https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2025.1710992 DOI=10.3389/feduc.2025.1710992 ISSN=2504-284X ABSTRACT=This scoping review examines how artificial intelligence (AI) reshapes assessment in Ibero-American higher education and specifies the teacher-training capacities and ethical safeguards needed for responsible adoption. Guided by PRISMA procedures and an eligibility scheme based on PPCDO (Population–Phenomenon–Context–Design–Outcomes), we searched Scopus and screened records (2015–2025; English/Spanish/Portuguese), yielding 76 peer-reviewed studies. Synthesis combined qualitative thematic analysis with quantitative descriptors and an exploratory correlational look at tool–outcome pairings. Rather than listing generic ICT, we propose a function-by-purpose taxonomy of assessment technologies that distinguishes pre-AI baselines from AI-specific mechanisms: generativity, adaptivity, and algorithmic feedback/analytics. Read through this lens, AI’s value emerges when benefits are paired with conditions of use: explainability practices, data stewardship, audit trails, and clearly communicated assistance limits. The review translates these insights into a decision-oriented agenda for teacher education, specifying five competency clusters: (1) feedback literacy with AI (criterion-anchored prompting, sampling and audits, revision-based workflows); (2) rubric/item validation and traceability; (3) data interpretation and fairness; (4) integrity and transparency in AI-involved assessment; and (5) orchestration of platforms and moderation/double-marking when AI assists scoring. Exploratory correlations reinforce these priorities, signaling where training should concentrate.
We conclude that Ibero-American systems are technically ready yet pedagogically under-specified: progress depends less on adding tools and more on professionalizing human-in-the-loop assessment within robust governance. The article offers a replicable taxonomy, actionable training targets, and a research agenda on enabling conditions for trustworthy, AI-enhanced assessment.