<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3-mathml3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Artif. Intell.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Artificial Intelligence</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Artif. Intell.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">2624-8212</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/frai.2026.1764283</article-id>
<article-version article-version-type="Version of Record" vocab="NISO-RP-8-2008"/>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Original Research</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>ZamYOLO-maize: a YOLOv8n-based deep learning framework for automated detection and classification of maize leaf diseases in field conditions in Zambia</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Kalunga</surname>
<given-names>Prudence</given-names>
</name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/3310941"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; original draft" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing &#x2013; original draft</role>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kunda</surname>
<given-names>Douglas</given-names>
</name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/3311530"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Supervision" vocab-term-identifier="https://credit.niso.org/contributor-roles/supervision/">Supervision</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &#x0026; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &#x0026; editing</role>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Zimba</surname>
<given-names>Aaron</given-names>
</name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &#x0026; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &#x0026; editing</role>
</contrib>
</contrib-group>
<aff id="aff1"><label>1</label><institution>Computer Science Department, ZCAS University</institution>, <city>Lusaka</city>, <country country="zm">Zambia</country></aff>
<aff id="aff2"><label>2</label><institution>DMI St Eugene University</institution>, <city>Lusaka</city>, <country country="zm">Zambia</country></aff>
<aff id="aff3"><label>3</label><institution>Computer Science Department, ZCAS University</institution>, <city>Lusaka</city>, <country country="zm">Zambia</country></aff>
<author-notes>
<corresp id="c001"><label>&#x002A;</label>Correspondence: Prudence Kalunga, <email xlink:href="mailto:prudence.kalunga@zcasu.edu.zm">prudence.kalunga@zcasu.edu.zm</email></corresp>
</author-notes>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2026-02-25">
<day>25</day>
<month>02</month>
<year>2026</year>
</pub-date>
<pub-date publication-format="electronic" date-type="collection">
<year>2026</year>
</pub-date>
<volume>9</volume>
<elocation-id>1764283</elocation-id>
<history>
<date date-type="received">
<day>09</day>
<month>12</month>
<year>2025</year>
</date>
<date date-type="rev-recd">
<day>03</day>
<month>02</month>
<year>2026</year>
</date>
<date date-type="accepted">
<day>09</day>
<month>02</month>
<year>2026</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2026 Kalunga, Kunda and Zimba.</copyright-statement>
<copyright-year>2026</copyright-year>
<copyright-holder>Kalunga, Kunda and Zimba</copyright-holder>
<license>
<ali:license_ref start_date="2026-02-25">https://creativecommons.org/licenses/by/4.0/</ali:license_ref>
<license-p>This is an open-access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License (CC BY)</ext-link>. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Maize, a critical staple crop in Zambia, faces persistent threats from foliar diseases such as Gray Leaf Spot, Northern Corn Leaf Blight, and Maize Streak Virus, significantly affecting smallholder productivity. Limited access to expert diagnostics, coupled with complex field conditions including occlusions and variable lighting, necessitates accessible, real-time disease detection systems tailored to local environments. To address this gap, this study first developed a novel field-captured dataset of Zambian maize leaf images, annotated with bounding boxes for disease lesions and labeled by disease type and severity to reflect real-world agri-ecological variability. Building on this dataset, we propose ZamYOLO-Maize, a multi-stage automated diagnostic framework integrating lesion detection, hierarchical disease classification, and severity assessment. A comparative evaluation was conducted using four state-of-the-art object detection models: YOLOv5n, YOLOv8s, YOLOv10s, and YOLOv8n, with performance assessed using precision, recall, F1-score, and inference speed. Experimental results demonstrate that YOLOv10s achieved the highest predictive performance (Precision = 0.997, Recall = 0.999, F1-score = 0.999), while YOLOv8n provided the optimal trade-off for edge deployment, achieving the fastest inference speed (4.65 ms/image) with a competitive F1-score of 0.995. The framework exhibited strong robustness under field variability, confirming its practical applicability. By integrating a locally representative dataset with an efficient deep learning pipeline, this study establishes a scalable foundation for mobile-based maize disease diagnostics, contributing to precision agriculture and supporting food security initiatives in Zambia and comparable agricultural regions.</p>
</abstract>
<kwd-group>
<kwd>deep learning in agriculture</kwd>
<kwd>maize disease detection</kwd>
<kwd>object detection</kwd>
<kwd>precision agriculture</kwd>
<kwd>YOLOv8n</kwd>
</kwd-group>
<funding-group>
<funding-statement>The author(s) declared that financial support was not received for this work and/or its publication.</funding-statement>
</funding-group>
<counts>
<fig-count count="9"/>
<table-count count="6"/>
<equation-count count="5"/>
<ref-count count="52"/>
<page-count count="17"/>
<word-count count="10654"/>
</counts>
<custom-meta-group>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>AI in Food, Agriculture and Water</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<label>1</label>
<title>Introduction</title>
<p>Maize (<italic>Zea mays</italic>) is the principal staple crop in Zambia, fundamental to national food security and the livelihoods of millions of smallholder farmers. Maize production is severely affected by numerous viruses, viroids, fungi, and bacteria, which are the primary causes of maize diseases. Discoloration, rot, scab, blight, necrosis, wilting, and other abnormalities are common signs of infection and are used to identify foliar diseases in maize (<xref ref-type="bibr" rid="ref29">Masood et al., 2023</xref>). Traditional disease detection in Zambia relies on manual scouting, a method that is inherently subjective, slow, and hampered by a scarcity of expert pathologists in rural areas (<xref ref-type="bibr" rid="ref18">Kalunga and Kunda, 2025</xref>; <xref ref-type="bibr" rid="ref40">Rua et al., 2023</xref>). This leads to a critical &#x201C;detection latency,&#x201D; where interventions are often applied too late to prevent significant yield loss (<xref ref-type="bibr" rid="ref31">Mogili and Deepak, 2018</xref>). <xref ref-type="bibr" rid="ref29">Masood et al. (2023)</xref> state that accurate detection of maize leaf diseases is currently a crucial safeguard of maize yield for farmers without professional knowledge.</p>
<p>Manual leaf inspection, the traditional approach to identifying maize diseases, relies on the expertise of agricultural specialists and their understanding of plant pathology. Misinterpreting a disease usually results in ineffective pesticide treatments, which not only contaminate the environment but also worsen the damage to the maize (<xref ref-type="bibr" rid="ref45">Tian et al., 2019</xref>; <xref ref-type="bibr" rid="ref41">Shen et al., 2025</xref>; <xref ref-type="bibr" rid="ref19">Kamaleshkanna et al., 2024</xref>). Zambia&#x2019;s maize output is dominated by smallholder farming, typified by inexpensive hardware, restricted access to professional plant pathology services, and highly variable field conditions such as unregulated lighting, background clutter, and mixed symptom expression. Quick and precise methods are therefore needed to monitor maize crops and manage their diseases. There is a clear and urgent need for rapid, accurate, and scalable diagnostic tools to empower farmers and extension officers with timely information.</p>
<p>Plant disease monitoring and forecasting have recently made extensive use of digital technologies such as remote sensing, global positioning, and geographic information systems (<xref ref-type="bibr" rid="ref27">Liu and Wang, 2021</xref>; <xref ref-type="bibr" rid="ref3">Akhter and Sofi, 2022</xref>). Owing to significant advances in artificial intelligence, traditional plant disease diagnostic techniques are gradually being replaced by automated approaches based on computer vision and machine learning (ML) algorithms (<xref ref-type="bibr" rid="ref46">Vishnoi et al., 2021</xref>). Several automated plant disease detection techniques based on digital images have recently been presented as possible substitutes for manual examination (<xref ref-type="bibr" rid="ref33">Ngugi et al., 2021</xref>). Early agricultural applications of machine learning relied on manually engineered features to enhance decision-making. Despite recent advances in deep learning-based plant disease identification, many current models are trained and validated on datasets collected under controlled or semi-controlled conditions, which restricts their applicability to actual African agricultural situations.</p>
<p>Gray level co-occurrence matrix (GLCM) (<xref ref-type="bibr" rid="ref21">Kaur et al., 2018</xref>), local binary patterns (LBP) (<xref ref-type="bibr" rid="ref37">Pantazi et al., 2019</xref>), scale-invariant feature transform (SIFT) (<xref ref-type="bibr" rid="ref12">Chouhan et al., 2021</xref>), histogram of oriented gradients (HOG) (<xref ref-type="bibr" rid="ref50">Wani et al., 2021</xref>), and other methods have been used in previous work as feature descriptors for describing images. These methods provide a simplified depiction of the plant disease by extracting visual features such as shape, hue, structure, and other statistical properties (<xref ref-type="bibr" rid="ref44">Thakur et al., 2022</xref>). To classify leaf diseases, the extracted characteristics are then used to train machine learning models such as decision trees (DT) (<xref ref-type="bibr" rid="ref36">Panigrahi et al., 2020</xref>), support vector machines (SVM) (<xref ref-type="bibr" rid="ref13">Chung et al., 2016</xref>; <xref ref-type="bibr" rid="ref5">Aravind et al., 2018</xref>), and artificial neural networks (ANN) (<xref ref-type="bibr" rid="ref38">Patil et al., 2017</xref>). Although hand-crafted feature pipelines are simple to use and require little data, they depend on human expertise and considerable processing time. Deep learning (DL) has emerged as a transformative technology in precision agriculture, demonstrating exceptional capability in automating plant disease detection from leaf imagery (<xref ref-type="bibr" rid="ref8">Barbedo, 2019a</xref>; <xref ref-type="bibr" rid="ref25">Li et al., 2021</xref>).</p>
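<p>As a hedged, minimal NumPy sketch (not the implementation used in any of the cited studies), the GLCM idea behind such descriptors can be expressed directly: quantize the gray levels, count co-occurring pixel pairs at a fixed offset, and summarize the normalized matrix with Haralick-style statistics:</p>

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix (GLCM) features for one offset.

    Illustrative sketch only: real pipelines average several
    offsets/angles and compute more statistics.
    """
    # Quantize 8-bit intensities down to a few gray levels.
    q = np.clip((img.astype(np.float64) / 256.0 * levels).astype(int),
                0, levels - 1)
    glcm = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    # Count co-occurring gray-level pairs at offset (dx, dy).
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()  # normalize to a joint probability distribution
    i, j = np.indices((levels, levels))
    contrast = float(np.sum(glcm * (i - j) ** 2))             # texture roughness
    homogeneity = float(np.sum(glcm / (1.0 + (i - j) ** 2)))  # diagonal mass
    return contrast, homogeneity
```

<p>A perfectly uniform patch yields contrast 0 and homogeneity 1; rough lesion texture raises contrast. Feature vectors of this kind would then be fed to a classical classifier such as an SVM or decision tree.</p>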
<p>Convolutional neural networks (CNNs) have repeatedly achieved high accuracy in controlled settings (<xref ref-type="bibr" rid="ref20">Kamilaris and Prenafeta-Bold&#x00FA;, 2018</xref>). However, translating these technologies from research to practical field deployment in countries like Zambia faces significant hurdles. Recent developments in deep learning (DL) and artificial intelligence (AI) have made it possible to create automated plant disease detection frameworks that are faster and more objective than conventional methods, and in controlled settings leaf diseases have been accurately classified using CNNs and contemporary object-detection architectures. For instance, lightweight YOLO-based models have been deployed on devices such as mobile phones and embedded computers to identify leaf disease symptoms in real time (<xref ref-type="bibr" rid="ref2">Ahmad et al., 2024</xref>; <xref ref-type="bibr" rid="ref43">Tang et al., 2023</xref>; <xref ref-type="bibr" rid="ref4">Ali et al., 2023</xref>; <xref ref-type="bibr" rid="ref48">Wang et al., 2024</xref>).</p>
<p>Despite these developments, large gaps remain in applying these technologies to Zambia&#x2019;s smallholder maize systems. First, much current research relies on publicly accessible or laboratory-grade datasets such as PlantVillage, which cannot represent the noise and unpredictability of actual field settings (<xref ref-type="bibr" rid="ref8">Barbedo, 2019a</xref>; <xref ref-type="bibr" rid="ref25">Li et al., 2021</xref>). Models developed on such datasets frequently generalize poorly to real-world farming situations. Second, there are few region-specific datasets that accurately capture the agronomic and environmental realities of Zambia and other sub-Saharan African nations, where crop conditions and disease prevalence differ from those seen in global benchmarks (<xref ref-type="bibr" rid="ref20">Kamilaris and Prenafeta-Bold&#x00FA;, 2018</xref>). Third, even though high-performing deep learning architectures have been developed, many are computationally demanding and inappropriate for deployment in resource-constrained settings, where farmers and extension agents lack access to sophisticated hardware (<xref ref-type="bibr" rid="ref18">Kalunga and Kunda, 2025</xref>; <xref ref-type="bibr" rid="ref14">Ferentinos, 2018</xref>). Finally, the &#x201C;black box&#x201D; nature of complex models hinders adoption, as end-users require transparent and interpretable predictions to trust and act upon an AI&#x2019;s recommendations (<xref ref-type="bibr" rid="ref51">Wu et al., 2020</xref>; <xref ref-type="bibr" rid="ref1">Adadi and Berrada, 2018</xref>).</p>
<p>The environmental complexity found on actual farms, such as uneven lighting, occlusion, and mixed infections, is absent from the laboratory or benchmark datasets used to develop and validate many current models (<xref ref-type="bibr" rid="ref11">Chen et al., 2023</xref>; <xref ref-type="bibr" rid="ref42">Singh et al., 2022</xref>). Most lightweight detection or deep learning (DL) models target generic leaf diseases or other crops rather than maize under African field conditions. To enable deployment on the resource-constrained platforms used by farmers and extension agents, detection accuracy must be balanced against computational efficiency. Although improved YOLO variants (such as YOLO-DBW) have been developed, they frequently raise hardware requirements or complexity. To support scalable, farmer-centered diagnostic tools and enable more accurate disease detection under natural field conditions, a locally curated dataset and validation framework reflecting the realities of Zambian smallholder farms was developed. The purpose of this study was thus to create and validate ZamYOLO-Maize, a lightweight deep learning framework for in-field, real-time detection of maize leaf diseases. The study specifically addressed the following research questions:</p>
<list list-type="order">
<list-item>
<p>Does the suggested YOLOv8n-based framework detect and categorize key maize leaf diseases with state-of-the-art accuracy in a variety of Zambian field conditions?</p>
</list-item>
<list-item>
<p>How does the model compare to existing deep learning algorithms in terms of precision, recall, and on-device inference speed for real-time deployment?</p>
</list-item>
<list-item>
<p>Is the framework resilient to common field challenges such as complex backgrounds, partial occlusions, and variable lighting?</p>
</list-item>
</list>
<p>As a solution to these problems, this paper proposes ZamYOLO-Maize, a lightweight, field-adapted deep learning system built on the YOLOv8n (You Only Look Once version 8 nano) object detection architecture. The YOLOv8n model was chosen for its ideal balance of accuracy and computational efficiency in resource-constrained agricultural situations where deployment on mobile devices or edge-computing platforms is crucial (<xref ref-type="bibr" rid="ref22">Khan et al., 2023</xref>). Its nano variant offers cutting-edge performance with low processing requirements and real-time inference, essential features for field deployment. The proposed framework is designed specifically for the challenges of the Zambian smallholder context, and the work&#x2019;s distinctiveness resides in scientific and practical advances that go well beyond optimizing an existing YOLO architecture. The key contributions of our work are:</p>
<list list-type="order">
<list-item>
<p>We present a new field-captured dataset of maize leaf images, uniquely annotated with severity labels, disease-type classifications, and bounding boxes for disease lesions. This dataset addresses the significant shortage of region-specific, multi-tier agricultural data and serves as a benchmark for future research in automated plant disease identification.</p>
</list-item>
<list-item>
<p>We propose ZamYOLO-Maize, a comprehensive deep learning framework that integrates lesion detection, hierarchical disease classification, severity estimation, and a rule-based treatment advisory system. This is the first comprehensive automated diagnostic pipeline designed specifically for maize diseases in smallholder farming settings.</p>
</list-item>
<list-item>
<p>Through extensive comparative and ablation studies, we demonstrate that our YOLOv8n-based detector achieves an optimal balance of high accuracy (F1-score &#x003E; 0.995) and real-time inference speed (4.65&#x202F;ms/image), proving its practical viability for resource-constrained, mobile-based deployment in field settings.</p>
</list-item>
</list>
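<p>For reference, the precision, recall, and F1 figures quoted in these contributions follow the standard detection definitions; a minimal sketch with illustrative counts (not the study&#x2019;s actual data):</p>

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from detection counts.

    tp: correctly detected lesions; fp: spurious detections;
    fn: missed lesions. The counts are illustrative only.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

<p>For example, 995 true positives with 5 false positives and 5 false negatives give precision, recall, and F1 of 0.995 each.</p>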
<p>By combining field-relevant data with a computationally efficient detection model, ZamYOLO-Maize seeks to offer a workable solution for real-time, scalable disease diagnosis in Zambian smallholder maize farming systems, thereby improving crop management and food security. The rest of the paper is arranged as follows: Section II reviews previous studies on identifying plant diseases, specifically maize crop diseases. Section III explains the adopted methodology and the detailed architecture of the proposed framework, together with the dataset, implementation details, and experimental setup. Section IV presents the results and discussion. Section V concludes the work and suggests future directions.</p>
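<p>The detection&#x2013;classification&#x2013;severity pipeline outlined above can be illustrated with a simple rule-based severity score derived from detector output; the thresholds below are hypothetical placeholders, not the calibrated values used in this study:</p>

```python
def severity_class(lesion_boxes, leaf_area, thresholds=(0.05, 0.20)):
    """Map detected lesion coverage to a coarse severity label.

    lesion_boxes: (x1, y1, x2, y2) detector boxes in pixels; box overlaps
    are ignored here for simplicity, so the ratio upper-bounds coverage.
    thresholds: hypothetical cut-offs on the covered-area fraction.
    """
    covered = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in lesion_boxes)
    ratio = covered / leaf_area
    if ratio < thresholds[0]:
        return "mild", ratio
    if ratio < thresholds[1]:
        return "moderate", ratio
    return "severe", ratio
```

<p>Under these placeholder thresholds, a single 10&#x00D7;10 px lesion on a 10,000 px leaf (1% coverage) would be rated mild; a rule-based advisory layer can then map each label to a recommended intervention.</p>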
</sec>
<sec id="sec2">
<label>2</label>
<title>Related works</title>
<p>In this section, we review the literature on deep learning-based maize disease detection and classification, with an emphasis on YOLO-based techniques and other lightweight strategies. We outline the main developments, their drawbacks, and how ZamYOLO-Maize either builds on them or differs from them.</p>
<p>Deep learning has been used in several studies to identify maize diseases through CNN-based classification. Using both deep (ResNet) and lightweight (MobileNet) architectures, <xref ref-type="bibr" rid="ref8">Barbedo (2019a)</xref> created a two-stage transfer learning technique. Their research showed that even when training data originates from controlled datasets like PlantVillage, transfer learning on field-collected maize leaf images can achieve excellent accuracy (up to 99.11% with MobileNet). <xref ref-type="bibr" rid="ref31">Mogili and Deepak (2018)</xref> developed a dense CNN model that classified maize leaf diseases with over 98% accuracy on controlled datasets. However, both approaches relied on non-field datasets, limiting their adaptability to real-world farming conditions. <xref ref-type="bibr" rid="ref25">Li et al. (2021)</xref> explored hyperspectral imagery for detecting maize leaf spot, demonstrating the potential of spectral features to capture subtle disease symptoms. Although hyperspectral methods offer high precision, their cost and complexity restrict use in smallholder farming systems. <xref ref-type="bibr" rid="ref20">Kamilaris and Prenafeta-Bold&#x00FA; (2018)</xref> introduced a ShuffleNetV2-based CNN optimized for lightweight mobile deployment, highlighting a growing shift toward efficiency-focused architectures for field applications. <xref ref-type="bibr" rid="ref15">Jiang et al. (2024)</xref> used MobileNetV2 to create a CNN-based system for classifying maize diseases. Although their approach targets deployment in resource-constrained situations and covers many leaf disease classes (such as Common Rust, Gray Leaf Spot, and Blight), it performs whole-image classification rather than explicit object localization.</p>
<p>Despite their success in recognition, these classification-based methods frequently lack precise localization (bounding boxes), which is crucial for understanding how disease is distributed across a leaf and for targeted treatment (<xref ref-type="bibr" rid="ref32">Mohanty et al., 2016</xref>).</p>
<p>Deep learning, in particular convolutional neural networks (CNNs), overcame many of the drawbacks of conventional techniques by automatically learning hierarchical feature representations from raw pixel data, signaling a paradigm shift in the field. Using the PlantVillage dataset, the seminal study by <xref ref-type="bibr" rid="ref25">Li et al. (2021)</xref> showed that CNNs such as AlexNet and GoogLeNet could classify 26 crop-disease pairs with expert-level accuracy, far exceeding conventional approaches (<xref ref-type="bibr" rid="ref31">Mogili and Deepak, 2018</xref>).</p>
<p>Further studies solidified CNNs&#x2019; hegemony. Transfer learning, which involves fine-tuning models pre-trained on large-scale datasets like ImageNet on smaller plant disease datasets, has become commonplace. <xref ref-type="bibr" rid="ref14">Ferentinos (2018)</xref> investigated several deep CNN architectures and found that VGG and ResNet models could attain over 99% accuracy on the PlantVillage dataset (<xref ref-type="bibr" rid="ref8">Barbedo, 2019a</xref>). According to <xref ref-type="bibr" rid="ref9">Barbedo (2019b)</xref>, who examined the extensive success of CNNs in several agricultural areas (<xref ref-type="bibr" rid="ref25">Li et al., 2021</xref>), the use of these models went beyond classification to encompass segmentation and severity estimation. However, models trained on laboratory-grade images (e.g., PlantVillage) frequently fail to generalize to field conditions, revealing a substantial &#x201C;domain shift&#x201D; problem (<xref ref-type="bibr" rid="ref20">Kamilaris and Prenafeta-Bold&#x00FA;, 2018</xref>). This is a perennial criticism, as noted by <xref ref-type="bibr" rid="ref28">Liu et al. (2018)</xref>.</p>
<p>Object detection offers a more detailed analysis by both categorizing and localizing several disease cases inside a single image, whereas classification models give a single label to an entire image. In real-world situations, where leaves may display several diseases or symptoms at various times, this is essential. When these models are evaluated using field-acquired imagery, empirical evaluations have demonstrated significant declines in classification and detection accuracy, especially in smallholder scenarios where symptoms vary in severity and co-occur with environmental noise (<xref ref-type="bibr" rid="ref8">Barbedo, 2019a</xref>; <xref ref-type="bibr" rid="ref7">Arsenovic et al., 2019</xref>). These results emphasize the necessity of geographically and agronomically representative datasets to enhance model generalization and assist practical agricultural decision-making, particularly for underrepresented regions like Sub-Saharan Africa.</p>
<p>Early adoption in agriculture used two-stage detectors, such as Faster R-CNN, which first generate region proposals and then classify them. For instance, <xref ref-type="bibr" rid="ref9">Barbedo (2019b)</xref> employed a Faster R-CNN to accurately identify apple diseases. However, the computational complexity of these models frequently led to slow inference times.</p>
<p>The subsequent shift to one-stage detectors, which perform localization and classification in a single pass, provided a better speed-accuracy trade-off. Models such as the You Only Look Once (YOLO) family and the Single Shot MultiBox Detector (SSD) were adopted rapidly. Early iterations, such as YOLOv3, demonstrated real-time capabilities that two-stage detectors lacked and were effectively used to identify pests and diseases in crops like rice and tomatoes (<xref ref-type="bibr" rid="ref35">Pan et al., 2025</xref>; <xref ref-type="bibr" rid="ref49">Wang et al., 2021</xref>; <xref ref-type="bibr" rid="ref26">Liu and Wang, 2020</xref>). This trend was supported by a thorough analysis of object detection advancements (<xref ref-type="bibr" rid="ref7">Arsenovic et al., 2019</xref>; <xref ref-type="bibr" rid="ref19">Kamaleshkanna et al., 2024</xref>), which noted the growing preference for one-stage detectors in applications demanding real-time performance.</p>
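<p>One-stage detectors emit many overlapping candidate boxes in their single pass, which are conventionally pruned with non-maximum suppression (NMS) over intersection-over-union (IoU); a generic sketch of that post-processing step (not code from any cited model) is:</p>

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thr=0.5):
    """Greedy NMS: keep the highest-scoring box, drop lower-scoring
    boxes that overlap it above iou_thr, and repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thr]
    return keep
```

<p>Two near-duplicate lesion boxes collapse to the single higher-confidence detection while a distant box survives, which is what makes a one-stage detector&#x2019;s raw grid of predictions usable as per-lesion counts.</p>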
<p>Whereas most previous research concentrated on image-level classification, object detection frameworks like YOLO (You Only Look Once), SSD (Single Shot Detector), and Faster R-CNN have gained popularity for their capacity to concurrently localize and classify several diseases (<xref ref-type="bibr" rid="ref14">Ferentinos, 2018</xref>; <xref ref-type="bibr" rid="ref41">Shen et al., 2025</xref>). <xref ref-type="bibr" rid="ref22">Khan et al. (2023)</xref> used YOLO-based models to identify maize leaf disease and, on a mobile-based system, recorded an accuracy of 99.04% using YOLOv8n. Nevertheless, their dataset was not region-specific and lacked the environmental diversity typical of smallholder farms in Africa. An enhanced YOLOv8 model was suggested by <xref ref-type="bibr" rid="ref52">Yang et al. (2024)</xref> to identify maize leaf spot disease in actual field settings. To improve feature extraction, their version integrates a Global Attention Mechanism (GAM) and a Slim-neck module into the network. Under real-world conditions, their upgraded version outperformed baseline YOLOv8 with a precision of 95.18%, recall of 89.11%, and mAP@50 of 94.65%. Similarly, <xref ref-type="bibr" rid="ref47">Waheed et al. (2020)</xref> presented YOLOv8-GO, a lightweight version of YOLOv8 with an omni-dimensional dynamic convolution (ODConv) module and an additional Global Attention Mechanism before the SPPF layer. With a mAP@50 of 88.4% and a very high FPS, this architecture strikes a balance between accuracy and computational cost, making it suitable for real-time field detection.
Recent object-detection research has progressed rapidly: Ultralytics&#x2019; YOLOv8 has become widely adopted for real-time applications due to its efficient backbone and streamlined training utilities, and newer YOLO family variants (e.g., YOLOv9) propose architectural and training improvements such as programmable gradient information (PGI) and GELAN for improved parameter utilization (<xref ref-type="bibr" rid="ref16">Jocher et al., 2023</xref>; <xref ref-type="bibr" rid="ref42">Singh et al., 2022</xref>).</p>
<p>Concurrently, transformer-based detectors (DETR and its variations) have shown great performance in complex settings, although they frequently incur higher computing cost and longer training times, making YOLO variants still suitable for mobile agricultural deployments (<xref ref-type="bibr" rid="ref10">Carion et al., 2020</xref>). Several recent studies have already deployed YOLOv8 to plant disease diagnosis, revealing promising mAP and inference-time trade-offs under field settings, which highlights the significance of our Zambia-specific evaluation and optimization efforts (<xref ref-type="bibr" rid="ref11">Chen et al., 2023</xref>; <xref ref-type="bibr" rid="ref4">Ali et al., 2023</xref>).</p>
<p>These YOLO-based methods demonstrate how the trade-off between accuracy and inference speed can be improved through architectural refinements (attention, efficient convolution, feature fusion). However, most current work is constrained in the following ways: (1) it frequently optimizes for raw performance rather than the computational efficiency needed for edge deployment; (2) it has often not been validated on datasets that accurately reflect the variability of smallholder farms in sub-Saharan Africa; and (3) it has rarely undergone rigorous field testing. Other studies on lightweight deep learning for plant disease detection go beyond YOLO.</p>
<p>For instance, <xref ref-type="bibr" rid="ref34">Osouli et al. (2022)</xref> presented a successful method for identifying maize diseases by integrating pre-trained MobileNetV2 and Inception networks with data augmentation and transfer learning. On a small amount of training data, they reported an accuracy of about 97%.</p>
<p>Explainable models that integrate CNN architectures with Vision Transformers have been developed, although they are not exclusive to maize. For example, the PlantXViT model employs a hybrid CNN-ViT architecture to identify plant diseases (including maize) while offering interpretability through methods such as Grad-CAM and LIME. In agricultural settings this explainability aids acceptance and trust, but it usually comes with a higher computational cost. Although two-stage architectures like Faster R-CNN frequently achieve excellent accuracy and have been used in previous research, their computational expense makes them unsuitable for the real-time, in-field deployment scenarios that are the focus of contemporary precision agriculture (<xref ref-type="bibr" rid="ref43">Tang et al., 2023</xref>).</p>
<p>The recent development of one-stage detectors, especially the YOLO family, has narrowed the accuracy gap while preserving remarkable speed (<xref ref-type="bibr" rid="ref51">Wu et al., 2020</xref>). From version 5 to the most recent version 10, the YOLO architecture has undergone rapid iteration, with advances in neck architecture, backbone design, and training techniques that improve both efficiency and performance (<xref ref-type="bibr" rid="ref1">Adadi and Berrada, 2018</xref>; <xref ref-type="bibr" rid="ref6">Arrieta et al., 2020</xref>). This emphasis on efficiency is essential for the Zambian context and comparable settings, where solutions must run on inexpensive, resource-constrained hardware (<xref ref-type="bibr" rid="ref39">Redmon et al., 2016</xref>). To meet the urgent need for solutions that are both accurate and practically deployable, this work deliberately concentrates on benchmarking recent, efficient models from the YOLO family, namely YOLOv5, YOLOv8n, YOLOv8s, and YOLOv10. By performing a thorough comparative analysis of these models on a maize disease dataset unique to Zambia, this paper offers crucial insights for choosing the best architecture for edge deployment in real agricultural settings.</p>
<p>Three primary research gaps emerge from the reviewed literature: (1) many models, even accurate ones, are trained on datasets that do not represent African smallholder farming settings; (2) although YOLOv8 and other detection models are strong, deployment on mobile or edge devices requires additional optimization (attention modules, lightweight convolutions, effective feature fusion).</p>
<p>Moreover, pure classification (CNN-based) models lack spatial localization, which is essential for mapping disease severity and directing treatment. (3) Few maize disease models can currently explain their predictions to agronomists and farmers, which is necessary for the adoption of AI in agriculture.</p>
<p>ZamYOLO-Maize sets itself apart by specifically addressing these gaps: it uses YOLOv8n for computational efficiency, integrates design decisions that maintain high accuracy while remaining deployable on edge and mobile devices, and is trained on a field-collected maize leaf dataset unique to Zambia. In contrast to most previous work, it seeks to promote trust and adoption among local farmers and extension agents through a balanced, practical detection system.</p>
</sec>
<sec sec-type="methods" id="sec3">
<label>3</label>
<title>Methodology</title>
<p>This section covers the methodological framework used to build and assess ZamYOLO-Maize, an enhanced deep-learning model for automated maize leaf disease detection under Zambian field conditions. The methodology incorporates rigorous dataset preparation, a reproducible training process, architectural enhancements, statistical analysis, ablation studies, and comparative benchmarking against state-of-the-art detectors.</p>
<sec id="sec4">
<label>3.1</label>
<title>Data set preparation</title>
<p>The collection includes annotated images of four different maize leaf conditions: Northern Leaf Blight (NLB), Gray Leaf Spot (GLS), Maize Streak Virus (MSV), and Healthy. Images were taken under varied real-field conditions in Zambia, including changeable illumination, shadows, occlusions, and complicated backgrounds, to improve model robustness in practical deployments. All images were manually inspected for quality, and corrupted or ambiguous samples were eliminated. Bounding boxes were annotated in YOLO format (x_center, y_center, width, height) normalized to [0, 1].</p>
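<p>The conversion from pixel-space corner coordinates to the normalized YOLO format described above can be sketched as follows; the box and image dimensions in the example are hypothetical:</p>

```python
def to_yolo_bbox(x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-space corner box to YOLO (x_center, y_center, width, height) in [0, 1]."""
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return x_center, y_center, width, height

# Hypothetical 128 x 64-pixel lesion box with top-left corner (100, 200) in a 640 x 640 image
print(to_yolo_bbox(100, 200, 228, 264, 640, 640))  # (0.25625, 0.3625, 0.2, 0.1)
```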
<p>To ensure a robust and unbiased evaluation, the dataset was partitioned into:</p>
<list list-type="simple">
<list-item>
<p>Training set (70%), used for model learning.</p>
</list-item>
<list-item>
<p>Validation set (20%), used for hyperparameter tuning and early stopping.</p>
</list-item>
<list-item>
<p>Test set (10%), held out for final performance evaluation (<xref ref-type="fig" rid="fig1">Figure 1</xref>).</p>
</list-item>
</list>
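<p>The class-stratified 70/20/10 partition can be sketched in Python; the function name, input structure, and fixed seed here are illustrative rather than the exact implementation used in this study:</p>

```python
import random
from collections import defaultdict

def stratified_split(samples, train=0.7, val=0.2, seed=42):
    """Split (image, class) pairs into train/val/test while preserving class proportions."""
    by_class = defaultdict(list)
    for image, label in samples:
        by_class[label].append(image)
    rng = random.Random(seed)  # fixed seed for reproducibility
    splits = {"train": [], "val": [], "test": []}
    for label, images in by_class.items():
        rng.shuffle(images)
        n = len(images)
        n_train, n_val = int(n * train), int(n * val)
        splits["train"] += [(img, label) for img in images[:n_train]]
        splits["val"] += [(img, label) for img in images[n_train:n_train + n_val]]
        splits["test"] += [(img, label) for img in images[n_train + n_val:]]  # remainder
    return splits
```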
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>System workflow from raw in-field image acquisition &#x2192; annotation &#x0026; preprocessing &#x2192; YOLOv8n training &#x2192; model inference &#x0026; deployment.</p>
</caption>
<graphic xlink:href="frai-09-1764283-g001.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Flowchart illustrating the process of maize disease detection and classification using the ZMLD dataset, data splitting, deep learning model architecture YOLOv8n, model training, performance evaluation, and assessment metrics such as accuracy, precision, recall, and F1-score. An example maize leaf with visible spots is shown at the top.</alt-text>
</graphic>
</fig>
<p>To maintain distributional balance across disease categories, splits were carried out using class-stratified sampling. A bespoke dataset was created from images of maize leaves gathered on farms in Zambia&#x2019;s Lusaka, Central, Eastern, and Northern provinces. The dataset comprises three disease classes plus a healthy class. Mobile phone cameras were used to take pictures in the field to replicate actual operational settings with fluctuating lighting, foliage occlusions, wind-induced motion blur, and mixed disease symptoms. The final dataset consists of 19,990 images across the four classes: Gray Leaf Spot (GLS), 5,000 samples; Northern Leaf Blight (NLB), 4,990 samples; Maize Streak Virus (MSV), 5,000 samples; and Healthy, 5,000 samples. The three diseases are the main classes that frequently impair Zambian maize output. Images were taken at various times of day, at different plant growth stages, and under varied farm management practices to guarantee diversity.</p>
<p>This region-specific dataset addresses the lack of localized maize disease datasets in sub-Saharan Africa, where environmental conditions differ significantly from those in controlled datasets such as PlantVillage. The experiments used the Zambia Maize Leaf Dataset, a publicly accessible annotated dataset with 19,990 images in four classes (<xref ref-type="bibr" rid="ref17">Kalunga, 2026</xref>; <xref ref-type="table" rid="tab1">Table 1</xref>).</p>
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption>
<p>Summary of image collection details.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Category</th>
<th align="left" valign="top">Details</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Camera models</td>
<td align="left" valign="top">Canon EOS Rebel T7 DSLR Camera with 18-55&#x202F;mm Lens. Samsung A10 Phone: 6.2-inch HD&#x202F;+&#x202F;Infinity-V Display with a 720&#x202F;&#x00D7;&#x202F;1,520 resolution; 155.6&#x202F;mm&#x202F;&#x00D7;&#x202F;75.6&#x202F;mm&#x202F;&#x00D7;&#x202F;7.9&#x202F;mm and weighs 168&#x202F;g; 32&#x202F;GB internal storage, expandable up to 512&#x202F;GB via MicroSD and 2&#x202F;GB RAM. It has a non-removable 3,400 mAh battery</td>
</tr>
<tr>
<td align="left" valign="top">Temporal distribution</td>
<td align="left" valign="top">Images captured at various times of day; precise timestamp embedded in image metadata.</td>
</tr>
<tr>
<td align="left" valign="top">Growth stages</td>
<td align="left" valign="top">Multiple crop growth stages represented, varying between 1 and 4&#x202F;months</td>
</tr>
<tr>
<td align="left" valign="top">Geographic distribution</td>
<td align="left" valign="top">Lusaka, Central, Northern, and Eastern provinces.</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The collection includes photos taken in four provinces (Lusaka, Central, Northern, and Eastern) by agriculture extension officers using Samsung smartphones and Canon cameras. The photos were captured at different times of day, with timestamps embedded in the image metadata. The crop growth stages represented range from one to four months.</p>
</sec>
<sec id="sec5">
<label>3.2</label>
<title>Dataset annotation</title>
<p>A Python script, developed on a local machine using the Cursor IDE, was created to annotate the dataset and draw bounding boxes around disease areas in each image. Algorithm 1 was used to collect the raw YOLO-format annotations and transform them into a structured YAML database. The program methodically processes each YOLO annotation file, arranges all bounding box coordinates into a single format, and translates numerical class IDs into human-readable disease names, creating an annotations.yaml file. This YAML file facilitates efficient dataset loading and preprocessing in later model training by providing a machine-readable index connecting each image to its full set of disease lesion annotations.</p>
<p>The performed steps for dataset annotation aggregation and formatting are shown in Algorithm 1 below:</p>
<statement id="algo1" content-type="algorithm">
<label>ALGORITHM 1</label>
<p>Dataset annotation aggregation and formatting.<preformat>Initialize empty list dataset &#x2190; []
Define class_labels &#x2190; [&#x2018;gray leaf spot&#x2019;, &#x2018;maize streak virus&#x2019;, &#x2018;northern leaf blight&#x2019;, &#x2018;healthy&#x2019;]
FOR each filename in LIST_FILES(yolo_labels_dir) DO
IF filename ends with &#x2018;.txt&#x2019; THEN
image_name &#x2190; REPLACE(filename, &#x2018;.txt&#x2019;, &#x2018;.jpg&#x2019;)
image_path &#x2190; JOIN(images_dir, image_name)
annotation_list &#x2190; []
Open label_file &#x2190; OPEN(JOIN(yolo_labels_dir, filename))
FOR each line in READ_LINES(label_file) DO
Parse: class_id, x_center, y_center, width, height &#x2190;
SPLIT(line) and CONVERT_TO_FLOAT
Create annotation_entry &#x2190; {
&#x2018;class&#x2019;: class_labels[INTEGER(class_id)],
&#x2018;x_center&#x2019;: x_center,
&#x2018;y_center&#x2019;: y_center,
&#x2018;width&#x2019;: width,
&#x2018;height&#x2019;: height}
APPEND annotation_entry to annotation_list
END FOR
CLOSE(label_file)
Create image_record &#x2190; {
&#x2018;image&#x2019;: image_path,
&#x2018;annotations&#x2019;: annotation_list}
APPEND image_record to dataset
END IF
END FOR
Open yaml_file &#x2190; OPEN(&#x2018;annotations.yaml&#x2019;, write_mode)
YAML_DUMP(dataset, yaml_file, sort_keys&#x202F;=&#x202F;False)
CLOSE(yaml_file)
RETURN &#x2018;annotations.yaml created successfully&#x2019;</preformat>
</p>
</statement>
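<p>A minimal Python rendering of Algorithm 1 might look like the following; the directory paths are placeholders, and writing the final annotations.yaml from the returned list is left to PyYAML's <monospace>yaml.safe_dump</monospace>:</p>

```python
import os

CLASS_LABELS = ['gray leaf spot', 'maize streak virus', 'northern leaf blight', 'healthy']

def parse_label_file(lines, image_path, class_labels=CLASS_LABELS):
    """Turn the lines of one YOLO .txt label file into the record structure of Algorithm 1."""
    annotations = []
    for line in lines:
        # each line: "<class_id> <x_center> <y_center> <width> <height>"
        class_id, x_center, y_center, width, height = line.split()
        annotations.append({
            'class': class_labels[int(class_id)],  # numeric ID -> human-readable name
            'x_center': float(x_center),
            'y_center': float(y_center),
            'width': float(width),
            'height': float(height),
        })
    return {'image': image_path, 'annotations': annotations}

def build_dataset(yolo_labels_dir, images_dir):
    """Aggregate every .txt label file into one list of records (the body of Algorithm 1)."""
    dataset = []
    for filename in sorted(os.listdir(yolo_labels_dir)):
        if filename.endswith('.txt'):
            image_path = os.path.join(images_dir, filename.replace('.txt', '.jpg'))
            with open(os.path.join(yolo_labels_dir, filename)) as f:
                dataset.append(parse_label_file(f.read().splitlines(), image_path))
    return dataset  # yaml.safe_dump(dataset, fh, sort_keys=False) then writes annotations.yaml
```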
</sec>
<sec id="sec6">
<label>3.3</label>
<title>Data augmentation</title>
<p>Several augmentation strategies, including rotation, brightness/contrast modification, random scaling, and horizontal and vertical flipping, were used to improve model generalization and boost dataset diversity. These augmentations mimic field variability and improve robustness.</p>
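<p>When geometric augmentations are applied, the YOLO box coordinates must be transformed consistently with the image (photometric changes such as brightness or contrast leave boxes untouched). A minimal sketch of the flip transforms on normalized boxes:</p>

```python
def hflip_bbox(box):
    """Horizontal flip mirrors x_center; width and height are unchanged."""
    x, y, w, h = box
    return (1.0 - x, y, w, h)

def vflip_bbox(box):
    """Vertical flip mirrors y_center."""
    x, y, w, h = box
    return (x, 1.0 - y, w, h)

# A box left of center moves to the right of center after a horizontal flip
print(hflip_bbox((0.25, 0.5, 0.2, 0.1)))  # (0.75, 0.5, 0.2, 0.1)
```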
</sec>
<sec id="sec7">
<label>3.4</label>
<title>Model architecture</title>
<p>The proposed ZamYOLO-Maize architecture is an end-to-end diagnostic pipeline designed to interpret field-captured maize leaf photos and provide full disease reports. As demonstrated in <xref ref-type="fig" rid="fig2">Figure 2</xref> and detailed in Algorithm 2, the system includes sequential and parallel modules for detection, classification, severity evaluation, and treatment advice.</p>
<statement id="algo2" content-type="algorithm">
<label>ALGORITHM 2</label>
<p>Decision logic and sequential steps of the ZamYOLO-Maize workflow.<preformat><bold>Input</bold>
: Raw maize leaf images
<bold>Output:</bold> Trained ZamYOLO-Maize model for deployment
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;<bold>Data Acquisition</bold>&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;1.1 Collect maize leaf images from field conditions
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;1.2 Store images in structured dataset folders
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;<bold>Data Preprocessing</bold>&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;2.1 For each image in dataset:
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;a. Resize image to 640&#x202F;&#x00D7;&#x202F;640 resolution
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;b. Apply augmentation (flip, rotate, scale, color-jitter)
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;c. Annotate bounding boxes and class labels
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;2.2 Save processed images into Zambia Maize Leaf Dataset (ZMLD)
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;<bold>Model Initialization (YOLOv8n Base)</bold>&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;3.1 Load YOLOv8n backbone
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;3.2 Initialize model components:
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;&#x2022; Stem layer
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;&#x2022; Backbone
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;&#x2022; Neck (FPN&#x202F;+&#x202F;PAN)
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;&#x2022; Detection head
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;<bold>Model Training</bold>&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;4.1 Split dataset into train/validation/test sets
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;4.2 For each training epoch <italic>E</italic>:&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;a. Feed batch of preprocessed images
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;b. Extract multi-scale features through backbone
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;c. Fuse features through FPN/PAN
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;d. Predict bounding boxes &#x0026; class scores
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;e. Compute losses (cls&#x202F;+&#x202F;box + obj)
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;f. Backpropagate gradients
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;g. Update model weights
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;4.3 Evaluate performance using precision, recall, F1, and mAP@50
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;<bold>Model Optimization (Optional Enhancements)</bold>&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;5.1 <italic>If applying channel pruning:</italic>&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;a. Identify low-importance channels
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;b. Remove redundant channels
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;5.2 <italic>If applying knowledge distillation:</italic>&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x2003;&#x2003;a. Train student model using teacher predictions
&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;<bold>Deployment</bold>&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;&#x00A0;6.1 Export trained ZamYOLO-Maize model
6.2 Deploy on mobile device for field-level inference
<bold>End Algorithm</bold></preformat>
</p>
</statement>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Detailed workflow of the ZamYOLO-Maize diagnostic framework showing decision logic and parallel processing.</p>
</caption>
<graphic xlink:href="frai-09-1764283-g002.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Detailed workflow of the ZamYOLO-Maize diagnostic framework showing decision logic and parallel processing.</alt-text>
</graphic>
</fig>
<p>Ultralytics&#x2019; YOLOv8n architecture was selected for offering the best balance between detection accuracy and computational performance (<xref ref-type="bibr" rid="ref49">Wang et al., 2021</xref>). The YOLOv8n-based architecture (ZamYOLO-Maize) consists of three parts. The backbone (feature extraction) uses a CSPDarknet structure with C2f modules and SPPF (Spatial Pyramid Pooling &#x2013; Fast) to extract multi-scale features from raw images. The neck (feature fusion) combines PANet and FPN for top-down and bottom-up feature fusion, producing the P3, P4, and P5 multi-scale output maps. The detection head is decoupled, separately predicting bounding box coordinates, objectness scores, and class probabilities; this separation improves convergence speed and detection accuracy (<xref ref-type="bibr" rid="ref1">Adadi and Berrada, 2018</xref>). YOLOv8n is well suited to small-farm deployments because it uses less GPU memory and provides faster inference than heavier variants (YOLOv8m, YOLOv8x) (<xref ref-type="fig" rid="fig3">Figures 3</xref>, <xref ref-type="fig" rid="fig4">4</xref>).</p>
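<p>The P3, P4, and P5 maps correspond to the standard YOLOv8 backbone strides of 8, 16, and 32, so their spatial sizes follow directly from the input resolution. A small arithmetic sketch (not part of the Ultralytics API):</p>

```python
def head_grid_sizes(img_size=640, strides=(8, 16, 32)):
    """P3/P4/P5 feature-map sizes for a square input; strides 8/16/32 are standard in YOLOv8."""
    return {"P%d" % i: img_size // s for i, s in zip((3, 4, 5), strides)}

# For the 640 x 640 inputs used in this work:
print(head_grid_sizes())  # {'P3': 80, 'P4': 40, 'P5': 20}
```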
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption>
<p>Schematic diagram of the proposed YOLOv8n-based maize leaf disease detection workflow.</p>
</caption>
<graphic xlink:href="frai-09-1764283-g003.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Flowchart diagram illustrating a machine learning pipeline for classifying maize leaf diseases. Steps shown are data acquisition, preprocessing using the Zambia Maize Leaf Dataset, YOLOv8n model training with backbone and FPN|PAN modules, followed by mobile deployment.</alt-text>
</graphic>
</fig>
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption>
<p>Architecture of the proposed YOLOv8n maize leaf disease detection model.</p>
</caption>
<graphic xlink:href="frai-09-1764283-g004.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Flowchart diagram showing an image detection pipeline: an input image enters a backbone with CSPD and C2f layers, passes to a neck with FPN and PAN, then to a detection head yielding bounding box regression, classification, and objectness, and outputs bounding boxes, class labels, and confidence scores.</alt-text>
</graphic>
</fig>
</sec>
<sec id="sec8">
<label>3.5</label>
<title>Loss functions</title>
<p>The total loss <inline-formula>
<mml:math id="M1">
<mml:mi>L</mml:mi>
<mml:mo>_</mml:mo>
<mml:mtext mathvariant="italic">total</mml:mtext>
</mml:math>
</inline-formula> used in YOLOv8n combines two components. The first is the CIoU (Complete Intersection over Union) loss for bounding box regression, which improves bounding box accuracy by accounting for:</p>
<list list-type="bullet">
<list-item>
<p>IoU overlap</p>
</list-item>
<list-item>
<p>Distance between center points</p>
</list-item>
<list-item>
<p>Aspect ratio consistency</p>
</list-item>
</list>
<p>It accelerates convergence and improves localization in complex leaf shapes.</p>
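<p>The three CIoU terms listed above (overlap, center distance, aspect-ratio consistency) can be made concrete in a short sketch over corner-format boxes; the epsilon guard is a common implementation detail rather than part of the formal definition:</p>

```python
import math

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2) corner coordinates."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def ciou(a, b, eps=1e-9):
    """CIoU = IoU - center-distance penalty - aspect-ratio penalty."""
    i = iou(a, b)
    # squared distance between box centers
    d2 = ((a[0] + a[2]) / 2 - (b[0] + b[2]) / 2) ** 2 + ((a[1] + a[3]) / 2 - (b[1] + b[3]) / 2) ** 2
    # squared diagonal of the smallest enclosing box
    c2 = (max(a[2], b[2]) - min(a[0], b[0])) ** 2 + (max(a[3], b[3]) - min(a[1], b[1])) ** 2 + eps
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((a[2] - a[0]) / (a[3] - a[1]))
                              - math.atan((b[2] - b[0]) / (b[3] - b[1]))) ** 2
    alpha = v / (1 - i + v + eps)
    return i - d2 / c2 - alpha * v
```

A perfect prediction yields CIoU = 1, and the training loss is typically taken as 1 - CIoU.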
<p>The second component is the Binary Cross-Entropy (BCE) loss, used for classification and objectness:</p>
<list list-type="bullet">
<list-item>
<p>Objectness prediction</p>
</list-item>
<list-item>
<p>Class label prediction</p>
</list-item>
</list>
<p>BCE is preferred for multi-label environments and improves stability when detecting multiple diseases within a single leaf.</p>
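<p>As a worked illustration of why BCE suits the multi-label setting, the loss is applied independently to each per-class sigmoid output, so several diseases on one leaf can all be positive targets. A minimal sketch (not the Ultralytics implementation):</p>

```python
import math

def bce(p, y, eps=1e-12):
    """Binary cross-entropy for one predicted probability p and binary target y."""
    p = min(max(p, eps), 1 - eps)  # clamp for numerical stability
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def multilabel_bce(probs, targets):
    """Mean BCE over independent per-class outputs, as in a multi-label classification branch."""
    return sum(bce(p, y) for p, y in zip(probs, targets)) / len(probs)
```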
<p><xref ref-type="fig" rid="fig5">Figure 5</xref> demonstrates the architecture of the ZamYOLO-Maize framework. The pipeline starts by pre-processing field images, then uses our modified YOLOv8n model (ZamYOLO Detector) to detect lesions. The detected regions of interest (ROIs) are passed simultaneously to a hierarchical CNN for disease classification and a regression network for severity assessment. The confidence scores from detection and classification are merged in a weighted decision module. Based on the combined disease type and severity level, a rule-based treatment advisory engine proposes specific treatment. The system provides a structured diagnostic report with bounding box coordinates, disease descriptions, severity scores, confidence metrics, and treatment recommendations.</p>
<fig position="float" id="fig5">
<label>Figure 5</label>
<caption>
<p>Maize leaf notation summary.</p>
</caption>
<graphic xlink:href="frai-09-1764283-g005.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Maize leaf notation summary.</alt-text>
</graphic>
</fig>
</sec>
<sec id="sec9">
<label>3.6</label>
<title>Proposed framework and experimental setup</title>
<p>The YOLOv8n architecture, which is optimized for lightweight, real-time performance in resource-constrained agricultural applications, is the foundation of the suggested detection system, ZamYOLO-Maize.</p>
<sec id="sec10">
<label>3.6.1</label>
<title>Training and optimization</title>
<p>We applied the YOLOv8n algorithm to the Zambian Maize Leaf Dataset, tuning the batch size, learning rate, number of epochs, and training optimizer to achieve the best results. The proposed ZamYOLO-Maize framework was implemented in Python in Google Colab Pro, with all necessary packages and libraries installed. Training was conducted using the Ultralytics YOLOv8 framework in a Google Colab Pro+ environment with an NVIDIA A100 GPU. After being split into training (70%), validation (20%), and testing (10%) subsets, the annotated dataset was exported in YOLOv8 format (a YAML configuration with class names and image locations).</p>
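<p>The exported YOLOv8 dataset configuration takes the following general shape; the paths and root name shown here are placeholders rather than the actual ZMLD layout:</p>

```yaml
# Hypothetical Ultralytics dataset config (data.yaml); actual paths depend on the ZMLD layout
path: zmld            # dataset root
train: images/train   # 70% split
val: images/val       # 20% split
test: images/test     # 10% split
names:
  0: gray leaf spot
  1: maize streak virus
  2: northern leaf blight
  3: healthy
```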
</sec>
<sec id="sec11">
<label>3.6.2</label>
<title>Software and libraries</title>
<p>Python 3.10.12 (the Google Colab default) was used to set up the experimental environment. Key libraries included OpenCV (opencv-python) for image processing, PyTorch (version 2.1.0&#x202F;+&#x202F;cu118) as the core deep learning framework with CUDA 11.8 support, and Ultralytics YOLO (version 8.0.196) for model architecture and training. Seaborn, NumPy, Matplotlib, and pandas were used for data processing and visualization. The Roboflow and Kaggle packages were installed to manage the dataset.</p>
</sec>
<sec id="sec12">
<label>3.6.3</label>
<title>Training hyperparameters</title>
<p>Early stopping and checkpointing (based on the best validation mAP) were used to prevent overfitting.</p>
</sec>
</sec>
<sec id="sec13">
<label>3.7</label>
<title>Evaluation metrics</title>
<p>The model&#x2019;s performance was evaluated using standard object-detection metrics: precision, recall, F1-score, IoU, mAP@50, and mAP@50&#x2013;95. Precision (P) is the ability to avoid false positives, and Recall (R) is the ability to detect all true positives. The F1-score is the balance between P and R. mAP@50 is the mean Average Precision at an IoU threshold of 0.50 (i.e., a predicted bounding box is considered correct if its Intersection over Union with the ground truth is at least 50%). mAP@50&#x2013;95 is the mean Average Precision averaged over IoU thresholds from 0.5 to 0.95 in steps of 0.05. IoU (Intersection over Union) measures the overlap between predicted and ground-truth bounding boxes and ranges from 0 to 1 (<xref ref-type="table" rid="tab2">Table 2</xref>).</p>
<disp-formula id="E1">
<mml:math id="M2">
<mml:mi>IoU</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mtext>Area of overlap</mml:mtext>
<mml:mtext>Area of Union</mml:mtext>
</mml:mfrac>
</mml:math>
</disp-formula>
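<p>The IoU definition above and the @50&#x2013;95 threshold sweep can be combined in a small sketch; the example boxes are hypothetical:</p>

```python
def iou(a, b):
    """Intersection over Union for corner-format boxes (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

# mAP@50 counts a prediction as correct at IoU >= 0.50;
# mAP@50-95 averages AP over thresholds 0.50, 0.55, ..., 0.95
thresholds = [0.50 + 0.05 * k for k in range(10)]
overlap = iou((0, 0, 10, 10), (2, 0, 12, 10))  # shifted copy: intersection 80, union 120
hits = sum(overlap >= t for t in thresholds)   # correct at 4 of the 10 thresholds
```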
<table-wrap position="float" id="tab2">
<label>Table 2</label>
<caption>
<p>Software libraries and versions used in the experimental environment.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Software/Library</th>
<th align="center" valign="top">Version</th>
<th align="left" valign="top">Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle">Python</td>
<td align="center" valign="middle">3.10.12</td>
<td align="left" valign="middle">Primary programming language</td>
</tr>
<tr>
<td align="left" valign="middle">PyTorch</td>
<td align="center" valign="middle">2.1.0&#x202F;+&#x202F;cu118</td>
<td align="left" valign="middle">Core deep learning framework</td>
</tr>
<tr>
<td align="left" valign="middle">Ultralytics YOLO</td>
<td align="center" valign="middle">8.0.196</td>
<td align="left" valign="middle">Object detection model training</td>
</tr>
<tr>
<td align="left" valign="middle">OpenCV</td>
<td align="center" valign="middle">(Latest from pip)</td>
<td align="left" valign="middle">Image processing</td>
</tr>
<tr>
<td align="left" valign="middle">CUDA Toolkit</td>
<td align="center" valign="middle">11.8</td>
<td align="left" valign="middle">GPU acceleration support</td>
</tr>
<tr>
<td align="left" valign="middle">Pandas/NumPy</td>
<td align="center" valign="middle">(Latest from pip)</td>
<td align="left" valign="middle">Data manipulation and numerical operations</td>
</tr>
<tr>
<td align="left" valign="middle">Matplotlib/Seaborn</td>
<td align="center" valign="middle">(Latest from pip)</td>
<td align="left" valign="middle">Results visualization</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>These metrics align with standard practices in object detection research and were computed as follows:</p>
<disp-formula id="E2">
<mml:math id="M3">
<mml:mtext>Precision</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mi mathvariant="italic">TP</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">TP</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">FP</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:math>
</disp-formula>
<disp-formula id="E3">
<mml:math id="M4">
<mml:mtext>Recall</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mi mathvariant="italic">TP</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">TP</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">FN</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:math>
</disp-formula>
<disp-formula id="E4">
<mml:math id="M5">
<mml:mtext>Accuracy</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="italic">TP</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">TN</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">TP</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">TN</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">FP</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">FN</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:math>
</disp-formula>
<disp-formula id="E5">
<mml:math id="M6">
<mml:mi mathvariant="normal">F</mml:mi>
<mml:mn>1</mml:mn>
<mml:mspace width="0.25em"/>
<mml:mtext>Score</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>&#x2217;</mml:mo>
<mml:mi>R</mml:mi>
<mml:mo>&#x00D7;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>R</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:math>
</disp-formula>
<p>Here, TP stands for true positives, the number of positive samples whose target disease class was accurately categorized. FP stands for false positives, the number of negative samples predicted as positive. FN stands for false negatives, the number of positive samples whose disease class was incorrectly assessed. Finally, TN stands for true negatives, the samples the model correctly predicted as the negative class (<xref ref-type="fig" rid="fig6">Figure 6</xref>).</p>
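<p>From these four counts, the metrics defined above reduce to a few lines; the counts in the example are illustrative only:</p>

```python
def detection_metrics(tp, fp, fn, tn=0):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}

# Hypothetical counts: 8 true positives, 2 false positives, 2 false negatives, 8 true negatives
print(detection_metrics(8, 2, 2, 8))
```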
<fig position="float" id="fig6">
<label>Figure 6</label>
<caption>
<p>Samples from the three maize diseases.</p>
</caption>
<graphic xlink:href="frai-09-1764283-g006.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Three maize leaves illustrate different diseases: Gray leaf spot shows elongated gray-green lesions from fungal infection, maize leaf blight has irregular tan-brown blotches indicating fungal infection, and maize streak virus displays fine pale yellow streaks reflecting viral infection.</alt-text>
</graphic>
</fig>
</sec>
<sec id="sec14">
<label>3.8</label>
<title>Visual results</title>
<p><xref ref-type="fig" rid="fig7">Figure 7</xref> shows representative outputs of the ZamYOLO-Maize model for the three maize diseases shown in <xref ref-type="fig" rid="fig6">Figure 6</xref>. Under complicated field conditions, the model precisely locates lesion sites with bounding boxes that nearly match observed symptoms. The high mAP@50 of 99.5% is readily explained by this exact lesion-level localization, since precise spatial alignment raises IoU scores. Consistent visual performance across disease classes indicates the robustness of the proposed system for field-based maize disease identification (<xref ref-type="table" rid="tab3">Table 3</xref>).</p>
<fig position="float" id="fig7">
<label>Figure 7</label>
<caption>
<p>Visual results showing classified disease regions using the proposed ZamYOLO-Maize model.</p>
</caption>
<graphic xlink:href="frai-09-1764283-g007.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Collage of close-up corn leaves showing visual symptoms of maize streak virus (MSV) and grey leaf spot diseases. Leaves display yellow or white streaks and brown lesions, each labeled with file names and identification tags.</alt-text>
</graphic>
</fig>
<table-wrap position="float" id="tab3">
<label>Table 3</label>
<caption>
<p>Hyperparameters used for the proposed framework.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Parameter</th>
<th align="center" valign="top">Value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Epochs</td>
<td align="center" valign="top">100 (early stopping at convergence)</td>
</tr>
<tr>
<td align="left" valign="top">Batch size</td>
<td align="center" valign="top">16</td>
</tr>
<tr>
<td align="left" valign="top">Learning rate</td>
<td align="center" valign="top">0.001</td>
</tr>
<tr>
<td align="left" valign="top">Image size</td>
<td align="center" valign="top">640 &#x00D7; 640</td>
</tr>
<tr>
<td align="left" valign="top">Optimiser</td>
<td align="center" valign="top">Adam</td>
</tr>
<tr>
<td align="left" valign="top">Loss function</td>
<td align="center" valign="top">CIoU (Complete Intersection over Union)<break/>+ BCE (Binary Cross-Entropy Loss) for detection and classification</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="sec15">
<label>3.9</label>
<title>Severity assessment</title>
<p>Lesion coverage derived from YOLO-predicted bounding boxes was used to determine disease severity. In accordance with standard visual disease assessment procedures, the cumulative lesion area was reported as a percentage of the leaf area and classified as mild (&#x003C;15%), moderate (15&#x2013;30%), or severe (&#x003E;30%). A rule-based treatment advisory module combined the predicted disease type and severity level to create management suggestions. Mild infections elicited cultural or biological control recommendations, while severe cases resulted in fungicide-based intervention guidance following regional maize disease management standards. To ensure practical applicability, the advisory logic was qualitatively validated against recognized agronomic criteria and tailored to support decision-making at the extension level (<xref ref-type="table" rid="tab4">Table 4</xref>).</p>
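<p>The severity thresholds and advisory rules described above can be sketched as simple rule logic; the advisory strings are hypothetical placeholders rather than the validated regional recommendations:</p>

```python
def severity_class(lesion_area, leaf_area):
    """Severity from cumulative lesion coverage: mild <15%, moderate 15-30%, severe >30%."""
    coverage = 100.0 * lesion_area / leaf_area
    if coverage < 15:
        return "mild"
    if coverage <= 30:
        return "moderate"
    return "severe"

def advise(disease, severity):
    """Hypothetical sketch of the rule-based advisory logic; real rules follow regional standards."""
    if severity == "mild":
        return disease + ": cultural or biological control recommended"
    if severity == "moderate":
        return disease + ": monitor closely; consider targeted intervention"
    return disease + ": fungicide-based intervention recommended"
```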
<table-wrap position="float" id="tab4">
<label>Table 4</label>
<caption>
<p>Performance metrics of the proposed YOLOv8n model.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Metric</th>
<th align="center" valign="top">Yolov8n (Proposed)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Precision</td>
<td align="center" valign="top">0.991</td>
</tr>
<tr>
<td align="left" valign="top">Recall</td>
<td align="center" valign="top">0.997</td>
</tr>
<tr>
<td align="left" valign="top">F1- Score</td>
<td align="center" valign="top">0.995</td>
</tr>
<tr>
<td align="left" valign="top">mAP@50</td>
<td align="center" valign="top">0.995</td>
</tr>
<tr>
<td align="left" valign="top">mAP@ [50&#x2013;95]</td>
<td align="center" valign="top">0.995</td>
</tr>
<tr>
<td align="left" valign="top">Inference Speed</td>
<td align="center" valign="top">4.65&#x202F;ms/img</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec sec-type="results" id="sec16">
<label>4</label>
<title>Results and discussion</title>
<p>In this study, we conducted a comparative benchmark of four object detection models widely used in agricultural disease detection: YOLOv5, YOLOv8s, YOLOv8n, and YOLOv10s. All models were trained and evaluated on the same maize leaf disease dataset using identical data splits, augmentations, and evaluation metrics to ensure a fair comparison.</p>
<sec id="sec17">
<label>4.1</label>
<title>Training convergence</title>
<p>Training ran for up to 100 epochs; convergence was reached around epoch 31 via early stopping. The validation loss stabilized with no sign of significant overfitting, and the best weights were selected at peak mAP@50. According to the performance comparison in <xref ref-type="table" rid="tab5">Table 5</xref>, all four YOLO variants (YOLOv5, YOLOv8s, YOLOv10s, and YOLOv8n) show outstanding accuracy on the maize leaf disease detection task, with precision, recall, F1-score, and mAP@50 values consistently above 0.95, the main exception being the recall of YOLOv5. This suggests that the deep learning pipeline differentiates the disease classes successfully and that the dataset is learned well across models.</p>
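<p>The F1 values in <xref ref-type="table" rid="tab5">Table 5</xref> are the harmonic mean of precision and recall, which explains why YOLOv5&#x2019;s low recall depresses its F1 despite high precision. A one-line check:</p>

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# YOLOv5's high precision (0.990) cannot offset its low recall (0.667):
# f1_score(0.990, 0.667) rounds to 0.797, matching Table 5.
```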
<table-wrap position="float" id="tab5">
<label>Table 5</label>
<caption>
<p>Comparative performance of YOLO variants.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Model&#x2192;</th>
<th align="center" valign="top">Yolov5</th>
<th align="center" valign="top">Yolov8s</th>
<th align="center" valign="top">Yolov10s</th>
<th align="center" valign="top">Yolov8n</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Precision</td>
<td align="center" valign="top">0.990</td>
<td align="center" valign="top">0.957</td>
<td align="center" valign="top">0.997</td>
<td align="center" valign="top">0.991</td>
</tr>
<tr>
<td align="left" valign="top">Recall</td>
<td align="center" valign="top">0.667</td>
<td align="center" valign="top">0.994</td>
<td align="center" valign="top">0.999</td>
<td align="center" valign="top">0.997</td>
</tr>
<tr>
<td align="left" valign="top">F1 Score</td>
<td align="center" valign="top">0.797</td>
<td align="center" valign="top">0.976</td>
<td align="center" valign="top">0.999</td>
<td align="center" valign="top">0.995</td>
</tr>
<tr>
<td align="left" valign="top">mAP@50</td>
<td align="center" valign="top">0.995</td>
<td align="center" valign="top">0.995</td>
<td align="center" valign="top">0.995</td>
<td align="center" valign="top">0.995</td>
</tr>
<tr>
<td align="left" valign="top">Map@50&#x2013;95</td>
<td align="center" valign="top">0.905</td>
<td align="center" valign="top">0.963</td>
<td align="center" valign="top">0.993</td>
<td align="center" valign="top">0.994</td>
</tr>
<tr>
<td align="left" valign="top">Inference Speed</td>
<td align="center" valign="top">9.7&#x202F;ms/img</td>
<td align="center" valign="top">8.58&#x202F;ms/img</td>
<td align="center" valign="top">8.24&#x202F;ms/img</td>
<td align="center" valign="top">4.65&#x202F;ms/img</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="sec18">
<label>4.2</label>
<title>Quantitative results</title>
<p><xref ref-type="table" rid="tab4">Table 4</xref> reports the quantitative performance of the suggested ZamYOLO-Maize model, obtaining a mAP@50 of 99.5%, which implies extremely accurate lesion localization under field settings. The YOLOv8n architecture&#x2019;s capacity to capture discriminative lesion features while staying resilient to background complexity typical of smallholder farms is demonstrated by this result. Compared with comparable YOLO-based crop disease detection studies, such as maize and foliar disease detection utilizing YOLOv5 and YOLOv8 reporting mAP@50 values between 92 and 98% (<xref ref-type="bibr" rid="ref29">Masood et al., 2023</xref>; <xref ref-type="bibr" rid="ref4">Ali et al., 2023</xref>; <xref ref-type="bibr" rid="ref52">Yang et al., 2024</xref>). The proposed methodology displays competitive accuracy. These results directly support the research objective of establishing a reliable and deployable maize disease detection system for resource-constrained environments.</p>
</sec>
<sec id="sec19">
<label>4.3</label>
<title>Comparative analysis</title>
<p>YOLOv5, YOLOv8s, YOLOv10s, and YOLOv8n are compared on the maize leaf disease detection task in <xref ref-type="table" rid="tab5">Table 5</xref>. The strong performance of YOLOv8n is primarily attributable to architectural enhancements, especially the C2f module and the decoupled detection head (<xref ref-type="bibr" rid="ref16">Jocher et al., 2023</xref>). The C2f module promotes feature reuse and gradient flow, improving the representation of fine-grained lesion patterns, while the decoupled head separates the classification and localization tasks, decreasing task interference and improving bounding box precision. Larger variants such as YOLOv8s and YOLOv10s, on the other hand, produce modest accuracy gains at higher computational expense, whereas YOLOv5 relies on earlier CSP-based designs. These findings validate YOLOv8n as a well-balanced design for precise and effective deployment in farming scenarios with limited resources (<xref ref-type="fig" rid="fig8">Figures 8</xref>, <xref ref-type="fig" rid="fig9">9</xref>).</p>
<fig position="float" id="fig8">
<label>Figure 8</label>
<caption>
<p>Bar chart comparing the performance of YOLOv5, YOLOv8s, YOLOv10s, and YOLOv8n across key evaluation metrics.</p>
</caption>
<graphic xlink:href="frai-09-1764283-g008.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Bar chart comparing YOLO model benchmarks&#x2014;Yolov5, Yolov8s, Yolov10s, and Yolov8n&#x2014;across six metrics. Inference Speed (orange) is significantly higher than other scores for all models, while values for Precision, Recall, F1 Score, mAP@50, and mAP@50-95 are similar across models. Legend identifies each color-coded metric.</alt-text>
</graphic>
</fig>
<fig position="float" id="fig9">
<label>Figure 9</label>
<caption>
<p>Line plot presenting the performance of four YOLO variants across standard detection metrics.</p>
</caption>
<graphic xlink:href="frai-09-1764283-g009.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Line chart titled &#x201C;Performance Comparison of YOLO Variants&#x201D; displaying Precision, Recall, F1 Score, mAP@50, mAP@50-95, and Inference Speed for YOLOv5, YOLOv8s, YOLOv10, and YOLOv8n, showing notable declines in Inference Speed for YOLOv8n and similar trends among other metrics.</alt-text>
</graphic>
</fig>
<sec id="sec20">
<label>4.3.1</label>
<title>Ablation study</title>
<p>To verify the contribution of key design choices, an ablation study was undertaken using YOLOv8n as the baseline model trained on the Zambia Maize Leaf Dataset. Three components were tested independently: the feature fusion enhancement (C2f module), the loss function setting, and the data augmentation approach. Each variant was trained under identical parameters, altering only one component at a time. The results demonstrate the significance of the C2f module in capturing fine-grained lesion details: eliminating it caused a discernible decrease in F1-score. Disabling data augmentation reduced resilience under field variability, while simplifying the loss function reduced localization accuracy. Overall, the incremental performance gains of each component support the optimized configuration used in ZamYOLO-Maize.</p>
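<p>The one-factor-at-a-time protocol above can be expressed as configuration variants derived from a baseline; the component names and values below are illustrative assumptions rather than the exact training flags used.</p>

```python
# Baseline configuration and single-component ablations (names are assumptions).
BASELINE = {"c2f_fusion": True, "loss": "CIoU+BCE", "augmentation": True}

ABLATIONS = {
    "no_c2f": {"c2f_fusion": False},        # drop the C2f feature fusion
    "simple_loss": {"loss": "IoU+BCE"},     # simplified localization loss
    "no_augmentation": {"augmentation": False},
}

def make_variants(baseline, ablations):
    """Build one training config per ablation, altering exactly one component."""
    return {name: {**baseline, **change} for name, change in ablations.items()}
```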
</sec>
</sec>
<sec id="sec21">
<label>4.4</label>
<title>Comparative analysis with other object detection techniques</title>
<p>The comparative evaluation in <xref ref-type="table" rid="tab6">Table 6</xref> shows the efficacy of the proposed YOLOv8n-based ZamYOLO-Maize framework relative to existing object detection techniques applied to maize and related crop disease detection tasks. The proposed model significantly outperforms previously published YOLO-based systems on comparable agricultural datasets, achieving a mean average precision (mAP@0.5) of 99.5% on field-acquired maize leaf images.</p>
<p>Previous methods, such as the YOLO MSM multi-scale variable kernel framework, reported a mAP@0.5 of 89.24% on a maize leaf disease dataset, but at very high inference speeds (more than 279 FPS), suggesting that real-time performance was prioritized over peak detection accuracy.</p>
<table-wrap position="float" id="tab6">
<label>Table 6</label>
<caption>
<p>Comparative analysis of object detection techniques.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Model/Framework</th>
<th align="left" valign="top">Study Reference</th>
<th align="left" valign="top">Dataset Type</th>
<th align="center" valign="top">mAP@0.5</th>
<th align="center" valign="top">FPS</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle">YOLO MSM (multi-scale variable kernel YOLO)</td>
<td align="left" valign="middle">
<xref ref-type="bibr" rid="ref30">Meng et al. (2025)</xref>
</td>
<td align="left" valign="middle">Own maize leaf disease detection dataset (field images)</td>
<td align="center" valign="middle">89.24%</td>
<td align="center" valign="middle">279.56</td>
</tr>
<tr>
<td align="left" valign="middle">YOLO MSM&#x202F;+&#x202F;various attention variants (ablation)</td>
<td align="left" valign="middle">
<xref ref-type="bibr" rid="ref30">Meng et al. (2025)</xref>
</td>
<td align="left" valign="middle">Own maize leaf disease detection dataset (field images)</td>
<td align="center" valign="middle">87.46&#x2013;89.13%</td>
<td align="center" valign="middle">213.05&#x2013;273.42</td>
</tr>
<tr>
<td align="left" valign="middle">GhostNet_Triplet_YOLOv8s</td>
<td align="left" valign="middle">
<xref ref-type="bibr" rid="ref24">Li et al. (2024)</xref>
</td>
<td align="left" valign="middle">Corn leaf disease detection, YOLOv8s-based, field + PlantVillage maize images</td>
<td align="center" valign="middle">91.04%</td>
<td align="center" valign="middle">Not reported</td>
</tr>
<tr>
<td align="left" valign="middle">CEMLB-YOLO (YOLOv5-based)</td>
<td align="left" valign="middle">
<xref ref-type="bibr" rid="ref23">Leng et al. (2023)</xref>
</td>
<td align="left" valign="middle">NLB field detection dataset (Northern Leaf Blight)</td>
<td align="center" valign="middle">87.5%</td>
<td align="center" valign="middle">Not reported</td>
</tr>
<tr>
<td align="left" valign="middle">Proposed<break/>(YOLOv8n)</td>
<td align="left" valign="middle">&#x2014;</td>
<td align="left" valign="middle">Field maize leaves</td>
<td align="center" valign="middle">99.5%</td>
<td align="center" valign="middle">4.65&#x202F;ms/img</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Similarly, when applied to difficult tasks, attention-enhanced YOLO MSM variants showed only slight improvements within the 87.46&#x2013;89.13% range, indicating modest benefits from architectural complexity. On corn and Northern Leaf Blight field datasets, more recent maize-focused models, such as GhostNet_Triplet_YOLOv8s and CEMLB-YOLO, reported mAP@0.5 scores of 91.04 and 87.5%, respectively. Although these techniques outperform earlier YOLO variants, their reported accuracies are still significantly lower than that of the proposed ZamYOLO-Maize architecture. Furthermore, several studies did not report inference speed, which made it difficult to compare deployment feasibility directly.</p>
<p>By contrast, the proposed YOLOv8n model emphasizes diagnostic dependability and detection precision, achieving exceptional accuracy under actual field conditions. Its measured inference time of 4.65&#x202F;ms per image on conventional CPU-based hardware corresponds to a lower throughput than the fastest baselines in <xref ref-type="table" rid="tab6">Table 6</xref>, illustrating the computational trade-off associated with this performance boost. However, this trade-off is acceptable, especially for non-continuous or image-based diagnostic procedures, given the application context of maize disease diagnosis, where accuracy is crucial to minimize misclassification and inappropriate intervention.</p>
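<p>Because <xref ref-type="table" rid="tab6">Table 6</xref> mixes throughput (FPS) and per-image latency (ms/img), a small conversion helper makes the figures directly comparable. This reflects raw model time at batch size one and ignores pre- and post-processing overhead.</p>

```python
def latency_ms_to_fps(ms_per_image: float) -> float:
    """Convert per-image latency in milliseconds to frames per second."""
    return 1000.0 / ms_per_image

def fps_to_latency_ms(fps: float) -> float:
    """Convert frames per second to per-image latency in milliseconds."""
    return 1000.0 / fps
```

<p>For example, the 4.65&#x202F;ms/img reported for YOLOv8n corresponds to roughly 215 images per second of raw model throughput.</p>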
<p>Overall, the findings show that the suggested ZamYOLO-Maize framework sets a new standard for the accuracy of field-level maize leaf disease detection while emphasizing the necessity of further hardware acceleration and optimization to accommodate real-time mobile or edge deployment situations.</p>
</sec>
<sec id="sec22">
<label>4.5</label>
<title>Discussion</title>
<p>Low-resource scenarios, which are prevalent throughout rural Zambia, are defined in this study as smallholder farming environments with limited computational hardware, erratic internet connectivity, and little access to professional diagnostic support. To address these limitations, the proposed ZamYOLO-Maize framework employs the lightweight YOLOv8n architecture, selected for its good balance between detection accuracy and processing efficiency. Validation of model inference on CPU-only hardware (Intel Core i5, 16&#x202F;GB RAM) showed consistent performance without the need for expensive GPUs. Empirically, the model achieved high detection accuracy while maintaining acceptable inference latency on local hardware, supporting its viability in resource-constrained environments.</p>
<p>This section compares four YOLO-based object detection models, namely YOLOv5, YOLOv8s, YOLOv10s, and YOLOv8n, trained on the Zambian-specific maize leaf disease dataset. The evaluation emphasizes classification accuracy (precision, recall, F1-score), localization ability (mAP@50), and computational efficiency (inference speed). The performance results are summarized in <xref ref-type="table" rid="tab5">Table 5</xref>. The strong performance of the proposed model is attributable to the use of a region-specific field dataset, optimization for real-time inference with the lightweight YOLOv8n architecture, and careful data augmentation and training techniques. In contrast to previous work that deteriorates under extreme lighting fluctuations or significant occlusions (<xref ref-type="bibr" rid="ref22">Khan et al., 2023</xref>), this study achieves both high accuracy and deployability in low-resource settings.</p>
<p>With F1-scores of 0.999 and 0.995, respectively, YOLOv10s and YOLOv8n had the best overall performance. Both models showed a good balance between precision and recall, indicating accurate identification and categorization of disease symptoms under a variety of field settings. YOLOv5 had the weakest predictive balance (F1&#x202F;=&#x202F;0.797), mostly because of a much lower recall (0.667), while YOLOv8s fared somewhat worse than the two leaders (F1&#x202F;=&#x202F;0.976).</p>
<p>Even for the lower-capacity YOLOv5 model, localization performance remained robust, as shown by the consistently high mAP@50 values (0.995 across all models). This demonstrates that YOLO-based architectures are appropriate for leaf disease detection tasks where spatial accuracy is essential.</p>
<p>Precision and recall trends reveal significant behavioral differences between the models:</p>
<p>YOLOv10s demonstrated a remarkable capacity to minimize false detections and identify true positives, with near-perfect values (precision&#x202F;=&#x202F;0.997, recall&#x202F;=&#x202F;0.999).</p>
<p>YOLOv8n and YOLOv8s achieved very high recall values (0.997 and 0.994). Such strong sensitivity is a desirable quality for early disease detection in agricultural contexts, where missed detections translate directly into crop losses.</p>
<p>Because of its smaller backbone capacity, YOLOv5 had the lowest recall (0.667), indicating that it frequently missed disease indications and making it the least dependable for practical implementation.</p>
<p>Given Zambia&#x2019;s need for prompt and precise disease identification, the analysis shows that more recent YOLO versions (YOLOv8 and YOLOv10) offer significant gains in sensitivity compared with earlier iterations (YOLOv5). In farming contexts with limited resources, inference speed is an important consideration for mobile deployment. At 4.65&#x202F;ms/img, YOLOv8n attained the fastest inference rate, nearly twice as fast as YOLOv8s and YOLOv10s and more than twice as fast as YOLOv5. YOLOv8n showed the best balance between accuracy and speed, making it the most practicable for real-time field deployment on low-power devices. Despite a slightly slower speed (8.24&#x202F;ms/img), YOLOv10s produced the highest accuracy, making it ideal for server-based or high-end device deployments. YOLOv5 fell short in both speed and accuracy, demonstrating its limitations for contemporary agricultural applications that demand high responsiveness. These results demonstrate that recent lightweight architectures, especially YOLOv8n, provide substantial deployment benefits for embedded or mobile systems used by agricultural extension agents and smallholder farmers.</p>
<sec id="sec23">
<label>4.5.1</label>
<title>Implications for deployment</title>
<p>A working web application was developed and deployed on a local server to move beyond model validation toward practical use. The application provides a real-time interface through which uploaded maize leaf images are processed by the proposed YOLOv8n model. The system returns a detailed diagnostic report containing the diagnosed disease class, the estimated infection severity, and the model&#x2019;s confidence score. Crucially, every diagnosis is accompanied by a practical treatment recommendation that closes the gap between detection and farmer decision-making. The application allows users to export all results (classifications, severities, confidences, and recommended treatments) as a structured CSV report and supports batch processing of many images for scalability and record-keeping. This prototype demonstrates a full pipeline from image input to agronomic guidance and establishes a workable framework for field deployment by extension agents.</p>
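<p>The CSV export described above can be sketched with the Python standard library; the field names are illustrative assumptions, not the application&#x2019;s actual schema.</p>

```python
import csv

# Hypothetical report schema (assumed field names, not the app's exact columns).
FIELDS = ["image", "disease", "severity", "confidence", "treatment"]

def export_report(rows, path):
    """Write batch diagnosis results to a structured CSV report.

    rows: iterable of dicts keyed by FIELDS.
    """
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)
```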
<p>Using typical field images, the severity assessment and treatment advice modules were qualitatively validated. The assessed severity levels matched the visible lesion extent in the images, and the recommended treatments were consistent with accepted agronomic methods for managing maize diseases. Repeated inference on the same inputs produced stable severity classifications and suggestions, demonstrating deterministic and reliable module behavior. This confirms the practical viability of the advisory outputs for farmer-facing deployment. Furthermore, the prototype web application runs in an offline localhost environment, reducing dependence on network infrastructure. Taken together, these design and evaluation decisions offer concrete technical and empirical support for adapting the proposed framework to smallholder farming environments with limited resources.</p>
</sec>
<sec id="sec24">
<label>4.5.2</label>
<title>Pathway to economic and societal impact</title>
<p>The developed web application offers a clear, straightforward route to observable social and economic advantages. The system goes beyond simple detection to provide targeted intervention by incorporating real-time disease classification, severity assessment, and automated treatment recommendations. By enabling farmers to apply the appropriate fungicide or practice only when and where necessary, this directly addresses input-use efficiency. This is expected to reduce pesticide expenditures by 15&#x2013;25% and offset yield losses through early, precise diagnosis. Socially, extension personnel can extend their advising services, carry out field surveys effectively, and create digital farm health records due to the application&#x2019;s batch processing and report generation (such as CSV exports). This strengthens the advisory ecosystem for smallholder farmers and democratizes access to professional diagnostics. Additionally, a feedback loop from individual farm management to broader agricultural resilience might be created by using the aggregated, anonymized data from widespread use to drive regional disease detection and sustainable pesticide use policy.</p>
</sec>
<sec id="sec25">
<label>4.5.3</label>
<title>Limitations and model interpretability</title>
<p>The model was trained and assessed on four classes: three maize diseases and a healthy class. As a result, its performance may deteriorate when it is exposed to diseases or signs of nutrient deficiencies that were not included in the training set; such cases could be mislabeled as healthy or misclassified as visually comparable disease classes, increasing false positives or false negatives. This restriction reflects the closed-set character of the existing model and indicates the need for further research on open-set recognition, as well as the addition of more disease and deficiency classes, to enhance resilience under real field conditions.</p>
<p>Although this study acknowledges the black-box nature of deep learning models, ZamYOLO-Maize does not yet include explicit Explainable AI (XAI) techniques. As a result, users are not given visual explanations for model predictions, which could affect interpretability and trust, especially among non-technical stakeholders such as farmers. To improve transparency, user confidence, and adoption in real farming scenarios, future iterations will incorporate lightweight XAI approaches, such as Grad-CAM or attention heatmaps, to highlight the image regions that influence disease predictions.</p>
<p>The web application is a prototype currently running in a localhost environment; it has not yet been tested in distributed or large-scale deployment scenarios. Scaling the system for broad use may face obstacles such as the need for remote model updates, poor internet connectivity in rural regions, and adapting the user interface for farmers with low literacy levels. These limitations emphasize the necessity of further work on offline functionality, user-centered interface redesign, and cloud or mobile deployment.</p>
</sec>
</sec>
</sec>
<sec sec-type="conclusions" id="sec26">
<label>5</label>
<title>Conclusion</title>
<p>This study thoroughly evaluated four YOLO-based deep learning models (YOLOv5, YOLOv8s, YOLOv10s, and YOLOv8n) for automated detection and classification of maize leaf diseases under actual field conditions in Zambia. The findings show that more recent YOLO architectures perform noticeably better than earlier iterations in terms of both computational efficiency and predictive accuracy. YOLOv10s attained the highest overall accuracy (F1&#x202F;=&#x202F;0.999), demonstrating strong generalization and outstanding feature extraction despite variable field circumstances. Despite a slightly lower accuracy (F1&#x202F;=&#x202F;0.995), YOLOv8n produced the fastest inference speed (4.65&#x202F;ms/img), making it ideal for low-power mobile and edge-based deployments aimed at smallholder farmers.</p>
<p>YOLOv5&#x2019;s low recall (0.667) and the resulting performance gap underscore the shortcomings of earlier lightweight detectors in identifying subtle disease symptoms under complicated lighting, occlusion, and texture variations. Conversely, the excellent performance of YOLOv8 and YOLOv10 validates the efficacy of architectural enhancements, including improved feature pyramids, decoupled heads, and optimized convolutional blocks, for critical agricultural diagnostics.</p>
<p>To contextualize the performance of the proposed ZamYOLO-Maize model, we compared our YOLOv8n benchmark results with those reported in the literature. Several recent studies using YOLOv8 for plant disease detection achieved mAP@50 scores between 94 and 98% (<xref ref-type="bibr" rid="ref10">Carion et al., 2020</xref>; <xref ref-type="bibr" rid="ref35">Pan et al., 2025</xref>). Our implementation surpasses these values, achieving mAP@50&#x202F;=&#x202F;99.5%, indicating improved feature representation and dataset suitability.</p>
<p>Similarly, studies employing YOLOv5 for maize or rice disease detection typically report F1 scores between 85 and 93% (<xref ref-type="bibr" rid="ref11">Chen et al., 2023</xref>; <xref ref-type="bibr" rid="ref4">Ali et al., 2023</xref>), whereas our YOLOv5 baseline achieved only 79.7% F1, reflecting its limitations on small lesions and complex Zambian field scenes. This aligns with previous reports identifying YOLOv5&#x2019;s limited sensitivity in low-contrast agricultural imagery. In contrast, YOLOv10s demonstrated superior recall (0.999), consistent with its enhanced feature extraction layers (<xref ref-type="bibr" rid="ref48">Wang et al., 2024</xref>). These comparative insights highlight the advantage of adopting modern YOLO variants for real-time agricultural diagnostics.</p>
<p>Overall, the results support the viability of using YOLOv8n and YOLOv10s as trustworthy instruments for early detection of maize diseases in Zambia, where prompt diagnosis is crucial for reducing production losses and enhancing national food security. By including multi-spectral imaging, model explainability mechanisms, and mobile-based deployment experiments with agricultural extension agents and smallholder communities, future work will expand on this research.</p>
<p>Although this work establishes a functional foundation, it also reveals several promising research directions. First, integrating inexpensive on-farm IoT sensors (such as soil moisture and microclimate sensors) would create a hybrid data pipeline, improving model granularity and customisation. Second, developing explainable AI (XAI) methods is essential for converting model predictions into understandable guidance for farmers, boosting transparency and confidence (<xref ref-type="bibr" rid="ref42">Singh et al., 2022</xref>). Third, to guarantee long-term viability beyond pilot phases, future research must investigate scalable commercial mechanisms, such as subscriptions for farmer cooperatives or public-private partnerships. Lastly, extending the system&#x2019;s capacity to predict market-oriented variables (such as yield-based price forecasts) alongside crop hazards would address holistic livelihood security and close the gap between agronomic management and economic well-being.</p>
</sec>
</body>
<back>
<sec sec-type="data-availability" id="sec27">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec sec-type="author-contributions" id="sec28">
<title>Author contributions</title>
<p>PK: Writing &#x2013; original draft. DK: Supervision, Writing &#x2013; review &#x0026; editing. AZ: Writing &#x2013; review &#x0026; editing.</p>
</sec>
<sec sec-type="COI-statement" id="sec29">
<title>Conflict of interest</title>
<p>The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="ai-statement" id="sec30">
<title>Generative AI statement</title>
<p>The author(s) declared that Generative AI was used in the creation of this manuscript. In order to improve phrase organization, alter wording, and create a MermaidJS code block for a workflow diagram, the authors employed Grammarly and QuillBot to improve readability and language editing of the manuscript. The writers own all concepts, information, analysis, and conclusions. The entire material of this work was reviewed, revised, and is entirely the authors&#x2019; responsibility.</p>
<p>Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.</p>
</sec>
<sec sec-type="disclaimer" id="sec31">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<sec sec-type="supplementary-material" id="sec32">
<title>Supplementary material</title>
<p>The Supplementary material for this article can be found online at: <ext-link xlink:href="https://www.frontiersin.org/articles/10.3389/frai.2026.1764283/full#supplementary-material" ext-link-type="uri">https://www.frontiersin.org/articles/10.3389/frai.2026.1764283/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Table_1.DOCX" id="SM1" mimetype="application/vnd.openxmlformats-officedocument.wordprocessingml.document" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Adadi</surname><given-names>A.</given-names></name> <name><surname>Berrada</surname><given-names>M.</given-names></name></person-group> (<year>2018</year>). <article-title>Peeking inside the black box: a survey on explainable artificial intelligence (XAI)</article-title>. <source>IEEE Access</source> <volume>6</volume>, <fpage>52138</fpage>&#x2013;<lpage>52160</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2018.2870052</pub-id></mixed-citation></ref>
<ref id="ref2"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ahmad</surname><given-names>B.</given-names></name> <name><surname>Noon</surname><given-names>S. K.</given-names></name> <name><surname>Ahmad</surname><given-names>T.</given-names></name> <name><surname>Mannan</surname><given-names>A.</given-names></name> <name><surname>Khan</surname><given-names>N. I.</given-names></name> <name><surname>Ismail</surname><given-names>M.</given-names></name> <etal/></person-group>. (<year>2024</year>). <article-title>Efficient real-time detection of plant leaf diseases using YOLOv8 and raspberry pi</article-title>. <source>VFAST Trans Softw Eng.</source> <volume>12</volume>, <fpage>250</fpage>&#x2013;<lpage>259</lpage>. doi: <pub-id pub-id-type="doi">10.21015/vtse.v12i2.1869</pub-id></mixed-citation></ref>
<ref id="ref3"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Akhter</surname><given-names>R.</given-names></name> <name><surname>Sofi</surname><given-names>S. A.</given-names></name></person-group> (<year>2022</year>). <article-title>Precision agriculture using IoT data analytics and machine learning</article-title>. <source>J. King Saud Univ.</source> <volume>34</volume>, <fpage>5602</fpage>&#x2013;<lpage>5618</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.jksuci.2021.05.013</pub-id></mixed-citation></ref>
<ref id="ref4"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ali</surname><given-names>M.</given-names></name> <name><surname>Rahman</surname><given-names>M. M.</given-names></name> <name><surname>Islam</surname><given-names>M. S.</given-names></name></person-group> (<year>2023</year>). <article-title>Real-time plant disease detection using YOLOv8: performance evaluation under natural field conditions</article-title>. <source>Int. J. Agric. Technol.</source> <volume>19</volume>, <fpage>655</fpage>&#x2013;<lpage>670</lpage>.</mixed-citation></ref>
<ref id="ref5"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Aravind</surname><given-names>K. R.</given-names></name> <name><surname>Raja</surname><given-names>P.</given-names></name> <name><surname>Mukesh</surname><given-names>K. V.</given-names></name> <name><surname>Aniirudh</surname><given-names>R.</given-names></name> <name><surname>Ashiwin</surname><given-names>R.</given-names></name> <name><surname>Szczepanski</surname><given-names>C.</given-names></name></person-group> (<year>2018</year>). <italic>Disease classification in maize crop using bag of features and multiclass support vector machine</italic>. In: 2018 2nd international conference on inventive systems and control (ICISC), pp. 1191&#x2013;1196.</mixed-citation></ref>
<ref id="ref6"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Arrieta</surname><given-names>A. B.</given-names></name> <name><surname>D&#x00ED;az-Rodr&#x00ED;guez</surname><given-names>N.</given-names></name> <name><surname>Del Ser</surname><given-names>J.</given-names></name> <name><surname>Bennetot</surname><given-names>A.</given-names></name> <name><surname>Tabik</surname><given-names>S.</given-names></name> <name><surname>Barbado</surname><given-names>A.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI</article-title>. <source>Inf. Fusion</source> <volume>58</volume>, <fpage>82</fpage>&#x2013;<lpage>115</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.inffus.2019.12.012</pub-id></mixed-citation></ref>
<ref id="ref7"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Arsenovic</surname><given-names>M.</given-names></name> <name><surname>Karanovic</surname><given-names>M.</given-names></name> <name><surname>Sladojevic</surname><given-names>S.</given-names></name> <name><surname>Anderla</surname><given-names>A.</given-names></name> <name><surname>Stefanovic</surname><given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>Solving current limitations of deep learning-based approaches for plant disease detection</article-title>. <source>Symmetry</source> <volume>11</volume>:<fpage>939</fpage>. doi: <pub-id pub-id-type="doi">10.3390/sym11070939</pub-id></mixed-citation></ref>
<ref id="ref8"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Barbedo</surname><given-names>J. G. A.</given-names></name></person-group> (<year>2019a</year>). <article-title>Plant disease identification from individual lesions and spots using deep learning</article-title>. <source>Biosyst. Eng.</source> <volume>180</volume>, <fpage>96</fpage>&#x2013;<lpage>107</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.biosystemseng.2019.02.002</pub-id></mixed-citation></ref>
<ref id="ref9"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Barbedo</surname><given-names>J. G. A.</given-names></name></person-group> (<year>2019b</year>). <article-title>Detection of nutrition deficiencies in plants using proximal images and machine learning: a review</article-title>. <source>Comput. Electron. Agric.</source> <volume>162</volume>, <fpage>482</fpage>&#x2013;<lpage>492</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compag.2019.04.035</pub-id></mixed-citation></ref>
<ref id="ref10"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Carion</surname><given-names>N.</given-names></name> <name><surname>Massa</surname><given-names>F.</given-names></name> <name><surname>Synnaeve</surname><given-names>G.</given-names></name> <name><surname>Usunier</surname><given-names>N.</given-names></name> <name><surname>Kirillov</surname><given-names>A.</given-names></name> <name><surname>Zagoruyko</surname><given-names>S.</given-names></name></person-group> (<year>2020</year>). <source>End-to-end object detection with transformers (DETR). Computer vision &#x2013; ECCV</source>. <publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>, <fpage>213</fpage>&#x2013;<lpage>229</lpage>.</mixed-citation></ref>
<ref id="ref11"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname><given-names>H.</given-names></name> <name><surname>Huang</surname><given-names>H.</given-names></name> <name><surname>Li</surname><given-names>Y.</given-names></name> <name><surname>Xu</surname><given-names>X.</given-names></name></person-group> (<year>2023</year>). <article-title>YOLOv8-based detection of crop diseases in field environments</article-title>. <source>Comput. Electron. Agric.</source> <volume>205</volume>:<fpage>107625</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compag.2023.107625</pub-id></mixed-citation></ref>
<ref id="ref12"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chouhan</surname><given-names>S. S.</given-names></name> <name><surname>Singh</surname><given-names>U. P.</given-names></name> <name><surname>Jain</surname><given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>Automated plant leaf disease detection and classification using fuzzy-based function network</article-title>. <source>Wirel. Pers. Commun.</source> <volume>121</volume>, <fpage>1757</fpage>&#x2013;<lpage>1779</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s11277-021-08734-3</pub-id></mixed-citation></ref>
<ref id="ref13"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chung</surname><given-names>C. L.</given-names></name> <name><surname>Huang</surname><given-names>K. J.</given-names></name> <name><surname>Chen</surname><given-names>S. Y.</given-names></name> <name><surname>Lai</surname><given-names>M. H.</given-names></name> <name><surname>Chen</surname><given-names>Y. C.</given-names></name> <name><surname>Kuo</surname><given-names>Y. F.</given-names></name></person-group> (<year>2016</year>). <article-title>Detecting Bakanae disease in rice seedlings by machine vision</article-title>. <source>Comput. Electron. Agric.</source> <volume>121</volume>, <fpage>404</fpage>&#x2013;<lpage>411</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compag.2016.01.008</pub-id></mixed-citation></ref>
<ref id="ref14"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ferentinos</surname><given-names>K. P.</given-names></name></person-group> (<year>2018</year>). <article-title>Deep learning models for plant disease detection and diagnosis</article-title>. <source>Comput. Electron. Agric.</source> <volume>145</volume>, <fpage>311</fpage>&#x2013;<lpage>318</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compag.2018.01.009</pub-id></mixed-citation></ref>
<ref id="ref15"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Jiang</surname><given-names>T.</given-names></name> <name><surname>Du</surname><given-names>X.</given-names></name> <name><surname>Zhang</surname><given-names>N.</given-names></name> <name><surname>Sun</surname><given-names>X.</given-names></name> <name><surname>Li</surname><given-names>X.</given-names></name> <name><surname>Tian</surname><given-names>S.</given-names></name> <etal/></person-group>. (<year>2024</year>). <article-title>YOLOv8-GO: a lightweight model for prompt detection of foliar maize diseases</article-title>. <source>Appl. Sci.</source> <volume>14</volume>:<fpage>10004</fpage>. doi: <pub-id pub-id-type="doi">10.3390/app142110004</pub-id></mixed-citation></ref>
<ref id="ref16"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Jocher</surname><given-names>G.</given-names></name> <name><surname>Chaurasia</surname><given-names>A.</given-names></name> <name><surname>Qiu</surname><given-names>J.</given-names></name></person-group> (<year>2023</year>). <source>YOLO by Ultralytics</source>. <publisher-loc>San Francisco, CA</publisher-loc>: <publisher-name>GitHub</publisher-name>.</mixed-citation></ref>
<ref id="ref17"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Kalunga</surname><given-names>P.</given-names></name></person-group> (<year>2026</year>). <italic>Zambia Maize Leaf Dataset [dataset]</italic>. Zenodo.</mixed-citation></ref>
<ref id="ref18"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kalunga</surname><given-names>P.</given-names></name> <name><surname>Kunda</surname><given-names>D.</given-names></name></person-group> (<year>2025</year>). <article-title>Investigation of the suitability of existing maize plant leaf disease detection and classification approaches: challenges and open issues</article-title>. <source>Zambia ICT J.</source> <volume>8</volume>, <fpage>70</fpage>&#x2013;<lpage>79</lpage>. doi: <pub-id pub-id-type="doi">10.33260/zictjournal.v8i1.343</pub-id></mixed-citation></ref>
<ref id="ref19"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kamaleshkanna</surname><given-names>S.</given-names></name> <name><surname>Ramalingam</surname><given-names>K.</given-names></name> <name><surname>Pazhanivelan</surname><given-names>P.</given-names></name> <name><surname>Jagadeeswaran</surname><given-names>R.</given-names></name> <name><surname>Prabu</surname><given-names>P. C.</given-names></name></person-group> (<year>2024</year>). <article-title>YOLO deep learning algorithm for object detection in agriculture: a review</article-title>. <source>J. Agric. Eng.</source> <volume>55</volume>:<fpage>1641</fpage>. doi: <pub-id pub-id-type="doi">10.4081/jae.2024.1641</pub-id></mixed-citation></ref>
<ref id="ref20"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kamilaris</surname><given-names>A.</given-names></name> <name><surname>Prenafeta-Bold&#x00FA;</surname><given-names>F. X.</given-names></name></person-group> (<year>2018</year>). <article-title>Deep learning in agriculture: a survey</article-title>. <source>Comput. Electron. Agric.</source> <volume>147</volume>, <fpage>70</fpage>&#x2013;<lpage>90</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compag.2018.02.016</pub-id></mixed-citation></ref>
<ref id="ref21"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kaur</surname><given-names>S.</given-names></name> <name><surname>Pandey</surname><given-names>S.</given-names></name> <name><surname>Goel</surname><given-names>S.</given-names></name></person-group> (<year>2018</year>). <article-title>Semi-automatic leaf disease detection and classification system for soybean culture</article-title>. <source>IET Image Process.</source> <volume>12</volume>, <fpage>1038</fpage>&#x2013;<lpage>1048</lpage>. doi: <pub-id pub-id-type="doi">10.1049/iet-ipr.2017.0822</pub-id></mixed-citation></ref>
<ref id="ref22"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Khan</surname><given-names>F.</given-names></name> <name><surname>Zafar</surname><given-names>N.</given-names></name> <name><surname>Tahir</surname><given-names>M.</given-names></name> <name><surname>Aqib</surname><given-names>M.</given-names></name> <name><surname>Waheed</surname><given-names>H.</given-names></name> <name><surname>Haroon</surname><given-names>Z.</given-names></name></person-group> (<year>2023</year>). <article-title>A mobile-based system for maize plant leaf disease detection and classification using deep learning</article-title>. <source>Front. Plant Sci.</source> <volume>14</volume>:<fpage>1079366</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpls.2023.1079366</pub-id>, <pub-id pub-id-type="pmid">37255561</pub-id></mixed-citation></ref>
<ref id="ref23"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Leng</surname><given-names>S.</given-names></name> <name><surname>Musha</surname><given-names>Y.</given-names></name> <name><surname>Yang</surname><given-names>Y.</given-names></name> <name><surname>Feng</surname><given-names>G.</given-names></name></person-group> (<year>2023</year>). <article-title>CEMLB-YOLO: efficient detection model of maize leaf blight in complex field environments</article-title>. <source>Appl. Sci.</source> <volume>13</volume>:<fpage>9285</fpage>. doi: <pub-id pub-id-type="doi">10.3390/app13169285</pub-id></mixed-citation></ref>
<ref id="ref24"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Li</surname><given-names>R.</given-names></name> <name><surname>Li</surname><given-names>Y.</given-names></name> <name><surname>Qin</surname><given-names>W.</given-names></name> <name><surname>Abbas</surname><given-names>A.</given-names></name> <name><surname>Li</surname><given-names>S.</given-names></name> <name><surname>Ji</surname><given-names>R.</given-names></name> <etal/></person-group>. (<year>2024</year>). <article-title>Lightweight network for corn leaf disease identification based on improved YOLO v8s</article-title>. <source>Agriculture</source> <volume>14</volume>:<fpage>220</fpage>. doi: <pub-id pub-id-type="doi">10.3390/agriculture14020220</pub-id></mixed-citation></ref>
<ref id="ref25"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Li</surname><given-names>L.</given-names></name> <name><surname>Zhang</surname><given-names>S.</given-names></name> <name><surname>Wang</surname><given-names>B.</given-names></name></person-group> (<year>2021</year>). <article-title>Plant disease detection and classification by deep learning&#x2014;a review</article-title>. <source>IEEE Access</source> <volume>9</volume>, <fpage>56683</fpage>&#x2013;<lpage>56698</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2021.3069646</pub-id></mixed-citation></ref>
<ref id="ref26"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname><given-names>J.</given-names></name> <name><surname>Wang</surname><given-names>X.</given-names></name></person-group> (<year>2020</year>). <article-title>Tomato diseases and pests detection based on improved yolo V3 convolutional neural network</article-title>. <source>Front. Plant Sci.</source> <volume>11</volume>:<fpage>898</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpls.2020.00898</pub-id>, <pub-id pub-id-type="pmid">32612632</pub-id></mixed-citation></ref>
<ref id="ref27"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname><given-names>J.</given-names></name> <name><surname>Wang</surname><given-names>X.</given-names></name></person-group> (<year>2021</year>). <article-title>Plant diseases and pests detection based on deep learning: a review</article-title>. <source>Plant Methods</source> <volume>17</volume>:<fpage>22</fpage>. doi: <pub-id pub-id-type="doi">10.1186/s13007-021-00722-9</pub-id>, <pub-id pub-id-type="pmid">33627131</pub-id></mixed-citation></ref>
<ref id="ref28"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname><given-names>B.</given-names></name> <name><surname>Zhang</surname><given-names>Y.</given-names></name> <name><surname>He</surname><given-names>D.</given-names></name> <name><surname>Li</surname><given-names>Y.</given-names></name></person-group> (<year>2018</year>). <article-title>Identification of apple leaf diseases based on deep convolutional neural networks</article-title>. <source>Symmetry</source> <volume>10</volume>:<fpage>11</fpage>. doi: <pub-id pub-id-type="doi">10.3390/sym10010011</pub-id></mixed-citation></ref>
<ref id="ref29"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Masood</surname><given-names>M.</given-names></name> <name><surname>Nawaz</surname><given-names>M.</given-names></name> <name><surname>Nazir</surname><given-names>T.</given-names></name> <name><surname>Javed</surname><given-names>A.</given-names></name> <name><surname>Alkanhel</surname><given-names>R.</given-names></name> <name><surname>Elmannai</surname><given-names>H.</given-names></name> <etal/></person-group>. (<year>2023</year>). <article-title>MaizeNet: a deep learning approach for effective recognition of maize plant leaf diseases</article-title>. <source>IEEE Access</source> <volume>11</volume>, <fpage>52862</fpage>&#x2013;<lpage>52876</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2023.3280260</pub-id></mixed-citation></ref>
<ref id="ref30"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Meng</surname><given-names>Y.</given-names></name> <name><surname>Zhan</surname><given-names>J.</given-names></name> <name><surname>Li</surname><given-names>K.</given-names></name> <name><surname>Yan</surname><given-names>F.</given-names></name> <name><surname>Zhang</surname><given-names>L.</given-names></name></person-group> (<year>2025</year>). <article-title>A rapid and precise algorithm for maize leaf disease detection based on YOLO MSM</article-title>. <source>Sci. Rep.</source> <volume>15</volume>:<fpage>399</fpage>. doi: <pub-id pub-id-type="doi">10.1038/s41598-025-88399-1</pub-id>, <pub-id pub-id-type="pmid">39971956</pub-id></mixed-citation></ref>
<ref id="ref31"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mogili</surname><given-names>U. R.</given-names></name> <name><surname>Deepak</surname><given-names>B. B. V. L.</given-names></name></person-group> (<year>2018</year>). <article-title>Review on application of drone systems in precision agriculture</article-title>. <source>Int. J. Pure Appl. Math.</source> <volume>119</volume>, <fpage>1505</fpage>&#x2013;<lpage>1515</lpage>.</mixed-citation></ref>
<ref id="ref32"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mohanty</surname><given-names>S. P.</given-names></name> <name><surname>Hughes</surname><given-names>D. P.</given-names></name> <name><surname>Salath&#x00E9;</surname><given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Using deep learning for image-based plant disease detection</article-title>. <source>Front. Plant Sci.</source> <volume>7</volume>:<fpage>1419</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpls.2016.01419</pub-id>, <pub-id pub-id-type="pmid">27713752</pub-id></mixed-citation></ref>
<ref id="ref33"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ngugi</surname><given-names>L. C.</given-names></name> <name><surname>Abelwahab</surname><given-names>M.</given-names></name> <name><surname>Abo-Zahhad</surname><given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>Recent advances in image processing techniques for automated leaf pest and disease recognition&#x2014;a review</article-title>. <source>Inf. Process. Agric.</source> <volume>8</volume>, <fpage>27</fpage>&#x2013;<lpage>51</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.inpa.2020.04.004</pub-id></mixed-citation></ref>
<ref id="ref34"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Osouli</surname><given-names>S.</given-names></name> <name><surname>Haghighi</surname><given-names>B. B.</given-names></name> <name><surname>Sadrossadat</surname><given-names>E.</given-names></name></person-group> (<year>2022</year>). <article-title>An effective scheme for maize disease recognition based on deep networks</article-title>. <source>arXiv</source> <volume>2022</volume>:<fpage>04234</fpage>. doi: <pub-id pub-id-type="doi">10.48550/arXiv.2205.04234</pub-id></mixed-citation></ref>
<ref id="ref35"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pan</surname><given-names>C.</given-names></name> <name><surname>Wang</surname><given-names>S.</given-names></name> <name><surname>Wang</surname><given-names>Y.</given-names></name> <name><surname>Liu</surname><given-names>C.</given-names></name></person-group> (<year>2025</year>). <article-title>SSD-YOLO: a lightweight network for rice leaf disease detection</article-title>. <source>Front. Plant Sci.</source> <volume>16</volume>:<fpage>1643096</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpls.2025.1643096</pub-id>, <pub-id pub-id-type="pmid">40901551</pub-id></mixed-citation></ref>
<ref id="ref36"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Panigrahi</surname><given-names>K. P.</given-names></name> <name><surname>Das</surname><given-names>H.</given-names></name> <name><surname>Sahoo</surname><given-names>A. K.</given-names></name> <name><surname>Moharana</surname><given-names>S. C.</given-names></name></person-group> (<year>2020</year>). <source>Maize leaf disease detection and classification using machine learning algorithms. In: Progress in computing, analytics and networking</source>. <publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>, <fpage>659</fpage>&#x2013;<lpage>669</lpage>.</mixed-citation></ref>
<ref id="ref37"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pantazi</surname><given-names>X. E.</given-names></name> <name><surname>Moshou</surname><given-names>D.</given-names></name> <name><surname>Tamouridou</surname><given-names>A. A.</given-names></name></person-group> (<year>2019</year>). <article-title>Automated leaf disease detection in different crop species through image features analysis and one-class classifiers</article-title>. <source>Comput. Electron. Agric.</source> <volume>156</volume>, <fpage>96</fpage>&#x2013;<lpage>104</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compag.2018.11.005</pub-id></mixed-citation></ref>
<ref id="ref38"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Patil</surname><given-names>P.</given-names></name> <name><surname>Yaligar</surname><given-names>N.</given-names></name> <name><surname>Meena</surname><given-names>S. M.</given-names></name></person-group> (<year>2017</year>). <italic>Comparison of performance of classifiers&#x2014;SVM, RF and ANN in potato blight disease detection using leaf images</italic>. In: 2017 IEEE international conference on computational intelligence and computing research (ICCIC), pp. 1&#x2013;5.</mixed-citation></ref>
<ref id="ref39"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Redmon</surname><given-names>J.</given-names></name> <name><surname>Divvala</surname><given-names>S.</given-names></name> <name><surname>Girshick</surname><given-names>R.</given-names></name> <name><surname>Farhadi</surname><given-names>A.</given-names></name></person-group> (<year>2016</year>). <italic>You only look once: Unified, real-time object detection</italic>. In: Proceedings of the IEEE conference on computer vision and pattern recognition, p. 779&#x2013;788.</mixed-citation></ref>
<ref id="ref40"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Rua</surname><given-names>M.</given-names></name> <name><surname>Castro</surname><given-names>R.</given-names></name> <name><surname>L&#x00F3;pez</surname><given-names>O.</given-names></name></person-group> (<year>2023</year>). <article-title>A review of machine learning in agriculture: applications and challenges for crop disease management</article-title>. <source>Comput. Electron. Agric.</source> <volume>214</volume>:<fpage>108290</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compag.2023.108290</pub-id></mixed-citation></ref>
<ref id="ref41"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Shen</surname><given-names>Y.</given-names></name> <name><surname>Yang</surname><given-names>Z.</given-names></name> <name><surname>Khan</surname><given-names>Z.</given-names></name> <name><surname>Liu</surname><given-names>H.</given-names></name> <name><surname>Chen</surname><given-names>W.</given-names></name> <name><surname>Duan</surname><given-names>S.</given-names></name></person-group> (<year>2025</year>). <article-title>Optimization of improved YOLOv8 for precision tomato leaf disease detection in sustainable agriculture</article-title>. <source>Sensors (Basel).</source> <volume>25</volume>:<fpage>1398</fpage>. doi: <pub-id pub-id-type="doi">10.3390/s25051398</pub-id>, <pub-id pub-id-type="pmid">40096213</pub-id></mixed-citation></ref>
<ref id="ref42"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Singh</surname><given-names>A.</given-names></name> <name><surname>Sharma</surname><given-names>R.</given-names></name> <name><surname>Jaiswal</surname><given-names>P.</given-names></name></person-group> (<year>2022</year>). <article-title>Explainable AI (XAI) in agriculture: a review of current applications and future research directions</article-title>. <source>Comput. Electron. Agric.</source> <volume>202</volume>:<fpage>107356</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compag.2022.107356</pub-id></mixed-citation></ref>
<ref id="ref43"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tang</surname><given-names>Y.</given-names></name> <name><surname>Luo</surname><given-names>H.</given-names></name> <name><surname>Chen</surname><given-names>M.</given-names></name></person-group> (<year>2023</year>). <article-title>A lightweight convolutional neural network for real-time crop disease identification on mobile devices</article-title>. <source>Comput. Electron. Agric.</source> <volume>204</volume>:<fpage>107520</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compag.2023.107520</pub-id></mixed-citation></ref>
<ref id="ref44"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Thakur</surname><given-names>P. S.</given-names></name> <name><surname>Khanna</surname><given-names>P.</given-names></name> <name><surname>Sheorey</surname><given-names>T.</given-names></name> <name><surname>Ojha</surname><given-names>A.</given-names></name></person-group> (<year>2022</year>). <article-title>Trends in vision-based machine learning techniques for plant disease identification: a systematic review</article-title>. <source>Expert Syst. Appl.</source> <volume>208</volume>:<fpage>118117</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.eswa.2022.118117</pub-id></mixed-citation></ref>
<ref id="ref45"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tian</surname><given-names>Y.</given-names></name> <name><surname>Yang</surname><given-names>G.</given-names></name> <name><surname>Wang</surname><given-names>Z.</given-names></name> <name><surname>Wang</surname><given-names>H.</given-names></name> <name><surname>Li</surname><given-names>E.</given-names></name> <name><surname>Liang</surname><given-names>Z.</given-names></name></person-group> (<year>2019</year>). <article-title>Apple detection during different growth stages in orchards using the improved YOLO-V3 model</article-title>. <source>Comput. Electron. Agric.</source> <volume>157</volume>, <fpage>417</fpage>&#x2013;<lpage>426</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compag.2019.01.012</pub-id></mixed-citation></ref>
<ref id="ref46"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Vishnoi</surname><given-names>V. K.</given-names></name> <name><surname>Kumar</surname><given-names>K.</given-names></name> <name><surname>Kumar</surname><given-names>B.</given-names></name></person-group> (<year>2021</year>). <article-title>Plant disease detection using computational intelligence and image processing</article-title>. <source>J. Plant Dis. Prot.</source> <volume>128</volume>, <fpage>19</fpage>&#x2013;<lpage>53</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s41348-020-00366-6</pub-id></mixed-citation></ref>
<ref id="ref47"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Waheed</surname><given-names>A.</given-names></name> <name><surname>Goyal</surname><given-names>M.</given-names></name> <name><surname>Gupta</surname><given-names>D.</given-names></name> <name><surname>Khanna</surname><given-names>A.</given-names></name> <name><surname>Al-Turjman</surname><given-names>F.</given-names></name> <name><surname>Pinheiro</surname><given-names>P. R.</given-names></name></person-group> (<year>2020</year>). <article-title>An optimized dense convolutional neural network model for disease recognition and classification in corn leaf</article-title>. <source>Comput. Electron. Agric.</source> <volume>175</volume>:<fpage>105456</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compag.2020.105456</pub-id></mixed-citation></ref>
<ref id="ref48"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname><given-names>A.</given-names></name> <name><surname>Chen</surname><given-names>H.</given-names></name> <name><surname>Liu</surname><given-names>L.</given-names></name> <name><surname>Chen</surname><given-names>K.</given-names></name> <name><surname>Lin</surname><given-names>Z.</given-names></name> <name><surname>Han</surname><given-names>J.</given-names></name> <etal/></person-group>. (<year>2024</year>). <article-title>YOLOv10: real-time end-to-end object detection</article-title>. <source>arXiv</source> <volume>2024</volume>:<fpage>14458</fpage>. doi: <pub-id pub-id-type="doi">10.48550/arXiv.2405.14458</pub-id></mixed-citation></ref>
<ref id="ref49"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname><given-names>X.</given-names></name> <name><surname>Liu</surname><given-names>J.</given-names></name> <name><surname>Zhu</surname><given-names>X.</given-names></name></person-group> (<year>2021</year>). <article-title>Early real-time detection algorithm of tomato diseases and pests in the natural environment</article-title>. <source>Plant Methods</source> <volume>17</volume>:<fpage>43</fpage>. doi: <pub-id pub-id-type="doi">10.1186/s13007-021-00745-2</pub-id>, <pub-id pub-id-type="pmid">33892765</pub-id></mixed-citation></ref>
<ref id="ref50"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wani</surname><given-names>J. A.</given-names></name> <name><surname>Sharma</surname><given-names>S.</given-names></name> <name><surname>Muzamil</surname><given-names>M.</given-names></name> <name><surname>Ahmed</surname><given-names>S.</given-names></name> <name><surname>Sharma</surname><given-names>S.</given-names></name> <name><surname>Singh</surname><given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>Machine learning and deep learning based computational techniques in automatic agricultural diseases detection: methodologies, applications, and challenges</article-title>. <source>Arch. Comput. Methods Eng.</source> <volume>29</volume>, <fpage>641</fpage>&#x2013;<lpage>677</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s11831-021-09588-5</pub-id></mixed-citation></ref>
<ref id="ref51"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wu</surname><given-names>X.</given-names></name> <name><surname>Sahoo</surname><given-names>D.</given-names></name> <name><surname>Hoi</surname><given-names>S. C. H.</given-names></name></person-group> (<year>2020</year>). <article-title>Recent advances in deep learning for object detection</article-title>. <source>Neurocomputing</source> <volume>396</volume>, <fpage>39</fpage>&#x2013;<lpage>64</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neucom.2020.01.085</pub-id></mixed-citation></ref>
<ref id="ref52"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname><given-names>S.</given-names></name> <name><surname>Yao</surname><given-names>J.</given-names></name> <name><surname>Teng</surname><given-names>G.</given-names></name></person-group> (<year>2024</year>). <article-title>Corn leaf spot disease recognition based on improved YOLOv8</article-title>. <source>Agriculture</source> <volume>14</volume>:<fpage>666</fpage>. doi: <pub-id pub-id-type="doi">10.3390/agriculture14050666</pub-id></mixed-citation></ref>
</ref-list>
<fn-group>
<fn fn-type="custom" custom-type="edited-by" id="fn0001">
<p>Edited by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/3167551/overview">Lei Song</ext-link>, Rutgers, The State University of New Jersey, United States</p>
</fn>
<fn fn-type="custom" custom-type="reviewed-by" id="fn0002">
<p>Reviewed by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/2978337/overview">Amos Chege Kirongo</ext-link>, Meru University of Science and Technology, Kenya</p>
<p><ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/3214048/overview">Guo Xu</ext-link>, Shanghai Dianji University, China</p>
</fn>
</fn-group>
</back>
</article>