<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Plant Sci.</journal-id>
<journal-title>Frontiers in Plant Science</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Plant Sci.</abbrev-journal-title>
<issn pub-type="epub">1664-462X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpls.2023.1274231</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Plant Science</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Identification of apple leaf disease via novel attention mechanism based convolutional neural network</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Cheng</surname>
<given-names>Hebin</given-names>
</name>
<uri xlink:href="https://loop.frontiersin.org/people/2435622"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/software/"/>
<role content-type="https://credit.niso.org/contributor-roles/visualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Li</surname>
<given-names>Heming</given-names>
</name>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/2283320"/>
<role content-type="https://credit.niso.org/contributor-roles/investigation/"/>
<role content-type="https://credit.niso.org/contributor-roles/visualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
</contrib>
</contrib-group>
<aff id="aff1">
<institution>School of Intelligence Engineering, Shandong Management University</institution>, <addr-line>Jinan</addr-line>, <country>China</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Liangliang Yang, Kitami Institute of Technology, Japan</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Ruirui Zhang, Beijing Academy of Agricultural and Forestry Sciences, China; Jiangtao Qi, Jilin University, China</p>
</fn>
<fn fn-type="corresp" id="fn001">
<p>*Correspondence: Heming Li, <email xlink:href="mailto:liheming@sdmu.edu.cn">liheming@sdmu.edu.cn</email>
</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>18</day>
<month>10</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>14</volume>
<elocation-id>1274231</elocation-id>
<history>
<date date-type="received">
<day>08</day>
<month>08</month>
<year>2023</year>
</date>
<date date-type="accepted">
<day>19</day>
<month>09</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2023 Cheng and Li</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Cheng and Li</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<sec>
<title>Introduction</title>
<p>The identification of apple leaf diseases is crucial for apple production.</p>
</sec>
<sec>
<title>Methods</title>
<p>To assist farmers in promptly recognizing leaf diseases in apple trees, we propose a novel attention mechanism. Building upon this mechanism and MobileNet v3, we introduce a new deep learning network.</p>
</sec>
<sec>
<title>Results and discussion</title>
<p>Applying this network to our carefully curated dataset, we achieved an impressive accuracy of 98.7% in identifying apple leaf diseases, surpassing similar models such as EfficientNet-B0, ResNet-34, and DenseNet-121. Furthermore, the precision, recall, and f1-score of our model also outperform these models, while maintaining the advantages of fewer parameters and less computational consumption of the MobileNet network. Therefore, our model has the potential in other similar application scenarios and has broad prospects.</p>
</sec>
</abstract>
<kwd-group>
<kwd>apple leaf disease</kwd>
<kwd>classification</kwd>
<kwd>deep learning</kwd>
<kwd>attention mechanism</kwd>
<kwd>multi-scale feature extraction</kwd>
</kwd-group>
<counts>
<fig-count count="11"/>
<table-count count="3"/>
<equation-count count="4"/>
<ref-count count="29"/>
<page-count count="11"/>
<word-count count="4724"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-in-acceptance</meta-name>
<meta-value>Technical Advances in Plant Science</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec id="s1" sec-type="intro">
<label>1</label>
<title>Introduction</title>
<p>Apple is one of the most popular and widely grown fruits worldwide and has been cultivated by humans for over 2000 years. Apple fruit is rich in vitamins and minerals, has high nutritional value, and is an indispensable part of a healthy diet. However, apple production is also hindered by various diseases, which can seriously affect yield and quality. Traditional plant disease identification, management, and prevention rely on the experience of farmers and local agricultural technicians. When these measures fall short, diseases cannot be accurately identified or addressed in time, causing great losses to apple production. In the past decade, with the continuous development of machine learning (ML), and especially of deep learning (DL) technology, the accuracy of leaf disease identification has steadily improved, paving the way for more efficient and real-time disease detection. <xref ref-type="bibr" rid="B13">Kamilaris and Prenafeta-Bold&#xfa; (2018)</xref>; <xref ref-type="bibr" rid="B20">Pardede et&#xa0;al. (2018)</xref>.</p>
<p>Plant leaf disease recognition is essentially an image classification problem that requires accurately capturing disease features, comparing them against other disease types, and classifying them. Traditional ML methods typically combine image processing with a classifier for plant disease recognition. The image processing methods include extracting the color and texture of disease spots through grayscale values or performing pixel-level segmentation of the spots. <xref ref-type="bibr" rid="B4">Deng et&#xa0;al. (2019)</xref> Support vector machines (SVM), <xref ref-type="bibr" rid="B19">Mokhtar et&#xa0;al. (2015)</xref> k-means clustering, and Naive Bayes <xref ref-type="bibr" rid="B18">Ma et&#xa0;al. (2018)</xref> are the most widely used classifiers. Traditional ML achieves good recognition accuracy for diseases with distinctive characteristics. <xref ref-type="bibr" rid="B24">Singh et&#xa0;al. (2016)</xref> However, these methods generalize poorly, limited by their inability to handle nonlinear data and by the difficulty of feature extraction. Once the processing object changes, the model can no longer classify reasonably.</p>
<p>Convolutional neural networks (CNN) automatically extract features directly from the original image, greatly improving the efficiency of image classification. Therefore, with the emergence of CNNs, and especially the success of AlexNet in the ImageNet LSVRC-2010 competition, <xref ref-type="bibr" rid="B16">Krizhevsky et&#xa0;al. (2017)</xref>; <xref ref-type="bibr" rid="B22">Shin et&#xa0;al. (2021)</xref> a series of DL models have been proposed, such as GoogLeNet, Inception, VGG, ResNet, and DenseNet. Not surprisingly, these DL networks have also been applied by researchers to plant disease detection. For example, Fuentes et&#xa0;al. present a deep-learning-based model to detect diseases and pests in tomato plants. They proposed a two-stage model that combines a meta-architecture (Faster R-CNN) with feature extractors such as VGG and ResNet. Their system can effectively recognize nine different types of diseases and pests in complex surroundings. <xref ref-type="bibr" rid="B6">Fuentes et&#xa0;al. (2017)</xref> Khan et&#xa0;al. utilized a hybrid method, a segmentation step followed by pre-trained deep models, to achieve a classification accuracy of 98.60% on public datasets. <xref ref-type="bibr" rid="B15">Khan et&#xa0;al. (2018)</xref> Ferentinos compared DL networks such as AlexNet, GoogLeNet, and VGG, and reported 99.53% accuracy with VGG16 on the extended PlantVillage dataset. <xref ref-type="bibr" rid="B5">Ferentinos (2018)</xref> Arsenovic et&#xa0;al. proposed a novel two-stage neural network architecture focused on plant disease classification in real environments. Their model achieved an accuracy of 93.67%. <xref ref-type="bibr" rid="B1">Arsenovic et&#xa0;al. (2019)</xref> Too et&#xa0;al. compared many DL architectures and found that DenseNet-121 performed best in their experiments. <xref ref-type="bibr" rid="B26">Too et&#xa0;al. (2019)</xref> Shoaib et&#xa0;al. utilized the Inception Net model in their research work. For the detection and segmentation of disease-affected regions, two state-of-the-art semantic segmentation models, U-Net and Modified U-Net, were also utilized in their work. <xref ref-type="bibr" rid="B23">Shoaib et&#xa0;al. (2022)</xref> At the same time, in the more specialized field of apple leaf disease detection, a number of research achievements have also emerged. <xref ref-type="bibr" rid="B7">Hasan et&#xa0;al. (2022)</xref> For example, Jiang et&#xa0;al. proposed an INAR-SSD (incorporating Inception module and Rainbow concatenation) model that achieves a detection accuracy of 78.80% mean Average Precision (mAP) on an apple leaf disease dataset, while maintaining a rapid detection speed of 23.13 frames per second (FPS). <xref ref-type="bibr" rid="B12">Jiang et&#xa0;al. (2019)</xref> Sun et&#xa0;al. proposed a novel MEAN-SSD (Mobile End AppleNet based SSD algorithm) model, which achieves a detection performance of 83.12% mAP at a speed of 12.53 FPS. <xref ref-type="bibr" rid="B25">Sun et&#xa0;al. (2021)</xref>.</p>
<p>MobileNet is a lightweight network proposed by Google and is widely used by researchers. <xref ref-type="bibr" rid="B10">Howard et&#xa0;al. (2017)</xref>; <xref ref-type="bibr" rid="B27">Wang et&#xa0;al. (2021)</xref>; <xref ref-type="bibr" rid="B29">Xiong et&#xa0;al. (2020)</xref> MobileNet v1 first proposed the depthwise separable convolution, which combines a depthwise convolution and a pointwise convolution in one module. The computational cost was successfully reduced to roughly 1/9 of that of ordinary convolution, which greatly reduces the number of parameters and speeds up model computation. <xref ref-type="bibr" rid="B21">Sandler et&#xa0;al. (2018)</xref> In MobileNet v2, the manifold of interest is captured by inserting a linear bottleneck in the convolution module in place of the original nonlinear activation function. <xref ref-type="bibr" rid="B14">Kavyashree and El-Sharkawy (2021)</xref> The researchers also proposed the inverted residual structure, which expands dimensions through an expansion layer. Depthwise separable convolutions are used to extract features, and projection layers compress the data, making the network smaller again. Through this structure, the dimensionality and computational speed of the convolutions are balanced, enhancing the performance of the network. MobileNet v3 further introduces the Squeeze-and-Excitation (SE) attention mechanism: the SE module is added to the inverted residual structure, and the activation function is updated. <xref ref-type="bibr" rid="B9">Howard et&#xa0;al. (2019)</xref> Compared to MobileNet v2, the computational speed and accuracy of MobileNet v3 are further improved.</p>
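As a back-of-the-envelope check of the reduction described above, the following sketch (our illustration, not the authors' code; all layer sizes are hypothetical) compares the multiply-add counts of a standard convolution and a depthwise separable convolution:

```python
# Sketch: FLOP comparison between a standard convolution and a depthwise
# separable convolution, illustrating the ~1/9 reduction cited for 3x3
# kernels in MobileNet v1. Layer sizes below are hypothetical.

def conv_flops(k, c_in, c_out, h, w):
    """Multiply-adds of a standard k x k convolution ('same' padding)."""
    return k * k * c_in * c_out * h * w

def dw_separable_flops(k, c_in, c_out, h, w):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    depthwise = k * k * c_in * h * w      # one k x k filter per input channel
    pointwise = c_in * c_out * h * w      # 1 x 1 conv mixes the channels
    return depthwise + pointwise

# Example layer: 3x3 kernel, 128 -> 128 channels, 56x56 feature map.
std = conv_flops(3, 128, 128, 56, 56)
sep = dw_separable_flops(3, 128, 128, 56, 56)
ratio = sep / std  # equals 1/c_out + 1/k**2, approaching 1/9 for large c_out
```

The ratio 1/c_out + 1/k² shows why the saving is essentially 1/k² = 1/9 once the channel count is large.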
<p>In recent years, transfer learning (TL) strategies have been used more and more in DL. <xref ref-type="bibr" rid="B2">Chen et&#xa0;al. (2020)</xref>; <xref ref-type="bibr" rid="B3">Coulibaly et&#xa0;al. (2019)</xref> DL models require a large amount of labeled data to achieve good performance. However, in many real-world scenarios, obtaining such a large amount of labeled data may be expensive, time-consuming, or impractical. TL enables the utilization of pre-existing large-scale datasets, such as ImageNet or COCO, and transfers the knowledge obtained from them to the target tasks. On the other hand, DL models consist of multiple layers that learn a hierarchical representation of the data. Early layers capture general low-level features (such as edges and textures), while later layers capture high-level semantic features. With TL, we can reuse the low-level and intermediate features learned by pre-trained models as feature extractors. This removes the need to train these layers from scratch and allows us to focus on training only the top layers specific to our task. In the training of our model, we also adopted TL and achieved very good results.</p>
<p>In this article, we propose a deep learning model named MobileNet-MFS, where MFS stands for multi-fused spatial. The main contributions of our work include:</p>
<list list-type="order">
<list-item>
<p>A novel fused spatial channel attention (FSCA) mechanism is proposed, which considers both the channel and spatial connections of the input layer. We use it to replace the Squeeze-and-Excitation (SE) attention mechanism in the MobileNet v3 architecture and significantly improve the performance of the model.</p>
</list-item>
<list-item>
<p>In order to include multi-dimensional information in the network, a multi-scale feature extraction module was applied, which fuses image features through convolutions of different kernel sizes. Our experiments show that this module successfully improves the model&#x2019;s accuracy.</p>
</list-item>
<list-item>
<p>Our proposed MobileNet-MFS model outperforms the original MobileNet v3 and demonstrates advantages in accuracy, computational speed, parameter size, and other aspects compared to MobileViT, EfficientNet, ShuffleNet, and DenseNet in diagnosing apple leaf diseases.</p>
</list-item>
</list>
</sec>
<sec id="s2">
<label>2</label>
<title>Methodology</title>
<sec id="s2_1">
<label>2.1</label>
<title>Network architecture</title>
<p>The network architecture of our model(MobileNet-MFS) is shown in <xref ref-type="fig" rid="f1">
<bold>Figure&#xa0;1A</bold>
</xref>. The design of the model inherits the main modules of MobileNet v3, but many modifications were made to obtain better diagnostic efficacy. The main body of the model is consistent with MobileNet v3, consisting of, in sequence, a two-dimensional convolutional layer, a series of bottleneck layers of different dimensions, another two-dimensional convolutional layer, a pooling layer, and a one-dimensional convolutional layer. Through this series of modules, feature information on plant disease-affected areas is extracted, and diseases are classified into 9 types through a 1&#xd7;1 convolution. At the front end of the model, to further explore feature information that the original MobileNet v3 cannot capture, we introduced a multi-scale feature extraction module. The most important change is a new FSCA attention mechanism that replaces the SE attention mechanism used in MobileNet v3. The FSCA mechanism is explained in detail in the following sections.</p>
<fig id="f1" position="float">
<label>Figure&#xa0;1</label>
<caption>
<p>
<bold>(A)</bold> Network structure of MobileNet-MFS. <bold>(B)</bold> Detailed composition of a single bottleneck module.</p>
</caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-14-1274231-g001.tif"/>
</fig>
<p>As shown in <xref ref-type="fig" rid="f1">
<bold>Figure&#xa0;1B</bold>
</xref>, in MobileNet-MFS, the most basic module is the bottleneck layer, an inverted residual block built from depthwise separable convolutions. It replaces the standard convolution operation with a depthwise convolution followed by a pointwise convolution, which reduces computational complexity and model size while maintaining accuracy. In addition to the depthwise separable convolution, the bottleneck layer also includes an expansion convolution, which increases the number of channels in the input feature map using a 1&#xd7;1 convolutional kernel. The projection convolution is a 1&#xd7;1 convolution whose number of output channels is significantly smaller than its input channels, thereby limiting the size of the model. When the input and output channels are the same, a residual connection can be used. The inverted residual bottleneck formed by the above convolution operations is finally activated with the ReLU or h-swish function.</p>
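The channel flow of one bottleneck and the h-swish activation mentioned above can be sketched as follows (a minimal illustration under our reading of the text; the function names and example sizes are our own, not the authors' code):

```python
# Sketch: the h-swish activation used in MobileNet v3, plus the channel
# bookkeeping of one inverted residual bottleneck
# (1x1 expansion -> depthwise -> 1x1 projection).

def h_swish(x):
    """h-swish(x) = x * ReLU6(x + 3) / 6 -- a cheap approximation of swish."""
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0

def bottleneck_channels(c_in, expansion, c_out):
    """Channel counts through expansion, depthwise and projection convs."""
    c_mid = c_in * expansion      # 1x1 expansion conv widens the feature map
    # the depthwise conv keeps c_mid channels (one filter per channel)
    residual = (c_in == c_out)    # skip connection only when shapes match
    return c_mid, c_out, residual

# h-swish is ~0 for x <= -3 and ~identity for large x
values = [h_swish(x) for x in (-4.0, 0.0, 4.0)]
```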
</sec>
<sec id="s2_2">
<label>2.2</label>
<title>Attention module</title>
<p>Although CNNs are very powerful at representing images, they are deficient in expressing spatial information. Therefore, an attention mechanism was introduced into MobileNet v3, which can improve the learning ability of the model by assigning weights to image features. In the original MobileNet v3, the SE attention module is placed in the middle of the bottleneck layer, <xref ref-type="bibr" rid="B11">Hu et&#xa0;al. (2020)</xref> producing an updated set of weight values through two fully connected layers and an activation function. However, the SE attention module only considers the dependencies between channels and ignores location information, which is crucial for generating spatially selective attention maps. Therefore, we propose our FSCA attention mechanism to replace the SE module.</p>
<p>The FSCA attention mechanism considers both spatial and channel information of the input layer, thus more effectively guiding the model to focus on effective positions in the image. As shown in <xref ref-type="fig" rid="f2">
<bold>Figure&#xa0;2</bold>
</xref>, the FSCA attention mechanism consists of two concatenated modules. The first module aggregates features along the spatial X and Y directions. By average pooling along the X and Y directions and concatenating the results, a 1 &#xd7; (H+W) &#xd7; C array is obtained. We then normalize this array through a convolution, separate it again, and apply a sigmoid activation to obtain a set of weights containing information along the X and Y directions. Afterward, the weights are multiplied with the original data to obtain a set of direction-aware feature layers. These transformations allow the attention module to capture long-term dependencies along one spatial direction while preserving precise positional information along the other, which helps the network locate targets of interest more accurately.</p>
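The directional pooling described above can be sketched in a few lines (our reading of the text, not the released implementation; the learned normalizing convolution is omitted to keep the sketch parameter-free):

```python
import numpy as np

# Sketch of the first FSCA module: a (C, H, W) feature map is average-pooled
# along W and along H, the two results are concatenated to (C, H+W), squashed
# by a sigmoid, and the resulting weights re-scale the original map.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def directional_attention(x):
    c, h, w = x.shape
    pool_h = x.mean(axis=2)                          # (C, H): average over width
    pool_w = x.mean(axis=1)                          # (C, W): average over height
    cat = np.concatenate([pool_h, pool_w], axis=1)   # (C, H+W)
    # (the paper normalizes `cat` with a learned convolution here; we go
    # straight to the sigmoid in this parameter-free sketch)
    wh = sigmoid(cat[:, :h])[:, :, None]             # (C, H, 1) weights
    ww = sigmoid(cat[:, h:])[:, None, :]             # (C, 1, W) weights
    return x * wh * ww                               # direction-aware reweighting

x = np.ones((4, 8, 6))
y = directional_attention(x)   # same shape as x, rescaled per direction
```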
<fig id="f2" position="float">
<label>Figure&#xa0;2</label>
<caption>
<p>Network architecture of FSCA attention mechanism module.</p>
</caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-14-1274231-g002.tif"/>
</fig>
<p>The second module focuses on channel attention. In this module, we take the maximum and average values of the input feature layer across the channels at each feature point. Afterward, we stack these two maps and adjust the number of channels using a convolution with a single output channel. Then, we apply a sigmoid function to obtain a weight (between 0 and 1) for each feature point of the input feature layer. After obtaining these weights, we multiply them by the original input feature layer.</p>
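This second module can likewise be sketched parameter-free (our illustration; the paper fuses the two stacked maps with a learned one-output-channel convolution, for which we substitute a plain average):

```python
import numpy as np

# Sketch of the second FSCA module: for every spatial position take the
# channel-wise max and mean, stack them into a 2-channel map, fuse them
# to one channel, and squash with a sigmoid to get per-position weights.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_weights(x):
    mx = x.max(axis=0)                     # (H, W): channel-wise maximum
    mn = x.mean(axis=0)                    # (H, W): channel-wise mean
    stacked = np.stack([mx, mn], axis=0)   # (2, H, W)
    fused = stacked.mean(axis=0)           # stand-in for the learned 1-channel conv
    return sigmoid(fused)                  # per-position weights in (0, 1)

x = np.zeros((4, 8, 6))
w = spatial_weights(x)   # all-zero input -> every weight is sigmoid(0) = 0.5
out = x * w              # reweighted feature map
```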
<p>By concatenating and multiplying the two steps, we obtain our FSCA attention mechanism, which attends to both the X and Y dimensions of the input and the fusion of channel information; the obtained results are therefore more comprehensive. Since our attention mechanism fuses both spatial and channel information, we named it the FSCA attention mechanism; it draws on the CBAM <xref ref-type="bibr" rid="B28">Woo et&#xa0;al. (2018)</xref> and CA <xref ref-type="bibr" rid="B8">Hou et&#xa0;al. (2021)</xref> attention mechanisms. The following experiments demonstrate that the FSCA mechanism helps our model better identify the characteristics of apple leaf diseases.</p>
</sec>
<sec id="s2_3">
<label>2.3</label>
<title>Multi-scale feature extraction</title>
<p>Apple leaf diseases have two main characteristics that are not easily extracted by machines. One is that lesion sizes on the leaves differ significantly, as with Powdery Mildew and Grey spot lesions. The other is that the color or other details of a disease may vary with the extent of the infection, as with Grey spot and Rust lesions.</p>
<p>The above features involve different spatial scales and are not easily captured by MobileNet v3, which mainly uses 3 &#xd7; 3 and 5 &#xd7; 5 convolution operations. To enable the network to capture more features at different scales, <xref ref-type="bibr" rid="B17">Li et&#xa0;al. (2020)</xref> we added a multi-scale feature extraction module to the front end of the input layer.</p>
<p>The structure of this module is shown in <xref ref-type="fig" rid="f3">
<bold>Figure&#xa0;3</bold>
</xref>. Convolutions of four kernel sizes, 1 &#xd7; 1, 3 &#xd7; 3, 5 &#xd7; 5, and 7 &#xd7; 7, are applied in the module. After the image passes through these convolutions, the outputs are merged into a new feature map that is fed into the front of the network. Through such feature extraction, the accuracy of disease classification is improved.</p>
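The shape bookkeeping behind this fusion can be sketched as follows (illustrative only; the per-branch channel count is hypothetical). With 'same' padding, each branch preserves the H &#xd7; W spatial size, so the four outputs can be concatenated along the channel axis:

```python
# Sketch: 'same'-padding and channel arithmetic for the four-branch
# multi-scale module (kernel sizes 1, 3, 5, 7).

def same_padding(kernel):
    """Padding that preserves spatial size for an odd, stride-1 kernel."""
    return (kernel - 1) // 2

def fused_channels(branch_channels, kernels=(1, 3, 5, 7)):
    """Output channels after concatenating one branch per kernel size."""
    return branch_channels * len(kernels)

pads = [same_padding(k) for k in (1, 3, 5, 7)]   # padding per branch
out_c = fused_channels(8)                        # 4 branches of 8 channels
```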
<fig id="f3" position="float">
<label>Figure&#xa0;3</label>
<caption>
<p>Compositions of multi-feature extraction module.</p>
</caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-14-1274231-g003.tif"/>
</fig>
</sec>
</sec>
<sec id="s3">
<label>3</label>
<title>Experimental results</title>
<sec id="s3_1">
<label>3.1</label>
<title>Dataset</title>
<p>The images of apple leaves were collected in both laboratory and outdoor environments and cover a total of eight diseases. The leaves were divided into nine categories (eight diseases plus healthy), and each photo was labeled with its disease type. Our data mainly come from the PlantVillage, PPCD2020, PPCD2021, and ATLDSD datasets. PlantVillage images mainly come from laboratory environments, while images from PPCD2020 and PPCD2021 were collected in natural environments. The total number of samples is 15250, with 12204 in the training set and 3046 in the testing set, i.e., a training-to-testing ratio of about 4:1.</p>
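The reported counts can be checked with a few lines of arithmetic (numbers taken directly from the text):

```python
# Sanity check of the dataset split reported above: 12204 training and
# 3046 test samples out of 15250 total, stated as roughly 4:1.

train, test = 12204, 3046
total = train + test
ratio = train / test          # ~4.006, i.e. roughly 4:1
train_share = train / total   # ~0.80 of all samples used for training
```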
<p>As shown in <xref ref-type="fig" rid="f4">
<bold>Figure&#xa0;4</bold>
</xref>, there are a total of eight apple diseases in our sample, namely Alternaria leaf spot, Brown spot, Frogeye leaf spot, Grey spot, Mosaic, Powdery Mildew, Rust, and Scab. The number of samples is summarized in <xref ref-type="table" rid="T1">
<bold>Table&#xa0;1</bold>
</xref>. Both Brown spot and Mosaic form large spots on the leaves, but the former first causes large areas of the diseased leaf to turn yellow. Powdery Mildew turns the leaf veins white and stains the leaves with white spots; many other plants, such as strawberries, suffer from similar diseases. The remaining diseases cause various kinds of spots on the leaves: Rust causes red spots, Grey spot causes gray spots, and Frogeye leaf spot causes spots with a yellow-brown center, resembling the ring around a frog&#x2019;s eye. To distinguish these different kinds of spots, the network must first capture them and then distinguish their different color and shape features.</p>
<fig id="f4" position="float">
<label>Figure&#xa0;4</label>
<caption>
<p>Classification of the samples: <bold>(A)</bold> Alternaria leaf spot; <bold>(B)</bold> Brown spot; <bold>(C)</bold> Frogeye leaf spot; <bold>(D)</bold> Grey spot; <bold>(E)</bold> Mosaic; <bold>(F)</bold> Powdery mildew; <bold>(G)</bold> Rust; <bold>(H)</bold> Scab; <bold>(I)</bold> Health.</p>
</caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-14-1274231-g004.tif"/>
</fig>
<table-wrap id="T1" position="float">
<label>Table&#xa0;1</label>
<caption>
<p>Number of samples from different diseases.</p>
</caption>
<table frame="hsides">
<thead>
<tr>
<th valign="top" align="left">Types</th>
<th valign="top" align="center">Training Sample</th>
<th valign="top" align="center">Test Sample</th>
<th valign="top" align="center">Total Sample</th>
<th valign="top" align="center">Total (data augmentation)</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Alternaria leaf spot</td>
<td valign="top" align="left">526</td>
<td valign="top" align="left">131</td>
<td valign="top" align="left">657</td>
<td valign="top" align="left">1578</td>
</tr>
<tr>
<td valign="top" align="left">Brown spot</td>
<td valign="top" align="left">354</td>
<td valign="top" align="left">88</td>
<td valign="top" align="left">442</td>
<td valign="top" align="left">1062</td>
</tr>
<tr>
<td valign="top" align="left">Frogeye leaf spot</td>
<td valign="top" align="left">2544</td>
<td valign="top" align="left">635</td>
<td valign="top" align="left">3179</td>
<td valign="top" align="left">7632</td>
</tr>
<tr>
<td valign="top" align="left">Grey spot</td>
<td valign="top" align="left">285</td>
<td valign="top" align="left">71</td>
<td valign="top" align="left">356</td>
<td valign="top" align="left">855</td>
</tr>
<tr>
<td valign="top" align="left">Health</td>
<td valign="top" align="left">704</td>
<td valign="top" align="left">175</td>
<td valign="top" align="left">879</td>
<td valign="top" align="left">2112</td>
</tr>
<tr>
<td valign="top" align="left">Mosaic</td>
<td valign="top" align="left">316</td>
<td valign="top" align="left">79</td>
<td valign="top" align="left">395</td>
<td valign="top" align="left">948</td>
</tr>
<tr>
<td valign="top" align="left">Powdery mildew</td>
<td valign="top" align="left">947</td>
<td valign="top" align="left">236</td>
<td valign="top" align="left">1183</td>
<td valign="top" align="left">2841</td>
</tr>
<tr>
<td valign="top" align="left">Rust</td>
<td valign="top" align="left">2202</td>
<td valign="top" align="left">550</td>
<td valign="top" align="left">2752</td>
<td valign="top" align="left">6606</td>
</tr>
<tr>
<td valign="top" align="left">Scab</td>
<td valign="top" align="left">4326</td>
<td valign="top" align="left">1081</td>
<td valign="top" align="left">5407</td>
<td valign="top" align="left">12978</td>
</tr>
<tr>
<td valign="top" align="left">Total Number</td>
<td valign="top" align="left">12204</td>
<td valign="top" align="left">3046</td>
<td valign="top" align="left">15250</td>
<td valign="top" align="left">36612</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Evaluation metric</title>
<p>Accuracy is the most commonly used indicator; it represents the proportion of correctly classified samples among all samples. However, the quality of a model cannot be judged by accuracy alone; other indicators also reflect model quality. For example, precision focuses on the model&#x2019;s ability to avoid false positives, while recall focuses on its ability to identify all positive instances. When the dataset is imbalanced, the f1-score balances recall and precision and better reflects the strengths and weaknesses of the model. The area under the curve (AUC) shows the trade-off between the true positive rate and the false positive rate; higher AUC values indicate better discriminability. Therefore, accuracy is used together with other performance metrics such as precision, recall, f1-score, and AUC. The definition of accuracy is:</p>
<disp-formula>
<label>(1)</label>
<mml:math display="block" id="M1">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>y</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>T</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>F</mml:mi>
<mml:mi>P</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>T</mml:mi>
<mml:mi>N</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>F</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where TN = true negative, FN = false negative, TP = true positive, and FP = false positive.</p>
<p>The expressions for precision, recall, and f1-score are given in equations (2&#x2013;4), respectively.</p>
<disp-formula>
<label>(2)</label>
<mml:math display="block" id="M2">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>F</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula>
<label>(3)</label>
<mml:math display="block" id="M3">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>l</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>F</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula>
<label>(4)</label>
<mml:math display="block" id="M4">
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mn>1</mml:mn>
<mml:mi>s</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>R</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>l</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>R</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
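<p>As a concrete illustration of Equations (2)&#x2013;(4), these metrics can be computed directly from the true positive, false positive, and false negative counts. The following Python sketch uses generic variable names; the example count at the end uses the Rust figures reported in Section 4.2 (534 correct, 16 missed), not output from our training code.</p>

```python
# Illustrative computation of Eqs. (2)-(4) from raw counts.
# Variable names (tp, fp, fn) are generic placeholders.

def precision(tp, fp):
    # Fraction of positive predictions that are correct.
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of actual positives that are recovered.
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall.
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# Rust example from Section 4.2: 534 correct, 4 + 2 + 10 = 16 missed.
print(round(recall(534, 16), 3))  # 0.971
```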
</sec>
</sec>
<sec id="s4" sec-type="result">
<label>4</label>
<title>Results</title>
<sec id="s4_1">
<label>4.1</label>
<title>Accuracy and loss</title>
<p>The accuracy and loss values of the model are shown in <xref ref-type="fig" rid="f5">
<bold>Figure&#xa0;5</bold>
</xref>. Both training and testing accuracy exceed 97% after 20 epochs. The training loss settles around 0.5, while the test loss stabilizes below 0.1 after 20 epochs. As the number of epochs approaches 80, the model reaches its maximum accuracy of 98.7%.</p>
<fig id="f5" position="float">
<label>Figure&#xa0;5</label>
<caption>
<p>The <bold>(A)</bold> accuracy and <bold>(B)</bold> loss curve of the experiment.</p>
</caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-14-1274231-g005.tif"/>
</fig>
</sec>
<sec id="s4_2">
<label>4.2</label>
<title>Confusion matrix</title>
<p>The confusion matrix of the experiment is shown in <xref ref-type="fig" rid="f6">
<bold>Figure&#xa0;6</bold>
</xref>, where the horizontal and vertical axes represent the disease predicted by the model and the true disease, respectively. When a prediction matches the true label, the corresponding diagonal cell of the matrix is incremented by one; when they differ, the increment falls in an off-diagonal cell. Taking &#x2018;Rust&#x2019; as an example, 534 cases were correctly identified, while 4 were misclassified as Frogeye, 2 as healthy, and 10 as Scab. These 10 cases represent the model&#x2019;s most frequent confusion, owing to the similarity in lesion size and color between Rust and Scab. We therefore plan to further refine the model to better distinguish these two diseases.</p>
<fig id="f6" position="float">
<label>Figure&#xa0;6</label>
<caption>
<p>Confusion matrix of disease classification.</p>
</caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-14-1274231-g006.tif"/>
</fig>
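<p>The accumulation rule described above can be sketched in a few lines of Python; the labels below are hypothetical placeholders rather than the disease classes of our dataset.</p>

```python
# Minimal sketch of how a confusion matrix accumulates: rows are true
# labels, columns are predicted labels, and each sample increments one cell.
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1  # diagonal = correct, off-diagonal = confusion
    return cm

# Toy example with three classes and five samples.
y_true = [0, 0, 1, 1, 2]
y_pred = [0, 1, 1, 1, 2]
cm = confusion_matrix(y_true, y_pred, 3)
print(cm.trace())  # number of correct predictions: 4
```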
</sec>
<sec id="s4_3">
<label>4.3</label>
<title>ROC</title>
<p>We have depicted the Receiver Operating Characteristic (ROC) curve of each disease, as shown in <xref ref-type="fig" rid="f7">
<bold>Figure&#xa0;7</bold>
</xref>. The true positive rate is high for every disease, producing very steep ROC curves. The curve for Grey spot differs from the others: it initially reaches about 0.95, and only after the false positive rate exceeds 0.6 does the true positive rate rise above 0.98. These steep ROC curves show that the model separates the various diseases very well; by contrast, a classifier with no discriminative power would produce only the diagonal line.</p>
<fig id="f7" position="float">
<label>Figure&#xa0;7</label>
<caption>
<p>ROC curves of disease samples.</p>
</caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-14-1274231-g007.tif"/>
</fig>
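<p>For reference, each per-class ROC curve in Figure&#xa0;7 is obtained by treating that class one-vs-rest and sweeping a threshold over the model&#x2019;s scores; the area under the curve equals the probability that a randomly chosen positive sample outscores a randomly chosen negative one. The following framework-free sketch computes this rank-based AUC on made-up scores, not on our experimental outputs.</p>

```python
# Rank-based AUC: probability that a random positive scores above a
# random negative (ties count half). Equivalent to the area under the
# one-vs-rest ROC curve.
import numpy as np

def roc_auc(y_true, y_score):
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical "is this sample class k?" labels and model scores for class k.
y_true = np.array([1, 1, 1, 0, 0, 0])
y_score = np.array([0.9, 0.8, 0.35, 0.4, 0.2, 0.1])
print(round(roc_auc(y_true, y_score), 3))  # 0.889
```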
</sec>
<sec id="s4_4">
<label>4.4</label>
<title>Comparison with other attention mechanisms</title>
<p>In order to visually display the impact of different attention mechanisms, we calculated and compared the accuracy of different attention mechanisms (SE, ECA, CBAM, CA, FSCA, MFS) within the MobileNet v3 framework. As shown in <xref ref-type="fig" rid="f8">
<bold>Figure&#xa0;8</bold>
</xref>, the accuracy of our proposed FSCA attention mechanism and of the combined multi-scale MFS mechanism rises rapidly with the number of epochs, though slightly more slowly than the other variants. Once the epoch count exceeds 20, however, they achieve both the best stability and the highest maximum accuracy: the other attention mechanisms fluctuate with relatively large amplitude, whereas MFS and FSCA fluctuate only narrowly around the highest point, demonstrating exceptional stability.</p>
<fig id="f8" position="float">
<label>Figure&#xa0;8</label>
<caption>
<p>Comparison of accuracy curves for different attention mechanisms.</p>
</caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-14-1274231-g008.tif"/>
</fig>
</sec>
<sec id="s4_5">
<label>4.5</label>
<title>Comparison with other CNNs</title>
<p>The accuracy of different CNNs and MobileNet-MFS are also compared. As shown in <xref ref-type="fig" rid="f9">
<bold>Figure&#xa0;9</bold>
</xref>, the light gray curve represents the accuracy of MobileNet-MFS. Like the other models, it rises very quickly and reaches a high plateau after 20 epochs. At the 28th epoch, MobileNet-MFS attains an accuracy of around 98%, better than any other model at the same epoch. Finally, at epoch 75, MobileNet-MFS reaches its maximum accuracy of 98.7%, surpassing all other models.</p>
<fig id="f9" position="float">
<label>Figure&#xa0;9</label>
<caption>
<p>Comparison of accuracy curves for different models.</p>
</caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-14-1274231-g009.tif"/>
</fig>
<p>To compare our model comprehensively with other classic models, we also calculated precision, recall, F1-score, and AUC, which measure a model&#x2019;s capabilities from different perspectives. From <xref ref-type="table" rid="T2">
<bold>Table&#xa0;2</bold>
</xref>, we note that MobileNet-MFS achieves the highest precision, recall, and F1-score. In terms of AUC, however, it falls slightly behind models such as EfficientNet-B0 and MobileViT.</p>
<table-wrap id="T2" position="float">
<label>Table&#xa0;2</label>
<caption>
<p>Precision, Recall, F1-Score and AUC for different models.</p>
</caption>
<table frame="hsides">
<thead>
<tr>
<th valign="top" align="left">Model</th>
<th valign="top" align="center">Precision</th>
<th valign="top" align="center">Recall</th>
<th valign="top" align="center">F1-Score</th>
<th valign="top" align="center">AUC</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">MobileNetV3</td>
<td valign="top" align="left">0.982257</td>
<td valign="top" align="left">0.982272</td>
<td valign="top" align="left">0.982245</td>
<td valign="top" align="left">0.996483</td>
</tr>
<tr>
<td valign="top" align="left">Densenet121</td>
<td valign="top" align="left">0.978340</td>
<td valign="top" align="left">0.978332</td>
<td valign="top" align="left">0.978258</td>
<td valign="top" align="left">0.998184</td>
</tr>
<tr>
<td valign="top" align="left">EfficientNet-B0</td>
<td valign="top" align="left">0.985624</td>
<td valign="top" align="left">0.985555</td>
<td valign="top" align="left">0.985524</td>
<td valign="top" align="left">
<bold>0.998827</bold>
</td>
</tr>
<tr>
<td valign="top" align="left">ShuffleNetV2 x1.0</td>
<td valign="top" align="left">0.981947</td>
<td valign="top" align="left">0.981944</td>
<td valign="top" align="left">0.981842</td>
<td valign="top" align="left">0.998230</td>
</tr>
<tr>
<td valign="top" align="left">Resnet34</td>
<td valign="top" align="left">0.979438</td>
<td valign="top" align="left">0.979317</td>
<td valign="top" align="left">0.979282</td>
<td valign="top" align="left">0.997757</td>
</tr>
<tr>
<td valign="top" align="left">MobileViT</td>
<td valign="top" align="left">0.984214</td>
<td valign="top" align="left">0.984242</td>
<td valign="top" align="left">0.984185</td>
<td valign="top" align="left">0.998727</td>
</tr>
<tr>
<td valign="top" align="left">MobileNet-MFS</td>
<td valign="top" align="left">
<bold>0.986198</bold>
</td>
<td valign="top" align="left">
<bold>0.986211</bold>
</td>
<td valign="top" align="left">
<bold>0.986156</bold>
</td>
<td valign="top" align="left">0.996105</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Beyond these metrics, comparing models also requires considering the computational resources they consume. MobileNet-MFS is built on MobileNet v3 and is a lightweight CNN; this light weight helps it to be deployed in a wider range of scenarios. Computational complexity is equally important, and FLOPs provide an effective way to measure it. The indicators in <xref ref-type="table" rid="T3">
<bold>Table&#xa0;3</bold>
</xref> therefore allow a more comprehensive assessment. Taking parameter count, memory size, and FLOPs into account, MobileNet-MFS holds an advantage over EfficientNet-B0, ResNet-34, and DenseNet-121; it consumes slightly more computing resources than MobileNet v3, but is not as streamlined as ShuffleNet v2.</p>
<table-wrap id="T3" position="float">
<label>Table&#xa0;3</label>
<caption>
<p>Comparison of operational and parameter performance among different models.</p>
</caption>
<table frame="hsides">
<thead>
<tr>
<th valign="middle" align="left">Model</th>
<th valign="top" align="left">TOP-1 Accuracy (%)</th>
<th valign="top" align="left">Parameters Count<break/>(Millions)</th>
<th valign="top" align="left">Memory Size (MB)</th>
<th valign="top" align="left">FLOPs Count (MFLOPs)</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">MobileNet-MFS</td>
<td valign="top" align="left">98.69</td>
<td valign="top" align="left">4.96</td>
<td valign="top" align="left">51.30</td>
<td valign="top" align="left">251.94</td>
</tr>
<tr>
<td valign="top" align="left">MobileNetV3</td>
<td valign="top" align="left">98.39</td>
<td valign="top" align="left">4.21</td>
<td valign="top" align="left">50.39</td>
<td valign="top" align="left">226.44</td>
</tr>
<tr>
<td valign="top" align="left">Densenet121</td>
<td valign="top" align="left">97.90</td>
<td valign="top" align="left">6.96</td>
<td valign="top" align="left">147.10</td>
<td valign="top" align="left">2881.60</td>
</tr>
<tr>
<td valign="top" align="left">EfficientNet-B0</td>
<td valign="top" align="left">98.56</td>
<td valign="top" align="left">4.02</td>
<td valign="top" align="left">79.40</td>
<td valign="top" align="left">398.03</td>
</tr>
<tr>
<td valign="top" align="left">ShuffleNetV2 x1.0</td>
<td valign="top" align="left">98.33</td>
<td valign="top" align="left">1.26</td>
<td valign="top" align="left">20.84</td>
<td valign="top" align="left">149.58</td>
</tr>
<tr>
<td valign="top" align="left">Resnet34</td>
<td valign="top" align="left">98.09</td>
<td valign="top" align="left">21.29</td>
<td valign="top" align="left">37.61</td>
<td valign="top" align="left">3673.72</td>
</tr>
<tr>
<td valign="top" align="left">MobileViT</td>
<td valign="top" align="left">98.49</td>
<td valign="top" align="left">1.94</td>
<td valign="top" align="left">&#x2013;</td>
<td valign="top" align="left">743.48</td>
</tr>
</tbody>
</table>
</table-wrap>
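<p>To give intuition for why MobileNet-based models occupy the low-FLOPs end of Table&#xa0;3, the following back-of-the-envelope sketch contrasts the multiply&#x2013;add cost of a standard convolution with that of the depthwise-separable convolution MobileNet builds on. The layer sizes are illustrative, not taken from our network.</p>

```python
# Parameter and FLOPs estimates for one layer; counts mults and adds.
def conv_params(c_in, c_out, k):
    return c_in * c_out * k * k + c_out  # weights + biases

def conv_flops(c_in, c_out, k, h, w):
    # Standard convolution: every output pixel mixes all input channels.
    return 2 * c_in * c_out * k * k * h * w

def dw_separable_flops(c_in, c_out, k, h, w):
    # Depthwise (per-channel spatial filter) + pointwise (1x1 channel mix).
    depthwise = 2 * c_in * k * k * h * w
    pointwise = 2 * c_in * c_out * h * w
    return depthwise + pointwise

# Illustrative layer: 32 -> 64 channels, 3x3 kernel, 56x56 feature map.
std = conv_flops(32, 64, 3, 56, 56)
sep = dw_separable_flops(32, 64, 3, 56, 56)
print(round(std / sep, 1))  # 7.9: the separable form is ~8x cheaper here
```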
<p>In summary, comparing the evaluation metrics, parameter counts, and computational complexity shows that, even among the many excellent image classification models now available, MobileNet-MFS remains a state-of-the-art choice.</p>
</sec>
</sec>
<sec id="s5" sec-type="discussions">
<label>5</label>
<title>Discussion</title>
<p>Finally, we used Gradient-weighted Class Activation Mapping (Grad-CAM) to extract the feature maps the network relies on when recognizing an image. These maps let us see more intuitively which image features the model attends to. As shown in <xref ref-type="fig" rid="f10">
<bold>Figure&#xa0;10A</bold>
</xref>, the Alternaria leaf spot lesion is identified directly and precisely. From <xref ref-type="fig" rid="f10">
<bold>Figures&#xa0;10B, C</bold>
</xref>, the multiple lesion areas on the Rust and Grey spot leaves are all captured simultaneously, without omission or misjudgment. As shown in <xref ref-type="fig" rid="f10">
<bold>Figure&#xa0;10D</bold>
</xref>, the large yellow area of the Brown spot leaf is well captured, and the individual spots also receive particular attention. These figures demonstrate the model&#x2019;s excellent feature capture ability.</p>
<fig id="f10" position="float">
<label>Figure&#xa0;10</label>
<caption>
<p>Heat map display of feature extraction of leaf disease sites: <bold>(A)</bold> Alternaria leaf spot <bold>(B)</bold> Rust <bold>(C)</bold> Grey spot <bold>(D)</bold> Brown spot.</p>
</caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-14-1274231-g010.tif"/>
</fig>
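<p>For readers who wish to reproduce such heat maps, the core of Grad-CAM is a single weighted combination of a convolutional layer&#x2019;s activation maps by their spatially averaged gradients. The NumPy sketch below shows only this final combination step, with random arrays standing in for real activations and gradients from a trained network.</p>

```python
# Framework-free sketch of the final Grad-CAM combination step.
import numpy as np

def grad_cam(activations, gradients):
    # activations, gradients: (channels, H, W) from the chosen conv layer.
    weights = gradients.mean(axis=(1, 2))             # global-average-pool grads
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum of maps
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence
    return cam / cam.max() if cam.max() > 0 else cam  # normalize to [0, 1]

# Random stand-ins for a real layer's activations and gradients.
acts = np.random.rand(8, 7, 7)
grads = np.random.rand(8, 7, 7)
heatmap = grad_cam(acts, grads)
print(heatmap.shape, float(heatmap.max()))  # (7, 7) 1.0
```

In practice the resulting map is upsampled to the input resolution and overlaid on the image, as in Figure&#xa0;10.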
<p>We also examined the error cases of MobileNet-MFS, selecting the images from the dataset. As shown in <xref ref-type="fig" rid="f11">
<bold>Figures&#xa0;11A, B</bold>
</xref>, the Rust lesion is accurately captured by our model. However, the leaves in <xref ref-type="fig" rid="f11">
<bold>Figures&#xa0;11C, D</bold>
</xref>, which carry Frogeye disease, were mistakenly identified as Rust-infected. We attribute these errors to the many shared visual characteristics of the two diseases, which make this discrimination very difficult for CNNs.</p>
<fig id="f11" position="float">
<label>Figure&#xa0;11</label>
<caption>
<p>
<bold>(A)</bold> Leaves with Rust disease. <bold>(B)</bold> Heat map of feature extraction of the Rust lesion site. <bold>(C, D)</bold> Mistakenly identified leaves with Frogeye disease.</p>
</caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-14-1274231-g011.tif"/>
</fig>
<p>Judging from the misclassified images, even the human eye finds these two cases hard to distinguish, and we cannot rule out residual mislabeling in the dataset itself. Unaided human inspection typically has an error rate in the range of 5&#x2013;10%, while well-trained artificial intelligence can surpass human recognition ability. Considering this inherent randomness, we believe a certain level of error is inevitable.</p>
<p>In terms of raw accuracy alone, our work trails some recent studies. However, our dataset differs from theirs, as a large proportion of our images were collected in the natural environment, and the parameter counts and running times of the models also differ. Although 98.7% is a high score for leaf disease classification, the images in our dataset have been carefully preprocessed and therefore cannot fully reproduce real usage scenarios. We have not yet handled images taken directly in orchards, which remains a weakness of this work. Our next step is to develop a network that can process drone and robot camera images, remove unclear and cluttered backgrounds, and perform accurate classification on mobile devices.</p>
</sec>
<sec id="s6" sec-type="conclusions">
<label>6</label>
<title>Conclusions</title>
<p>Identifying apple leaf diseases is very difficult; thanks to the development of deep learning, a series of models have achieved great success at this task. Building on this work, we improved MobileNet v3 by modifying its attention mechanism to account for the influence of both the channel dimension and spatial position, and we added a multi-scale feature extraction module to further improve network performance. Comparison with similar models shows that our proposed MobileNet-MFS delivers the best accuracy and stability, indicating that the proposed attention mechanism and multi-scale module effectively improve the model&#x2019;s ability to capture leaf disease features, with promise for application in other settings as well. The model&#x2019;s ROC curves and confusion matrix confirm that it resolves the various diseases very well. Finally, we inspected the model&#x2019;s feature extraction maps with Grad-CAM and analyzed the error cases. Compared with previous models, the improved efficiency stems mainly from the cooperation of two components: FSCA strengthens the model&#x2019;s feature discovery ability, while the multi-scale module supplies features at additional scales, both of which are crucial for more accurate classification. This work indicates that MobileNet-MFS is a highly effective model for distinguishing apple leaf diseases, and that the FSCA attention mechanism used in it is worthy of further application in other scenarios.</p>
</sec>
<sec id="s7" sec-type="data-availability">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.</p>
</sec>
<sec id="s9" sec-type="author-contributions">
<title>Author contributions</title>
<p>HC: Methodology, Software, Visualization, Writing &#x2013; review &amp; editing. HL: Investigation, Visualization, Writing &#x2013; original draft.</p>
</sec>
</body>
<back>
<sec id="s10" sec-type="funding-information">
<title>Funding</title>
<p>The author(s) declare financial support was received for the research, authorship, and/or publication of this article. The authors gratefully acknowledge financial support from Key Projects in Shandong Province for Undergraduate Teaching Reform Research (Z2022150).</p>
</sec>
<sec id="s11" sec-type="COI-statement">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="s12" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arsenovic</surname> <given-names>M.</given-names>
</name>
<name>
<surname>Karanovic</surname> <given-names>M.</given-names>
</name>
<name>
<surname>Sladojevic</surname> <given-names>S.</given-names>
</name>
<name>
<surname>Anderla</surname> <given-names>A.</given-names>
</name>
<name>
<surname>Stefanovic</surname> <given-names>D.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Solving current limitations of deep learning based approaches for plant disease detection</article-title>. <source>Symmetry</source> <volume>11</volume>, <fpage>939</fpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.3390/sym11070939</pub-id>
</citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname> <given-names>J.</given-names>
</name>
<name>
<surname>Chen</surname> <given-names>J.</given-names>
</name>
<name>
<surname>Zhang</surname> <given-names>D.</given-names>
</name>
<name>
<surname>Sun</surname> <given-names>Y.</given-names>
</name>
<name>
<surname>Nanehkaran</surname> <given-names>Y.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Using deep transfer learning for image-based plant disease identification</article-title>. <source>Comput. Electron. Agric.</source> <volume>173</volume>, <elocation-id>105393</elocation-id>. doi:&#xa0;<pub-id pub-id-type="doi">10.1016/j.compag.2020.105393</pub-id>
</citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Coulibaly</surname> <given-names>S.</given-names>
</name>
<name>
<surname>Kamsu-Foguem</surname> <given-names>B.</given-names>
</name>
<name>
<surname>Kamissoko</surname> <given-names>D.</given-names>
</name>
<name>
<surname>Traore</surname> <given-names>D.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Deep neural networks with transfer learning in millet crop images</article-title>. <source>Comput. Industry.</source> <volume>108</volume>, <fpage>115</fpage>&#x2013;<lpage>120</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1016/j.compind.2019.02.003</pub-id>
</citation>
</ref>
<ref id="B4">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Deng</surname> <given-names>L.</given-names>
</name>
<name>
<surname>Wang</surname> <given-names>Z.</given-names>
</name>
<name>
<surname>Zhou</surname> <given-names>H.</given-names>
</name>
</person-group> (<year>2019</year>). &#x201c;<article-title>Application of image segmentation technology in crop disease detection and recognition</article-title>,&#x201d; in <source>Computer and computing technologies in agriculture XI</source>. Eds. <person-group person-group-type="editor">
<name>
<surname>Li</surname> <given-names>D.</given-names>
</name>
<name>
<surname>Zhao</surname> <given-names>C.</given-names>
</name>
</person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>), <fpage>365</fpage>&#x2013;<lpage>374</lpage>.</citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ferentinos</surname> <given-names>K. P.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Deep learning models for plant disease detection and diagnosis</article-title>. <source>Comput. Electron. Agric.</source> <volume>145</volume>, <fpage>311</fpage>&#x2013;<lpage>318</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1016/j.compag.2018.01.009</pub-id>
</citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fuentes</surname> <given-names>A.</given-names>
</name>
<name>
<surname>Yoon</surname> <given-names>S.</given-names>
</name>
<name>
<surname>Kim</surname> <given-names>S. C.</given-names>
</name>
<name>
<surname>Park</surname> <given-names>D. S.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition</article-title>. <source>Sensors</source> <volume>17</volume> (<issue>9</issue>), <fpage>2022</fpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.3390/s17092022</pub-id>
</citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hasan</surname> <given-names>R. I.</given-names>
</name>
<name>
<surname>Yusuf</surname> <given-names>S. M.</given-names>
</name>
<name>
<surname>Mohd Rahim</surname> <given-names>M. S.</given-names>
</name>
<name>
<surname>Alzubaidi</surname> <given-names>L.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Automated masks generation for coffee and apple leaf infected with single or multiple diseases-based color analysis approaches</article-title>. <source>Inf. Med. Unlocked.</source> <volume>28</volume>, <elocation-id>100837</elocation-id>. doi:&#xa0;<pub-id pub-id-type="doi">10.1016/j.imu.2021.100837</pub-id>
</citation>
</ref>
<ref id="B8">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Hou</surname> <given-names>Q.</given-names>
</name>
<name>
<surname>Zhou</surname> <given-names>D.</given-names>
</name>
<name>
<surname>Feng</surname> <given-names>J.</given-names>
</name>
</person-group> (<year>2021</year>). &#x201c;<article-title>Coordinate attention for efficient mobile network design</article-title>,&#x201d; in <source>2021 IEEE/CVF conference on computer vision and pattern recognition (CVPR)</source>, <publisher-loc>Nashville, TN, USA</publisher-loc>, <fpage>13708</fpage>&#x2013;<lpage>13717</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1109/CVPR46437.2021.01350</pub-id>
</citation>
</ref>
<ref id="B9">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Howard</surname> <given-names>A.</given-names>
</name>
<name>
<surname>Sandler</surname> <given-names>M.</given-names>
</name>
<name>
<surname>Chen</surname> <given-names>B.</given-names>
</name>
<name>
<surname>Wang</surname> <given-names>W.</given-names>
</name>
<name>
<surname>Chen</surname> <given-names>L.-C.</given-names>
</name>
<name>
<surname>Tan</surname> <given-names>M.</given-names>
</name>
<etal/>
</person-group>. (<year>2019</year>). &#x201c;<article-title>Searching for mobilenetv3</article-title>,&#x201d; in <source>2019 IEEE/CVF international conference on computer vision (ICCV)</source>, <publisher-loc>Seoul, South Korea</publisher-loc>, <fpage>1314</fpage>&#x2013;<lpage>1324</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1109/ICCV.2019.00140</pub-id>
</citation>
</ref>
<ref id="B10">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Howard</surname> <given-names>A.</given-names>
</name>
<name>
<surname>Zhu</surname> <given-names>M.</given-names>
</name>
<name>
<surname>Chen</surname> <given-names>B.</given-names>
</name>
<name>
<surname>Kalenichenko</surname> <given-names>D.</given-names>
</name>
<name>
<surname>Wang</surname> <given-names>W.</given-names>
</name>
<name>
<surname>Weyand</surname> <given-names>T.</given-names>
</name>
<etal/>
</person-group>. (<year>2017</year>). <source>Mobilenets: Efficient convolutional neural networks for mobile vision applications</source>. <fpage>arXiv:1704.04861</fpage>.</citation>
</ref>
<ref id="B11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hu</surname> <given-names>J.</given-names>
</name>
<name>
<surname>Shen</surname> <given-names>L.</given-names>
</name>
<name>
<surname>Albanie</surname> <given-names>S.</given-names>
</name>
<name>
<surname>Sun</surname> <given-names>G.</given-names>
</name>
<name>
<surname>Wu</surname> <given-names>E.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Squeeze-and-excitation networks</article-title>. <source>IEEE Trans. Pattern Anal. Mach. Intell.</source> <volume>42</volume>, <fpage>2011</fpage>&#x2013;<lpage>2023</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1109/TPAMI.2019.2913372</pub-id>
</citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jiang</surname> <given-names>P.</given-names>
</name>
<name>
<surname>Chen</surname> <given-names>Y.</given-names>
</name>
<name>
<surname>Liu</surname> <given-names>B.</given-names>
</name>
<name>
<surname>He</surname> <given-names>D.</given-names>
</name>
<name>
<surname>Liang</surname> <given-names>C.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Real-time detection of apple leaf diseases using deep learning approach based on improved convolutional neural networks</article-title>. <source>IEEE Access</source> <volume>7</volume>, <fpage>59069</fpage>&#x2013;<lpage>59080</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1109/ACCESS.2019.2914929</pub-id>
</citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kamilaris</surname> <given-names>A.</given-names>
</name>
<name>
<surname>Prenafeta-Bold&#xfa;</surname> <given-names>F. X.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Deep learning in agriculture: A survey</article-title>. <source>Comput. Electron. Agric.</source> <volume>147</volume>, <fpage>70</fpage>&#x2013;<lpage>90</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1016/j.compag.2018.02.016</pub-id>
</citation>
</ref>
<ref id="B14">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Kavyashree</surname> <given-names>P. S. P.</given-names>
</name>
<name>
<surname>El-Sharkawy</surname> <given-names>M.</given-names>
</name>
</person-group> (<year>2021</year>). &#x201c;<article-title>Compressed mobilenet v3:a light weight variant for resource-constrained platforms</article-title>,&#x201d; in <source>2021 IEEE 11th annual computing and communication workshop and conference (CCWC)</source>, <publisher-loc>NV, USA</publisher-loc>, <fpage>0104</fpage>&#x2013;<lpage>0107</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1109/CCWC51732.2021.9376113</pub-id>
</citation>
</ref>
<ref id="B15">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Khan</surname> <given-names>M. A.</given-names>
</name>
<name>
<surname>Akram</surname> <given-names>T.</given-names>
</name>
<name>
<surname>Sharif</surname> <given-names>M.</given-names>
</name>
<name>
<surname>Awais</surname> <given-names>M.</given-names>
</name>
<name>
<surname>Javed</surname> <given-names>K.</given-names>
</name>
<name>
<surname>Ali</surname> <given-names>H.</given-names>
</name>
<etal/>
</person-group>. (<year>2018</year>). <article-title>Ccdf: Automatic system for segmentation and recognition of fruit crops diseases based on correlation coefficient and deep cnn features</article-title>. <source>Comput. Electron. Agric.</source> <volume>155</volume>, <fpage>220</fpage>&#x2013;<lpage>236</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1016/j.compag.2018.10.013</pub-id>
</citation>
</ref>
<ref id="B16">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Krizhevsky</surname> <given-names>A.</given-names>
</name>
<name>
<surname>Sutskever</surname> <given-names>I.</given-names>
</name>
<name>
<surname>Hinton</surname> <given-names>G. E.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Imagenet classification with deep convolutional neural networks</article-title>. <source>Commun. ACM</source> <volume>60</volume>, <fpage>84</fpage>&#x2013;<lpage>90</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1145/3065386</pub-id>
</citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname> <given-names>Z.</given-names>
</name>
<name>
<surname>Yang</surname> <given-names>Y.</given-names>
</name>
<name>
<surname>Li</surname> <given-names>Y.</given-names>
</name>
<name>
<surname>Guo</surname> <given-names>R.</given-names>
</name>
<name>
<surname>Yang</surname> <given-names>J.</given-names>
</name>
<name>
<surname>Yue</surname> <given-names>J.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>A solanaceae disease recognition model based on se-inception</article-title>. <source>Comput. Electron. Agric.</source> <volume>178</volume>, <elocation-id>105792</elocation-id>. doi:&#xa0;<pub-id pub-id-type="doi">10.1016/j.compag.2020.105792</pub-id>
</citation>
</ref>
<ref id="B18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname> <given-names>J.</given-names>
</name>
<name>
<surname>Du</surname> <given-names>K.</given-names>
</name>
<name>
<surname>Zheng</surname> <given-names>F.</given-names>
</name>
<name>
<surname>Zhang</surname> <given-names>L.</given-names>
</name>
<name>
<surname>Gong</surname> <given-names>Z.</given-names>
</name>
<name>
<surname>Sun</surname> <given-names>Z.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>A recognition method for cucumber diseases using leaf symptom images based on deep convolutional neural network</article-title>. <source>Comput. Electron. Agric.</source> <volume>154</volume>, <fpage>18</fpage>&#x2013;<lpage>24</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1016/j.compag.2018.08.048</pub-id>
</citation>
</ref>
<ref id="B19">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Mokhtar</surname> <given-names>U.</given-names>
</name>
<name>
<surname>Ali</surname> <given-names>M. A. S.</given-names>
</name>
<name>
<surname>Hassenian</surname> <given-names>A. E.</given-names>
</name>
<name>
<surname>Hefny</surname> <given-names>H.</given-names>
</name>
</person-group> (<year>2015</year>). &#x201c;<article-title>Tomato leaves diseases detection approach based on support vector machines</article-title>,&#x201d; in <source>2015 11th international computer engineering conference (ICENCO)</source>, <publisher-loc>Cairo, Egypt</publisher-loc>, <fpage>246</fpage>&#x2013;<lpage>250</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1109/ICENCO.2015.7416356</pub-id>
</citation>
</ref>
<ref id="B20">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Pardede</surname> <given-names>H. F.</given-names>
</name>
<name>
<surname>Suryawati</surname> <given-names>E.</given-names>
</name>
<name>
<surname>Sustika</surname> <given-names>R.</given-names>
</name>
<name>
<surname>Zilvan</surname> <given-names>V.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Unsupervised convolutional autoencoder-based feature learning for automatic detection of plant diseases</article-title>,&#x201d; in <source>2018 international conference on computer, control, informatics and its applications (IC3INA)</source>, <publisher-loc>Tangerang, Indonesia</publisher-loc>, <fpage>158</fpage>&#x2013;<lpage>162</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1109/IC3INA.2018.8629518</pub-id>
</citation>
</ref>
<ref id="B21">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Sandler</surname> <given-names>M.</given-names>
</name>
<name>
<surname>Howard</surname> <given-names>A.</given-names>
</name>
<name>
<surname>Zhu</surname> <given-names>M.</given-names>
</name>
<name>
<surname>Zhmoginov</surname> <given-names>A.</given-names>
</name>
<name>
<surname>Chen</surname> <given-names>L.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>MobileNetV2: Inverted Residuals and Linear Bottlenecks</article-title>,&#x201d; in <source>2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition</source>, <publisher-loc>Salt Lake City, UT, USA</publisher-loc>, <fpage>4510</fpage>&#x2013;<lpage>4520</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1109/CVPR.2018.00474</pub-id>
</citation>
</ref>
<ref id="B22">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shin</surname> <given-names>J.</given-names>
</name>
<name>
<surname>Chang</surname> <given-names>Y. K.</given-names>
</name>
<name>
<surname>Heung</surname> <given-names>B.</given-names>
</name>
<name>
<surname>Nguyen-Quang</surname> <given-names>T.</given-names>
</name>
<name>
<surname>Price</surname> <given-names>G. W.</given-names>
</name>
<name>
<surname>Al-Mallahi</surname> <given-names>A.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>A deep learning approach for rgb image-based powdery mildew disease detection on strawberry leaves</article-title>. <source>Comput. Electron. Agric.</source> <volume>183</volume>, <elocation-id>106042</elocation-id>. doi:&#xa0;<pub-id pub-id-type="doi">10.1016/j.compag.2021.106042</pub-id>
</citation>
</ref>
<ref id="B23">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shoaib</surname> <given-names>M.</given-names>
</name>
<name>
<surname>Hussain</surname> <given-names>T.</given-names>
</name>
<name>
<surname>Shah</surname> <given-names>B.</given-names>
</name>
<name>
<surname>Ullah</surname> <given-names>I.</given-names>
</name>
<name>
<surname>Shah</surname> <given-names>S. M.</given-names>
</name>
<name>
<surname>Ali</surname> <given-names>F.</given-names>
</name>
<etal/>
</person-group>. (<year>2022</year>). <article-title>Deep learning-based segmentation and classification of leaf images for detection of tomato plant disease</article-title>. <source>Front. Plant Sci.</source> <volume>13</volume>. doi:&#xa0;<pub-id pub-id-type="doi">10.3389/fpls.2022.1031748</pub-id>
</citation>
</ref>
<ref id="B24">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Singh</surname> <given-names>A.</given-names>
</name>
<name>
<surname>Ganapathysubramanian</surname> <given-names>B.</given-names>
</name>
<name>
<surname>Singh</surname> <given-names>A. K.</given-names>
</name>
<name>
<surname>Sarkar</surname> <given-names>S.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Machine learning for high-throughput stress phenotyping in plants</article-title>. <source>Trends Plant Sci.</source> <volume>21</volume>, <fpage>110</fpage>&#x2013;<lpage>124</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1016/j.tplants.2015.10.015</pub-id>
</citation>
</ref>
<ref id="B25">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sun</surname> <given-names>H.</given-names>
</name>
<name>
<surname>Xu</surname> <given-names>H.</given-names>
</name>
<name>
<surname>Liu</surname> <given-names>B.</given-names>
</name>
<name>
<surname>He</surname> <given-names>D.</given-names>
</name>
<name>
<surname>He</surname> <given-names>J.</given-names>
</name>
<name>
<surname>Zhang</surname> <given-names>H.</given-names>
</name>
<etal/>
</person-group>. (<year>2021</year>). <article-title>Mean-ssd: A novel real-time detector for apple leaf diseases using improved light-weight convolutional neural networks</article-title>. <source>Comput. Electron. Agric.</source> <volume>189</volume>, <elocation-id>106379</elocation-id>. doi:&#xa0;<pub-id pub-id-type="doi">10.1016/j.compag.2021.106379</pub-id>
</citation>
</ref>
<ref id="B26">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Too</surname> <given-names>E. C.</given-names>
</name>
<name>
<surname>Yujian</surname> <given-names>L.</given-names>
</name>
<name>
<surname>Njuki</surname> <given-names>S.</given-names>
</name>
<name>
<surname>Yingchun</surname> <given-names>L.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>A comparative study of fine-tuning deep learning models for plant disease identification</article-title>. <source>Comput. Electron. Agric.</source> <volume>161</volume>, <fpage>272</fpage>&#x2013;<lpage>279</lpage>. doi:&#xa0;<pub-id pub-id-type="doi">10.1016/j.compag.2018.03.032</pub-id>
</citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname> <given-names>Y.</given-names>
</name>
<name>
<surname>Wang</surname> <given-names>H.</given-names>
</name>
<name>
<surname>Peng</surname> <given-names>Z.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Rice diseases detection and classification using attention based neural network and bayesian optimization</article-title>. <source>Expert Syst. Appl.</source> <volume>178</volume>, <elocation-id>114770</elocation-id>. doi:&#xa0;<pub-id pub-id-type="doi">10.1016/j.eswa.2021.114770</pub-id>
</citation>
</ref>
<ref id="B28">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Woo</surname> <given-names>S.</given-names>
</name>
<name>
<surname>Park</surname> <given-names>J.</given-names>
</name>
<name>
<surname>Lee</surname> <given-names>J.-Y.</given-names>
</name>
<name>
<surname>Kweon</surname> <given-names>I. S.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Cbam: Convolutional block attention module</article-title>,&#x201d; in <source>Computer vision &#x2013; ECCV 2018</source>. Eds. <person-group person-group-type="editor">
<name>
<surname>Ferrari</surname> <given-names>V.</given-names>
</name>
<name>
<surname>Hebert</surname> <given-names>M.</given-names>
</name>
<name>
<surname>Sminchisescu</surname> <given-names>C.</given-names>
</name>
<name>
<surname>Weiss</surname> <given-names>Y.</given-names>
</name>
</person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>), <fpage>3</fpage>&#x2013;<lpage>19</lpage>.</citation>
</ref>
<ref id="B29">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xiong</surname> <given-names>Y.</given-names>
</name>
<name>
<surname>Liang</surname> <given-names>L.</given-names>
</name>
<name>
<surname>Wang</surname> <given-names>L.</given-names>
</name>
<name>
<surname>She</surname> <given-names>J.</given-names>
</name>
<name>
<surname>Wu</surname> <given-names>M.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Identification of cash crop diseases using automatic image segmentation algorithm and deep learning with expanded dataset</article-title>. <source>Comput. Electron. Agric.</source> <volume>177</volume>, <elocation-id>105712</elocation-id>. doi:&#xa0;<pub-id pub-id-type="doi">10.1016/j.compag.2020.105712</pub-id>
</citation>
</ref>
</ref-list>
</back>
</article>