key: cord-0027347-ig5w7ogd authors: Elaraby, Ahmed; Hamdy, Walid; Alanazi, Saad title: Classification of Citrus Diseases Using Optimization Deep Learning Approach date: 2022-02-10 journal: Comput Intell Neurosci DOI: 10.1155/2022/9153207 sha: 384eec7527d5e1713b34be628f836ecf6597aac8 doc_id: 27347 cord_uid: ig5w7ogd Most plant diseases have apparent signs, and the recognized method today is for an expert plant pathologist to identify the disease by examining infected plant leaves under a microscope. Manually diagnosing diseases is time consuming, and the effectiveness of the diagnosis depends on the pathologist's skill, making this a promising application area for computer-aided diagnostic systems. The proposed work describes an approach for detecting and classifying diseases in citrus plants using deep learning and image processing. Plant diseases are considered the main cause of decreased productivity, which results in financial losses. Citrus is an important source of nutrients, such as vitamin C, all around the world; however, citrus diseases have a negative impact on citrus fruit yield and quality. In the recent decade, computer vision and image processing techniques have become increasingly popular for the detection and classification of plant diseases. The suggested approach is evaluated on the citrus disease image gallery dataset and the combined dataset (citrus image datasets of infested scale and plant village). These datasets were used to identify and classify citrus diseases such as anthracnose, black spot, canker, scab, greening, and melanose. AlexNet and VGG19, two kinds of convolutional neural networks, were used to build and test the proposed approach. The system's total performance reached 94% at its best, and the proposed approach outperforms the existing methods. The identification and categorization of citrus diseases are essential techniques for attaining the greatest economic value from citrus.
Citrus disease classification, the most significant element of citrus disease processing, is increasingly performed by machine learning rather than by manual techniques, drawing on computer image processing, pattern recognition, and related technologies. Automatic fruit classification using machine vision can not only solve difficulties such as poor productivity and inconsistent classification standards that come with human sorting but can also increase classification accuracy [1]. For many people in the world, agriculture has been their primary source of income. Agriculture's increased commercialization has had a significant impact on our environment. One of the most pressing problems in agriculture is the identification of plant diseases. Early disease identification helps prevent disease transmission to other plants, which would otherwise lead to significant economic losses. Plant diseases have a wide variety of effects, from minor symptoms to full plantation loss, all of which have a significant influence on the agricultural economy [2]. Deep learning is now widely utilized in a variety of disciplines, including object identification [3], signal and speech recognition [4], biomedical image classification [5, 6], and segmentation [7]. Deep learning is also being utilized extensively in agriculture for the identification and categorization of plant diseases [8]. The convolutional neural network (CNN) is regarded as the most effective deep learning approach [9]. Several CNN architectures, such as AlexNet [10], GoogLeNet [11], and others, are utilized to identify and classify plant diseases. Furthermore, many researchers have used deep learning models for the identification and classification of citrus diseases (Pourreza et al. [12], Barman and Ridip [13], Xiaoling et al. [14], and Zia Ur Rehman et al. [15]). The available dataset for training deep learning models has a significant impact on their performance.
On a sufficiently large dataset, these models exhibit improved outcomes and excellent generalizability. The datasets currently available for citrus plant diseases usually lack sufficient images in a variety of situations, which are required for developing high-accuracy models. Given a small dataset, the model may overfit and perform poorly on the real-world test dataset. To enlarge the dataset, different data augmentation techniques, such as affine transformation and perspective transformation, are utilized [16]. Generative adversarial networks (GANs) are used to create synthetic images when the training images are inadequate and conventional image manipulation techniques cannot vary the outputs further. The major goal of this research is to use deep learning approaches to identify citrus plant diseases at a lower cost. Two distinct citrus image databases, fruit disease images (FDI) and leaf disease images (LDI), are used to solve this problem. Viruses, fungi, mold, bacteria, and mites are the most common causes of diseases in plants. The proposed approach detects and classifies the affected plant's diseases and then presents the results in multiple performance metrics to prove the effectiveness of our models. Compared with prior or current methodologies, the proposed approach yields results with less calculation time and more accuracy. We used stochastic gradient descent with momentum to optimize the models. Many approaches for detecting and classifying fruit diseases have been proposed by researchers in the fields of computer vision and machine learning [17]. For segmentation of arecanut bunches, Dhanesha et al. [18] employed the YCgCr color model approach. For segmenting bunches, the approach employs the volumetric overlap error and dice similarity coefficient to estimate the similarity between the input image and the ground truth. This technique focuses on separating arecanut bunches from regions that are not arecanut.
The same approach is expanded using the HSV color model for the purpose of bunch segmentation [19]. To diagnose rice plant diseases, Ghosal and Sarkar [20] presented a VGG16 with transfer learning. The authors utilized four classes of images to train this classifier, and VGG16 achieved an accuracy of up to 92.4%. Kumar et al. [21] presented a system for identifying diseases in coffee leaves; a radial basis function neural network, a fuzzy logic-based expert system, transfer learning techniques, and a CNN with data augmentation were used for identification. This study employs two types of datasets: original leaf images and chosen symptomatic portions from leaf images. There are five different kinds of leaf images in each dataset. The model performed well with a score of 97.61%. For identifying diseases in millet crops, Coulibaly et al. [22] used a VGG16 model with a transfer learning method. This study gathered 124 leaf images and divided them into two categories: mildew infections and healthy leaves. The accuracy of the VGG16 model was 95%. A convolutional neural network was proposed by Hari et al. [23] as an effective method for detecting diseases in plant types such as grape, maize, tomato, and apple. The dataset comprises a total of 15,210 leaf images divided into ten classes, which were used to train and test the model. The accuracy of the suggested convolutional neural network was 86%. For identifying diseases in tomato leaf images, Jiang et al. [24] employed the CNN model ResNet50. The collection contains 3,000 images that belong to three classes. This model has 98.0% accuracy. For identifying diseases in plant leaves, Nandhini and Bhavani [25] offered machine learning methods such as KNN, decision trees, and SVM.
To segment the diseased part of the leaf image, they employed a feature extraction technique that involves several steps, including converting RGB images to lab color space models for color feature extraction, K-means clustering, fast Fourier transform and histogram, scale-invariant feature transform for shape feature extraction, and principal component analysis for lowering the vector size. For classification, the algorithms discussed above were utilized, with SVM outperforming the other two techniques. Panchal et al. [26] utilized a random forest classifier to detect early blight, late blight infections, and bacterial spot in plant leaves. The HSV (hue, saturation, value) color model was utilized to separate the sick and healthy portions of the leaf in image segmentation, and a gray level co-occurrence matrix was used for feature extraction. Those models have a 98% accuracy rate. Automatic plant disease identification for 28 distinct classes gathered from 15 different plants was presented by Kamal et al. [27]. A total of 23,352 images were chosen, ranging in size from 43 to 1,760 per class. For feature extraction, seven distinct models were chosen: DenseNet 121, DenseNet 169, VGG19, InceptionV3, NasNetmobile, MobileNet, and ResNet. 50,000 images were gathered from 14 distinct crops by Agarwal et al. [28]. All the images were created using the same dimensions. Three convolutional layers with three max-pooling layers and various filters are included in the proposed model. Adhikari et al. [29] presented a methodology for automatically detecting plant illness, particularly in tomato plants, using image processing. The datasets, covering gray spot, late blight, and bacterial canker, included images of three forms of tomato plant diseases. Some of the images were taken from the Internet, while others were taken with camera equipment on the premises. The CNN model that was formed contains 24 convolutional layers and two fully connected layers. Janarthan et al.
[30] created a system for the categorization of four distinct citrus leaf diseases that included an embedding module, a patch module, and a deep neural network. This study employed a dataset of 609 images. The patch module divides the various lesion patches found on leaves into individual images, increasing the amount of data available for training. Background removal and data augmentation are among the preprocessing techniques employed. The training is performed with a deep Siamese network. The suggested approach achieved 94.04% with a minimal computing cost; only roughly 2.3 million parameters need to be adjusted to train the network. Pan et al. [31] described a deep convolutional neural network-based technique. Black spot, anthracnose, sand rust, canker, scab, and greening (HLB) are among the diseases included in the collection, which has 2,097 images. To expand the quantity of data available for training, data augmentation approaches are used. The dataset is partitioned 6:2:2 for training, validation, and testing, respectively. Features are extracted and classified by using the DenseNet model. The last dense block in this work is changed to simplify the DenseNet model. With a decent prediction time, the proposed strategies achieved an accuracy of 88%. Zhang et al. [32] proposed a technique for detecting canker disease. Deep neural networks are used in both rounds of this process. Generative adversarial networks (GANs) are employed in the initial stage to enlarge the dataset by reproducing the original dataset and creating synthetic images. The second step is based on AlexNet, and it involves making modifications to the optimization target and updating the parameters via Siamese training, with an accuracy of 90.9% and a recall of 86.5%. In this paper, we study and evaluate the effectiveness of several first-order optimizers, particularly for identifying images of citrus diseases using pretrained models such as AlexNet and VGG19.
The pretrained AlexNet model surpasses the other architectures when it comes to categorizing images of citrus diseases, according to the findings of the experiments. In recent years, the automatic detection of citrus plant diseases using deep neural networks has grown in popularity. We present a short description of the proposed framework for the detection and classification of diseases in citrus plants using deep learning and image processing. The general architecture of our suggested deep learning models is depicted in Figure 1, which includes input datasets, a preprocessing phase, a deep learning model phase, a transfer learning phase, a classification phase, and an evaluation metrics phase. First, lesion patches on citrus fruits can be identified and exposed by using the suggested deep learning models. The second step is to classify citrus diseases. Training a neural network from scratch takes a long time and necessitates a good hyperparameter selection technique. Instead, transferring the weights from a conventional pretrained network is simple and delivers better classification performance measures. The schematic diagram of transfer learning-based citrus plant classification from disease images is shown in Figure 2. The disease images are scaled to fit a pretrained network's standard input size. Data augmentation via rotation is performed in the preprocessing step, since a deep neural network works effectively with a higher number of images. The selected model's initial layers and network weights are transferred. The discriminative features are extracted from the disease images by the appropriate network. Modifying the last layers allows for classification. A common deep learning approach is based on transfer learning, in which pretrained model weights are transferred to a new classification problem. As a result, training becomes more efficient and easier. The architectures AlexNet and VGG19 are employed in this study.
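The transfer-learning idea described above, reusing frozen pretrained weights as a feature extractor and retraining only the final classification layers, can be sketched in plain NumPy. This is a minimal sketch: the backbone weights, the 4096-element input, and the 512-dimensional feature size are illustrative stand-ins, not the actual AlexNet/VGG19 configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a pretrained backbone: frozen weights mapping a flattened
# image vector to a feature vector (this role is played by the AlexNet/VGG19
# convolutional layers in the paper; sizes here are arbitrary).
W_backbone = rng.standard_normal((4096, 512)) * 0.01

def extract_features(x):
    # Frozen layer: these weights are never updated during fine-tuning.
    return np.maximum(W_backbone.T @ x, 0.0)  # ReLU features

# New classification head -- the only trainable part, with one output per
# disease class (anthracnose, black spot, canker, scab, greening, melanose).
num_classes = 6
W_head = rng.standard_normal((512, num_classes)) * 0.01

def predict(x):
    logits = W_head.T @ extract_features(x)
    return int(np.argmax(logits))

x = rng.random(4096)            # a flattened, preprocessed input image
features = extract_features(x)  # shape (512,)
label = predict(x)              # integer class index in [0, 6)
```

Only `W_head` would be updated during training, which is why transfer learning converges faster than training the whole network from scratch.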
AlexNet [33] is made up of eight learnable layers (five convolutional and three fully connected layers). Due to its deep architecture, VGG19 [34] is a well-known pretrained model for image classification that works well. It has 47 layers, including sixteen convolutional layers, five max-pooling layers, and three fully connected layers. AlexNet [35] consists of eight layers, five of which are convolutional and three of which are fully connected. Each convolutional layer is paired with a max-pooling layer and a normalization layer to reduce the image size and normalize the output pixel values [36]. For AlexNet, the images are resized to 224 * 224 pixels. The first convolutional layer uses an input image of 224 * 224 * 3 pixels with 96 kernels of size 11 * 11 * 3 pixels and a stride of 4 * 4. Here, 3 denotes the three channels of RGB images. This layer has a total of 34,944 parameters. The max-pooling layer follows with a pool size of 2 * 2 and a stride of 2. The second convolutional layer takes data from the preceding layer and has 256 kernels with a stride of 1, followed by a max-pooling layer with a 2 * 2 pool size and a stride of 2. This layer has a total of 2,973,952 parameters. The third convolutional layer has 384 kernels with a stride of 1 and is followed by a max-pooling layer with a pool size of 2 * 2 and a stride of 2, taking data from the previous layers. This layer has a total of 885,120 parameters. The fourth convolutional layer, with 384 kernels and a stride of 1, takes input from the preceding layers. This layer has a total of 1,327,488 parameters. The fifth convolutional layer has 256 kernels and a stride of 1 and is followed by a max-pooling layer with a 2 * 2 pool size and a stride of 2, integrating input from previous layers. This layer has a total of 884,992 parameters. There are three dense layers with 4,096 neurons after the five convolutional layers. A total of 28,079,671 parameters are utilized.
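The per-layer parameter counts quoted above follow the usual convolutional-layer formula, (kernel height × kernel width × input channels + 1 bias) × number of filters. A small helper reproduces the first-layer figure; the later counts depend on intermediate channel dimensions not fully specified in the text, so only the first layer is checked here.

```python
def conv_layer_params(kernel_h, kernel_w, in_channels, num_filters):
    # Each filter has kernel_h * kernel_w * in_channels weights plus one bias.
    return (kernel_h * kernel_w * in_channels + 1) * num_filters

# First AlexNet convolutional layer: 96 kernels of size 11 x 11 x 3.
first_layer = conv_layer_params(11, 11, 3, 96)
print(first_layer)  # 34944, matching the count quoted above
```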
The activation function utilized here is ReLU, while the Softmax activation function is used for the last dense layer. The VGG19 [34] network is similar to VGG16, but it has 19 weight layers instead of 16, including 16 convolutional layers and three fully connected dense layers. The first and second layers each include 64 filters with 3 * 3 kernels and are followed by a max-pooling layer. After the max-pooling layer, there are 128 filters with a 3 * 3 kernel in the second and third convolutional layers; following that, there are four convolutional layers with 256 filters of 3 * 3 kernel and a max-pooling layer, in that order. Two further convolutional layers with 512 filters of 3 * 3 kernel are put in sequence, followed by a max-pooling layer. This output is then routed into fully connected layers. There are three fully connected dense layers with 4,096, 4,096, and 1,000 neurons. For all layers, the activation function is ReLU, except for the last dense layer, which uses the Softmax activation function. Gradient descent is a first-order optimization process that iteratively adjusts a neural network's learnable parameters to minimize the loss. Generally, the gradient indicates the direction in which the loss function's rate of change is the steepest, so each learnable parameter is moved along the negative gradient direction with an appropriate step size, the learning rate. The update equation is W ← W − η∇L(W), where W is the learnable parameter vector, η denotes the step size, and L denotes the loss function. The gradient descent algorithm has three major variations, depending on how many data samples are used for gradient computation: minibatch gradient descent (MBGD), stochastic gradient descent (SGD), and batch gradient descent (BGD).
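The update W ← W − η∇L(W) can be demonstrated on a toy one-dimensional loss; the quadratic objective below is only an illustration, not the network's actual loss function.

```python
def gradient_descent_step(w, grad, lr):
    # w <- w - eta * dL/dw : move against the gradient with step size eta.
    return w - lr * grad

# Toy loss L(w) = (w - 3)^2, whose gradient is dL/dw = 2 * (w - 3).
w = 0.0
for _ in range(200):
    w = gradient_descent_step(w, 2.0 * (w - 3.0), lr=0.1)
print(w)  # converges toward the minimizer w = 3
```

Each step shrinks the distance to the minimizer by a constant factor (here 0.8), which is the geometric convergence typical of gradient descent on a quadratic loss.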
In the BGD technique, the loss function gradient is calculated for the whole training dataset, whereas in the SGD approach, a parameter update is performed for each training sample. In the MBGD algorithm, the entire training dataset is partitioned into minibatches, and the parameters are updated for each minibatch. On the one hand, BGD causes sluggish training and superfluous calculations. On the other hand, SGD is faster, and although there are swings owing to frequent updates with large volatility, it is generally stable. Minibatch gradient descent has a lower variance of parameter updates than the other two methods, which can lead to more steady convergence. We used stochastic gradient descent with momentum (SGDM). The SGD technique is extended with a momentum term to obtain SGDM [37], which takes past gradients into consideration in each dimension. The momentum term prevents undesirable oscillations and speeds up the algorithm's convergence. In our experiments, we used a sample of images taken from benchmark databases. The suggested approach is evaluated on the citrus disease image gallery dataset, the combined dataset (citrus image database of infested scale and plant village), and our own gathered image database. Citrus diseases including anthracnose, black spot, canker, scab, greening, and melanose were detected and classified using these datasets. The suggested method outperforms current techniques. Our data are divided into two databases: the first one for fruit disease images (FDI) and the second one for leaf disease images (LDI). The description of our two databases is shown in Table 1. The samples of the first database, FDI, and the samples of the second database, LDI, are presented in Figures 3 and 4, respectively. To be successfully trained, deep learning classification networks require a large amount of training data.
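A minimal sketch of the momentum update used by SGDM, again on a toy quadratic loss. The learning rate and momentum coefficient below are illustrative defaults, not the paper's training settings.

```python
def sgdm_step(w, grad, velocity, lr=0.01, momentum=0.9):
    # The velocity accumulates a decaying sum of past gradients, which damps
    # oscillations and accelerates progress along consistent directions.
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Toy loss L(w) = (w - 3)^2, gradient 2 * (w - 3).
w, v = 0.0, 0.0
for _ in range(500):
    w, v = sgdm_step(w, 2.0 * (w - 3.0), v)
```

Compared with the plain update, the velocity term lets the iterate keep moving through flat or noisy regions of the loss surface, which is why momentum typically converges faster in practice.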
Unfortunately, the small size and scarcity of current citrus disease image collections, as well as the lack of genuine annotated ground truths, continue to obstruct the automatic diagnosis of citrus diseases. To solve this problem, augmentation operations were performed on the training set to increase the number of training images and avoid the overfitting problem that can occur when a small amount of training data is used during the training process. Several augmentation operations, such as random cropping, rotation, mirroring, and color shifting using principal component analysis, were applied to the data. After augmentation, we get 12,211 images, as illustrated in Table 2. The input images are transformed to a standard size of 256 * 256 in the proposed work to make implementation easier and save processing time. We employed different proportions of training and testing samples to evaluate the effect of the training/testing split on classification. We used 80-20 and 60-40 training and testing splits, respectively. As the number of training samples grows, the technique's accuracy is expected to improve. For the classification of citrus disease images, the AlexNet and VGG19 networks are trained and evaluated using the two datasets. The evaluation is divided into two phases: in the first phase, each database is split into 80% of the images for training and 20% for testing, and in the second phase, it is split into 60% for training and 40% for testing. The SGDM optimizer is used for training. The following equations define the performance metrics utilized in this study.
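The augmentation and split steps above can be sketched with NumPy. Rotation and mirroring are shown; random cropping and PCA-based color shifting are omitted for brevity, and the image counts and sizes are illustrative rather than the paper's actual dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    # Simple geometric variants: the original, three rotations, and a mirror.
    return [image,
            np.rot90(image, 1),
            np.rot90(image, 2),
            np.rot90(image, 3),
            np.fliplr(image)]

def train_test_split(items, train_fraction=0.8):
    # Shuffle indices, then cut at the requested fraction (e.g. 80:20).
    order = rng.permutation(len(items))
    cut = int(len(items) * train_fraction)
    return ([items[i] for i in order[:cut]],
            [items[i] for i in order[cut:]])

images = [rng.random((256, 256, 3)) for _ in range(10)]  # toy 256x256 inputs
augmented = [variant for img in images for variant in augment(img)]
train_set, test_set = train_test_split(augmented, train_fraction=0.8)
```

Each source image yields five training samples here; richer augmentation (crops, color shifts) multiplies the dataset further, which is what makes the 12,211-image figure reachable from a much smaller raw collection.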
Sensitivity, otherwise called recall, indicates how many examples of the positive set were correctly labelled; it can be measured using equation (2), Sensitivity = TP/(TP + FN), where TP represents true positives, or the number of positive cases that are correctly classified, and FN represents false negatives, or the number of positive cases that are inaccurately labelled as negatives. Specificity is the conditional probability of correctly labelling the negative class, which generally corresponds to the likelihood that a negative label is true; it is expressed by equation (3), Specificity = TN/(TN + FP), where TN signifies the number of true negatives, or cases that are negative and labelled as such, and FP indicates the number of false positives, or cases that are erroneously classified as positive. In general, sensitivity and specificity measure the algorithm's effectiveness on the positive and negative classes, respectively. Accuracy is the most frequent criterion for assessing classification efficiency. During the assessment period, the accuracy is evaluated every 20 iterations. This metric, which counts the proportion of samples that are properly classified, is represented by equation (4): Accuracy = (TP + TN)/(TP + TN + FP + FN). Equation (5) gives precision when we divide the number of true positives by the same number plus the number of false positives: Precision = TP/(TP + FP). This statistic assesses the algorithm's ability to anticipate results; the model's precision relates to how "exact" it is in terms of how many of the anticipated positives are actually positive. The performance of a deep neural network may be improved by properly selecting hyperparameters such as batch size, maximum epochs, and step size. For training the pretrained models, a batch size of 32 is chosen. A low step size of 0.00001 and 20 epochs may lead to better network performance when transferring pretrained network weights.
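Equations (2)–(5) and the F-score can be computed directly from confusion-matrix counts; the example counts below are made up purely for illustration.

```python
def classification_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                 # equation (2), recall
    specificity = tn / (tn + fp)                 # equation (3)
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # equation (4)
    precision = tp / (tp + fp)                   # equation (5)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "precision": precision, "f_score": f_score}

# Hypothetical confusion-matrix counts for one disease class.
m = classification_metrics(tp=90, fp=20, tn=80, fn=10)
print(m["accuracy"])  # (90 + 80) / 200 = 0.85
```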
The SGDM optimizer is used to adjust network parameters for each database, and performance measurements are recorded. The SGDM procedure is summarized as follows:

ALGORITHM 1: SGDM optimizer.
(1) Select W_0 as the initial parameter vector and f(W) as the objective function
(2) Select the moving-average decay rate γ and the step size η
(3) Initialize the first moment M_0 to the zero vector
(4) while W_j has not converged do
(5)   j = j + 1
(6)   Calculate the gradients g_j at timestep j
(7)   Update the biased first-moment estimate: M_j = γM_{j−1} + (1 − γ)g_j
(8)   Update the parameters: W_j = W_{j−1} − ηM_j
(9) end

The simulation findings show that when a pretrained model is trained with 80% of the images and tested with 20% of the images, the SGDM optimizer produces superior results. Two different phases, 60:40 and 80:20, are used to calculate results. In the first phase, the 60:40 split is used to calculate classification performance for the two datasets, and the results are shown in Tables 3 and 4. They present the findings achieved using the AlexNet and VGG19 models for the various datasets. The AlexNet classifier gives better classification accuracy than the VGG19 classifier. On AlexNet, the achieved accuracy is 91.4%, while precision, sensitivity, specificity, and F-score are 90.7%, 90.6%, 90.4%, and 90.9%, respectively; on VGG19, the achieved accuracy is 91.1%, while precision, sensitivity, specificity, and F-score are 90.1%, 90.3%, 90.2%, and 90.7%, respectively. In the second phase, evaluation is performed using the 80:20 ratio. Tables 5 and 6 illustrate the results of this phase, where precision, sensitivity, specificity, and F-score reach 92.4%, 92.2%, 92%, and 92.5%, respectively. The classification rate, accuracy, precision, sensitivity, specificity, and F-score of the proposed and current techniques are shown in Figures 5-8.
In all experiments, the suggested technique outperforms the existing methods in terms of classification rate, accuracy, precision, sensitivity, specificity, and F-score. The results of Sharif et al. [38], dealing with citrus diseases, were compared with the results of our study, as shown in Table 7. As seen in this table, AlexNet with the SGDM optimizer achieved higher accuracy than the studies of Sharif et al. with the original dataset. The performance of SGDM optimizers for the automated identification of citrus disease images using the transfer learning approach is evaluated and compared in this paper. For extracting discriminative features from the source images, two standard models, AlexNet and VGG19, are examined. To assess network performance, the FDI and LDI citrus disease datasets are used, reaching a greatest classification accuracy of 94.3%. Based on the findings, we conclude that the deep learning methodology is a mature approach compared with other methods. When a large amount of training data is available, an 80:20 strategy may also be used, depending on the results. Data availability is considered to be the main hindrance of this work, which is partly reduced by the incorporation of the data augmentation stage. In future research, we will concentrate on these limitations and work to enhance the accuracy and the classification algorithms. The data used to support the findings of this study are available from the corresponding author upon request. The authors declare that they have no conflicts of interest.
[1] Citrus greening detection using visible spectrum imaging and C-SVC
[2] Crop losses due to diseases and their implications for global food production losses and food security
[3] You only look once: unified, real-time object detection
[4] Convolutional neural networks for speech recognition
[5] Stacked convolutional neural network for diagnosis of COVID-19 disease from X-ray images
[6] Residual learning based CNN for breast cancer histopathological image classification
[7] DeepRNNetSeg: deep residual neural network for nuclei segmentation on breast cancer histopathological images
[8] Plant leaf disease detection and classification using machine learning approaches: a review
[9] Plant disease classification: a comparative evaluation of convolutional neural networks and deep learning optimizers
[10] ImageNet classification with deep convolutional neural networks
[11] Deep learning algorithm for autonomous driving using GoogLeNet
[12] Identification of citrus greening disease using a visible band image analysis
[13] Smartphone assist deep neural network to detect the citrus diseases in agri-informatics
[14] Detection of citrus Huanglongbing based on image feature extraction and two-stage BPNN modeling
[15] Classification of citrus plant diseases using deep transfer learning
[16] Tomato plant disease detection using transfer learning with C-GAN synthetic images
[17] Supervised classification of slightly bruised peaches with respect to the time after bruising by using hyperspectral imaging technology
[18] Segmentation of arecanut bunches using YCgCr color model
[19] Segmentation of arecanut bunches using HSV color model
[20] Rice leaf diseases classification using CNN with transfer learning, Calcutta Conference (CALCON)
[21] Disease detection in coffee plants using convolutional neural network
[22] Deep neural networks with transfer learning in millet crop images
[23] Detection of plant disease by leaf image using convolutional neural network
[24] A tomato leaf diseases classification method based on deep learning
[25] Feature extraction for diseased leaf image classification using machine learning
[26] Plant diseases detection and classification using machine learning models
[27] Transfer learning for fine-grained crop disease classification based on leaf images
[28] Potato crop disease classification using convolutional neural network
[29] Tomato plant diseases detection system using image processing
[30] Deep metric learning based citrus disease classification with sparse data
[31] A smart mobile diagnosis system for citrus diseases based on densely connected convolutional networks
[32] Classification of canker on small datasets using improved deep convolutional generative adversarial networks
[33] ImageNet classification with deep convolutional neural networks
[34] Very deep convolutional networks for large-scale image recognition
[35] Gradient-based learning applied to document recognition
[36] Optimization of deep learning model for plant disease detection using particle swarm optimizer
[37] On the momentum term in gradient descent learning algorithms
[38] Detection and classification of citrus diseases in agriculture based on optimized weighted segmentation and feature selection