International Journal of Scientific Methods in Computational Science and Engineering 1(1):17-23

DeepLeaf: Automated Plant Disease Diagnosis using Deep Learning Approach


S. Sailaja 1*, E. V. N. Jyothi 2, M. Kranthi 3

1,2,3 Department of Computer Science and Engineering, RISE Krishna Sai Prakasam Group of Institutions, Ongole, Andhra Pradesh, India


Received: 01 June 2024 Accepted: 02 June 2024 Published Online: 03 June 2024


Abstract: Automated diagnosis of plant diseases is critical for ensuring food security and agricultural sustainability. In this study, we propose DeepLeaf, a novel deep-learning framework for the automated recognition and classification of plant diseases. DeepLeaf leverages convolutional neural networks to analyze plant leaf images and accurately identify disease symptoms. The framework is trained on a large dataset of annotated images, encompassing a wide range of plant species and disease types. Through extensive experimentation, we demonstrate the effectiveness of DeepLeaf in accurately diagnosing plant diseases across diverse environmental conditions and varying degrees of disease severity. Our results show that DeepLeaf achieves high accuracy and robustness, outperforming traditional methods and commercial systems in speed and reliability. Furthermore, DeepLeaf is designed to be easily deployable in real-world agricultural settings, enabling farmers and agronomists to identify and mitigate plant diseases quickly, thus improving crop yield and reducing economic losses.

Key words: Plant disease diagnosis, Deep learning, Convolutional neural networks, Agriculture, Automated diagnosis.


  1. Introduction

    Agriculture, the bedrock of global food security and economic stability, is gravely threatened by plant diseases, which compromise crop yield and quality. Quick, accurate diagnosis is essential for taking timely control measures that limit the impact of these diseases. Conventional diagnosis typically relies on expert visual examination, which can be time-consuming, subjective, and prone to error. Thanks to advances in computer vision and deep learning, automated systems that detect plant diseases are becoming more widespread. By providing scalable, reliable, and rapid solutions, these systems have the potential to change farming operations radically. The emergence of deep learning, and of convolutional neural networks (CNNs) in particular, has enabled tremendous advances in image recognition and analysis. Because CNNs learn to extract intricate patterns and features from images, automated plant disease diagnosis holds great promise: such models can distinguish healthy from diseased plant tissue using subtle visual cues such as discoloration, lesions, or other abnormalities, making quick and accurate identification of disease possible.


    * Correspondence: Associate Professor, Department of Computer Science and Engineering, RISE Krishna Sai Prakasam Group of Institutions, Ongole, Andhra Pradesh, India. Email: sailaja.sikhakolli@gmail.com https://doi.org/10.58599/IJSMCSE.2024.1107

    This work is licensed under a Creative Commons Attribution 4.0 International License CC BY-NC-ND 4.0.

    In recent years, several deep learning-based frameworks proposed for automated plant disease diagnosis have shown promising results across a range of crops and pathogens. A common approach in these frameworks is to train convolutional neural network (CNN) models on large datasets of annotated images, often covering many plant species and diseases. Through supervised training, a model learns to recognize specific disease symptoms, improving its ability to classify previously unseen images correctly. Despite deep learning’s impressive progress, automated plant disease diagnostic systems still face significant limitations before they can be considered fully operational. Against this background, this paper introduces DeepLeaf, a state-of-the-art deep learning system created to automate the diagnosis of plant diseases. To address some of the key limitations of existing approaches, DeepLeaf combines modern convolutional neural network (CNN) architectures with techniques for data augmentation, model refinement, and deployment in real-world agricultural contexts. By combining deep learning with domain expertise in plant pathology, DeepLeaf offers a dependable and scalable solution for disease diagnosis across growth settings and crop varieties. In the following sections, we describe the DeepLeaf framework’s design, training procedure, evaluation criteria, and experimental results, and discuss its potential impact on agricultural sustainability.


  2. Related Works

    Plants are essential to human survival and form a vital part of the Earth’s biosphere, which makes their study an important scientific concern. Artificially increasing production brings further problems, including the loss of plant genetic resources and concerns over food quality and health. It is critical to find alternatives to spraying plants with harmful pesticides, a practice with a well-documented negative impact on the environment, and researchers have proposed a range of remedies to address this problem. Several segmentation-based approaches to detecting diseased regions have been proposed, using texture and color as discriminating features; this line of work is naturally suited to neural network methods [1]. One system uses a BPNN classifier for the classification problem, with an active contour model limiting the intensity inside the specified infection zone [2]; a classification rate of 85.52 percent is reported in the corresponding survey [3]. Disease severity can be graded by using GLCM to extract textural information, with fuzzy logic for grading and K-means clustering to separate the affected area [4]. Another approach uses artificial neural networks (ANNs) as the classifier, assessing the severity of the diseased leaf from color histograms converted from RGB to HSV [5]. Classification has also been achieved by constructing maximum trees from peak components and examining the area under the curve together with five shape requirements, evaluated with several classifiers, including support vector machines, Naive Bayes, random forests, decision trees, nearest neighbours, and extremely randomized trees [6].

    Including extremely randomized trees, which provide real-time predictions and perform well among the seven classifiers compared, enhances the adaptability of such systems. A Multiple Classifier System has been described in detail, converting the RGB color space to HSI and obtaining shape parameters using GLCM together with the seven invariant moments [7]. Wheat plant diseases have been detected in the field on a mobile device using clustering and a support vector machine classifier [8]. Another segmentation technique uses color and texture features to classify pomegranate diseases with backpropagation neural networks [9]; however, such networks do not scale well when many different crops must be handled. Using Hu’s moments, in a manner comparable to the BPNN classifier in [10], is one way to reduce the risk of misclassification across plant families [11]. Such classification problems are also addressed effectively by the active contour model [12], which is practical because it restricts the analysis to the affected region; classification accuracy of about 85 percent is reported. Building on this body of work, the present study aims to design a method to detect and grade leaf diseases using computer vision and fuzzy logic: GLCM extracts textural information, fuzzy logic grades the disease, and K-means clustering pinpoints the affected regions.
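    Several of the surveyed systems rely on K-means clustering over pixel colors to isolate the affected region before features are extracted. The following is a minimal sketch of that segmentation step in plain NumPy on a toy set of leaf pixels; the pixel values, the two-cluster setup, and the darkest-cluster heuristic are illustrative assumptions, not details taken from the cited works.

```python
import numpy as np

def kmeans_segment(pixels, k=2, iters=10, seed=0):
    """Cluster pixel colors with plain k-means; callers can then treat
    one cluster (e.g. the darkest) as the candidate diseased region."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned pixels
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# toy "leaf": bright green pixels plus a smaller darker, lesion-like patch
leaf = np.vstack([
    np.full((90, 3), [60, 180, 60], dtype=float),   # healthy tissue
    np.full((10, 3), [120, 80, 40], dtype=float),   # lesion-like patch
])
labels, centers = kmeans_segment(leaf, k=2)
lesion_cluster = int(centers.sum(axis=1).argmin())  # darker centroid
print((labels == lesion_cluster).sum())             # → 10
```

    On real photographs the pixel array would come from the image itself, k and the cluster-selection heuristic would need tuning, and the GLCM and fuzzy-logic grading steps would then operate on the segmented region.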


  3. Methodology

    Creating a deep learning approach such as DeepLeaf for automated plant disease diagnosis involves designing a system that automatically identifies and classifies plant diseases from images. The process typically comprises several key components: data preprocessing, a convolutional neural network for feature extraction, and a classification layer for disease identification. Below we outline a plausible architecture for DeepLeaf and explain each component’s role in the system. The DeepLeaf architecture overview is shown in Figure 1.



    Figure 1. DeepLeaf architecture overview


    Input Layer: This is where images of plant leaves are input into the system. These images can be of various sizes and resolutions, captured under different lighting conditions.

    Data Preprocessing: Before feeding the images into the neural network, they undergo preprocessing to standardize their size, enhance image quality, and possibly augment the data set to increase its diversity. This step might include resizing images, normalizing pixel values, and applying techniques like rotation, flipping, or color adjustment to expand the training set artificially.
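    As an illustration of the preprocessing step described above, the sketch below resizes an image to a fixed input size, scales pixel values to [0, 1], and produces simple flip/rotation augmentations. It uses a nearest-neighbour resize in plain NumPy and a randomly generated array as a stand-in for a real leaf photograph; the 64x64 target size is an assumption for illustration, not a DeepLeaf specification.

```python
import numpy as np

def preprocess(img, size=64):
    """Nearest-neighbour resize to size x size and scale pixels to [0, 1]."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 255.0

def augment(img):
    """Simple augmentations: original, horizontal flip, 90-degree rotation."""
    return [img, img[:, ::-1], np.rot90(img)]

# random array standing in for a 120x100 RGB leaf photograph
img = np.random.default_rng(0).integers(0, 256, (120, 100, 3), dtype=np.uint8)
x = preprocess(img)
batch = [preprocess(a) for a in augment(img)]
print(x.shape, len(batch))  # → (64, 64, 3) 3
```

    Each augmented variant is preprocessed to the same fixed shape, so the resulting batch can be stacked directly into a network input tensor.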

    Convolutional Neural Network (CNN): The core of DeepLeaf is a convolutional neural network whose role is feature extraction. The network consists of convolutional, activation, pooling, and fully connected layers; the activation layers use ReLU, and the pooling layers use max pooling. As each convolutional layer applies filters to its input, it captures image properties such as edges, textures, and disease-specific shapes, producing feature maps.
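    The convolution, ReLU activation, and max-pooling pipeline described above can be sketched in plain NumPy for a single channel. The 2x2 kernel and the toy 8x8 patch are illustrative; a real CNN learns many kernels per layer rather than using a hand-picked one.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max pooling, cropping any remainder rows/columns."""
    h, w = x.shape[0] // size * size, x.shape[1] // size * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

# one conv -> ReLU -> max-pool stage on a toy 8x8 grayscale patch
patch = np.arange(64, dtype=float).reshape(8, 8)
edge_kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])  # responds to horizontal gradients
fmap = max_pool(relu(conv2d(patch, edge_kernel)))
print(fmap.shape)  # → (3, 3)
```

    Because the patch increases by 1 along each row, the kernel produces a constant positive response; stacking several such stages is what lets a CNN build up hierarchical feature maps.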

    Classification Layer: After feature extraction, a classifier consisting of one or more fully connected layers is incorporated into the design. Using a softmax activation function, the final layer outputs a probability for each class, i.e., each plant disease type; the class with the highest probability is taken as the predicted diagnosis.
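    A minimal sketch of such a classification head — a single fully connected layer followed by softmax — is shown below. The feature dimension, the random weights, and the three class names are hypothetical placeholders rather than DeepLeaf’s actual configuration.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(features, weights, bias, class_names):
    """Fully connected layer followed by softmax over disease classes."""
    probs = softmax(weights @ features + bias)
    idx = int(probs.argmax())
    return class_names[idx], float(probs[idx])

# hypothetical 4-dim feature vector and three illustrative disease classes
rng = np.random.default_rng(1)
features = rng.normal(size=4)
weights = rng.normal(size=(3, 4))
bias = np.zeros(3)
classes = ["healthy", "leaf_blight", "rust"]
label, conf = classify(features, weights, bias, classes)
print(label in classes, 0.0 < conf <= 1.0)  # → True True
```

    The returned probability doubles as the confidence value that the output layer can report alongside the diagnosis.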

    Output Layer: The output is the diagnosis result, indicating the type of disease affecting the plant, if any. This layer can also provide additional information, such as the prediction’s confidence level.

    Back-End (Optional): While not part of the core deep learning architecture, DeepLeaf might be integrated into a more extensive system that includes a database for storing images and diagnosis results, a user interface for uploading pictures and receiving diagnoses, and possibly even recommendations for treatment based on the identified disease.


    Explanation of Components:

    Data Preprocessing is crucial for normalizing the input data and ensuring the neural network receives high-quality, uniform inputs; this step directly impacts the accuracy and efficiency of training. The CNN is the heart of DeepLeaf, leveraging deep learning’s power to automatically learn the most relevant features for distinguishing between different plant diseases. Its layered structure allows it to learn hierarchical representations, making it highly effective for image classification tasks. The Classification Layer takes the high-level features extracted by the CNN and uses them to make the final disease prediction, interpreting the complex patterns the CNN recognizes in terms of the specific diseases being diagnosed. The Output Layer is the interface between DeepLeaf and its users or other systems, turning the model’s computations into quickly interpretable results. By combining powerful image processing with deep learning, DeepLeaf can automate the identification of plant diseases with potentially higher accuracy and efficiency than manual diagnosis. Such a system could prove highly beneficial to agricultural consultants, researchers, and farmers, offering disease management that is fast, readily available, and economical.


  4. Results and Discussion

    The images that make up an input batch are determined by several variables, including:

  5. Conclusion

A random forest classifier is employed to ascertain a leaf’s health, and the tool can detect plant abnormalities under both controlled and uncontrolled conditions. Standard practice dictates using a simple background when taking pictures to avoid occlusion. According to the model’s output, around 70% of the predicted labels are correct. To our knowledge, no commercially accessible, open-source technology exists that can distinguish between different kinds of plants from a leaf image alone. Increased cloud storage of data related to the disease detection process will benefit this initiative: using this technique, farmers can administer fertilizers based on the diagnosis of each ailment. Utilizing a cloud storage service entails storing data on a remote server overseen by an independent provider; because of its distributed nature, this server can be accessed from any network. Cloud computing offers many options, from personal to commercial storage, and businesses can use commercial cloud services as a remote backup solution, allowing data files to be transferred and stored safely. Finally, because of their small size and extraordinary flying abilities, drones can not only fly but also endure challenging environments, and drone photography brings hitherto impossible first-person views (FPVs) within reach.


References

  1. Sugwon Hong, Jae-Myeong Lee, Mustafa Altaha, and Muhammad Aslam. Security monitoring and network management for the power control network. system, 2:3, 2020.

  2. Venkata Krishna Chaithanya Manam. Efficient disambiguation of task instructions in crowdsourcing. PhD thesis, Purdue University Graduate School, 2023.

  3. Skhumbuzo Zwane, Paul Tarwireyi, and Matthew Adigun. Performance analysis of machine learning classifiers for intrusion detection. In 2018 International Conference on Intelligent and Innovative Computing Applications (ICONIC), pages 1–5. IEEE, 2018.

  4. Roma Sahani, Shatabdinalini, Chinmayee Rout, J Chandrakanta Badajena, Ajay Kumar Jena, and Himansu Das. Classification of intrusion detection using data mining techniques. In Progress in Computing, Analytics and Networking: Proceedings of ICCAN 2017, pages 753–764. Springer, 2018.

  5. Sanjeev Kulkarni, Sachidanand S Joshi, AM Sankpal, and RR Mudholkar. Link stability based multipath video transmission over manet. International Journal of Distributed and Parallel Systems, 3(2):133, 2012.

  6. V. K. Chaithanya Manam, Dwarakanath Jampani, Mariam Zaim, Meng-Han Wu, and Alexander J. Quinn. Taskmate: A mechanism to improve the quality of instructions in crowdsourcing. In Companion Proceedings of The 2019 World Wide Web Conference, pages 1121–1130, 2019.

  7. Hung-Jen Liao, Chun-Hung Richard Lin, Ying-Chih Lin, and Kuang-Yuan Tung. Intrusion detection system: A comprehensive review. Journal of Network and Computer Applications, 36(1):16–24, 2013.

  8. V Suresh Kumar, Sanjeev Kulkarni, Naveen Mukkapati, Abhinav Singhal, Mohit Tiwari, and D Stalin David. Investigation on constraints and recommended context aware elicitation for iot runtime workflow. International Journal of Intelligent Systems and Applications in Engineering, 12(3s):96–105, 2024.

  9. Hongzhu Tao, Jieying Zhou, and Sen Liu. A survey of network security situation awareness in power monitoring system. In 2017 IEEE Conference on Energy Internet and Energy System Integration (EI2), pages 1–3. IEEE, 2017.

  10. Karuna S Bhosale, Maria Nenova, and Georgi Iliev. Data mining based advanced algorithm for intrusion detections in communication networks. In 2018 International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS), pages 297–300. IEEE, 2018.

  11. VK Chaithanya Manam, Joseph Divyan Thomas, and Alexander J Quinn. Tasklint: Automated detection of ambiguities in task instructions. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, volume 10, pages 160–172, 2022.

  12. Sanjeev Kulkarni, Aishwarya Shetty, Mimitha Shetty, HS Archana, and B Swathi. Gas spilling recog- nition and prevention using iot with alert system to improve the quality service. Perspectives in Com- munication, Embedded-systems and Signal-processing-PiCES, 4(4):34–38, 2020.