Ouhami, M.; Hafiane, A. Machine Learning for Crop Disease. Encyclopedia. Available online: (accessed on 22 June 2024).
Machine Learning for Crop Disease

Crop diseases constitute a serious issue in agriculture, affecting both the quality and quantity of agricultural production. Disease control has been a research object in many scientific and technological domains. Technological advances in sensors, data storage, computing resources and artificial intelligence have shown enormous potential for controlling diseases effectively. A growing body of literature recognizes the importance of using data from different types of sensors and machine learning approaches to build models for detection, prediction, analysis and assessment. However, the increasing number and diversity of research studies calls for a literature review to guide further developments and contributions in this area.

Keywords: plant disease; machine learning; remote sensing; intelligent sensors; data fusion

1. Introduction

According to the FAO [1], pest attacks and plant diseases are considered two of the main causes of decreasing food availability and food hygiene. Depending on the disease and its development stage, damage to crops ranges from simple physiological defects to plant death. In addition to biological agents, physical agents such as abrupt climate change [2] can also cause diseases and harm the plant. Conventional methods for detecting and locating plant diseases include direct visual diagnosis, through identification of disease symptoms appearing on plant leaves, or chemical techniques that involve molecular tests on plant leaves [3].

Promising approaches for detecting and locating diseases have been proposed in recent years using automatic monitoring and recognition systems. Advances in sensor technologies and data processing have opened new perspectives for the detection and diagnosis of crop anomalies. Disease surveillance can be performed by capturing data from the soil and plant cover with sensors, such as remote sensing (RS) or ground equipment, as well as by developing and testing machine learning algorithms [4]. Implementing management practices with smart algorithms optimizes profitability, sustainability and the protection of land resources.

Furthermore, agricultural fields can be supplied with multiple sensors measuring environmental characteristics, along with plant canopy and leaf indices extracted from remote sensing imagery and IoT sensors. Given the variety of extracted data, data fusion techniques are required to combine these data types and better understand crop growing conditions and the development of disease symptoms. In addition, machine learning-based data fusion has undergone important developments and, when applied to agricultural data, can have a great impact on the plant protection field, in particular on disease detection and early disease detection. Therefore, several multi-sensor and remote sensing-based fusion techniques have been used in agriculture for this purpose [5][6].

2. Crop Disease Detection

2.1. Ground Imaging

Crop ground imaging is the technique of acquiring images of crops' fruit and leaves at ground level using smartphones or digital cameras. Since visual symptoms on crops and plant leaves are important for disease detection, researchers have tried to capture plant leaves in field conditions [7][8], raising the challenge of dealing with complex backgrounds, shadows and unstable luminosity. Because diseases affect the spectral characteristics of plants, researchers have investigated discriminating infected from uninfected leaves and classifying different degrees of disease severity, both from visual symptoms and before visual symptoms appear [9]. A modern approach to disease detection relies on machine learning algorithms to explore data from different acquisition systems:

Traditional machine learning algorithms were first used for disease detection. Support vector machine (SVM) models are commonly applied to plant disease detection due to their prediction efficiency. In particular, manual extraction of lesion characteristics combined with multiple SVM classifiers (based on color, texture and shape characteristics) has been proposed for disease recognition on plant leaves in order to reduce misclassification [10][11][12][13]. Statistical analysis of spectral indices using principal component analysis (PCA) successfully differentiated healthy potato plants from infected ones across stages of disease progression [14].
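To illustrate the idea of combining per-feature classifiers, the sketch below votes across classifiers trained separately on color, texture and shape features. The nearest-centroid classifiers are simplified stand-ins for the SVMs of the cited studies, and all data and names are hypothetical:

```python
import numpy as np

class NearestCentroid:
    """Simplified stand-in for one per-feature SVM (color, texture or shape)."""
    def fit(self, X, y):
        self.labels = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.labels])
        return self

    def predict(self, X):
        # distance of each sample to each class centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids[None, :, :], axis=2)
        return self.labels[d.argmin(axis=1)]

def parallel_combination(classifiers, feature_sets):
    """Majority vote over the per-feature-type classifiers."""
    votes = np.stack([clf.predict(X) for clf, X in zip(classifiers, feature_sets)])
    fused = []
    for col in votes.T:                      # one column of votes per sample
        labels, counts = np.unique(col, return_counts=True)
        fused.append(labels[counts.argmax()])
    return np.array(fused)
```

In practice, each feature set would come from a different descriptor of the same lesion image; the vote reduces the impact of a single feature type misclassifying visually similar diseases.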

Deep learning models were then used to improve the prediction quality and address larger types of diseases and crops. This subsection presents deep learning models deployed on RGB images, multispectral images and hyperspectral images for early disease detection.

In [15], the authors tested several deep learning models, both trained from scratch and with transfer learning. The approach outperformed the conventional method using only visual information, with an accuracy of 98%. Likewise, in [16], a smaller dataset of images of infected tomato plant leaves was divided into different pest attack and plant disease classes. The detection method achieved an accuracy of 95.65% using DenseNet161 with transfer learning.

For better practicability for farmers, researchers have developed mobile applications for disease detection using deep learning models adapted to the computing capacity and energy constraints of mobile devices. One such model was able to distinguish tomato leaf diseases through image recognition, with accuracy reaching 89.2%. In [17], MobileNet was tested for citrus disease detection and compared to another CNN model, a self-structured CNN (SSCNN) classifier. The results showed that SSCNN was more accurate for citrus leaf disease classification on mobile phone images.

In a pre-symptom disease detection task exploiting hyperspectral images, the authors in [18] used the extreme learning machine (ELM) classifier on the full wavelength range of hyperspectral images of tomato leaves. They then selected the effective wavelengths that carry the most disease information, in order to avoid the convergence instability that highly correlated bands cause in predictive models. In the same context, [7] developed a method to detect fusarium head blight disease in wheat using hyperspectral images and a specific acquisition protocol accounting for field conditions. The authors were able to classify infected and healthy wheat heads using hyperspectral images.
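A minimal sketch of correlation-based band selection in this spirit (the greedy strategy and threshold are illustrative assumptions, not the exact procedure of [18]):

```python
import numpy as np

def select_effective_wavelengths(X, threshold=0.95):
    """Greedily keep bands whose absolute Pearson correlation with every
    already-selected band stays below `threshold`.

    X: (n_samples, n_bands) matrix of reflectance values, one column per band.
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = []
    for band in range(X.shape[1]):
        if all(corr[band, kept] < threshold for kept in selected):
            selected.append(band)
    return selected
```

Dropping near-duplicate bands this way keeps the inputs of a downstream classifier closer to linearly independent, which is the stability concern the cited work addresses.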

Ground imagery is an interesting technology in smart farming. Deep models using this type of acquisition guarantee high detection accuracy thanks to close-range, high-resolution leaf imaging. Table 2 summarizes the effective wavelengths used for disease detection with the close-range imaging presented in this section. However, this strategy fails to monitor and diagnose plant diseases at a large scale. Moreover, these techniques are time-consuming over wide study areas.

Table 2. Effective wavelengths for disease detection.
Effective Wavelengths | Indices | Ref.
697.44, 639.04, 938.22, 719.15, 749.90, 874.91, 459.58 and 971.78 nm | - | [19]
Full ranges 750–1350 nm and 700–1105 nm | - | [20]
665 and 770 nm | SR | [14]
670, 695, 735 and 945 nm | NDVI | -
655, 746, and 759–761 nm | - | [21]
445, 500, 680, 705 and 750 nm | RENDVI, PSRI | [9]
442, 508, 573, 696 and 715 nm | - | [18]
Full range 400–1000 nm | - | [7]
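For reference, the ratio-based indices listed in Table 2 are computed directly from red and near-infrared reflectance. A sketch with illustrative reflectance values (the band pairing follows the SR entry of the table; the numbers are hypothetical):

```python
def simple_ratio(nir, red):
    """SR, e.g. computed from bands at 770 nm (NIR) and 665 nm (red)."""
    return nir / red

def ndvi(nir, red):
    """Normalized difference vegetation index, in [-1, 1]."""
    return (nir - red) / (nir + red)

# illustrative reflectances: healthy vegetation reflects strongly in the NIR
sr = simple_ratio(nir=0.50, red=0.08)
vi = ndvi(nir=0.50, red=0.08)
```

Stressed or infected tissue typically lowers NIR reflectance and raises red reflectance, pulling both indices down, which is why such indices serve as disease indicators.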

2.2. UAV Imaging

UAVs are also exploited as a precision agriculture (PA) solution for monitoring and controlling crop growth [22][23], eventual disease development and weed detection [24][25], thanks to their ability to collect high-resolution images at low cost. UAVs equipped with embedded cameras and sensors perform efficient field data acquisition for field-scale visualization and analysis. Additional elements can help enhance the performance of crop monitoring techniques, such as the choice of appropriate sensors and intelligent recognition models. As the plant spectral response is sensitive to disease, multispectral cameras are the most often used for disease detection studies.

For systems using these types of sensors, a large amount of data is first stored in large-scale databases provided by information systems such as geographical information systems (GIS). The information system enables the visualization and analysis of these data. The data collected provide information on soil and vegetation cover characteristics, such as soil organic content and soil moisture, biomass quantity, weed presence and early detection of crop stress, with eventual evaluation of the disease stage.

Traditional machine learning algorithms have been used for plant disease detection from UAV images. One of the first models attempting to predict infection severity on plants from images was the backpropagation neural network (BPNN) [20], in which the authors extracted spectral data from remote sensing hyperspectral images of tomato plants. In [26], the authors adopted a segmentation approach based on Simple Linear Iterative Clustering (SLIC) for soybean foliar disease detection. In [27], UAV images were utilized for the detection of citrus canker at several disease development stages.

To conclude, the performance of traditional machine learning approaches is limited and can vary considerably across growing periods and acquisition equipment. The low performance can also be due to the feature engineering process, which provokes important information loss.

Deep learning models have also been developed to tackle the limitations of traditional machine learning for plant disease detection from UAV images. With the aim of detecting disease symptoms in grape leaves [28], the authors used a CNN approach built on a relevant combination of image features and color spaces. The classification model was designed as a combination of deep convolutional generative adversarial networks (DCGANs) and an AdaBoost classifier. The first step consisted of overlaying the two types of images using an optimized image registration; the resulting images were then used with a semantic segmentation approach (SegNet architecture) to delineate and detect the vine symptoms.

2.3. Satellite Imaging

Satellites, covering wider land areas, offer historical images of the study area depending on the satellite acquisition frequency. Satellites can provide multispectral images with high spatial resolution, ranging from 0.5 m to more than 30 m. Conversely, high temporal resolution satellites have very low spatial resolution [29]. For instance, the MODIS sensor on the Terra/Aqua satellites collects daily images.

Several machine learning methods have been used to perform land monitoring from satellite images, for instance: mapping of urban fabric [30][31][32], crop classification and field boundaries [33][34] and pest detection [35].

Traditional machine learning has been used to test the suitability of satellite images for disease detection. SVM was deployed in [36] for stress detection in winter wheat; the types of stress detected in this study were pest and disease stress, heavy metal stress, or a double stress combining the first two types. In [37], the naive Bayes algorithm was tested on spectral signatures of coffee berry necrosis derived from Landsat 8 OLI satellite images with the aim of disease detection; the classification reached an accuracy of 50%.

Deep learning has also proven its high performance for disease detection using satellite images. In [38], the authors proposed a gated recurrent unit (GRU)-based model to predict the development of sudden death syndrome (SDS) disease in soybean quadrats. However, resolutions of 10 m and coarser are barely sufficient for crop classification tasks, which makes disease detection challenging [39]. To bridge the gap of lacking data and improve prediction, several analysts recommend incorporating satellite images with aerial images and other data sources, such as wireless sensor networks that capture environmental parameters, for disease detection [40].

2.4. Internet of Things Sensors

A typical wireless monitoring system contains multiple sensors connected in each zone to an installed node, with sensors and nodes communicating via radio frequency. When a WSN is unavailable, an existing alternative solution is the weather station [41], which provides various local measurements in real time for agricultural applications. Several studies have been established to collect wireless sensor network data for disease detection. Nevertheless, the classic methods used for disease detection are limited, and it is more interesting to take advantage of machine learning algorithms to generate efficient prediction models.

In one system, sensors for temperature, relative humidity and leaf humidity were placed in the vineyard to collect the necessary data. In another, the prediction was based on multiple parameters extracted from the field, namely atmospheric temperature, atmospheric humidity, CO2 concentration, illumination intensity, soil moisture, soil temperature and leaf wetness; the model achieved promising results, proving the validity of environmental data for early disease detection. Since abiotic factors such as temperature, soil moisture and humidity help determine whether the plant is growing in healthy conditions, a third system used two sensors: a soil moisture sensor and a temperature-humidity sensor.

Deep learning: In [42], the authors developed an approach for predicting cotton disease and pest occurrence. A bidirectional LSTM (Bi-LSTM) was introduced for prediction; it achieved an accuracy of 87.84% and an overall area under the curve (AUC) score of 0.95. Nevertheless, the number of IoT studies on disease detection using machine learning remains insufficient, which may be due to the fact that these data alone are not efficient in predicting crop health status. Thus, these inputs, coupled with other types of data, can provide valuable results when using appropriate fusion techniques and AI models well adjusted to such complex multivariate data.

2.5. Summary

Connected sensor networks are among the most innovative technologies in plant protection, since variations in microclimatic conditions correlate with plant stress. Numerous research studies have been carried out to control and monitor crops and to predict plant health based on meteorological characteristics [43][44][45]. In addition, images can be a better representation of the crop health state. Ground images, UAV images [46][28][47] and satellite images [48][49] have proven effective in detecting plant diseases.

We noticed a clear new tendency in disease detection applications: the widespread use of deep learning. DL eliminates the manual feature extraction phase, which can result in low prediction performance, and requires less feature engineering effort [50]. In addition, DL models have been used to efficiently classify diseases in challenging environments with complex backgrounds and overlapping plant leaves. Conversely, traditional machine learning cannot effectively distinguish disease symptoms with similar characteristics, nor can it take advantage of larger training sets [8].

3. Data Fusion Potential for Disease Detection

3.1. Data Sources

Data sources can provide useful information about the studied phenomena; for a unimodal data source, simple data concatenation can be enough for prediction purposes [44]. Otherwise, when several types of sensors are available, advanced data fusion is necessary [51]. Data from several sensors first require analysis to characterize, order or correlate the different available sources, and then a decision on the strategy or algorithm to be used to merge the data. Among the relationships that may exist, we can find distribution, complementarity, heterogeneity, redundancy, contradiction, concordance, discordance, synchronization and differences in granularity [52].

3.2. Data Fusion Categories

In the literature, data fusion methods are divided into three main categories: probability-based, evidence-based and knowledge-based methods. Probability-based methods [53], such as the Kalman filter [54], Bayesian fusion [55] and the hidden Markov model [56], are limited to low-dimensional or homogeneous data and suffer from high computational complexity; they are therefore not adequate for complex problems. Evidence-based methods [57], such as Dempster-Shafer theory [58][59], are used to deal with missing information and additional assumptions and to solve the problem of uncertainty.
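As an illustration of the evidence-based family, Dempster's rule combines two mass functions over a frame of discernment and renormalizes away conflicting mass. The sketch below uses a hypothetical two-hypothesis frame (healthy vs. diseased) with made-up sensor masses:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets of hypotheses
    to masses) with Dempster's rule of combination."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to disjoint hypotheses
    # renormalize so the combined masses sum to 1
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# two sensors expressing belief over {healthy, diseased};
# mass on the full frame encodes each sensor's ignorance
D, HD = frozenset({"diseased"}), frozenset({"healthy", "diseased"})
fused = dempster_combine({D: 0.6, HD: 0.4}, {D: 0.7, HD: 0.3})
```

Here both sensors lean toward "diseased", so the fused mass on that hypothesis grows while the residual ignorance shrinks, which is exactly the behavior that makes the rule useful for merging uncertain sensor evidence.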

3.3. Intelligent Multimodal Fusion

Multimodal fusion based on machine learning [60] is capable of learning representations of different modalities at various levels of abstraction [61], with significantly improved performance [62]. Multimodal fusion can be split into two main categories [63]: model-based approaches, which explicitly address fusion in their construction, and model-agnostic approaches, which are general and flexible and do not directly depend on a specific machine learning method. Depending on the data abstraction level, different architectures for model-agnostic fusion [53][63][64] are possible.

Measurement fusion (or early fusion), also known as first-level data fusion, allows the immediate integration of sensor data represented as feature vectors. Data are generally concatenated [44], which limits the fusion when dealing with heterogeneous data. This architecture is the most widely used because of its simplicity: it is easy to align the data. In [65], the authors tried to predict the rate of photosynthesis and calculate the optimal CO2 concentration based on real-time environmental information from a WSN system in greenhouses for tomato cultivation at the seedling stage.
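In its simplest form, early fusion reduces to concatenating the per-sensor feature vectors before a single model is trained. A sketch with hypothetical sensor readings:

```python
import numpy as np

def early_fuse(*feature_vectors):
    """First-level (measurement) fusion: one concatenated vector per sample."""
    return np.concatenate(feature_vectors)

# hypothetical readings for a single sample
env = np.array([24.5, 0.61, 412.0])   # temperature (°C), soil moisture, CO2 (ppm)
spec = np.array([0.08, 0.12, 0.47])   # reflectance in three spectral bands
fused = early_fuse(env, spec)         # 6-dimensional input for one model
```

The limitation noted above is visible here: the concatenated vector mixes units and scales, so heterogeneous sources usually need normalization or a more expressive fusion level.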

Feature fusion combines the results of early fusion and individual unimodal predictors by merging feature vectors, allowing heterogeneous data from different sources to be combined. Deep convolutional neural networks (DCNNs) have been used at multiple levels of multimodal data fusion. Feature-level fusion with feature learning from raw data was performed after the raw data extraction phase: DCNNs were applied to each type of data to learn features, and their outputs were extracted as the learned features.

Decision fusion (or late fusion) involves processing data from each sensor separately to obtain high-level inference decisions, which are then combined in a second stage [66]. The decision-level fusion method combines information from different sensors after each sensor has made a preliminary decision. In [67], a use case of a weighted decision fusion architecture over multiple sensors is presented: the method of weighted majority voting (WMV) was used to merge the resulting vectors, with each sensor's data weighted by a confidence measure (or weight).

Hybrid fusion merges information at two or more levels. In the hybrid approach proposed in [51], the authors developed a technique for merging different CNN classifiers for object detection in changing environments. Three input modalities were used: RGB, depth and optical flow. The CifarNet architecture was chosen as the single expert model, and the outputs of each expert network were fused with weights determined by an additional network called the gating network.

The Tensor Fusion Model (TFM) consists mainly of a tensor fusion layer that models unimodal, bimodal and trimodal interactions through a three-fold Cartesian product of the modality embeddings [68]. Low-rank Multimodal Fusion (LMF) has been proposed to identify the emotions of speakers according to their verbal and non-verbal behaviors, based on visual, audio and language data. Three YouTube video databases were used, annotated with sentiments, speaker traits and emotions. The learning network for the acoustic and visual modalities was a two-layer neural network, while a Long Short-Term Memory (LSTM) network extracted the representations for the linguistic modality.
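The tensor fusion layer itself reduces to an outer product of the modality embeddings, each extended with a constant 1 so that the unimodal and bimodal interaction terms survive inside the trimodal tensor. A minimal numpy sketch with made-up embedding values:

```python
import numpy as np

def tensor_fusion(z_a, z_v, z_l):
    """Three-fold outer product of modality embeddings (audio, visual,
    language), each padded with a 1 to keep lower-order interactions."""
    z_a = np.append(z_a, 1.0)
    z_v = np.append(z_v, 1.0)
    z_l = np.append(z_l, 1.0)
    return np.einsum("i,j,k->ijk", z_a, z_v, z_l)

# hypothetical 2-, 3- and 2-dimensional modality embeddings
T = tensor_fusion(np.array([0.2, 0.5]),
                  np.array([0.1, 0.4, 0.3]),
                  np.array([0.7, 0.6]))
```

The padded 1s are the key trick: the corner T[-1, -1, -1] is always 1, the slices along it recover each unimodal embedding, and the remaining entries encode the pairwise and three-way products that a downstream layer can learn from.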

Multimodal Fusion Architecture Search (MFAS) is a generic approach that generates a large number of possible fusion architectures, searches this neural architecture space and chooses the best performing architectures [62]. MFAS is inspired by progressive neural architecture search [69], where the search efficiently guides architecture sampling using temperature-based sampling [70]. Tested on three datasets, MFAS has proven its efficacy against the state-of-the-art results on those datasets.

In [71], the authors performed a comparison between four types of fusion (late, MoE, LMF and mid) on image and signal modalities for automatic texture detection of objects. The fusion methods produced latent vectors which were fed into the corresponding artificial neural networks (ANNs). Tested on degradation scenarios, the late, MoE and mid fusion methods behaved similarly. The fusion architecture potentially allowed the ANNs to achieve good results in the texture detection task.

To conclude, machine learning-based multimodal fusion approaches have an important potential to solve open issues in agriculture by merging different types of data. We believe that exploiting these advanced techniques for disease detection issues can provide a better understanding of the plant environment and thus improve prediction performance.

3.4. Data Fusion Applications in Agriculture

Even if advanced fusion techniques are a rapidly growing area in agriculture, the literature still lacks studies on disease detection in this domain. Different applications of data fusion in agriculture are presented in the literature, specifically data fusion for yield prediction [23][72][73], crop identification [39][74], land monitoring [75][76] and disease detection [77][78][79].

3.4.1. Data Fusion for Yield Prediction

In [23], the authors investigated the relationship between canopy thermal information and grain yield, using fusion of data from different sensors. They extracted spectral (vegetation indices, VIs), structural (vegetation fraction (VF), canopy height (CH)), thermal (normalized relative canopy temperature (NRCT)) and texture information from the canopy using multiple sensors installed on a UAV. Two fusion models were compared in this study, an input-level feature fusion DNN (DNN-F1) and an intermediate-level feature fusion DNN (DNN-F2), in terms of prediction accuracy, spatial adaptability and robustness across different types of models.

3.4.2. Data Fusion for Crop Identification

In [74], the authors exploited spatio-temporal data to segment satellite images of vegetation fields. The data used are images captured by the Gaofen 1 and 2 satellites. The authors developed an active 3D CNN architecture to extract information from the multi-temporal images. In the same context, some researchers tested the feasibility of temporal CNNs (TempCNNs) for satellite image classification [39].

3.4.3. Data Fusion for Land Monitoring

Some researchers have attempted to develop resolution improvement techniques using data fusion. In this context, the authors in [80] developed an extended super-resolution convolutional neural network (ESRCNN) as a data fusion framework, specifically to blend Landsat-8 and Sentinel-2 images of 20 m and 10 m spatial resolution, respectively. In another study, a fusion approach outperformed the no-fusion approach using the same models; the best accuracies were achieved by the SegNet method, reaching 84.4% and 89.8% without and with image fusion, respectively, on the test set. Other researchers combined spectral canopy information (vegetation indices) extracted from WorldView-2/3 data with canopy structure features (canopy cover and height) calculated from UAV RGB images.

3.4.4. Data Fusion for Disease Detection

One of the first attempts to integrate multisource data for disease detection used both meteorological data and satellite scenes [78]. In [77], the authors proposed a multi-context fusion network for crop disease detection; nonetheless, their method suffers from data imbalance due to seasonal and regional variation across the categories of crop diseases. In a study of banana plant detection in the field, the authors trained the RetinaNet object detection model on UAV RGB images and developed a custom classifier for simultaneous banana tree localization and disease classification.

3.4.5. Summary

The applications of data fusion in agriculture presented in this section can be divided into three types. Spatio-spectral fusion is a multi-band fusion that combines fine spatial and fine spectral resolution. Spatio-temporal fusion blends data with fine spatial resolution but coarse temporal resolution (revisit frequency) with data that have fine temporal resolution but coarse spatial resolution, the objective being to create a fine spatio-temporal resolution. Finally, multimodal fusion corresponds to heterogeneous multi-sensor fusion.

3.5. Data Fusion Challenges for Agriculture

In addition to noise, observed data may be characterized by non-commensurability, different resolutions and incompatible sizes or alignment, and a pre-processing model should be considered to solve these problems [81]. Furthermore, different data sources may provide contradictory data or missing values. Once data are ready for the learning process, imbalanced data, i.e., the unequal representation of classes, can also affect the prediction rate. Finally, the biggest constraint of data fusion is multimodality: with data from distinct types of sensors, different fusion architectures can be adopted [82][71].
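As a sketch of such pre-processing, the snippet below resamples one sensor stream onto a common time grid and standardizes it, so that sources with different sampling rates and units become comparable before fusion (the grid and readings are hypothetical):

```python
import numpy as np

def align_and_standardize(t_src, values, t_common):
    """Interpolate one sensor's readings onto a shared time grid,
    then z-score them so different units become commensurable."""
    resampled = np.interp(t_common, t_src, values)
    return (resampled - resampled.mean()) / resampled.std()

t_common = np.arange(0.0, 10.0, 1.0)             # shared hourly grid
temp = align_and_standardize(
    np.array([0.0, 4.0, 10.0]),                  # irregular source timestamps
    np.array([18.0, 24.0, 21.0]),                # temperature readings (°C)
    t_common,
)
```

Applying the same step to each source yields same-length, zero-mean, unit-variance series that can then be concatenated or fed to any of the fusion architectures discussed above.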

4. Discussion and Conclusions

In addition to RGB and hyperspectral images, thermal images have proven very useful in the detection of plant diseases. The main motivation is the fact that plant leaf temperature can help predict plant health status. Several researchers have explored this type of image for disease detection at the leaf level. Others have combined these images with multispectral data for effective early detection at the ground vehicle and aerial vehicle level, since leaf-level acquisition requires people to traverse the whole field to acquire images, an energy- and time-consuming strategy.

Spectral imaging using UAVs provides important information on the soil and the upper part of plants across a large spectrum; therefore, UAVs are used more often. Merging the two technologies can also broaden the spectrum of plants to be processed while ensuring early detection accuracy. Satellites can be an excellent alternative to UAVs for monitoring healthy plant growth, depending on their spatial and spectral resolution. UAV and satellite imaging is a promising field for plant disease detection at larger scales and has proved useful in many agricultural applications.

Despite the usefulness of satellite images, this area of research faces several challenges. Clouds and their shadows represent a major obstacle when processing and extracting disease signatures from high-resolution satellite images; when clouds cover the vegetation, the acquired images become unexploitable. Other obstacles to crop monitoring and disease detection using satellites are rapid changes in agricultural land cover within relatively short time intervals, differences in seeding dates, atmospheric conditions and fertilization strategies; since it is difficult to tell whether reflectance changes are due to disease or to those factors, an in situ study is required to validate predictions. Nonetheless, high-resolution satellite images can be a key approach for very large-scale disease detection.

Current imaging sensor technologies have many limitations for early disease detection. Associating data from multiple sensors can provide a better understanding of the growth and health status of the crop, and thus better prediction rates. This explains the growing interest of the scientific community in multimodal data fusion for crop disease detection. One can benefit from the power of AI algorithms to process multimodal data sources and predict crop diseases at earlier stages.

Indeed, neural networks and deep neural network models have demonstrated a significant capacity in agriculture to monitor healthy crop growth and capture anomalies, outperforming traditional machine learning algorithms. SegNet and FCN outperformed the SVM model in both experimental fields and with different combinations of image bands, as shown in Table 7, where RGBMS and NIRMS are, respectively, the visual and near-infrared (NIR) bands of the multispectral image, and FRGBMS and FNIRMS are their respective high-resolution fusion results. The table also clearly shows the impact of image fusion on the recognition results: accuracies improved for all models, including the traditional machine learning model.

Thus, correct diagnosis depends on the choice of DL architecture and on the type, quantity and quality of the data. The application of multimodal deep learning involves the selection of a learning architecture and algorithm. Lately, multimodal fusion has shown undeniable potential and is increasingly used in several domains, such as healthcare, sentiment analysis, human-robot interaction, human activity recognition and object detection. In agriculture, several deep learning fusion approaches have been proposed, with applications in yield prediction, land monitoring, crop identification and disease detection.

The most widely used types of fusion in agriculture are the fusion of multi-sensor data from aerial vehicles, the fusion of multi-resolution satellite data and the fusion of satellite and UAV images. For the specific task of early disease detection, the use of additional data sources can enhance performance. However, few multimodal fusion studies have been conducted, particularly for disease detection. Promising multimodal fusion results were presented in this paper, demonstrating the high potential of deep learning fusion models for prediction with multimodal data, which creates an opportunity for further research.


  1. FAO; WHO. The Second Global Meeting of the FAO/WHO International Food Safety Authorities Network; World Health Organization: Geneva, Switzerland, 2019.
  2. Jullien, P.; Alexandra, H. Agriculture de precision. In Agricultures et Territoires; Éditions L’Harmattan: Paris, France, 2005; pp. 1–15.
  3. Rochon, D.A.; Kakani, K.; Robbins, M.; Reade, R. Molecular aspects of plant virus transmission by olpidium and plasmodiophorid vectors. Annu. Rev. Phytopathol. 2004, 42, 211–241.
  4. Lary, D.J.; Alavi, A.H.; Gandomi, A.H.; Walker, A.L. Machine learning in geosciences and remote sensing. Geosci. Front. 2016, 7, 3–10.
  5. Li, W.; Wang, Z.; Wei, G.; Ma, L.; Hu, J.; Ding, D. A Survey on multisensor fusion and consensus filtering for sensor networks. Discret. Dyn. Nat. Soc. 2015, 2015, 1–12.
  6. Liao, W.; Chanussot, J.; Philips, W. Remote sensing data fusion: Guided filter-based hyperspectral pansharpening and graph-based feature-level fusion. In Mathematical Models for Remote Sensing Image Processing; Moser, G., Zerubia, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; pp. 243–275.
  7. Jin, X.; Jie, L.; Wang, S.; Qi, H.J.; Li, S.W. Classifying wheat hyperspectral pixels of healthy heads and fusarium head blight disease using a deep neural network in the wild field. Remote Sens. 2018, 10, 395.
  8. Picon, A.; Seitz, M.; Alvarez-Gila, A.; Mohnke, P.; Ortiz-Barredo, A.; Echazarra, J. Crop conditional Convolutional neural networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions. Comput. Electron. Agric. 2019, 167, 105093.
  9. Behmann, J.; Steinrücken, J.; Plümer, L. Detection of early plant stress responses in hyperspectral images. ISPRS J. Photogramm. Remote Sens. 2014, 93, 98–111.
  10. Es-Saady, Y.; El Massi, I.; El Yassa, M.; Mammass, D.; Benazoun, A. Automatic recognition of plant leaves diseases based on serial combination of two SVM classifiers. In Proceedings of the 2016 International Conference on Electrical and Information Technologies (ICEIT), Tangiers, Morocco, 4–7 May 2016; pp. 561–566.
  11. El Massi, I.; Es-Saady, Y.; El Yassa, M.; Mammass, D.; Benazoun, A. Automatic recognition of the damages and symptoms on plant leaves using parallel combination of two classifiers. In Proceedings of the 13th Computer Graphics, Imaging and Visualization (CGiV 2016), Beni Mellal, Morocco, 29 March–1 April 2016; pp. 131–136.
  12. Prajapati, H.B.; Shah, J.P.; Dabhi, V.K. Detection and classification of rice plant diseases. Intell. Decis. Technol. 2017, 11, 357–373.
  13. El Massi, I.; Es-Saady, Y.; El Yassa, M.; Mammass, D. Combination of multiple classifiers for automatic recognition of diseases and damages on plant leaves. Signal Image Video Process. 2021, 15, 789–796.
  14. Atherton, D.; Choudhary, R.; Watson, D. Hyperspectral remote sensing for advanced detection of early blight (Alternaria solani) disease in potato (Solanum tuberosum) plants prior to visual disease symptoms. In Proceedings of the 2017 ASABE Annual International Meeting, Washington, DC, USA, 16–19 July 2017; pp. 1–10.
  15. Brahimi, M.; Arsenovic, M.; Laraba, S.; Sladojevic, S.; Boukhalfa, K.; Moussaoui, A. Deep Learning for Plant Diseases: Detection and Saliency Map Visualisation. In Human and Machine Learning, Human–Computer Interaction Series; Springer: Cham, Switzerland, 2018; pp. 93–117.
  16. Ouhami, M.; Es-Saady, Y.; El Hajji, M.; Hafiane, A.; Canals, R.; El Yassa, M. Deep transfer learning models for tomato disease detection. In Image and Signal Processing, ICISP 2020; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; Volume 12119, pp. 65–73.
  17. Barman, U.; Choudhury, R.D.; Sahu, D.; Barman, G.G. Comparison of convolution neural networks for smartphone image based real time classification of citrus leaf disease. Comput. Electron. Agric. 2020, 177, 105661.
  18. Xie, C.; Shao, Y.; Li, X.; He, Y. Detection of early blight and late blight diseases on tomato leaves using hyperspectral imaging. Sci. Rep. 2015, 5, 16564.
  19. Zhu, H.; Chu, B.; Zhang, C.; Liu, F.; Jiang, L.; He, Y. Hyperspectral imaging for presymptomatic detection of tobacco disease with successive projections algorithm and machine-learning classifiers. Sci. Rep. 2017, 7, 1–12.
  20. Wang, X.; Zhang, M.; Zhu, J.; Geng, S. Spectral prediction of Phytophthora infestans infection on tomatoes using artificial neural network (ANN). Int. J. Remote Sens. 2008, 29, 1693–1706.
  21. Xie, C.; Yang, C.; He, Y. Hyperspectral imaging for classification of healthy and gray mold diseased tomato leaves with different infection severities. Comput. Electron. Agric. 2017, 135, 154–162.
  22. Barbedo, J.G.A. Detection of nutrition deficiencies in plants using proximal images and machine learning: A review. Comput. Electron. Agric. 2019, 162, 482–492.
  23. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2020, 237, 111599.
  24. Milioto, A.; Lottes, P.; Stachniss, C. Real-time blob-wise sugar beets vs weeds classification for monitoring fields using convolutional neural networks. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 41–48.
  25. Bah, M.D.; Hafiane, A.; Canals, R. Weeds detection in UAV imagery using SLIC and the hough transform. In Proceedings of the 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), Montreal, QC, Canada, 28 November–1 December 2017; pp. 1–6.
  26. Tetila, E.C.; Machado, B.B.; Belete, N.A.D.S.; Guimaraes, D.A.; Pistori, H. Identification of soybean foliar diseases using unmanned aerial vehicle images. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2190–2194.
  27. Abdulridha, J.; Batuman, O.; Ampatzidis, Y. UAV-based remote sensing technique to detect citrus canker disease utilizing hyperspectral imaging and machine learning. Remote Sens. 2019, 11, 1373.
  28. Kerkech, M.; Hafiane, A.; Canals, R. Deep leaning approach with colorimetric spaces and vegetation indices for vine diseases detection in UAV images. Comput. Electron. Agric. 2018, 155, 237–243.
  29. Zhu, X.; Cai, F.; Tian, J.; Williams, T.K.A. Spatiotemporal fusion of multisource remote sensing data: Literature survey, taxonomy, principles, applications, and future directions. Remote Sens. 2018, 10, 527.
  30. Maggiori, E.; Tarabalka, Y.; Charpiat, G.; Alliez, P. Convolutional neural networks for large-scale remote-sensing image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 645–657.
  31. El Mendili, L.; Puissant, A.; Chougrad, M.; Sebari, I. Towards a multi-temporal deep learning approach for mapping urban fabric using sentinel 2 images. Remote Sens. 2020, 12, 423.
  32. Wang, Y.; Gu, L.; Li, X.; Ren, R. Building extraction in multitemporal high-resolution remote sensing imagery using a multifeature LSTM network. IEEE Geosci. Remote Sens. Lett. 2020, 1–5.
  33. Waldner, F.; Diakogiannis, F.I. Deep learning on edge: Extracting field boundaries from satellite images with a convolutional neural network. Remote Sens. Environ. 2020, 245, 111741.
  34. Karim, Z.; Van Zyl, T. Deep Learning and Transfer Learning applied to Sentinel-1 DInSAR and Sentinel-2 optical satellite imagery for change detection. In Proceedings of the 2020 International SAUPEC/RobMech/PRASA Conference 2020, Cape Town, South Africa, 29–31 January 2020; pp. 1–7.
  35. Donovan, S.D.; MacLean, D.A.; Zhang, Y.; Lavigne, M.B.; Kershaw, J.A. Evaluating annual spruce budworm defoliation using change detection of vegetation indices calculated from satellite hyperspectral imagery. Remote Sens. Environ. 2020, 253, 112204.
  36. Ma, H.; Huang, W.; Jing, Y.; Yang, C.; Han, L.; Dong, Y.; Ye, H.; Shi, Y.; Zheng, Q.; Liu, L.; et al. Integrating growth and environmental parameters to discriminate powdery mildew and aphid of winter wheat using bi-temporal Landsat-8 imagery. Remote Sens. 2019, 11, 846.
  37. Miranda, J.D.R.; Alves, M.D.C.; Pozza, E.A.; Neto, H.S. Detection of coffee berry necrosis by digital image processing of Landsat 8 OLI satellite imagery. Int. J. Appl. Earth Obs. Geoinf. 2020, 85, 101983.
  38. Bi, L.; Hu, G.; Raza, M.; Kandel, Y.; Leandro, L.; Mueller, D. A gated recurrent units (GRU)-based model for early detection of soybean sudden death syndrome through time-series satellite imagery. Remote Sens. 2020, 12, 3621.
  39. Pelletier, C.; Webb, G.I.; Petitjean, F. Temporal convolutional neural network for the classification of satellite image time series. Remote Sens. 2019, 11, 523.
  40. Yashodha, G.; Shalini, D. An integrated approach for predicting and broadcasting tea leaf disease at early stage using IoT with machine learning—A review. Mater. Today Proc. 2021, 37, 484–488.
  41. Tripathy, A.K.; Adinarayana, J.; Merchant, S.N.; Desai, U.B.; Ninomiya, S.; Hirafuji, M.; Kiura, T. Data mining and wireless sensor network for groundnut pest/disease interaction and predictions—A preliminary study. Int. J. Comput. Inf. Syst. Ind. Manag. Appl. 2013, 5, 427–436.
  42. Chen, P.; Xiao, Q.; Zhang, J.; Xie, C.; Wang, B. Occurrence prediction of cotton pests and diseases by bidirectional long short-term memory networks with climate and atmosphere circulation. Comput. Electron. Agric. 2020, 176, 105612.
  43. Rodríguez, S.; Gualotuña, T.; Grilo, C. A system for the monitoring and predicting of data in precision agriculture in a rose greenhouse based on wireless sensor networks. Procedia Comput. Sci. 2017, 121, 306–313.
  44. Patil, S.S.; Thorat, S.A. Early detection of grapes diseases using machine learning and IoT. In Proceedings of the 2016 Second International Conference on Cognitive Computing and Information Processing (CCIP), Mysuru, India, 12–13 August 2016; pp. 1–5.
  45. Khan, S.; Narvekar, M. Disorder detection of tomato plant (Solanum lycopersicum) using IoT and machine learning. J. Phys. Conf. Ser. 2020, 1432.
  46. Duarte-Carvajalino, J.M.; Alzate, D.F.; Ramirez, A.A.; Santa-Sepulveda, J.D.; Fajardo-Rojas, A.E.; Soto-Suárez, M. Evaluating late blight severity in potato crops using unmanned aerial vehicles and machine learning algorithms. Remote Sens. 2018, 10, 1513.
  47. Wiesner-Hanks, T.; Stewart, E.L.; Kaczmar, N.; DeChant, C.; Wu, H.; Nelson, R.J.; Lipson, H.; Gore, M.A. Image set for deep learning: Field images of maize annotated with disease symptoms. BMC Res. Notes 2018, 11, 440.
  48. Liu, M.; Wang, T.; Skidmore, A.K.; Liu, X. Heavy metal-induced stress in rice crops detected using multi-temporal Sentinel-2 satellite images. Sci. Total Environ. 2018, 637, 18–29.
  49. Zheng, Q.; Huang, W.; Cui, X.; Shi, Y.; Liu, L. New spectral index for detecting wheat yellow rust using sentinel-2 multispectral imagery. Sensors 2018, 18, 4040.
  50. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90.
  51. Mees, O.; Eitel, A.; Burgard, W. Choosing smartly: Adaptive multimodal fusion for object detection in changing environments. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 151–156.
  52. Bellot, D. Fusion de Données avec des Réseaux Bayésiens pour la Modélisation des Systèmes Dynamiques et son Application en Télémédecine. Ph.D. Thesis, Université Henri Poincaré, Nancy, France, 2002.
  53. Ding, W.; Jing, X.; Yan, Z.; Yang, L.T. A survey on data fusion in internet of things: Towards secure and privacy-preserving fusion. Inf. Fusion 2019, 51, 129–144.
  54. Liggins, M., II; Hall, D.; Llinas, J. Handbook of Multisensor Data Fusion: Theory and Practice; CRC Press: Boca Raton, FL, USA, 2009.
  55. Pavlin, G.; de Oude, P.; Maris, M.; Nunnink, J.; Hood, T. A multi-agent systems approach to distributed bayesian information fusion. Inf. Fusion 2010, 11, 267–282.
  56. Albeiruti, N.; Al Begain, K. Using hidden markov models to build behavioural models to detect the onset of dementia. In Proceedings of the 2014 Sixth International Conference on Computational Intelligence, Communication Systems and Networks, Tetovo, Macedonia, 27–29 May 2014; pp. 18–26.
  57. Smith, D.; Singh, S. Approaches to multisensor data fusion in target tracking: A Survey. IEEE Trans. Knowl. Data Eng. 2006, 18, 1696–1710.
  58. Wu, H.; Siegel, M.; Stiefelhagen, R.; Yang, J. Sensor fusion using dempster-shafer theory. In Proceedings of the IMTC/2002. Proceedings of the 19th IEEE Instrumentation and Measurement Technology Conference (IEEE Cat. No.00CH37276), Anchorage, AK, USA, 21–23 May 2002; pp. 7–11.
  59. Awogbami, G.; Agana, N.; Nazmi, S.; Yan, X.; Homaifar, A. An Evidence theory based multi sensor data fusion for multiclass classification. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 1755–1760.
  60. Abdelmoneem, R.M.; Shaaban, E.; Benslimane, A. A survey on multi-sensor fusion techniques in iot for healthcare. In Proceedings of the 2018 13th International Conference on Computer Engineering and Systems (ICCES), Cairo, Egypt, 18–19 December 2018; pp. 157–162.
  61. Ramachandram, D.; Taylor, G.W. Deep multimodal learning: A survey on recent advances and trends. IEEE Signal. Process. Mag. 2017, 34, 96–108.
  62. Pérez-Rúa, J.M.; Vielzeuf, V.; Pateux, S.; Baccouche, M.; Jurie, F. MFAS: Multimodal fusion architecture search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 6966–6975.
  63. Baltrusaitis, T.; Ahuja, C.; Morency, L.P. Multimodal machine learning: A survey and taxonomy. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 423–443.
  64. Feron, O.; Mohammad-Djafari, A. A Hidden Markov model for Bayesian data fusion of multivariate signals. J. Electron. Imaging 2004, 14, 1–14.
  65. Jiang, Y.; Li, T.; Zhang, M.; Sha, S.; Ji, Y. WSN-based Control System of Co2 Concentration in Greenhouse. Intell. Autom. Soft Comput. 2015, 21, 285–294.
  66. Joze, H.R.V.; Shaban, A.; Iuzzolino, M.L.; Koishida, K. MMTM: Multimodal Transfer Module for CNN Fusion. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 13286–13296.
  67. Moslem, B.; Khalil, M.; Diab, M.O.; Chkeir, A.; Marque, C.A. Multisensor data fusion approach for improving the classification accuracy of uterine EMG signals. In Proceedings of the 18th IEEE International Conference Electronics Circuits, System ICECS, Beirut, Lebanon, 11–14 December 2011; pp. 93–96.
  68. Zadeh, A.; Chen, M.; Poria, S.; Cambria, E.; Morency, L.-P. Tensor fusion network for multimodal sentiment analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 7–11 September 2017; pp. 1103–1114.
  69. Liu, C.; Zoph, B.; Neumann, M.; Shlens, J.; Hua, W.; Li, L.-J.; Fei-Fei, L.; Yuille, A.; Huang, J.; Murphy, K. Progressive neural architecture search. In Computer Vision–ECCV 2018; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 11205, pp. 19–35.
  70. Perez-Rua, J.M.; Baccouche, M.; Pateux, S. Efficient progressive neural architecture search. arXiv 2018, arXiv:1808.00391.
  71. Bednarek, M.; Kicki, P.; Walas, K. On robustness of multi-modal fusion—Robotics perspective. Electronics 2020, 9, 1152.
  72. Maimaitijiang, M.; Ghulam, A.; Sidike, P.; Hartling, S.; Maimaitiyiming, M.; Peterson, K.; Shavers, E.; Fishman, J.; Peterson, J.; Kadam, S.; et al. Unmanned Aerial System (UAS)-based phenotyping of soybean using multi-sensor data fusion and extreme learning machine. ISPRS J. Photogramm. Remote Sens. 2017, 134, 43–58.
  73. Chu, Z.; Yu, J. An end-to-end model for rice yield prediction using deep learning fusion. Comput. Electron. Agric. 2020, 174, 105471.
  74. Ji, S.; Zhang, C.; Xu, A.; Shi, Y.; Duan, Y. 3D convolutional neural networks for crop classification with multi-temporal remote sensing images. Remote Sens. 2018, 10, 75.
  75. Song, Z.; Zhang, Z.; Yang, S.; Ding, D.; Ning, J. Identifying sunflower lodging based on image fusion and deep semantic segmentation with UAV remote sensing imaging. Comput. Electron. Agric. 2020, 179, 105812.
  76. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Daloye, A.M.; Erkbol, H.; Fritschi, F.B. Crop monitoring using satellite/uav data fusion and machine learning. Remote Sens. 2020, 12, 1357.
  77. Zhao, Y.; Liu, L.; Xie, C.; Wang, R.; Wang, F.; Bu, Y.; Zhang, S. An effective automatic system deployed in agricultural internet of things using multi-context fusion network towards crop disease recognition in the wild. Appl. Soft Comput. 2020, 89, 106128.
  78. Zhang, J.; Pu, R.; Yuan, L.; Huang, W.; Nie, C.; Yang, G. Integrating remotely sensed and meteorological observations to forecast wheat powdery mildew at a regional scale. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4328–4339.
  79. Selvaraj, M.G.; Vergara, A.; Montenegro, F.; Ruiz, H.A.; Safari, N.; Raymaekers, D.; Ocimati, W.; Ntamwira, J.; Tits, L.; Omondi, A.B.; et al. Detection of banana plants and their major diseases through aerial images and machine learning methods: A case study in DR Congo and Republic of Benin. ISPRS J. Photogramm. Remote Sens. 2020, 169, 110–124.
  80. Shao, Z.; Cai, J.; Fu, P.; Hu, L.; Liu, T. Deep learning-based fusion of Landsat-8 and Sentinel-2 images for a harmonized surface reflectance product. Remote Sens. Environ. 2019, 235, 111425.
  81. Kerkech, M.; Hafiane, A.; Canals, R. Vine disease detection in UAV multispectral images using optimized image registration and deep learning segmentation approach. Comput. Electron. Agric. 2020, 174, 105446.
  82. Liu, Z.; Shen, Y.; Lakshminarasimhan, V.B.; Liang, P.P.; Zadeh, A.B.; Morency, L.-P. Efficient low-rank multimodal fusion with modality-specific factors. arXiv 2018, arXiv:1806.00064.