TY - GEN
T1 - Deep Convolutional Neural Network Based Image Processing Framework for Monitoring the Growth of Soybean Crops
AU - Chamara, A. H. M. Nipuna
AU - Alkady, Khalid
AU - Jin, Hongyu
AU - Bai, Frank
AU - Samal, Ashok
AU - Ge, Yufeng
N1 - Publisher Copyright:
© ASABE 2021. All rights reserved.
PY - 2021
Y1 - 2021
N2 - While information about crops can be derived from many different modalities, including hyperspectral imaging, multispectral imaging, fluorescence imaging, and 3D laser scanning, low-cost RGB imaging sensors are a more practical and feasible alternative for continuous crop monitoring. In this research, an image processing pipeline was developed to monitor the growth of soybean crops in a research field at the University of Nebraska-Lincoln, using RGB images collected over 30 days by overhead phenocams, each consisting of a Raspberry Pi Zero with a camera module that saved images to an SD card. The images were stored in the JPG file format at 1920×1080 resolution with 24-bit depth. The proposed image processing pipeline was built using the MATLAB computer vision and deep learning toolboxes. The pipeline began by resizing field images to 512×512 resolution, followed by a denoising step using a pretrained denoising deep convolutional neural network (DCNN). Then, a semantic segmentation network, named SoySegNet, was developed to isolate the soybean canopy from the background. SoySegNet is a DeepLab v3+ DCNN built on a ResNet-18 backbone through transfer learning. It was trained with 119 pixel-labeled images and additional images generated using data augmentation techniques (i.e., random translation and reflection); the augmentation step increased the size of the image dataset used for training, validation, and testing of the DCNN. SoySegNet identified the soybean canopy with a pixel-level accuracy of 94%. Various vegetative indices (i.e., excess green index, excess green minus excess red, vegetative index, color index of vegetation, visible atmospherically resistant index, red-green-blue vegetation index, modified green-red vegetation index, and normalized difference index) were computed from the segmented field images to monitor the growth rate of the soybean crops. Furthermore, the pipeline was extended to count soybean leaves in the segmented images using a deep neural network based on the You Only Look Once (YOLO) architecture, named SoyCountNet. SoyCountNet was trained with the same 119 labeled images used for SoySegNet, with the leaves labeled using bounding boxes; again, data augmentation techniques were used to enlarge the training, validation, and testing data sets. SoyCountNet consists of a ResNet-50 DCNN as the feature extraction network and an object detection subnetwork, and it counted soybean leaves with a precision of 0.36 in the segmented field images of the soybean crops. This research demonstrated that the proposed image processing pipeline, in conjunction with low-cost RGB imaging devices, can provide a reliable and cost-effective framework for continuous crop monitoring. A novel application of this framework would be to generate meaningful data about the crop in real time on edge computing devices in Low Power Wide Area Network (LPWAN)-based agricultural Internet of Things (IoT) sensor networks.
AB - While information about crops can be derived from many different modalities, including hyperspectral imaging, multispectral imaging, fluorescence imaging, and 3D laser scanning, low-cost RGB imaging sensors are a more practical and feasible alternative for continuous crop monitoring. In this research, an image processing pipeline was developed to monitor the growth of soybean crops in a research field at the University of Nebraska-Lincoln, using RGB images collected over 30 days by overhead phenocams, each consisting of a Raspberry Pi Zero with a camera module that saved images to an SD card. The images were stored in the JPG file format at 1920×1080 resolution with 24-bit depth. The proposed image processing pipeline was built using the MATLAB computer vision and deep learning toolboxes. The pipeline began by resizing field images to 512×512 resolution, followed by a denoising step using a pretrained denoising deep convolutional neural network (DCNN). Then, a semantic segmentation network, named SoySegNet, was developed to isolate the soybean canopy from the background. SoySegNet is a DeepLab v3+ DCNN built on a ResNet-18 backbone through transfer learning. It was trained with 119 pixel-labeled images and additional images generated using data augmentation techniques (i.e., random translation and reflection); the augmentation step increased the size of the image dataset used for training, validation, and testing of the DCNN. SoySegNet identified the soybean canopy with a pixel-level accuracy of 94%. Various vegetative indices (i.e., excess green index, excess green minus excess red, vegetative index, color index of vegetation, visible atmospherically resistant index, red-green-blue vegetation index, modified green-red vegetation index, and normalized difference index) were computed from the segmented field images to monitor the growth rate of the soybean crops. Furthermore, the pipeline was extended to count soybean leaves in the segmented images using a deep neural network based on the You Only Look Once (YOLO) architecture, named SoyCountNet. SoyCountNet was trained with the same 119 labeled images used for SoySegNet, with the leaves labeled using bounding boxes; again, data augmentation techniques were used to enlarge the training, validation, and testing data sets. SoyCountNet consists of a ResNet-50 DCNN as the feature extraction network and an object detection subnetwork, and it counted soybean leaves with a precision of 0.36 in the segmented field images of the soybean crops. This research demonstrated that the proposed image processing pipeline, in conjunction with low-cost RGB imaging devices, can provide a reliable and cost-effective framework for continuous crop monitoring. A novel application of this framework would be to generate meaningful data about the crop in real time on edge computing devices in Low Power Wide Area Network (LPWAN)-based agricultural Internet of Things (IoT) sensor networks.
KW - Crop monitoring
KW - Deep convolutional neural network
KW - Edge-computing
KW - In-field
KW - IoT
KW - MATLAB
KW - Phenocams
UR - http://www.scopus.com/inward/record.url?scp=85114204170&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85114204170&partnerID=8YFLogxK
U2 - 10.13031/aim.202100259
DO - 10.13031/aim.202100259
M3 - Conference contribution
AN - SCOPUS:85114204170
T3 - American Society of Agricultural and Biological Engineers Annual International Meeting, ASABE 2021
SP - 754
EP - 770
BT - American Society of Agricultural and Biological Engineers Annual International Meeting, ASABE 2021
PB - American Society of Agricultural and Biological Engineers
T2 - 2021 American Society of Agricultural and Biological Engineers Annual International Meeting, ASABE 2021
Y2 - 12 July 2021 through 16 July 2021
ER -