While information about crops can be derived from many modalities, including hyperspectral imaging, multispectral imaging, fluorescence imaging, and 3D laser scanning, low-cost RGB imaging sensors are a more practical and feasible alternative for continuous crop monitoring. In this research, an image processing pipeline was developed to monitor the growth of soybean crops in a research field of the University of Nebraska-Lincoln. RGB images of the crops were collected over a 30-day period by overhead phenocams, each consisting of a Raspberry Pi Zero with a camera module that saved images to an SD card. The images were stored in the JPG format at 1920×1080 resolution with 24-bit depth. The proposed image processing pipeline was built using the MATLAB computer vision and deep learning toolboxes. The pipeline began by resizing the field images to 512×512 resolution, followed by a denoising step using a pretrained denoising deep convolutional neural network (DCNN). A semantic segmentation network, named SoySegNet, was then developed to isolate the soybean canopy from the background. SoySegNet is a DeepLab v3+ DCNN built with transfer learning on a ResNet-18 backbone. It was trained with 119 pixel-labeled images plus additional images generated through data augmentation (i.e., random translation and reflection), which increased the size of the datasets used for training, validation, and testing. SoySegNet identified the soybean canopy with a pixel-level accuracy of 94%.
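The random translation and reflection augmentations described above can be illustrated with a short sketch. The original pipeline used MATLAB's deep learning toolbox for augmentation; the NumPy version below is a hedged equivalent, and the parameter name `max_shift` and the wrap-around translation are illustrative assumptions, not details from the study.

```python
import numpy as np

def augment(image, max_shift=32, rng=None):
    """Apply a random translation and random reflections to an image
    (an H x W x C uint8 array), as in the augmentation step of the
    pipeline. Sketch only: the MATLAB toolbox implementation and its
    exact parameters are not specified in the text."""
    if rng is None:
        rng = np.random.default_rng()
    # Random translation; np.roll wraps pixels around for simplicity.
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    out = np.roll(image, shift=(int(dy), int(dx)), axis=(0, 1))
    # Random horizontal and vertical reflections, each with probability 0.5.
    if rng.random() < 0.5:
        out = out[:, ::-1]
    if rng.random() < 0.5:
        out = out[::-1, :]
    return out
```

Because translation (with wrap-around) and reflection only rearrange pixels, each augmented image keeps the shape, dtype, and pixel values of the original, which makes the labels easy to transform consistently.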
Various vegetation indices (i.e., excess green index, excess green minus excess red, vegetative index, color index of vegetation, visible atmospherically resistant index, red-green-blue vegetation index, modified green-red vegetation index, and normalized difference index) were computed from the segmented field images to monitor the growth rate of the soybean crops. The pipeline was further extended to count soybean leaves in the segmented images using a deep neural network based on the You Only Look Once (YOLO) architecture, named SoyCountNet. SoyCountNet was trained with the same 119 labeled images used for SoySegNet, with the leaves annotated by bounding boxes; data augmentation was again applied to enlarge the training, validation, and testing datasets. SoyCountNet consists of a ResNet-50 feature-extraction network and an object-detection subnetwork, and it counted soybean leaves in the segmented field images with a precision of 0.36. This research demonstrated that the proposed image processing pipeline, in conjunction with low-cost RGB imaging devices, could provide a reliable and cost-effective framework for continuous crop monitoring. A novel application of this framework would be to generate meaningful crop data in real time on edge-computing devices of Low-Power Wide-Area Network (LPWAN) based agricultural Internet of Things (IoT) sensor networks.
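Several of the vegetation indices named above have standard closed-form definitions over the RGB channels. The sketch below computes a subset of them (ExG, ExGR, NDI, and VARI) with NumPy; the original work computed these in MATLAB, and the normalization to chromatic coordinates (dividing each channel by the channel sum) is a common convention assumed here, not a detail taken from the study.

```python
import numpy as np

def vegetation_indices(rgb):
    """Compute a few standard vegetation indices from a segmented RGB
    image (H x W x 3, values 0..255). Sketch using widely cited formulas:
    ExG = 2g - r - b, ExGR = ExG - (1.4r - g),
    NDI = (g - r)/(g + r), VARI = (g - r)/(g + r - b)."""
    img = rgb.astype(np.float64)
    s = img.sum(axis=2)
    s[s == 0] = 1.0  # avoid division by zero on black background pixels
    # Chromatic coordinates (assumed normalization).
    r, g, b = (img[..., i] / s for i in range(3))
    exg = 2.0 * g - r - b                 # excess green index
    exgr = exg - (1.4 * r - g)            # excess green minus excess red
    ndi = (g - r) / np.where(g + r == 0, 1.0, g + r)          # normalized difference index
    vari = (g - r) / np.where(g + r - b == 0, 1.0, g + r - b) # visible atmospherically resistant index
    return {"ExG": exg, "ExGR": exgr, "NDI": ndi, "VARI": vari}
```

For a pure-green pixel the chromatic coordinates are (r, g, b) = (0, 1, 0), giving ExG = 2 and NDI = 1, while a gray pixel gives 0 for both, which is why these indices separate canopy growth from soil background in the segmented images.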