An application of Deep Learning for Sweet Cherry Phenotyping using YOLO Object Detection

Citation

Ritayu Nagpal, Sam Long, Shahid Jahagirdar, Weiwei Liu, Scott Fazackerley, Ramon Lawrence and Amritpal Singh, An application of Deep Learning for Sweet Cherry Phenotyping using YOLO Object Detection, in The 25th International Conference on Image Processing, Computer Vision, & Pattern Recognition (IPCV'21). 2021, Las Vegas, United States.

Plain language summary

Fruit size is an important fruit quality attribute in sweet cherries. Traditionally, it is measured by manually counting two separate samples of 100 fruits each and then weighing them to obtain an average fruit weight. Measuring fruit size for a large number of germplasm lines must be done by hand each season to identify superior selections. In this work, we applied deep learning, a type of artificial intelligence, to count cherries in real time from a live image feed within a low-cost camera's field of view. In addition to counting the fruits, the tool helps measure the size and colour of each fruit in the field of view. This novel tool could speed up the evaluation of samples and reduce manual labour.

Abstract

Tree fruit breeding is a long-term activity involving repeated measurements of various fruit quality traits on a large number of samples. These traits are traditionally measured by manually counting the fruits, weighing them to indirectly measure fruit size, and subjectively classifying fruit colour into different categories by visual comparison to colour charts. These processes are slow, expensive, and subject to evaluators' bias and fatigue. Recent advancements in deep learning can help automate this process. A method was developed to automatically count the number of sweet cherry fruits in a camera's field of view in real time using YOLOv3. A system capable of analyzing the image data for other traits, such as size and colour, was also developed using Python. The YOLO model obtained close to 99% accuracy in object detection and counting of cherries, and 90% on the Intersection over Union metric for object localization when extracting size and colour information. The model surpasses human performance and offers a significant improvement over manual counting.
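The localization figure in the abstract uses the standard Intersection over Union (IoU) metric, which scores how well a predicted bounding box overlaps a ground-truth box. Below is a minimal sketch of the metric itself, not the authors' implementation; the corner-coordinate box format (x1, y1, x2, y2) is an assumption:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

An IoU of 1.0 means a perfect match and 0.0 means no overlap, so the reported 90% IoU indicates that predicted boxes covered most of each cherry, which matters when box dimensions are then used to estimate fruit size.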

Publication date

2021-07-26
