Deep learning for real‑time fruit detection and orchard fruit load estimation: benchmarking of ‘MangoYOLO’

Koirala, A., Walsh, K. B., Wang, Z. and McCarthy, C. (2019) Deep learning for real‑time fruit detection and orchard fruit load estimation: benchmarking of ‘MangoYOLO’. Precision Agriculture, 20 (6). pp. 1107-1135. ISSN 1385-2256


Abstract

The performance of six existing deep learning architectures was compared for the task of detecting mango fruit in images of tree canopies. Images of trees (n = 1 515) from across five orchards were acquired at night using a 5 Mega-pixel RGB digital camera and 720 W of LED flood lighting in a rig mounted on a farm utility vehicle operating at 6 km/h. The two-stage deep learning architectures Faster R-CNN(VGG) and Faster R-CNN(ZF), and the single-stage techniques YOLOv2, YOLOv2(tiny) and SSD, were trained with both original-resolution and 512 × 512 pixel versions of 1 300 training tiles, while YOLOv3 was run only with 512 × 512 pixel images, giving a total of eleven models. A new architecture was also developed, based on features of YOLOv3 and YOLOv2(tiny), to the design criteria of accuracy and speed for the current application. This architecture, termed ‘MangoYOLO’, was trained using: (i) the 1 300-tile training set, (ii) the COCO dataset before training on the mango training set, and (iii) a daytime image training set from a previous publication, to create the MangoYOLO models ‘s’, ‘pt’ and ‘bu’, respectively. Average Precision plateaued with the use of around 400 training tiles. MangoYOLO(pt) achieved an F1 score of 0.968 and an Average Precision of 0.983 on a test set independent of the training set, outperforming the other algorithms, with a detection speed of 8 ms per 512 × 512 pixel image tile while using just 833 MB of GPU memory per image (on an NVIDIA GeForce GTX 1070 Ti GPU, as used for in-field application). The MangoYOLO model also outperformed the other models in processing full images, requiring just 70 ms per image (2 048 × 2 048 pixels) (i.e., capable of processing ~14 fps) with use of 4 417 MB of GPU memory. The model was robust in use with images of other orchards, cultivars and lighting conditions. MangoYOLO(bu) achieved an F1 score of 0.89 on a daytime mango image dataset.
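The F1 scores reported above follow the standard detection-metric definition (the harmonic mean of precision and recall over matched detections). A minimal sketch of that computation; the counts below are hypothetical, for illustration only, and are not taken from the paper's test set:

```python
# Standard F1 computation over detection counts (illustrative only).
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall over detections."""
    precision = tp / (tp + fp)  # fraction of detections that are real fruit
    recall = tp / (tp + fn)     # fraction of real fruit that were detected
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 960 correct detections, 20 false alarms, 40 missed fruit.
print(round(f1_score(960, 20, 40), 3))
```

Average Precision additionally sweeps the detection confidence threshold and integrates precision over recall, which is why it is reported alongside F1.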
Using a correction factor estimated per orchard from the ratio of the human count of fruit in images of the two sides of sample trees to a hand-harvest count of all fruit on those trees, MangoYOLO(pt) achieved orchard fruit load estimates within 4.6 to 15.2% of packhouse fruit counts for the five orchards considered. The labelled images of this study (1 300 training, 130 validation and 300 test) are available for comparative studies.
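The correction-factor step above can be sketched as follows: scaling per-tree machine-vision counts by the ratio of harvested fruit to fruit visible in images compensates for occluded fruit. This is a simplified sketch under that reading of the abstract; the function name and all numbers are hypothetical, not values from the paper:

```python
# Sketch of the orchard-load correction described in the abstract: per-tree
# machine-vision counts are scaled by a factor derived from sample trees,
# where a hand harvest found more fruit than were visible in two-side images.
# All values here are hypothetical, for illustration only.
def corrected_load(machine_counts, human_image_count, harvest_count):
    """Scale per-tree machine-vision counts by the occlusion correction factor."""
    factor = harvest_count / human_image_count  # > 1 when fruit are hidden
    return sum(count * factor for count in machine_counts)

# Hypothetical sample trees: harvest found 500 fruit, humans saw 400 in images,
# so each machine-vision count is scaled up by 1.25.
estimate = corrected_load([380, 410, 395], human_image_count=400, harvest_count=500)
print(round(estimate))
```

Summing the corrected per-tree estimates over all imaged trees then yields the orchard-level fruit load that is compared against packhouse counts.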


Item Type: Article (Commonwealth Reporting Category C)
Refereed: Yes
Item Status: Live Archive
Additional Information: Files associated with this item cannot be displayed due to copyright restrictions
Faculty/School / Institute/Centre: Current - Institute for Advanced Engineering and Space Sciences - Centre for Agricultural Engineering (1 Aug 2018 -)
Date Deposited: 21 Jan 2020 03:32
Last Modified: 30 Jan 2020 03:08
Uncontrolled Keywords: deep learning, fruit detection, mango, yield estimation
Fields of Research: 07 Agricultural and Veterinary Sciences > 0706 Horticultural Production > 070601 Horticultural Crop Growth and Development
08 Information and Computing Sciences > 0801 Artificial Intelligence and Image Processing > 080104 Computer Vision
09 Engineering > 0906 Electrical and Electronic Engineering > 090602 Control Systems, Robotics and Automation
Socio-Economic Objective: E Expanding Knowledge > 97 Expanding Knowledge > 970109 Expanding Knowledge in Engineering
Identification Number or DOI: 10.1007/s11119-019-09642-0
URI: http://eprints.usq.edu.au/id/eprint/37716
