Recognition of tea buds based on an improved YOLOv7 model
Graphical Abstract
Abstract
Traditional recognition algorithms are prone to missing targets in the complex tea garden environment and struggle to satisfy the requirements of tea bud recognition accuracy and efficiency. In this study, an improved YOLOv7 model was developed to increase tea bud recognition accuracy in several extreme tea garden scenarios. In the improved model, the lightweight MobileNetV3 network replaces the original backbone, which reduces the model size and improves detection efficiency. A convolutional block attention module (CBAM) is introduced to strengthen attention to the features of small and occluded tea buds, suppressing the interference of the complex tea garden environment on tea bud recognition and enhancing the feature extraction capability of the model. Moreover, to further improve recognition accuracy in dense and occluded scenarios, a soft non-maximum suppression (Soft-NMS) strategy is integrated into the model. Experimental results show that the improved YOLOv7 model achieves precision, recall, and mean average precision (mAP) values of 88.3%, 87.4%, and 88.5%, respectively. Compared with the Faster R-CNN, SSD, and original YOLOv7 algorithms, the mAP of the improved model is higher by 7.4, 7.9, and 3.9 percentage points, respectively, and its recognition speed is faster by 94.9%, 46.2%, and 16.9%. The proposed model can rapidly and accurately identify tea buds in multiple complex tea garden scenarios, such as dense distribution, similarity to the background color, and mutual occlusion, with high generalization and robustness, which can provide theoretical and technical support for tea bud recognition by tea-picking robots.
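The Soft-NMS idea referenced in the abstract can be illustrated with a minimal NumPy sketch. Instead of discarding every box whose IoU with a higher-scoring box exceeds a hard threshold, as classic NMS does, Soft-NMS decays the overlapping box's score (here with the Gaussian variant), so densely packed or mutually occluding buds are less likely to be suppressed outright. The `sigma` and `score_thresh` values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an (N, 4) array, boxes as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: down-weight overlapping detections instead of
    deleting them. Returns kept indices in descending-score order."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float).copy()
    idxs = np.arange(len(scores))
    keep = []
    while len(idxs) > 0:
        top = np.argmax(scores[idxs])
        best = idxs[top]
        keep.append(int(best))
        idxs = np.delete(idxs, top)
        if len(idxs) == 0:
            break
        ious = iou(boxes[best], boxes[idxs])
        # Gaussian decay: the larger the overlap, the stronger the penalty
        scores[idxs] *= np.exp(-(ious ** 2) / sigma)
        # Drop only boxes whose decayed score falls below the threshold
        idxs = idxs[scores[idxs] > score_thresh]
    return keep

# Two heavily overlapping buds plus one distant bud: hard NMS with a 0.5
# IoU threshold would drop the second box; Soft-NMS keeps all three.
boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]]
scores = [0.9, 0.8, 0.7]
print(soft_nms(boxes, scores))
```

In the overlapping pair above, the second box's score is merely decayed (0.8 → about 0.32), so it survives with a lower rank rather than being eliminated, which is the behavior that helps in the dense and occluded tea garden scenes described in the abstract.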