Enhanced tree trunk detection for the autonomous field mower via LiDAR-camera fusion in complex environments
-
Graphical Abstract
-
Abstract
The increasingly widespread application of autonomous field mowers in agriculture has heightened the demand for precise and reliable tree trunk detection, particularly in complex operational environments. To overcome the inherent limitations of single-sensor systems, such as the sparse point clouds produced by Light Detection and Ranging (LiDAR), the photometric sensitivity of camera-based methods, and persistent occlusion interference, this study proposes a multi-sensor fusion framework that integrates data from a multi-line LiDAR and a monocular camera for robust tree trunk detection. First, a spatio-temporal calibration framework was developed to ensure accurate alignment of the multi-source data. Subsequently, the PointPillars network was used to efficiently extract 3D point cloud features, while an improved You Only Look Once Version 8 Nano (YOLOv8n) model was integrated to extract precise 2D image features. A Complete Intersection over Union (CIoU) fusion strategy was then adopted to match bounding boxes across the two modalities. Experimental results demonstrate that the proposed fusion approach achieves average positioning errors of 0.0619 m in the horizontal direction and 0.0583 m in the vertical direction, along with a tree trunk detection accuracy of 93.68%. This method effectively resolves the false detections typically produced by traditional point cloud clustering algorithms in complex environments, while also mitigating the performance degradation of vision-based detection under complex texture conditions. The proposed framework offers an innovative approach to environment-aware perception for autonomous mowing operations.
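To illustrate the cross-modal matching step, the sketch below computes CIoU between two axis-aligned 2D boxes, e.g. a LiDAR detection projected into the image plane and a YOLOv8n detection. This is a minimal illustrative implementation of the standard CIoU definition (IoU minus a normalized center-distance penalty and an aspect-ratio consistency penalty), not the authors' code; the box format and matching threshold are assumptions.

```python
import math

def ciou(box_a, box_b):
    """Complete IoU between two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection area
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # Squared distance between box centers
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0
    # Squared diagonal of the smallest enclosing box
    c2 = ((max(ax2, bx2) - min(ax1, bx1)) ** 2
          + (max(ay2, by2) - min(ay1, by1)) ** 2)
    # Aspect-ratio consistency term
    v = (4.0 / math.pi ** 2) * (math.atan((ax2 - ax1) / (ay2 - ay1))
                                - math.atan((bx2 - bx1) / (by2 - by1))) ** 2
    alpha = v / (1.0 - iou + v + 1e-9)  # trade-off weight (eps avoids 0/0)
    return iou - rho2 / c2 - alpha * v

# Hypothetical matching rule: pair a projected LiDAR box with the camera
# box of highest CIoU, accepting the pair only above a chosen threshold.
def match(lidar_box, camera_boxes, threshold=0.5):
    best = max(camera_boxes, key=lambda b: ciou(lidar_box, b))
    return best if ciou(lidar_box, best) >= threshold else None
```

Unlike plain IoU, CIoU still yields a useful (negative) score for non-overlapping boxes, which helps disambiguate nearby trunks when the projected LiDAR box and the image box only partially align.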