Abstract:
Objective A low-cost 3D light detection and ranging (LiDAR) point cloud processing and plant row estimation method for environment perception in agricultural robot navigation is proposed for areas, such as forests or under the canopy, where satellite signals are severely occluded.
Method First, a pass-through filter was used to remove target-irrelevant points outside the area of interest. Second, mean shift clustering and scanning area adaptation were proposed to segment the trunk of each plant, and the vertical projection of each trunk point cloud was used to estimate its center point. Finally, the plant rows were estimated by fitting the trunk centers with the least-squares method. Simulation and field experiments were carried out in a simulated orchard and an open-field metasequoia forest. The angle between the plant row vector and due east was used as the evaluation index, and the angle error between the plant rows identified by the proposed method and the ground-truth rows measured by GNSS satellite antenna positioning was calculated.
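A minimal sketch of this pipeline is given below, assuming the point cloud is an (N, 3) NumPy array and that due east corresponds to the +x axis; the library choices (scikit-learn MeanShift, NumPy least-squares fitting), the region-of-interest bounds, and the clustering bandwidth are illustrative assumptions, not the paper's implementation (in particular, the scanning area adaptation step is omitted).

```python
import numpy as np
from sklearn.cluster import MeanShift

def estimate_plant_row(points, roi_min, roi_max, bandwidth=0.3):
    """Illustrative pipeline: ROI pass-through filter -> mean-shift trunk
    clustering -> trunk centers from the vertical (XY) projection ->
    least-squares row fit. `points` is an (N, 3) array of x, y, z [m]."""
    # Pass-through filter: keep only points inside the region of interest.
    mask = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    roi = points[mask]

    # Mean-shift clustering on the horizontal projection segments the trunks.
    xy = roi[:, :2]
    labels = MeanShift(bandwidth=bandwidth).fit_predict(xy)

    # Trunk center: centroid of each cluster's vertical (XY) projection.
    centers = np.array([xy[labels == k].mean(axis=0)
                        for k in np.unique(labels)])

    # Least-squares line y = a*x + b through the trunk centers.
    a, b = np.polyfit(centers[:, 0], centers[:, 1], 1)

    # Angle between the fitted row direction and due east (+x axis), degrees.
    angle_deg = np.degrees(np.arctan2(a, 1.0))
    return centers, (a, b), angle_deg
```

The angle error reported below would then be the absolute difference between this estimated angle and the row angle surveyed with the GNSS antenna.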
Result Using the proposed 3D LiDAR point cloud processing and plant row estimation method, the average errors of plant row identification in the simulation and field experiments were 0.79° and 1.48°, the minimum errors were 0.12° and 0.88°, and the maximum errors were 1.49° and 2.33°, respectively.
Conclusion The vehicle-mounted 3D LiDAR can effectively estimate metasequoia plant rows. This research enriches the ideas and methods of crop identification and provides a theoretical basis for map-free navigation of agricultural robots in areas without satellite signal coverage.