Author: DAI Jun, LI Wenbo, ZHAO Junwei, YUAN Xingqi, WANG Yuegong, LI Dongfang, CHENG Xiaoqi, HANAJIMA Naohiko | Time: 2024-03-25
doi:10.16186/j.cnki.1673-9787.2022120009
Received: 2022/12/05
Revised: 2023/03/14
Published: 2024/03/25
Study on a joint calibration method based on a monocular camera and multi-line lidar
DAI Jun1,2, LI Wenbo2, ZHAO Junwei2, YUAN Xingqi2, WANG Yuegong3, LI Dongfang2, CHENG Xiaoqi4, HANAJIMA Naohiko5
1. Henan International Joint Laboratory of Advanced Electronic Packaging Materials Precision Forming, Henan Polytechnic University, Jiaozuo 454000, Henan, China; 2. School of Mechanical & Power Engineering, Henan Polytechnic University, Jiaozuo 454000, Henan, China; 3. Pingdingshan PMJ Coal Mine Machinery Equipment Co., Ltd., Pingdingshan 467000, Henan, China; 4. School of Mechatronic Engineering and Automation, Foshan University, Foshan 528225, Guangdong, China; 5. Robotics and Mechanical Engineering Research Unit, Muroran Institute of Technology, Muroran 0500071, Japan
Abstract: Objectives A joint calibration method based on nonlinear optimization was proposed to address the problem of extrinsic parameter calibration between a camera and a lidar, with the aim of minimizing the calibration error and achieving higher calibration accuracy. Methods First, images of a checkerboard calibration board were captured from different angles, and the intrinsic parameters of the monocular camera were calibrated with a toolkit. Then, the corner-point feature coordinates of the calibration board were detected in both the laser point cloud and the image. In the point cloud, the calibration-board points and their geometric features were extracted, the board's vertices were determined by fitting the extracted pattern, and the coordinates of each corner were obtained from the number of rows and columns of the checkerboard. In the image, FAST corner detection was used to detect the corner features, and their coordinates were determined from the gray-level information of the corners. An objective function was constructed from the projection error of the detected feature points from the point cloud onto the image, which transformed the extrinsic parameter solution into a least squares problem. Finally, the optimal extrinsic parameters were obtained iteratively with the Levenberg-Marquardt nonlinear optimization algorithm. Results The final average calibration error was 1.29 pixels, with a maximum error of 2.46 pixels, a minimum error of 0.70 pixels, and a standard deviation of 0.57 pixels. The calibration results showed good accuracy, allowing the point cloud to be projected onto the image. When applied to a visual-lidar fusion SLAM algorithm in practical scenarios, the results produced smooth motion trajectories highly consistent with the map. Conclusions The calibration process was simple and convenient; it did not require the actual physical size of the checkerboard and met practical requirements.
Key words: multi-sensor fusion; monocular camera; lidar; joint calibration; image processing
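The extrinsic refinement step described in the abstract — minimizing the point-cloud-to-image reprojection error with the Levenberg-Marquardt algorithm — can be sketched as below. This is a minimal illustration, not the paper's implementation: the intrinsic matrix K, the axis-angle extrinsic parameterization, and all 3D/2D correspondences are synthetic assumptions, and scipy's MINPACK-backed `least_squares(method="lm")` stands in for the authors' solver.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K_ = np.array([[0.0, -k[2], k[1]],
                   [k[2], 0.0, -k[0]],
                   [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K_ + (1.0 - np.cos(theta)) * (K_ @ K_)

def project(params, pts_lidar, K):
    """Project lidar-frame 3D points into the image with extrinsics (rvec | tvec)."""
    R, t = rodrigues(params[:3]), params[3:]
    pts_cam = pts_lidar @ R.T + t          # lidar frame -> camera frame
    uv = pts_cam @ K.T                     # pinhole projection
    return uv[:, :2] / uv[:, 2:3]          # normalize by depth -> pixel coordinates

def residuals(params, pts_lidar, pts_img, K):
    """Stacked per-corner reprojection errors: the least-squares objective."""
    return (project(params, pts_lidar, K) - pts_img).ravel()

# --- synthetic demonstration (illustrative values, not from the paper) ---
rng = np.random.default_rng(0)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
true_params = np.array([0.10, -0.05, 0.02, 0.10, 0.02, 0.30])  # rvec, tvec
pts_lidar = np.column_stack([rng.uniform(-1, 1, 20),
                             rng.uniform(-1, 1, 20),
                             rng.uniform(2, 4, 20)])            # board corners in lidar frame
pts_img = project(true_params, pts_lidar, K)                    # noiseless detections

# Levenberg-Marquardt refinement from an identity initial guess
result = least_squares(residuals, np.zeros(6), method="lm",
                       args=(pts_lidar, pts_img, K))
```

With noiseless correspondences the solver recovers the simulated extrinsics; with real detections the residual norm corresponds to the pixel-level reprojection error reported in the Results.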