Study on joint calibration method based on monocular camera and multi-line lidar
Author: DAI Jun, LI Wenbo, ZHAO Junwei, YUAN Xingqi, WANG Yuegong, LI Dongfang, CHENG Xiaoqi, HANAJIMA Naohiko    Time: 2024-03-25

doi: 10.16186/j.cnki.1673-9787.2022120009

Received: 2022/12/05

Revised: 2023/03/14

Published: 2024/03/25


DAI Jun 1,2; LI Wenbo 2; ZHAO Junwei 2; YUAN Xingqi 2; WANG Yuegong 3; LI Dongfang 2; CHENG Xiaoqi 4; HANAJIMA Naohiko 5

1. Henan International Joint Laboratory of Advanced Electronic Packaging Materials Precision Forming, Henan Polytechnic University, Jiaozuo 454000, Henan, China; 2. School of Mechanical & Power Engineering, Henan Polytechnic University, Jiaozuo 454000, Henan, China; 3. Pingdingshan PMJ Coal Mine Machinery Equipment Co., Ltd., Pingdingshan 467000, Henan, China; 4. School of Mechatronic Engineering and Automation, Foshan University, Foshan 528225, Guangdong, China; 5. Robotics and Mechanical Engineering Research Unit, Muroran Institute of Technology, Muroran 050-0071, Japan

Abstract: Objectives A joint calibration method based on nonlinear optimization was proposed to address the problem of extrinsic parameter calibration between a camera and a lidar. The objective was to minimize the calibration error and achieve higher calibration accuracy. Methods First, images of a checkerboard calibration board were taken from different angles, and the intrinsic parameters of the monocular camera were calibrated using a toolkit. Then, the corner-point feature coordinates of the calibration board were detected in both the laser point cloud and the image. The coordinates in the laser point cloud were obtained by extracting the point cloud data of the calibration board and its geometric features, and the vertex coordinates were determined by fitting the extracted pattern; the coordinates of each corner point were then obtained by counting the number of rows and columns of the calibration board. FAST corner detection was used to detect the corner-point feature coordinates in the camera image, and their coordinates were determined from the gray-level information of the corner points. An objective function was constructed from the projection error of the detected feature points from the point cloud to the image, and the extrinsic parameter solution was transformed into a least-squares problem. Finally, the optimal extrinsic parameters were obtained by iterative solution using the Levenberg-Marquardt nonlinear optimization algorithm. Results The final average calibration error was 1.29 pixels, with a maximum error of 2.46 pixels, a minimum error of 0.70 pixels, and a standard deviation of 0.57 pixels. Conclusions The calibration results showed good accuracy, allowing the point cloud to be projected onto the image. The results were applied to a visual and lidar fusion SLAM algorithm in practical scenarios, resulting in smooth motion trajectories highly consistent with the map.
The calibration process was simple and convenient: it did not require the actual physical size of the checkerboard and met practical requirements.
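The board-extraction step described in the abstract (fitting the extracted point cloud of the calibration board to recover its geometry) can be illustrated with a least-squares plane fit. This is a minimal sketch, not the authors' procedure: the synthetic board points and the SVD-based fit are assumptions introduced here for illustration.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3-D point set via SVD.

    Returns (centroid, unit normal); the normal is the singular vector
    with the smallest singular value of the centered point set.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal

# Synthetic board points on the plane z = 0.5x + 0.2y + 1, plus small noise
# (hypothetical data, not from the paper).
rng = np.random.default_rng(1)
xy = rng.uniform(-0.3, 0.3, size=(200, 2))
z = 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + 1.0 + rng.normal(0, 1e-3, 200)
pts = np.column_stack([xy, z])

c, n = fit_plane(pts)
# Point-to-plane distances should be on the order of the injected noise.
d = np.abs((pts - c) @ n)
print(d.max())
```

Once the board plane is recovered, the board's outline can be fitted within that plane and the corner grid interpolated from the known row and column counts, as the abstract outlines.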
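The final optimization step (minimizing the point-cloud-to-image projection error with Levenberg-Marquardt) can be sketched as below. This is an illustration under stated assumptions, not the authors' implementation: the intrinsic matrix K, the synthetic corner correspondences, and the axis-angle extrinsic parameterization are all hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical pinhole intrinsics (fx, fy, cx, cy), not values from the paper.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(params, pts_lidar, K):
    """Project lidar-frame 3-D points into the image given extrinsics.

    params = [rx, ry, rz, tx, ty, tz]: axis-angle rotation + translation.
    """
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    cam = pts_lidar @ R.T + t          # lidar frame -> camera frame
    uv = cam @ K.T                     # pinhole projection
    return uv[:, :2] / uv[:, 2:3]      # divide by depth -> pixels

def residuals(params, pts_lidar, pts_img, K):
    """Per-point reprojection error, flattened for least_squares."""
    return (project(params, pts_lidar, K) - pts_img).ravel()

# Synthetic ground-truth extrinsics, used only to generate test data.
gt = np.array([0.1, -0.05, 0.02, 0.3, -0.1, 0.5])
rng = np.random.default_rng(0)
pts_lidar = rng.uniform([-1, -1, 2], [1, 1, 5], size=(40, 3))
pts_img = project(gt, pts_lidar, K)

# Levenberg-Marquardt refinement from a rough initial guess.
sol = least_squares(residuals, x0=np.zeros(6), method="lm",
                    args=(pts_lidar, pts_img, K))
err = np.abs(residuals(sol.x, pts_lidar, pts_img, K))
```

With real data the residuals would not vanish; their mean over all corner correspondences plays the role of the average calibration error (1.29 pixels) reported in the abstract.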

Key words: multi-sensor fusion; monocular camera; lidar; joint calibration; image processing

 
