Gathering the data required for calibration involves the robot making a series of planned movements (10 to 20 are recommended), either operated by a human or performed automatically. At the end of each movement, the camera takes an image of the calibration object. The calibration object pose is extracted from the image, and the robot pose is registered from the controller. To achieve good calibration quality, the robot poses at which the camera images the calibration object should be sufficiently distinct, using all the robot joints, resulting in a diversity of perspectives with different viewing angles. The images below illustrate the required diversity of imaging poses for eye-to-hand and eye-in-hand systems. At the same time, the calibration object should be fully visible in the camera's field of view.
The task is then to solve a system of homogeneous transformation equations to estimate the rotational and translational components of the calibration object poses and of the hand-eye transformation.
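This system is commonly written as AX = XB, where A and B are relative motions of the end-effector and the camera between two imaging poses, and X is the unknown hand-eye transform. Below is a minimal NumPy sketch of one classic closed-form solution (in the style of Park and Martin): rotation via an orthogonal Procrustes fit on rotation-axis vectors, then translation via linear least squares. This is an illustrative solver over noise-free 4x4 matrices, not the Zivid implementation.

```python
import numpy as np

def rot_log(R):
    """Axis-angle vector (rotation logarithm) of a 3x3 rotation matrix."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    return theta / (2.0 * np.sin(theta)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def solve_ax_xb(As, Bs):
    """Estimate X from pairs of relative motions satisfying A_i X = X B_i.

    As: 4x4 relative end-effector motions; Bs: 4x4 relative camera motions.
    Returns the 4x4 hand-eye transform X.
    """
    # Rotation: alpha_i = R_X beta_i, an orthogonal Procrustes problem.
    M = np.zeros((3, 3))
    for A, B in zip(As, Bs):
        M += np.outer(rot_log(B[:3, :3]), rot_log(A[:3, :3]))
    U, _, Vt = np.linalg.svd(M)
    Rx = Vt.T @ U.T
    if np.linalg.det(Rx) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        Rx = Vt.T @ U.T
    # Translation: (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked least squares.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```

At least two motion pairs with non-parallel rotation axes are needed for a unique solution, which is one reason the diverse imaging poses described above matter.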
Hand-eye calibration process steps:
1. Move the robot to a new pose.
2. Register the end-effector pose.
3. Image the calibration object and extract its pose.
4. Repeat steps 1-3 multiple times, e.g. 10 - 20.
5. Compute the hand-eye transform.
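The data-collection part of these steps can be sketched as a simple loop. The helper functions passed in (`move_robot_to`, `read_end_effector_pose`, `detect_calibration_object_pose`) are hypothetical placeholders for your robot controller and camera interfaces, not Zivid API calls.

```python
def collect_hand_eye_dataset(planned_poses, move_robot_to,
                             read_end_effector_pose,
                             detect_calibration_object_pose):
    """Collect (robot pose, calibration object pose) pairs.

    The three callables are hypothetical stand-ins for your robot and
    camera interfaces.
    """
    dataset = []
    for target in planned_poses:                        # step 1: move robot
        move_robot_to(target)
        robot_pose = read_end_effector_pose()           # step 2: register pose
        object_pose = detect_calibration_object_pose()  # step 3: image object
        dataset.append((robot_pose, object_pose))
    return dataset  # the resulting pairs feed the hand-eye computation (step 5)
```

With 10 to 20 planned poses, the returned list is the dataset used to compute the hand-eye transform.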
Check out the interactive hand-eye calibration code sample in our C++ samples. Alternatively, the hand-eye transform can be computed using the Zivid CLI tool for hand-eye calibration. This command-line interface lets the user specify the dataset collected in steps 1-3 to compute the transformation matrix and residuals; the results are saved in user-specified files. This CLI tool is experimental, and it will eventually be replaced by a GUI.
Continue reading about hand-eye calibration cautions and recommendations.