This tutorial shows how to use the Command Line Interface (CLI) tool for hand-eye calibration. Step-by-step instructions are provided with screenshots below.
1. Acquire the dataset
If you haven't read our complete tutorial on hand-eye calibration, we encourage you to do so. At a minimum, before acquiring the dataset, review the hand-eye calibration process to learn about the required robot poses and how to capture good-quality Zivid checkerboard point clouds. It is also very helpful to read the cautions and recommendations.
The dataset should consist of 10 to 20 pairs of Zivid checkerboard point clouds in .zdf file format and corresponding robot poses in .yaml file format. The naming convention is:
- Point clouds: img01.zdf, img02.zdf, img03.zdf, ...
- Robot poses: pos01.yaml, pos02.yaml, pos03.yaml, ...
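As a small sketch, the naming convention above can be generated programmatically, for example when saving captures and poses from a script (the function name is illustrative, not part of the Zivid tooling):

```python
# Sketch: generate the expected file names for an N-pair dataset,
# following the img01.zdf / pos01.yaml naming convention described above.
def dataset_filenames(n_pairs):
    clouds = [f"img{i:02d}.zdf" for i in range(1, n_pairs + 1)]
    poses = [f"pos{i:02d}.yaml" for i in range(1, n_pairs + 1)]
    return clouds, poses

clouds, poses = dataset_filenames(3)
print(clouds)  # -> ['img01.zdf', 'img02.zdf', 'img03.zdf']
print(poses)   # -> ['pos01.yaml', 'pos02.yaml', 'pos03.yaml']
```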
Here's an example of a robot pose for download - pos01.yaml. To learn how to write/read files in .yaml format, check out the OpenCV YAML file storage class. The dataset folder should look similar to this:
2. Run the hand-eye calibration CLI tool
Launch the Command Prompt by pressing Win + R keys on the keyboard, then type cmd and press Enter.
Navigate to the folder where you installed Zivid software:
The inputs and outputs of the ZividExperimentalHandEyeCalibration.exe CLI tool can be displayed by running the following command:
To run the ZividExperimentalHandEyeCalibration.exe CLI tool, you must specify the type of calibration (eye-in-hand or eye-to-hand) and the path to the directory that contains the dataset (.zdf files and .yaml robot poses). It is also handy to specify where to save the resulting hand-eye transform and residuals; see the example below:
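If you want to script the invocation, a sketch like the following can build the command line from Python. The flag names used here (--eih/--eth for the calibration type, -d for the dataset directory, --tf and --rf for the output files) are assumptions; confirm them against the tool's own help output on your installation:

```python
# Sketch: build the CLI command for the hand-eye calibration tool.
# Flag names are assumptions -- verify against the tool's help output.
def build_calibration_command(dataset_dir, transform_out, residuals_out,
                              eye_in_hand=True):
    return [
        "ZividExperimentalHandEyeCalibration.exe",
        "--eih" if eye_in_hand else "--eth",  # calibration type
        "-d", dataset_dir,                    # dataset directory
        "--tf", transform_out,                # resulting hand-eye transform
        "--rf", residuals_out,                # per-pose residuals
    ]

cmd = build_calibration_command(r"C:\dataset",
                                "eyeInHandTransform.yaml",
                                "residuals.yaml")
# import subprocess; subprocess.run(cmd, check=True)  # uncomment to run
```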
During execution, the algorithm reports whether it is able to detect the checkerboard in each point cloud ("OK") or not ("FAILED"). After the detection phase, it outputs the resulting hand-eye transform (a 4x4 homogeneous transformation matrix). Lastly, it outputs the hand-eye calibration residuals for every pose; see the example output below.
The resulting homogeneous transformation matrix (eyeInHandTransform.yaml) can then be used to transform the picking point coordinates or the entire point cloud from the camera frame to the robot base frame. To do this, check out how to use the result of Hand-eye calibration.
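As a minimal sketch, applying the resulting homogeneous transformation matrix to a picking point is one matrix-vector product. The transform values below are made up for illustration (a 90-degree rotation about z plus a translation); load the real matrix from eyeInHandTransform.yaml instead:

```python
import numpy as np

# Transform a single point from the camera frame using a 4x4
# homogeneous transformation matrix.
def transform_point(transform_4x4, point_xyz):
    p = np.append(np.asarray(point_xyz, dtype=float), 1.0)  # homogeneous
    return (transform_4x4 @ p)[:3]

# Hypothetical transform: 90-degree rotation about z plus a translation
# (units in mm). In practice, read this matrix from eyeInHandTransform.yaml.
T = np.array([
    [0.0, -1.0, 0.0, 100.0],
    [1.0,  0.0, 0.0,  50.0],
    [0.0,  0.0, 1.0, 200.0],
    [0.0,  0.0, 0.0,   1.0],
])

print(transform_point(T, [10.0, 20.0, 30.0]))  # -> [ 80.  60. 230.]
```

The same multiplication applied to every point transforms the entire point cloud; note that for eye-in-hand calibration the current robot pose is also part of the chain from camera frame to robot base frame.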