This tutorial aims to describe the problem that hand-eye calibration solves, as well as to introduce the robot poses and coordinate systems that are required for hand-eye calibration. The problem is the same for eye-to-hand systems and eye-in-hand systems. Therefore, we first provide a detailed description for the eye-to-hand configuration and then point out the differences for the eye-in-hand configuration. If you are not familiar with (robot) poses and coordinate systems, check out Position, Orientation and Coordinate Transformations.
Eye-to-hand
How can a robot pick an object?  
Let’s start with a robot that doesn’t involve a camera. Its two main coordinate systems are:
- the robot base coordinate system (base frame)
- the end-effector/gripper coordinate system (flange or tool frame)

To be able to pick an object, the robot controller needs to know the object's pose (position and orientation) relative to the robot base frame. This information, along with knowledge of the robot’s geometry, is sufficient to compute the joint angles that will move the end-effector/gripper towards the object.
Now, let’s assume that the pose of the object relative to the robot is unknown. That’s where Zivid 3D vision comes into play.  
Zivid point clouds are given relative to the Zivid camera's coordinate system. The origin of this coordinate system is fixed at the middle of the Zivid imager lens (the internal 2D camera). Machine vision software can run detection and localization algorithms on this collection of data points to determine the pose of the object in the Zivid camera’s coordinate system ($H^{CAM}_{OBJ}$).
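To make the notation concrete, the sketch below (Python with NumPy, using made-up numbers) shows how such an object pose can be represented as a 4×4 homogeneous transformation matrix combining a rotation and a translation. The values are purely illustrative and are not the output of any real detection pipeline.

```python
import numpy as np

# Hypothetical object pose in the camera frame (H_CAM_OBJ):
# a rotation of 30 degrees about the camera's Z axis and a
# translation of (50, -20, 600) mm in front of the camera.
theta = np.deg2rad(30.0)
rotation = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
translation = np.array([50.0, -20.0, 600.0])  # millimeters

# Assemble the 4x4 homogeneous transformation matrix.
H_cam_obj = np.eye(4)
H_cam_obj[:3, :3] = rotation
H_cam_obj[:3, 3] = translation
print(H_cam_obj)
```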
 
The Zivid camera can now see the object in its field of view, but only relative to its own coordinate system. To enable the robot to pick the object, it is necessary to transform the object's coordinates from the camera coordinate system to the robot base coordinate system.
The coordinate transformation that enables this is the result of hand-eye calibration. For eye-to-hand systems, it is the pose of the camera relative to the robot’s base ($H^{ROB}_{CAM}$).
Once the pose circle is closed, it is possible to calculate any one pose from the other poses in the circle. In this case, the pose of the object relative to the robot is obtained by post-multiplying the pose of the camera relative to the robot with the pose of the object relative to the camera:

$H^{ROB}_{OBJ} = H^{ROB}_{CAM} \cdot H^{CAM}_{OBJ}$
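As a minimal sketch of this post-multiplication (Python with NumPy), the object pose in the robot base frame is a single matrix product of two 4×4 homogeneous transforms. Both matrices below are hypothetical placeholders, not real calibration or detection results.

```python
import numpy as np

# Hypothetical eye-to-hand calibration result: pose of the camera
# relative to the robot base (H_ROB_CAM). Values are made up.
H_rob_cam = np.array([
    [0.0, -1.0, 0.0, 1000.0],
    [1.0,  0.0, 0.0,  200.0],
    [0.0,  0.0, 1.0,  800.0],
    [0.0,  0.0, 0.0,    1.0],
])

# Hypothetical object pose relative to the camera (H_CAM_OBJ),
# e.g. as reported by a detection/localization algorithm.
H_cam_obj = np.array([
    [1.0, 0.0, 0.0,  50.0],
    [0.0, 1.0, 0.0, -20.0],
    [0.0, 0.0, 1.0, 600.0],
    [0.0, 0.0, 0.0,   1.0],
])

# Post-multiplication gives the object pose in the robot base frame:
# H_ROB_OBJ = H_ROB_CAM * H_CAM_OBJ
H_rob_obj = H_rob_cam @ H_cam_obj
print(H_rob_obj)
```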

Eye-in-hand
How can a robot pick an object?  
The goal is the same for the eyeinhand configuration. For a robot to pick an object, the object's pose needs to be transformed from the camera’s coordinate system to that of the robot.  
In this case, the transformation is done indirectly, by chaining the following poses:
- The pose of the end-effector relative to the base of the robot ($H^{ROB}_{EE}$), known from the robot controller.
- The pose of the camera relative to the end-effector ($H^{EE}_{CAM}$), which is the result of eye-in-hand calibration.
- The pose of the object relative to the camera ($H^{CAM}_{OBJ}$), determined by the machine vision software.

Multiplying these poses together gives the pose of the object relative to the robot base:

$H^{ROB}_{OBJ} = H^{ROB}_{EE} \cdot H^{EE}_{CAM} \cdot H^{CAM}_{OBJ}$
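The sketch below (Python with NumPy, all values made up) chains these three hypothetical poses to obtain the object pose in the robot base frame for an eye-in-hand setup.

```python
import numpy as np


def pose(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    H = np.eye(4)
    H[:3, :3] = rotation
    H[:3, 3] = translation
    return H


# Hypothetical pose of the end-effector relative to the robot base (H_ROB_EE),
# as reported by the robot controller.
H_rob_ee = pose(np.eye(3), [600.0, 0.0, 900.0])

# Hypothetical eye-in-hand calibration result: pose of the camera
# relative to the end-effector (H_EE_CAM).
H_ee_cam = pose(np.eye(3), [0.0, 60.0, 120.0])

# Hypothetical object pose relative to the camera (H_CAM_OBJ),
# as estimated by the machine vision software.
H_cam_obj = pose(np.eye(3), [50.0, -20.0, 600.0])

# Chain the poses: H_ROB_OBJ = H_ROB_EE * H_EE_CAM * H_CAM_OBJ
H_rob_obj = H_rob_ee @ H_ee_cam @ H_cam_obj
print(H_rob_obj)
```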

Now that we've described the hand-eye calibration problem, let's see the hand-eye calibration solution.