
# 2. Hand-eye calibration problem


Zivid Knowledge Base has been moved!

This site is getting deprecated and will no longer be updated. Please head to our new website to find the latest and greatest in Zivid documentation.

This tutorial describes the problem that hand-eye calibration solves, and introduces the robot poses and coordinate systems required for hand-eye calibration. The problem is the same for eye-to-hand systems and eye-in-hand systems. We therefore first give a detailed description for the eye-to-hand configuration, and then point out the differences for the eye-in-hand configuration. If you are not familiar with (robot) poses and coordinate systems, check out Position, Orientation and Coordinate Transformations.

## Eye-to-hand

How can a robot pick an object? Let's start with a robot that doesn't involve a camera. Its two main coordinate systems are:

- the robot base coordinate system
- the end-effector coordinate system

To be able to pick an object, the robot controller needs to know the object's pose (position and orientation) relative to the robot base frame. This information, together with knowledge of the robot's geometry, is sufficient to compute the joint angles that will move the end-effector/gripper towards the object.

Now, let's assume that the pose of the object relative to the robot is unknown. That's where Zivid 3D vision comes into play. Zivid point clouds are given relative to the Zivid camera's coordinate system. The origin of this coordinate system is fixed at the middle of the Zivid imager lens (internal 2D camera). Machine vision software can run detection and localization algorithms on this collection of data points and determine the pose of the object in the Zivid camera's coordinate system (H^CAM_OBJ). The Zivid camera can now see the object in its field of view, but only relative to its own coordinate system. To enable the robot to pick the object, it is necessary to transform the object's coordinates from the camera coordinate system to the robot base coordinate system.

The coordinate transformation that enables this is the result of hand-eye calibration. For eye-to-hand systems, it is the pose of the camera relative to the robot's base (H^ROB_CAM) that is estimated with hand-eye calibration.

Once the pose circle is closed, it is possible to calculate any pose in the circle from the other poses. In this case, the pose of the object relative to the robot is obtained by post-multiplying the pose of the camera relative to the robot with the pose of the object relative to the camera:

H^ROB_OBJ = H^ROB_CAM · H^CAM_OBJ

## Eye-in-hand

How can a robot pick an object? The goal is the same for the eye-in-hand configuration. For a robot to pick an object, the object's pose needs to be transformed from the camera's coordinate system to that of the robot. In this case, the transformation is done indirectly: the pose of the end-effector relative to the base of the robot (H^ROB_EE) is known, provided by the robot controller, and the pose of the camera relative to the end-effector (H^EE_CAM), which in this case is constant, is estimated from the hand-eye calibration. Chaining these with the object pose from the camera gives:

H^ROB_OBJ = H^ROB_EE · H^EE_CAM · H^CAM_OBJ

Now that we've described the hand-eye calibration problem, let's see the hand-eye calibration solution.
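The pose compositions described above can be sketched numerically with 4×4 homogeneous transforms. This is a minimal illustration only: the numeric poses and the names such as `T_rob_cam` ("camera relative to robot base") are made up for the example and are not Zivid API identifiers.

```python
import numpy as np

def pose(rotation_deg_z, translation):
    """Build a 4x4 homogeneous transform: rotation about z, then a translation."""
    theta = np.radians(rotation_deg_z)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]]
    T[:3, 3] = translation
    return T

# Eye-to-hand: the camera is fixed in the cell; hand-eye calibration
# provides the pose of the camera relative to the robot base.
T_rob_cam = pose(90, [1.0, 0.0, 0.5])   # camera relative to robot base (from calibration)
T_cam_obj = pose(0, [0.2, 0.1, 0.8])    # object detected in the camera frame
T_rob_obj = T_rob_cam @ T_cam_obj       # object relative to robot base

# Eye-in-hand: the camera is mounted on the end-effector; the robot
# controller provides the end-effector pose, and calibration provides
# the (constant) camera pose relative to the end-effector.
T_rob_ee = pose(45, [0.5, 0.3, 0.6])    # from the robot controller
T_ee_cam = pose(0, [0.0, 0.05, 0.1])    # constant, from hand-eye calibration
T_rob_obj_eih = T_rob_ee @ T_ee_cam @ T_cam_obj

print(np.round(T_rob_obj[:3, 3], 3))    # object position in the robot base frame
```

In both configurations the robot ends up with the same kind of answer: the object's pose expressed in its own base frame, which is what the controller needs to plan a pick.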
