UR5e robot + Python: generate dataset and perform hand-eye calibration

 

This tutorial is built and tested using Universal Robots UR5e.

Introduction

Precise hand-eye calibration requires a good dataset, but generating this dataset is not trivial. In this tutorial, we first set up communication between a Zivid camera and a Universal Robots (UR) UR5e to extract a dataset consisting of 3D images and robot poses. Then, we use this dataset to perform hand-eye calibration. The sample works for both eye-in-hand and eye-to-hand.

 

If you are not familiar with hand-eye calibration, we recommend going through Hand-eye calibration.

Requirements

  1. A UR robot with a standard TCP/IP connection.

  2. Python set up and Zivid-Python installed.

  3. Zivid Checkerboard for calibration.

 

Preparing UR5e robot

To generate a dataset, we want to follow the steps explained in 6. Hand-eye calibration process.

Clone zivid-python-samples from Github, and install the runtime requirements from https://github.com/zivid/zivid-python-samples/tree/master/source/applications/advanced/hand_eye_calibration/ur_hand_eye_calibration.

pip install -r requirements.txt

This installs all third-party libraries, including the client for the UR RTDE (Real-Time Data Exchange) communication protocol.

 

The three files we need for this sample are:

  • universal_robots_communication_file.xml, configuration file with I/O registers.

  • universal_robots_robot_motion_script.urp, robot motion script created and run in UR controller.

  • universal_robots_perform_hand_eye_calibration.py, Python script for capturing images and poses, saving them, and performing hand-eye calibration.

 

The robot motion script is where we set all robot postures used in the calibration. The Python script captures images, reads robot poses, and performs hand-eye calibration. Communication between the robot and the computer is done through the UR RTDE (Real-Time Data Exchange) protocol. An overview of the communication flow is illustrated below.
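The robot reports its TCP pose (actual_TCP_pose) as a 6D vector: XYZ position followed by an axis-angle rotation vector. Hand-eye calibration, on the other hand, expects each pose as a 4x4 homogeneous matrix. The conversion can be sketched as follows (an illustration, not the sample's exact code; assumes NumPy and SciPy are available):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_vector_to_matrix(pose: np.ndarray) -> np.ndarray:
    """Convert a UR 6D TCP pose [x, y, z, rx, ry, rz] to a 4x4 matrix.

    The last three elements are an axis-angle rotation vector whose
    norm is the rotation angle in radians.
    """
    transform = np.eye(4)
    transform[:3, :3] = Rotation.from_rotvec(pose[3:]).as_matrix()
    transform[:3, 3] = pose[:3]
    return transform

# Example: a pure translation (zero rotation vector)
matrix = pose_vector_to_matrix(np.array([0.1, 0.2, 0.3, 0.0, 0.0, 0.0]))
```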

 

 

Small adjustments must be made to the UR robot motion script before we can generate a dataset.

Modify universal_robots_robot_motion_script.urp robot motion script

We only need to modify the UR robot motion script. First, save universal_robots_robot_motion_script.urp to a USB drive and load the script on your UR controller. Your controller interface should look something like this:

 

Here, capture_hand_eye is a subprogram that triggers the camera to capture one image for every robot pose, while image_count, start_capture, ready_to_capture, and finish_capture are I/O variables used to communicate with the Python script.

  • output_int_register_24 = image_count

Counts the number of images and poses acquired.

  • output_bit_register_64 = start_capture

Bool that triggers the camera to capture an image.

  • input_bit_register_64 = finish_capture

Bool that triggers the UR robot to move to the next position.

  • input_bit_register_65 = ready_to_capture

Bool that tells the UR robot that the camera is ready to be used.
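The handshake these registers implement can be sketched in plain Python. This is a simulation of the protocol logic only, not actual RTDE calls; the register names follow the list above:

```python
def run_handshake(num_poses: int) -> list:
    """Simulate the robot/PC register handshake for num_poses waypoints."""
    captures = []
    registers = {
        "image_count": 0,        # output_int_register_24, written by robot
        "start_capture": False,  # output_bit_register_64, written by robot
        "finish_capture": False, # input_bit_register_64, written by PC
        "ready_to_capture": True # input_bit_register_65, written by PC
    }

    for pose_index in range(num_poses):
        # Robot: reached a waypoint, request a capture.
        registers["image_count"] = pose_index + 1
        registers["start_capture"] = True

        # PC: sees start_capture and a ready camera, captures, then
        # signals finish_capture.
        if registers["start_capture"] and registers["ready_to_capture"]:
            captures.append(registers["image_count"])
            registers["start_capture"] = False
            registers["finish_capture"] = True

        # Robot: sees finish_capture, clears it and moves to the next waypoint.
        if registers["finish_capture"]:
            registers["finish_capture"] = False

    return captures

capture_log = run_handshake(3)
```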

 

The universal_robots_communication_file.xml file states which registers we are reading from and writing to. If we change a register in the Python script, we must also change the same register in the configuration file.

<?xml version="1.0"?>
<rtde_config>
    <recipe key="out">
        <field name="actual_TCP_pose" type="VECTOR6D"/>
        <field name="output_int_register_24" type="INT32"/>
        <field name="output_bit_register_64" type="BOOL"/>
    </recipe>
    <recipe key="in">
        <field name="input_bit_register_64" type="BOOL"/>
        <field name="input_bit_register_65" type="BOOL"/>
    </recipe>
</rtde_config>
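A register mismatch between the configuration file and the script can be caught early by reading the recipes back out of the XML. A small illustrative check using only the standard library (the embedded configuration mirrors the file above; the helper name is ours):

```python
import xml.etree.ElementTree as ET

CONFIG = """<?xml version="1.0"?>
<rtde_config>
    <recipe key="out">
        <field name="actual_TCP_pose" type="VECTOR6D"/>
        <field name="output_int_register_24" type="INT32"/>
        <field name="output_bit_register_64" type="BOOL"/>
    </recipe>
    <recipe key="in">
        <field name="input_bit_register_64" type="BOOL"/>
        <field name="input_bit_register_65" type="BOOL"/>
    </recipe>
</rtde_config>"""

def recipe_fields(xml_text: str, key: str) -> list:
    """Return the field names of the recipe with the given key."""
    root = ET.fromstring(xml_text)
    recipe = root.find(f"recipe[@key='{key}']")
    return [field.get("name") for field in recipe.findall("field")]

output_fields = recipe_fields(CONFIG, "out")
input_fields = recipe_fields(CONFIG, "in")
```

In practice one would read universal_robots_communication_file.xml from disk and compare the returned names against the registers the script uses.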

 

Every waypoint, robot_pose_X, must be modified to fit your setup. The easiest way is to put the robot in freedrive and move it by hand to the desired postures. Make sure that the Zivid checkerboard is visible to the camera at each of these postures. If the camera cannot properly expose the checkerboard, modify the camera settings in the function _set_camera_settings.

def _set_camera_settings(cam: zivid.Camera):
    """Set camera settings

    Args:
        cam: Zivid camera
    """
    with cam.update_settings() as updater:
        updater.settings.iris = 20
        updater.settings.exposure_time = datetime.timedelta(microseconds=10000)
        updater.settings.filters.reflection.enabled = True
        updater.settings.brightness = 1.0
        updater.settings.filters.gaussian.enabled = True
        updater.settings.gain = 1

 

5. How to get good quality data on Zivid calibration checkerboard explains how to tune the parameters for optimal 3D image quality.

If the captured images of the Zivid checkerboard are badly exposed, hand-eye calibration will not work!

 

You can add or remove waypoints; just remember to call capture_hand_eye after every waypoint. We recommend 10-20 waypoints to get an accurate hand-eye transformation.

 

Before running the Python script, it is wise to manually step through all the robot positions and verify that the robot does not end up in a singularity while moving from one position to the next.

If you do eye-in-hand calibration, ensure that the Zivid camera does not collide with the robot.

Run python script

Now that the robot script is modified to fit your scene, we are ready to generate a dataset and perform calibration!

 

First, run universal_robots_robot_motion_script.urp from the UR controller. When the program starts, it waits for the Python script to set the ready_to_capture variable to True. Then, run the universal_robots_perform_hand_eye_calibration.py script from the command line:

python universal_robots_perform_hand_eye_calibration.py --<mode> --ip <IP>

where <mode> is either eih (eye-in-hand) or eth (eye-to-hand), and <IP> is the robot's IP address.
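The command-line interface can be sketched with argparse. This is a simplified illustration of the interface described above, not the sample's exact code:

```python
import argparse

def _parse_args(argv=None):
    """Parse the calibration mode and robot IP from the command line."""
    parser = argparse.ArgumentParser(description="UR5e hand-eye calibration")
    mode = parser.add_mutually_exclusive_group(required=True)
    mode.add_argument("--eih", action="store_true", help="eye-in-hand calibration")
    mode.add_argument("--eth", action="store_true", help="eye-to-hand calibration")
    parser.add_argument("--ip", required=True, help="robot IP address")
    return parser.parse_args(argv)

# Example invocation, equivalent to: ... --eih --ip 192.168.0.123
args = _parse_args(["--eih", "--ip", "192.168.0.123"])
```

The mutually exclusive group ensures that exactly one of --eih and --eth is given.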

 

The required inputs are displayed by typing the following command in the command prompt.

python universal_robots_perform_hand_eye_calibration.py -h

 

When the script finishes, it outputs the directory where the data is saved, the 4x4 calibration matrix, and the residuals in rotation and translation for each robot pose. A more detailed explanation of how Zivid calculates residuals is found in hand-eye calibration residuals.

 

Once we have the transformation matrix, we can use it to convert a 3D point from the camera frame to the robot base frame. Details about how to do this are described in 9. Using the result of Hand-eye calibration.
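The conversion itself is a matrix multiplication in homogeneous coordinates. A minimal sketch with NumPy; the transform below is a made-up example (a 90 degree rotation about Z plus a translation), not a real calibration result:

```python
import numpy as np

def transform_point(transform: np.ndarray, point: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to a 3D point."""
    homogeneous = np.append(point, 1.0)  # [x, y, z, 1]
    return (transform @ homogeneous)[:3]

# Hypothetical hand-eye transform (units in mm).
hand_eye_transform = np.array([
    [0.0, -1.0, 0.0, 100.0],
    [1.0,  0.0, 0.0,  50.0],
    [0.0,  0.0, 1.0, 200.0],
    [0.0,  0.0, 0.0,   1.0],
])
point_in_camera = np.array([10.0, 20.0, 30.0])
point_in_base = transform_point(hand_eye_transform, point_in_camera)
```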

Python and C++ samples of transforming one 3D point are available at our GitHub repo.