AprilTags

Introduction

This tutorial explains how to track objects using the ROS package apriltag_ros. This package is a ROS wrapper for AprilTag, an open-source tag detection library. For tracking, so-called visual fiducial markers are used as landmarks: they are detected in the camera image and their pose is reconstructed in the 3D environment.

  • Lines beginning with $ are terminal commands.

    • To open a new terminal use the shortcut Ctrl + Alt + T.

    • To open a new tab inside an existing terminal use the shortcut Ctrl + Shift + T.

    • To kill a process in a terminal use the shortcut Ctrl + C.

  • Lines beginning with # indicate the syntax of the commands.

  • Code is separated in boxes.

  • Code is case sensitive.

Detecting Landmarks

Start the webcam launch file you created previously, or the turtlebot3 simulation. For landmark detection, the package apriltag_ros can be used. Based on the 2D image data, it computes a 6D transform between the camera and a known marker and publishes the pose of the marker in a specific output frame.

  1. Download the apriltag Gazebo model: Apriltag Gazebo Model

  2. Extract it in ~/.gazebo/models.

$ mkdir ~/src && cd ~/src
$ git clone https://github.com/christianrauch/apriltag && cd apriltag
$ cmake . && sudo make install
$ cd ~/robot_ws/src
$ git clone https://github.com/christianrauch/apriltag_ros
$ git clone https://github.com/christianrauch/apriltag_msgs
$ cd ~/robot_ws
$ colcon build
$ source ~/.bashrc
$ cp ~/src/apriltag/lib/* ~/robot_ws/install/apriltag_ros/lib/
$ gedit ~/turtlebot3_ws/src/turtlebot3/turtlebot3_simulations/turtlebot3_gazebo/models/turtlebot3_waffle/model.sdf
<frame_name>camera_rgb_optical_frame</frame_name> <!--Edit line 408-->
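
Before starting the detector, it is worth checking that the camera topics used later in the launch file (/camera/image_raw and /camera/camera_info) are actually published by your webcam or the simulation. The exact topic names may differ on your system:

$ ros2 topic list | grep camera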

To start apriltag_ros, proceed as follows:

  1. You will find the following launch file in the launch folder of your apriltag_ros package. Edit it there, and don’t forget to rebuild after each change (the build and start commands are shown after Listing 21).

Listing 21 track.launch.py
import launch
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode

# detect only 16h5 tags with id=0
cfg_16h5 = {
    "image_transport": "raw",
    "family": "16h5",
    "size": 0.162,
    "max_hamming": 0,
    "z_up": True,
    "tag_ids": [0],
    "tag_frames": ["marker"],
    "tag_size": [0.162],
}

def generate_launch_description():
    # the AprilTag detector runs as a composable node (component)
    composable_node = ComposableNode(
        node_name='apriltag',
        package='apriltag_ros',
        node_plugin='AprilTagNode',
        # connect the detector to the image and camera info topics of your camera
        remappings=[("/apriltag/image", "/camera/image_raw"),
                    ("/apriltag/camera_info", "/camera/camera_info")],
        parameters=[cfg_16h5])
    # container process that loads and runs the component
    container = ComposableNodeContainer(
        node_name='tag_container',
        node_namespace='apriltag',
        package='rclcpp_components',
        node_executable='component_container',
        composable_node_descriptions=[composable_node],
        output='screen'
    )

    return launch.LaunchDescription([container])
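
After editing, rebuild the workspace so the changed launch file is installed, then start the container with ros2 launch. The file name track.launch.py follows Listing 21; adjust it if your copy of apriltag_ros installs the launch file under a different name:

$ cd ~/robot_ws
$ colcon build
$ source ~/.bashrc
$ ros2 launch apriltag_ros track.launch.py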

Parameter        Description/Meaning
image_transport  image format: "raw" or "compressed" (default: raw)
family           tag family name: 16h5, 25h9, 36h11 (we use 16h5)
size             tag edge size in meters (default: 2.0)
max_hamming      maximum allowed hamming distance (corrected bits)
z_up             rotate about the x-axis to have Z pointing upwards (default: false)
tag_ids          tag IDs for which to publish a transform
tag_frames       optional frame names
tag_size         optional tag-specific edge size

For more information on the parameters, visit: https://github.com/christianrauch/apriltag_ros/blob/master/README.md.
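
As an illustration of these parameters, a configuration for a single 36h11 tag with ID 3 and an edge length of 10 cm could look like the following sketch (the values and the frame name marker_3 are only placeholders):

# illustrative example: detect a single 36h11 tag with id=3
cfg_36h11 = {
    "image_transport": "raw",    # use the uncompressed image topic
    "family": "36h11",           # tag family printed on the marker
    "size": 0.10,                # default tag edge size in meters
    "max_hamming": 0,            # accept no corrected bits
    "z_up": True,                # rotate so that Z points upwards
    "tag_ids": [3],              # only publish a transform for this ID
    "tag_frames": ["marker_3"],  # TF frame name used for this tag
    "tag_size": [0.10],          # per-tag edge size in meters
}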

Adjust your launch files to get your tracking system running and visualize the results in RViz. Check the output! Figure 12 shows mobile platforms locating each other with printed Apriltags mounted on top of them.
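
One way to check the output from the terminal is to query the transform between the camera frame and the marker frame directly. The frame names below are only examples and depend on your setup (see tag_frames in Listing 21 and the frame_name edit in model.sdf):

$ ros2 run tf2_ros tf2_echo camera_rgb_optical_frame marker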

Hint

Your webcam must be running, and you have to add a TF display in RViz to visualize the pose of the marker.


Figure 12 Different Apriltag markers used in real-life conditions

TF Listener

A tf listener has to be included in a ROS node. It tracks the states of all frames of the current tf tree, including their timestamps. A listener makes it possible to query the transformation between any two frames of the tf tree at any time. Any pair of frames can be queried: the frames do not have to be connected directly, and even child-to-parent lookups are possible.

Create a ROS node inside your package my_first_package or create a new package and name the node tf_listener.py.

Listing 22 listener.py
#!/usr/bin/env python3

import rclpy
from rclpy.time import Time
import tf2_ros

def timercallback():
    global tfBuffer
    try:
        # look up the latest available transform between the 'base_link' and 'marker' frames
        # (an empty Time() requests the latest transform in the buffer)
        trans = tfBuffer.lookup_transform("base_link", "marker", Time())
        # lookup_transform() returns a TransformStamped() message
        print(trans) # print the transform
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException, tf2_ros.ExtrapolationException):
        # raised on extrapolation, when there is no connection between
        # the frames or when one of the frames does not exist
        print("lookup failed")

def main():
    global tfBuffer
    rclpy.init() # init ros client library
    nh = rclpy.create_node('tf2_listener') # create a node with name 'tf2_listener'
    tfBuffer = tf2_ros.Buffer() # create a TF2 buffer which saves the TFs for given cache_time
    # create a TF2 listener which connects the buffer with the node;
    # keep a reference so it is not garbage collected
    listener = tf2_ros.TransformListener(tfBuffer, nh)
    nh.create_timer(0.1, timercallback) # call timercallback every 100ms
    try:
        rclpy.spin(nh) # spin node until exception
    except KeyboardInterrupt:
        nh.destroy_node() # destroy node
        rclpy.shutdown() # shutdown ros client library

if __name__ == '__main__':
    main()
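
If you have registered the node as an entry point in the setup.py of my_first_package, as for your previous nodes, you can rebuild the workspace and run it with ros2 run (the executable name depends on the entry point you defined):

$ cd ~/robot_ws
$ colcon build
$ source ~/.bashrc
$ ros2 run my_first_package tf_listener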

Task

Adapt the marker frame to the specific marker you are using!

Task

Check different pairs of frames and compare the transformations.
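
As a starting point for this task, the lookup in the timer callback can be repeated for several frame pairs. The frame names below are only examples and must exist in your tf tree:

# sketch: compare several frame pairs inside timercallback (example frame names)
pairs = [("base_link", "marker"),
         ("marker", "base_link"),                  # the inverse lookup also works
         ("camera_rgb_optical_frame", "marker")]   # frames do not need to be directly connected
for target, source in pairs:
    try:
        trans = tfBuffer.lookup_transform(target, source, Time())
        # translation = origin of the source frame expressed in the target frame
        print(target, source, trans.transform.translation)
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException, tf2_ros.ExtrapolationException):
        print("lookup failed for", target, source)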