
Visualizing camera calibration results

Sometimes, when the calibration results just don't seem to make any sense and the pure numbers help you even less in understanding your setup, you need a visualization tool. Look no further!

This article and its accompanying repository take any camcalib result, containing any number of sensors with intrinsic and extrinsic data, and draw it onto your screen.

Download our examples source code

You don’t need to copy and paste the code snippets we show here and puzzle them together. Just check out the code for the example from our examples repository and follow the instructions in the accompanying README to get up and running with ease.


Before we dig into the code, let's review the general contents of a camcalib calibration YAML file. The root level of the YAML file contains the keyword sensors, indicating that named sensors with calibration data will follow. On the level below sensors, each sensor is named. Camcalib only knows about sensors' intrinsic and extrinsic calibration and puts these in the tree for each sensor below intrinsics and extrinsics. Note that the sensor model type is stored in sensors.sensor_name.intrinsics.type. Cameras will have the type Pinhole, PinholeRadTan, KannalaBrandt, or DoubleSphere, whereas inertial sensors are of type IMU.

sensors:
  sensor_name_0:
    extrinsics:
      axis_angle: ...
      translation: ...
    intrinsics:
      parameters: ...
      type: ...
  sensor_name_1:
    extrinsics:
      axis_angle: ...
      translation: ...
    intrinsics:
      parameters: ...
      type: ...
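If you want to poke at this structure programmatically, PyYAML parses it into plain dictionaries. The snippet below is only a sketch: the inline YAML is a trimmed, made-up stand-in for a real calibration_result.yaml, and real parameter layouts may differ.

```python
import yaml

# A trimmed, hypothetical stand-in for a real calibration_result.yaml.
example_yaml = """
sensors:
  cam0:
    extrinsics:
      axis_angle: [0.0, 0.0, 0.0]
      translation: [0.0, 0.0, 0.0]
    intrinsics:
      parameters: {fx: 1000.0, fy: 1000.0, cx: 640.0, cy: 512.0}
      type: Pinhole
  imu1:
    extrinsics:
      axis_angle: [0.0, 0.0, 0.0]
      translation: [0.1, 0.0, 0.0]
    intrinsics:
      parameters: {}
      type: IMU
"""

calib = yaml.safe_load(example_yaml)

# Map each sensor name to its model type, exactly as stored under
# sensors.<name>.intrinsics.type.
types = {name: data["intrinsics"]["type"]
         for name, data in calib["sensors"].items()}
print(types)
```

Loading a file instead of a string works the same way: open the file and pass the handle to yaml.safe_load.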

The full YAML file with multiple cameras and IMUs that we use in this article is included in the examples repository. You can use your own file instead if you prefer.


This example will visualize the following data:

  • The frustum of each camera's pinhole representation.

  • The image center coordinates.

  • The extrinsic position and orientation of every sensor.

The following image illustrates all the details of a pinhole camera model representation we will be visualizing. The focal length, image width, image height, and image center coordinates are all stated in pixels by camcalib. Consequently, all we need is a conversion factor to render a true or scaled representation of the camera.

Annotated camera visualization.

Note that our rendered representation above differs slightly from the abstract pinhole camera representation typically drawn in the computer vision literature.

The examples file structure

Now let's dig into the examples codebase. The folder structure of the example looks as follows:

├── camcalib_tutorial_data
│   ├── calibration_result.yaml
├── modules
│   ├── pose.py
│   ├── calib_viz_utils.py
│   ├── camcalib_loader.py

  • camcalib_tutorial_data contains the calibration result file that we want to visualize, calibration_result.yaml. You can use your own data here instead if you like.

  • modules contains essential modules and helper classes.

  • pose.py provides a minimal container for 6D pose transforms. We use this to efficiently handle combined rotation and translation operations on 3D data.

  • calib_viz_utils.py helps us visualize the intrinsic and extrinsic calibration alongside our multiview point clouds. Consider this a simple helper utility for now. We will dive into its details in the following sections.

  • camcalib_loader.py will aid us with loading the YAML file. It also constructs undistort-rectify maps for all camera pairings, but we will not need that feature here.

  • The main script, when run, launches our example. Check out the accompanying README to see how to set everything up and run the example.

Step 1: Loading the YAML file

This is where we make use of the camcalib_loader module.

# 1. import CamcalibLoader module.
from modules.camcalib_loader import CamcalibLoader

# 2. specify calib file.
calibration_file_name = "camcalib_tutorial_data/calibration_result.yaml"
# 3. specify camera pairs as empty list. This prevents the module
#    from generating any undistort-rectify maps. We will not need them.
camera_pairs = []

# 4. load the calib data.
calibration = CamcalibLoader(calibration_file_name, camera_pairs)

With that, the calibration data is loaded.

To make use of the calibration object we created, let’s discuss its member variables:

  • .cameras is a list of all camera names contained within the YAML file. If the YAML file contains IMUs and cameras, this list will only contain the names of the cameras.

  • .sensors is a list of all sensor names contained within the YAML file. If the YAML file contains IMUs and cameras, this list will contain the names of all cameras and IMUs.

  • .camera_pairs either contains

      • the list of camera pairs we specified in camera_pairs, or

      • if we specify camera_pairs=None, a list of all unique pairings of the cameras listed in the .cameras member variable.

  • .camera_poses contains the extrinsic pose for each camera listed in .cameras.

  • .sensor_poses contains the extrinsic pose for each camera and IMU listed in .sensors.

  • .camera_pair_undistort_rectify_maps is a dictionary that contains, for each pair in the member variable .camera_pairs, the corresponding undistort-rectify maps and rectification data.
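As an aside, the "all unique pairings" behavior of .camera_pairs can be sketched in a few lines with itertools.combinations; the camera names below are hypothetical.

```python
from itertools import combinations

# Hypothetical camera names, as they might appear in .cameras.
cameras = ["cam0", "cam1", "cam2"]

# All unique, order-independent pairings: (cam0, cam1) is generated,
# (cam1, cam0) is not.
camera_pairs = list(combinations(cameras, 2))
print(camera_pairs)
```

For n cameras this yields n * (n - 1) / 2 pairs, so three pairs for the three cameras above.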

Note the difference compared to the module we used in our previous example. Here we added the member variables .sensors and .sensor_poses while preserving the members .cameras and .camera_poses. The reason for this is to preserve compatibility with our previous examples while adding the ability to handle other sensor types as well. This may change in other examples but it's convenient for us now.

Step 2: generate geometry for each camera

To produce the following 3D visualization of a camera, we simply construct a set of lines in open3d.

The structure open3d provides is open3d.geometry.LineSet(), which requires the developer to set three members:

  • .points is a set of 3D points that specify each vertex of our desired geometry.

  • .lines is a list of index pairs that tells open3d which vertices to connect to make a line.

  • .colors is a list of RGB colors, one for each line in .lines, with each color channel ranging from 0 to 1.

Before we can construct the vertices, we need to prepare our camera parameters and scale them to something useful. We assume a metric space that we are rendering our geometry into. So, arbitrarily, a length of 1 in open3d means a length of 1 meter to us. If we are to render a camera with a focal length of 1000 pixels and an image size of 1280x1024, the rendering would be impractically large. For this, we will need a scale parameter that we specify later on.

# Import open3d
import open3d as o3d

# Example intrinsic parameters in pixels: a focal length of 1000 pixels
# and an image size of 1280x1024 with a slightly off-center image center.
# In practice, these come from the loaded calibration data.
_f, _w, _h = 1000.0, 1280.0, 1024.0
_cx, _cy = 650.0, 500.0

# Specify our scale-free camera parameters:
# the variables _f, _w, _h, _cx, and _cy are the intrinsic parameters,
# the corresponding f, w, h, cx and cy are normalized by _f so we
# can scale them later.
f = 1
w = _w/_f
h = _h/_f
cx = _cx/_f
cy = _cy/_f

# Parameters to draw the image center vector
offset_cx = cx - w/2.0
offset_cy = cy - h/2.0

With the scale-free parameters defined, we can start creating our vertices and lines.

points = [[        0,      0,     0],
          [offset_cx,offset_cy,   f],
          [-0.5 * w,-0.5 * h,     f],
          [ 0.5 * w,-0.5 * h,     f],
          [ 0.5 * w, 0.5 * h,     f],
          [-0.5 * w, 0.5 * h,     f]]
lines = [[0,1],[2,3],[3,4],[4,5],[5,2],[0,2],[0,3],[0,4],[0,5],[2,4],[3,5]]

Note how the lines variable only states which vertices are connected to each other. We do not repeat coordinates; there is no need to.
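As a sketch of how such index pairs resolve into renderable segments, NumPy fancy indexing gathers the endpoints in one step. The unit frustum below (f = 1, w = h = 1, image center on the optical axis) is a simplified stand-in for the real camera data.

```python
import numpy as np

# A unit frustum: f = 1, w = h = 1, image center on the optical axis,
# so the center-vector endpoint sits at [0, 0, f].
f, w, h = 1.0, 1.0, 1.0
points = np.array([[ 0.0,       0.0,      0.0],
                   [ 0.0,       0.0,      f  ],
                   [-0.5 * w,  -0.5 * h,  f  ],
                   [ 0.5 * w,  -0.5 * h,  f  ],
                   [ 0.5 * w,   0.5 * h,  f  ],
                   [-0.5 * w,   0.5 * h,  f  ]])
lines = np.array([[0, 1], [2, 3], [3, 4], [4, 5], [5, 2],
                  [0, 2], [0, 3], [0, 4], [0, 5], [2, 4], [3, 5]])

# Fancy indexing resolves every index pair into a pair of 3D endpoints,
# one (2, 3) slab per line segment.
segments = points[lines]
print(segments.shape)  # (11, 2, 3)
```

Eleven lines from only six stored vertices, which is exactly the economy the lines list buys us.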

Now that our geometry data is prepared, it's time to apply scale and pose transforms.

# numpy is needed for the matrix math
import numpy as np

# rescale cam symbol to visible size (size is discussed below)
points = np.array(points) * size

# apply camera pose transform (R and T are explained below)
points = (R @ points.T).T + T

Because we set f=1 in the beginning and expressed the parameters normalized by _f, we can use the variable size to define how large we want the cameras to be rendered. Remember, we don't want them to be impractically large or too small. A good value for size is 1/10th to 1/3rd of the typical baseline of your setup.

The variables R and T we use above are Pose.r and Pose.t, taken from the inverse of the extrinsic pose calibration data, calibration.sensor_poses[sensor_name].I. We need the inverse because the extrinsic pose specified by camcalib is the transform that maps any point from the world coordinate frame into the camera or sensor coordinate frame. The geometry we specified here is expressed in the local camera or sensor coordinate frame, but we intend to visualize all geometry in one consistent world coordinate frame, so we apply the inverse of the extrinsic pose to our geometry.
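The inversion itself follows the usual rigid-transform rule: if a pose maps world points via R p + t, its inverse uses R transposed and -Rᵀ t. A minimal NumPy sketch with a hypothetical pose (real values would come from the calibration YAML):

```python
import numpy as np

# Hypothetical extrinsic pose (world -> sensor): a 90 degree rotation
# about z plus a small translation.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.1, 0.0, 0.0])

# Inverse of a rigid transform: R_inv = R.T and t_inv = -R.T @ t.
R_inv = R.T
t_inv = -R.T @ t

# The sensor's own origin, mapped through the inverse, gives the
# sensor's position in the world frame:
p_world = R_inv @ np.zeros(3) + t_inv

# Sanity check: pushing that point back through the forward transform
# lands on the sensor origin again.
roundtrip = R @ p_world + t
```

This is exactly what .I does for us on the Pose objects, so we never have to write the transposes out by hand.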

Finally, we create the open3d LineSet object:

_color=[0,1,0] # Green
colors = [_color for i in range(len(lines))]
line_set = o3d.geometry.LineSet()
line_set.points = o3d.utility.Vector3dVector(points)
line_set.lines = o3d.utility.Vector2iVector(lines)
line_set.colors = o3d.utility.Vector3dVector(colors)

Now, if you render line_set using open3d, you should see a visualization of a pinhole camera with exactly the orientation and position specified in the YAML file. For your convenience, we put all of this into the function construct_camera(size, intrinsics, extrinsic_pose, color=[0,1,0]) that you can use as follows:

from modules.calib_viz_utils import *

# ... extract intrinsics and P_world_sensor from loaded calibration data
construct_camera(size=0.05, intrinsics=intrinsics, extrinsic_pose=P_world_sensor)

Step 3: generate geometry for each IMU

For the IMU geometry, we simply make use of a small coordinate frame mesh to indicate the orientation of the accelerometer's x-, y-, and z-axes. So all we need to do is create a coordinate frame object and apply a pose transform:

import numpy as np

mesh = o3d.geometry.TriangleMesh.create_coordinate_frame(size=size, origin=[0,0,0])

# construct the 4x4 pose transform matrix required by open3d
P = np.eye(4)
P[0:3, 3] = extrinsic_pose.t
P[0:3, 0:3] = extrinsic_pose.r

# apply the pose to the mesh
mesh.transform(P)

There are two minor things of note here:

  • we use size again to scale the coordinate frame mesh to something sensible that also matches the size of the cameras.

  • we use .r and .t of the Pose module to construct a 4x4 pose transform matrix as required by open3d's mesh.transform() function.

For your convenience, we put all of this into the function construct_imu(size, extrinsic_pose) that you can use as follows:

from modules.calib_viz_utils import *

# ... extract P_world_sensor from loaded calibration data
construct_imu(size=0.02, extrinsic_pose=P_world_sensor)

Step 4: draw the geometry

Now that we have defined our helper functions, we can loop over all our sensors, generate the appropriate geometry and throw the geometry list at our renderer.

scene_geometry = [o3d.geometry.TriangleMesh.create_coordinate_frame(
                                              size=0.1, origin=[0, 0, 0])]
for sensor_name in calibration.sensors:
    try:
        P_world_sensor = calibration.sensor_poses[sensor_name].I

        if sensor_name in calibration.cameras:
            # Fetch intrinsic parameters so we can properly render the 3D
            # representation of the cameras.
            cam_calib_data = calibration.calibration_parameters[sensor_name]
            intrinsics = cam_calib_data["intrinsics"]["parameters"]
            camera_model = cam_calib_data["intrinsics"]["type"]

            # Generate the camera geometry (frustum, image center vector,
            # and camera name)
            scene_geometry.append(
                construct_camera(0.05, intrinsics, P_world_sensor))
        else:
            # Generate IMUs as coordinate frame meshes and indicate it
            # is an IMU by writing the sensor name parallel to the x-axis.
            scene_geometry.append(construct_imu(0.02, P_world_sensor))
        print("Generated scene geometry for", sensor_name)
    except Exception:
        print("Failed to generate geometry for", sensor_name)

# Render scene geometry
o3d.visualization.draw_geometries(scene_geometry)
When we run the code, we get the following output:

Visualization of our YAML file.

There are a few things to note here:

  • The sensor names are oriented so that the sensor's x-axis points to the right of the text and the y-axis points downwards from it.

  • Notice that cam1 is upside down compared to cam0 and the IMUs are oriented in many different directions. This is not an accident, but the result of design choices by the hardware designers. It does not matter if cameras are mounted upside down, as long as they face in the right direction, because the image can always be flipped later. But it is important that this information is reflected in the calibration data. Our previous articles on multiview point clouds automatically take this into account during rectification of the image data, so no extra care needs to be taken to flip the images during image data read-out.

  • imu1 appears to have a large coordinate frame compared to imu2 and imu3. This is because imu1 is the reference coordinate frame for all other sensors. The reference frame is drawn larger and covers imu1's own coordinate frame.
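As an aside on the upside-down cameras mentioned above: a 180-degree flip of the image data is a pure reindexing in NumPy, which is why the mounting orientation costs nothing at read-out. A minimal sketch with a tiny stand-in image:

```python
import numpy as np

# A tiny 2x3 stand-in for a camera image.
img = np.array([[1, 2, 3],
                [4, 5, 6]])

# An upside-down mounting corresponds to a 180 degree rotation, which is
# just a reversal of both the row and the column order.
flipped = img[::-1, ::-1]
print(flipped)
```

The same one-liner works on full-resolution multi-channel images, since the reversal only touches the first two axes.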

Putting it all together

For your convenience, we provide this example in full in our examples repository. Follow the instructions in the accompanying README to get up and running with ease.


With this article, we have shown you how to easily visualize your calibration data and gain valuable information on how your sensors are mounted and are positioned with respect to each other. Stay tuned for further articles that will help you bootstrap your CV applications even faster with camcalib!
