Whether you want to construct a panorama image, use cameras to measure angles in an industrial application, or capture a point cloud of a real object with stereo matching or photogrammetry, you will need camera calibration at some point.
Our intrinsic camera calibration video guide will show you the entire process of calibrating the intrinsic camera parameters, saving the results, and using them to remove the fish-eye distortion from recorded image data.
Check out our video guide right here, or continue reading below for the text version.
Download our examples source code
The last section of this article contains a simple image undistortion example that we will discuss in detail. You don't need to copy and paste the code snippets and puzzle them together. Just check out the code for the entire example from our examples repository and follow the instructions in the accompanying README.md to get up and running with ease.
To intrinsically calibrate a camera, you will need to complete the following steps:
Print or display a calibration board. Check out our specific guide for that here.
Take pictures of the calibration board.
Put the pictures into a calibration dataset.
Estimate calibration parameters with camcalib and save your results.
The 4th step is subdivided into the following detailed steps:
Load the calibration data set.
Configure your calibration board in camcalib. This only needs to be done once per board.
Start camcalib's intrinsic calibration process.
Save your results.
Collect calibration image data
In this guide, we will calibrate a GoPro Hero 4. As a physical calibration pattern, we will project a 12x6 AprilTag grid onto a Samsung 49" QLED 32:9 3840x1080 Super Ultra-wide R1800 screen.
Caution: This method will work with the Hero4 because, due to its large field of view, the camera cannot resolve the large pixels of our screen. We recommend using a printed pattern or a 4k screen for most other cases. In general, if your camera can resolve the pixels of your calibration target, you should either print the pattern on a higher resolution target or move it further away from the camera.
Before you calibrate, if you keep these three goals in mind, your results will be much better: coverage, convergence, and quality!
1. Coverage: make sure every pixel sees the calibration pattern at least once.
2. Convergence: make sure you have a few images where the lines of the calibration pattern strongly converge on a vanishing point.
3. Quality: make sure your images are sharp, well illuminated, and motion-blur free. You want as many high-quality checkerboard corners as you can get.
If you follow the three goals, you will end up taking at least 8 pictures to calibrate your camera.
We show the process of recording calibration data here.
Construct a dataset
Once the pictures of the calibration pattern are taken from all required perspectives, we will have to construct a dataset folder structure and place the images into the dataset. The structure of the dataset is as follows:
20220710
├── Hero4
│   ├── GOPRO7618.jpg
│   ├── GOPRO7619.jpg
│   ├── ...
│   └── GOPRO7639.jpg
Here the “Dataset Folder” is “20220710” named after the date the data was recorded. Inside the dataset folder, camcalib expects further folders – one for every camera you intend to calibrate. We are only calibrating one camera intrinsically, so we place one sensor sub-folder inside the dataset named after the sensor we are calibrating “Hero4”. The sensor folder can have any name that makes sense to you. Options are the camera's serial number, your company's sensor naming scheme, or nicknames that make things easy for you or your operators to remember.
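The folder layout above can also be produced with a few lines of Python. Here is a minimal sketch; the `build_dataset` helper and all paths are our own placeholders, not part of camcalib:

```python
import shutil
from pathlib import Path

def build_dataset(recordings: Path, dataset: Path, sensor: str) -> list:
    """Copy calibration images into <dataset>/<sensor>/ and return the new paths."""
    sensor_dir = dataset / sensor
    sensor_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for img in sorted(recordings.glob("*.jpg")):
        target = sensor_dir / img.name
        shutil.copy2(img, target)
        copied.append(target)
    return copied

# Hypothetical locations -- adapt them to where your GoPro images actually are:
# build_dataset(Path("gopro_recordings"), Path("20220710"), "Hero4")
```

Keeping one sub-folder per sensor means the same dataset folder can later hold additional cameras without restructuring anything.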
Note that we took 17 images for calibration instead of the minimum required 8. The reason for this is partly the large field of view and partly the size of the calibration pattern. To achieve full coverage while ensuring “Goal 3: Quality” the simple solution is to take more images than you need. Conversely, if you take only the minimum required amount of images and only one is blurry, you fail “Goal 1: Coverage”.
Load data into camcalib
With our dataset folder “20220710” created and the image data placed in the “Hero4” sub-folder, we are ready to start camcalib and load the dataset. Loading the dataset is simple; just make sure you select the “20220710” folder itself. Do not navigate into the “Hero4” folder and hit choose, as camcalib will throw an error if you do.
Configure your calibration board
If you are using a new calibration pattern, you will need to give camcalib its exact configuration. In our case, we are projecting a 12x6 AprilTag Grid Board onto a curved screen as a “calibration board”. The tag size on the screen was 3.7 cm wide (0.037 m), and the ratio of the small to large rectangles is 0.3.
Detect the calibration board’s feature points
Now that camcalib knows what calibration pattern to look for in the scene, we can hit the “Detect Features” button. Just make sure that you select the correct board from the “Saved Boards” menu if you have previously configured a different board.
If the board is correctly configured and the detection was successful, most of the AprilTags will be marked with a blue circle and a central green dot on each of their four corners, with an ID number below the blue circle.
Warning: If the board was not configured correctly (switched or incorrect values for Rows and Columns), there may be a bunch of misplaced circles centered on random parts of the calibration pattern. In that case, ensure that your board configuration is correct before you proceed.
Run the calibration
With the features correctly detected, it's finally time to calibrate our camera. Don't worry, it's just a choice, a checkbox, and a button away. First, select the distortion model you like most. We selected the “Kannala Brandt” model, as it works well for fish-eye lenses. Then we enable the “Optimize Object Points” checkbox and hit “Calibrate”.
Once the optimizer converges, we will be shown the final reprojection error in the console, as well as the board deflection map as a consequence of selecting the “Optimize Object Points” feature before hitting the “Calibrate” button.
The checkbox “Optimize Object Points” is especially important for our current use case. Remember, we projected our pattern onto a curved screen and took a bunch of calibration data recordings, so our pattern will not be flat but curved. Typically, calibration software assumes that your calibration board is flat, which would cause huge reprojection errors and a bad calibration result. If we enable the “Optimize Object Points” feature, camcalib will assume that we have significant production defects in our board and estimate the board's 3D geometry.
Considering that the screen we projected the calibration pattern onto has a 1.8 m radius and the tag size is 3.7 cm with 1.11 cm spacing between the tags, the board is 58.83 cm wide and warped by the screen's curve. Overall, the warp works out to be 2.4 cm. Camcalib's estimate of the board warp matches that exactly.
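The expected warp can be checked with a little circle geometry: the board is a chord of the screen's 1.8 m cylinder, and the sagitta (the depth of the arc) gives the deflection. A quick sanity check in Python, assuming the 58.83 cm width comes from counting the 1.11 cm spacing on both edges as well as between the tags:

```python
import math

radius = 1.8          # screen curvature radius in metres (R1800)
tag = 0.037           # tag size in metres
spacing = 0.0111      # spacing in metres
cols = 12             # tags per row

# Board width: 12 tags plus 13 spacing intervals (between tags and on both edges)
width = cols * tag + (cols + 1) * spacing   # 0.5883 m = 58.83 cm

# Sagitta of a circular arc: s = R - sqrt(R^2 - (w/2)^2)
warp = radius - math.sqrt(radius**2 - (width / 2) ** 2)
print(f"board width: {width * 100:.2f} cm, warp: {warp * 100:.1f} cm")
```

This reproduces the 2.4 cm deflection that camcalib estimated from the image data alone.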
Save your results
When the intrinsic calibration process is done, we have nothing left to do but save our results.
To save the calibration results for later use, hit the “Save Result” button to save the intrinsic parameters to a YAML file. You can also save a PDF report that is useful for documentation and getting in touch with camcalib support if you need any help. For that just hit the “Generate Report” button.
Now camcalib’s job is done, and we can close the tool. Continue reading to see how you can make actual use of the calibration results. We will link you to our example repository and discuss the code required to remove the fish-eye distortion from any future images taken by our trusty ancient Hero 4.
If you would like to see what the calibration results look like for yourself, check out the following files.
Using the results
With camcalib, we have determined the distortion parameters and offset errors of the lens and sensor geometry. But how can we use the camcalib results to remove the lens distortion from our images?
OpenCV provides a vast library of well-tested computer vision algorithms that will help us accelerate the implementation process and reach our goals with only a few lines of code. So let's get started!
To undistort an image efficiently, we will take the following steps:
Fetch and restructure the intrinsic parameters from the YAML we loaded before.
Use the intrinsic parameters to construct a 2D look-up table – a so-called undistortion map.
Apply a remap function with the undistortion map on every image the camera captures during the capture loop.
It may sound more logical to pack this into a single undistort image function, but you would repeat large amounts of the same computations every time you take a new snapshot. It is better to do the heavy calculations – computing the undistortion map – only once and store the result in memory for repeated later use. This way, you have more CPU resources left to do more valuable computations, like computing 3D point clouds.
Tip: installing the NumPy package
You may need to install the numpy package using apt-get or pip3
# Either
sudo apt-get install -y python3-numpy
# OR
pip3 install numpy
To create the undistortion map, we will need to fetch the intrinsic parameters and restructure them into a camera matrix and a list of distortion parameters.
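The snippets that follow assume the camcalib result YAML has already been parsed into a `calibration_parameters` dictionary. A minimal sketch using PyYAML; the `load_calibration` helper and the file name are placeholders:

```python
import yaml

def load_calibration(path):
    """Load the camcalib result YAML into a nested Python dictionary."""
    with open(path) as f:
        return yaml.safe_load(f)

# Placeholder file name -- use the YAML you saved with "Save Result":
# calibration_parameters = load_calibration("calibration.yaml")
```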
import numpy as np

# Extract the calibration data for camera 1 from the
# calibration_parameters variable
cam1_parameters = calibration_parameters['sensors']['cam1']

# Extract camera 1 intrinsic parameters from cam1_parameters
cam1_intrinsics = cam1_parameters['intrinsics']['parameters']

# Prepare variables for map creation
image_size = cam1_intrinsics['image_size']
cam1_camera_matrix = np.array(
    [[cam1_intrinsics['fx'], 0, cam1_intrinsics['cx']],
     [0, cam1_intrinsics['fy'], cam1_intrinsics['cy']],
     [0, 0, 1]])
cam1_distortion = np.array(
    [cam1_intrinsics[k] for k in ['k1', 'k2', 'k3', 'k4']])
A detailed review of what the code does
Before we continue, a few notes on what we did in the code.
We use Python dictionaries to easily extract only the relevant parts of the tree structure in the YAML file. For example, we extract all parameters relating to cam1 by looking into sensors->cam1 or in Python with ['sensors']['cam1']. We do the same to get only the intrinsic parameters.
We create cam1_camera_matrix from the cam1_intrinsics parameters. cam1_camera_matrix is a 3x3 matrix that we save in the form of a NumPy array. NumPy is a powerful tool that will benefit us greatly later on. It takes care of linear algebra as well as advanced filtering operations.
The camera matrix we define here will be the basis for our pinhole camera model.
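To make the role of the camera matrix concrete, here is how a 3D point in the camera frame projects to a pixel under the pinhole model. The numbers are made up purely for illustration:

```python
import numpy as np

# A made-up camera matrix: fx = fy = 800 px, principal point at (640, 360)
K = np.array([[800.,   0., 640.],
              [  0., 800., 360.],
              [  0.,   0.,   1.]])

# A 3D point one metre to the right, half a metre up, four metres ahead
X = np.array([1.0, -0.5, 4.0])

# Pinhole projection: multiply by K, then divide by the depth
u, v, w = K @ X
pixel = (u / w, v / w)   # (840.0, 260.0)
```

Doubling the focal length or shifting the principal point moves this pixel accordingly, which is exactly the geometry the distortion model is layered on top of.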
The variable cam1_distortion is an array with a particular sequence of distortion parameters. The sequence of the parameters depends on the camera distortion model you selected during calibration. We set KannalaBrandt during calibration, determining the sequence we used in the code.
To construct cam1_distortion, we use Python's list comprehension to shorten the code. We could have also directly listed each parameter in sequence as follows cam1_distortion = np.array([cam1_intrinsics['k1'], cam1_intrinsics['k2'], cam1_intrinsics['k3'], cam1_intrinsics['k4']]) but that is quite tedious to write, and visually noisy for anyone to read.
After executing the code, we have all the parameters needed to create our undistort map – image_size, cam1_camera_matrix, and cam1_distortion. Now let's build the undistortion map.
Tip: installing the OpenCV package
You may need to install the opencv-python package using apt-get or pip3
# Either
sudo apt-get install -y python3-opencv
# OR
pip3 install opencv-python
import cv2

cv = cv2.fisheye
undistort_map1 = cv.initUndistortRectifyMap(
    cam1_camera_matrix, cam1_distortion, np.eye(3),
    cam1_camera_matrix, image_size, cv2.CV_32F)
That's it! The variable undistort_map1 will contain a tuple. The first element of the tuple will contain all x-coordinates of the look-up table, and the second element will contain all y-coordinates.
More details on what the undistortion map contains, how it is used, and how remap works.
Let’s briefly discuss the parameters of initUndistortRectifyMap and then look at the undistort map itself:
We import OpenCV with import cv2
Keen eyes will notice that we are using initUndistortRectifyMap. This function can create plain undistortion maps as well as combined undistortion-and-rectification maps. We will re-use this function later on to its full extent. For now, we will ignore the rectification component. Check out the OpenCV documentation for more details.
To undistort using KannalaBrandt, we need the fisheye module, which we access via cv = cv2.fisheye.
The first and second parameters are the camera matrix and distortion coefficient variables we previously constructed.
The third parameter np.eye(3) passes the Identity matrix as extrinsic data to disable the rectification component. Remember, we only care about undistortion at the moment.
For the fourth parameter, we repeat cam1_camera_matrix to preserve the pinhole parameters. You can also specify a different camera or projection matrix if you like.
Further, undistort_map1 tells us how to construct a distortion-free image: for every pixel of the undistorted image (the destination coordinate), it stores where in the distorted image (the source coordinate) we need to fetch the pixel value from. undistort_map1 contains two arrays, one for x and one for y, with the same shape as our image. Each array's value at a destination coordinate encodes the corresponding source coordinate. The picture below illustrates the application of the undistort map.
Put differently, you must, for each pixel in the undistorted image,
fetch, from the x-map using the destination coordinate, the source coordinates' x-value
fetch, from the y-map using the destination coordinate, the source coordinates' y-value
then fetch the brightness value from the distorted image (RAW) at the source coordinate X_src
and finally, write the brightness value to the undistorted image at the destination coordinate X_dst
where the source coordinate X_src is the sum of the destination coordinate X_dst and the distortion vector dX
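The lookup rule above can be demonstrated on a tiny example with plain NumPy, using a constant 1-pixel horizontal shift as a stand-in for the distortion vector dX:

```python
import numpy as np

# A tiny 3x3 "distorted" image
raw = np.arange(9, dtype=np.float32).reshape(3, 3)

# Maps with the same shape as the image: for every destination pixel,
# the map values are the source coordinates to fetch from.
ys, xs = np.indices(raw.shape)
map_x = np.clip(xs + 1, 0, 2)   # dX = (+1, 0): fetch one pixel to the right
map_y = ys

# For each destination pixel, read raw at (map_y, map_x) -- this is what
# cv2.remap does, plus sub-pixel interpolation for non-integer coordinates.
undistorted = raw[map_y, map_x]
```

In the real undistortion map, dX varies per pixel according to the lens model, and remap interpolates between neighbouring source pixels, but the fetch-and-place mechanics are exactly these.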
Let's apply the undistortion map to our image data
# Load a distorted image from the Hero4
img_fn = dataset_dir + "Hero4/001.JPG"
img1_raw = cv2.imread(img_fn)

# Undistort images - undistortion example
print("Undistorting image", img_fn)
img1_undist = cv2.remap(img1_raw, *undistort_map1, cv2.INTER_LANCZOS4)

ud_img_fn = dataset_dir + "Hero4/001_UD.JPG"
print("Saving undistorted image", ud_img_fn)
cv2.imwrite(ud_img_fn, img1_undist)
In the first step, we load a distorted image from the Hero4: 001.JPG
Once the image is loaded, we apply the remap() function to apply the undistortion map undistort_map1 to our raw input image img1_raw. In the final step, we save the undistorted image img1_undist as 001_UD.JPG.
Comparing the raw input image data with the undistorted result, we see that the white lines on the cutting board, as well as the lines of the calibration pattern, are straight - as they very well should be!
Download our examples source code
This article and video guide should give you a good overview of the first step in camera calibration and show how simple and painless it can be. Please also check out our other articles and come back for new additions. We are constantly publishing new articles to help you get started with camera and sensor calibration and to help you put your calibration results to use, so you can focus on building your own applications instead of spending weeks or even months getting the calibration right.