Geometric Calibration

For light fields acquired using our camera array, calibration was performed using the plane+parallax approach described in [8]. In this approach,
  1. The cameras are arranged on a plane.
  2. The camera images are projected onto a reference plane parallel to the plane of cameras, by applying a homography represented by a 3x3 matrix. We refer to the results as keystone-corrected images.
  3. The (X,Y) positions of the camera centers are recovered from parallax measurements. As described in our paper [8], these positions may be known only up to an unknown scale; however, this is not a problem for most applications. We have found this calibration sufficient for 3D reconstruction, synthetic aperture imaging, light field rendering, and space-time view interpolation.
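
Step 2 above amounts to mapping each pixel through the camera's 3x3 homography in homogeneous coordinates. A minimal sketch (the function name is ours, not part of the released data):

```python
import numpy as np

def apply_homography(H, points):
    # Map 2D points through a 3x3 homography using homogeneous coordinates.
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 3)
    mapped = homog @ H.T                                   # (N, 3)
    return mapped[:, :2] / mapped[:, 2:3]                  # perspective divide

# Example: a pure translation homography shifts every pixel by (tx, ty).
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
print(apply_homography(H, [[10.0, 20.0]]))  # [[15. 18.]]
```

In practice one warps the whole image with this mapping (e.g. via a resampling routine) to produce the keystone-corrected image.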

For each of the light fields, we provide:

  1. A text file containing the camera positions. Each line of the file is of the following form:
    Camera-id X Y
    The camera id is an integer. (X,Y) are the coordinates of the center of projection of the camera, up to an unknown scale.
  2. Homographies for projecting the camera images onto the reference plane (keystone correction). The 3x3 homography matrix for each camera is stored in a text file in row-major order.
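
Both files are plain whitespace-separated text, so loading them is straightforward. A sketch, assuming the formats described above (the function names are ours):

```python
import numpy as np

def parse_camera_positions(text):
    # Each line has the form "camera-id X Y"; (X, Y) are the
    # coordinates of the center of projection, up to an unknown scale.
    positions = {}
    for line in text.strip().splitlines():
        cam_id, x, y = line.split()
        positions[int(cam_id)] = (float(x), float(y))
    return positions

def parse_homography(text):
    # Nine whitespace-separated numbers in row-major order -> 3x3 matrix.
    values = [float(v) for v in text.split()]
    return np.array(values).reshape(3, 3)

positions = parse_camera_positions("0 0.0 0.0\n1 1.5 0.0\n2 0.0 1.5")
H = parse_homography("1 0 0  0 1 0  0 0 1")
```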

For light fields acquired using the computer-controlled gantry, we provide the same information. The only difference is that the camera positions are obtained from the gantry rather than from parallax measurements. The camera positions are given in millimeters, and our gantry is accurate to sub-millimeter precision.

Light fields acquired with the Lego gantry are treated similarly to those from the camera array. The blue/yellow corners visible in the original images define the reference plane, and the red/green corners determine parallax. The computed parallax, which is related to the center of projection of the camera by an unknown scale and translation, is included in the file name of each rectified image, in the order (Y, X), so that lexicographically sorting the files produces a row-major ordering of camera images. Computed homographies are not included, but they are easy to recover from the original images using basic computer vision techniques (corner detection, followed by some linear algebra to compute the 3x3 homography).
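
The "linear algebra" step is the standard Direct Linear Transform: given four (or more) corner correspondences between an original image and the reference plane, the 3x3 homography is the null vector of a stacked linear system. A hedged sketch (the correspondence values below are made up for illustration):

```python
import numpy as np

def homography_from_points(src, dst):
    # Direct Linear Transform: build the 2N x 9 system A h = 0 from
    # >= 4 point correspondences and take the SVD null vector.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        A.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] == 1

# Four detected corners and their desired reference-plane positions
# (a pure translation here, so H comes out as a translation matrix).
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [(2.0, 3.0), (3.0, 3.0), (3.0, 4.0), (2.0, 4.0)]
H = homography_from_points(src, dst)
```

With exact correspondences four points determine the homography uniquely; with noisy corner detections one would use more points, or a robust estimator.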

Our calibration error is less than 0.5 pixels. This error is primarily due to uncertainty in corner detection. A quantitative analysis of the error may be found in [2], Appendix A.

Calibration for the light field microscope images is a fairly different process, as the microlens array records the 4D transpose of what a gantry or camera array records. The main step is rotating and scaling the image of the microlens array so that it becomes axis-aligned, with each microlens having an integral width and height. However, optical aberrations come into play both within each lenslet and across the lenslet array. For more details, see the Stanford Light Field Microscopy project web page.
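
As a rough illustration of the rotate-and-scale step, suppose the centers of two horizontally adjacent lenslets have been located in the raw image. The rotation that aligns the lenslet row with the pixel axes and the scale that snaps the lenslet pitch to an integral number of pixels can be estimated as below. This is only a sketch under those assumptions (the helper name is ours), and it ignores the per-lenslet and across-array aberrations mentioned above:

```python
import numpy as np

def rectification_params(center_a, center_b):
    # center_a, center_b: image-space centers of two horizontally
    # adjacent lenslets. Returns (angle, scale): rotate the image by
    # -angle and multiply its size by scale so the lenslet row becomes
    # axis-aligned with an integral pixel pitch.
    d = np.subtract(center_b, center_a).astype(float)
    angle = np.arctan2(d[1], d[0])       # tilt of the lenslet row
    pitch = float(np.hypot(d[0], d[1]))  # observed lenslet pitch
    scale = round(pitch) / pitch         # snap pitch to nearest integer
    return angle, scale
```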

Color Calibration

For light fields acquired with the camera array, the procedure we use to compensate for variations in color response among the different cameras of the array is described in [3].

© 2008 Stanford Graphics Laboratory
Created by Vaibhav Vaish. Updated by Andrew Adams.