3D Scene reconstruction

From 2D images, we can extract a limited range of information like width, height, and color. These can be useful to determine the regions of interest in our images: street signs, lanes, or even roads.

However, for more accurate detections, depth perception
is crucial. This is where 3D reconstruction comes into play. By extracting a third dimension, depth, we can determine how far the
regions of interest are from the camera and, consequently, their shape. This way we can distinguish the road from the obstacles (cars, pedestrians, curbstones) simply because we know that the road recedes continuously from the camera, while the obstacles sit at a roughly constant distance (fig. 1).

fig. 1 Depth Map

How?

The most effective way of constructing 3D images is using a stereo camera. A stereo camera consists of 2 or more lenses that
allow it to simulate human binocular vision.

Because we work with mobile phone mono cameras, we simulate stereo vision by using multiple photos of the same object taken from different positions. The camera positions, known from GPS information, must be accurate to the millimeter. The process of estimating the 3D structure of a scene from a set of 2D images is called Structure from Motion, or SFM (fig. 2).

fig. 2 SFM algorithm schema

Between 2 consecutive photos, we find corresponding feature points: pixels that appear in both photos.

The main steps in SFM are:

  1. Calibrate the camera (determine intrinsic, extrinsic, and distortion coefficients)
  2. Detect the matching features between the 2 images
  3. Perform the triangulation (3D reconstruction)

Step 1. Camera Calibration

First of all, we must pay attention to the coordinate systems we work with (fig. 3).

fig. 3 Image coordinate systems

The transformation from one coordinate system to another can be described by a series of matrix multiplications. The conversion from world coordinates to pixel coordinates is called Forward Projection. The opposite conversion – which we compute in this algorithm – is called Backward Projection (from pixel coordinates to world coordinates) (fig. 4).

fig. 4 Backward projection

Our goal is to describe this sequence of transformations by a big matrix equation! (fig. 5)

fig. 5 Transformation from pixel to world coordinates
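
For reference, the standard pinhole-camera form of this equation maps homogeneous world coordinates to pixel coordinates through the intrinsic matrix K and the extrinsic matrix [R | t]; backward projection inverts this chain, up to the unknown scale factor s:

```latex
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
  = K \, [\, R \mid t \,] \,
    \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
```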

The intrinsic matrix is determined using a calibration algorithm (e.g. the chessboard pattern from OpenCV). It consists of the intrinsic parameters: focal length, the center of the image (principal point), and the aspect ratio of a pixel (fig. 6).

fig. 6 Intrinsic Matrix
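
A rough sketch of this calibration step with OpenCV's chessboard routine; the board dimensions, square size, and image folder below are illustrative assumptions:

```python
import glob

import cv2
import numpy as np

# Inner-corner count and square size of the printed chessboard
# (both values are illustrative assumptions).
pattern_size = (9, 6)
square_size = 0.025  # meters

# 3D coordinates of the chessboard corners on the z = 0 plane.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

obj_points, img_points = [], []
for path in glob.glob("calibration/*.jpg"):  # hypothetical folder of board photos
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix; dist holds the distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```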

The distortion coefficients are used to correct the image and the positioning of the 3D cloud points. Images taken with a camera usually have several distortions: barrel, fisheye, etc.
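
With K and dist from the calibration sketch above, removing the lens distortion from a photo is a single OpenCV call (the file names are placeholders):

```python
import cv2

# K and dist come from cv2.calibrateCamera in the previous sketch.
img = cv2.imread("photo_0001.jpg")         # hypothetical input photo
undistorted = cv2.undistort(img, K, dist)  # distortion-corrected image
cv2.imwrite("photo_0001_undistorted.jpg", undistorted)
```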

The extrinsic matrix is composed of a rotation and a translation matrix. The rotation matrix is itself the product of three rotation matrices – the roll, pitch, and yaw rotation matrices (fig. 7). These rotations of camera 2 are computed relative to camera 1. The translation matrix (fig. 8) is computed by subtracting the position of camera 1 from the position of camera 2.

fig. 7 Rotation matrices that compose the rotation part of the extrinsic matrix
fig. 8 Translation along the X-axis of the right camera w.r.t. the left camera
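
A small numpy sketch of composing these matrices, assuming roll/pitch/yaw angles in radians and camera positions taken from GPS; axis order and sign conventions differ between setups, so treat this purely as an illustration:

```python
import numpy as np

def rotation_from_rpy(roll, pitch, yaw):
    """Rotation of camera 2 relative to camera 1, built from the three
    elementary rotations (angles in radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    return Rz @ Ry @ Rx

def translation(pos_cam1, pos_cam2):
    """Relative translation: position of camera 2 minus position of camera 1
    (both known from GPS), as described in the text."""
    return np.asarray(pos_cam2, dtype=float) - np.asarray(pos_cam1, dtype=float)
```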

Having the extrinsic and intrinsic matrices, we can compute the projection matrix.
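
Putting the pieces together, a minimal sketch that reuses K from the calibration step and the helpers above (the roll/pitch/yaw angles and camera positions are assumed to come from the phone's sensors and GPS), with camera 1 taken as the reference:

```python
import numpy as np

# Camera 1 is the reference: P1 = K [I | 0].
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

# Camera 2: P2 = K [R | t], with R and t relative to camera 1.
R = rotation_from_rpy(roll, pitch, yaw)    # angles from the device orientation
t = translation(pos_cam1, pos_cam2)        # positions from GPS
P2 = K @ np.hstack([R, t.reshape(3, 1)])
```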

Step 2. Feature Matching

There are many algorithms for matching features between 2 consecutive photos (SURF, SIFT, ORB). We use the SURF algorithm to find the matching points between 2 consecutive images (taken around 2-3 meters apart), and the RANSAC algorithm to filter out the outliers (gif 1).

GIF 1: Matching features from consecutive photos
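
A sketch of this step with OpenCV. SURF is only available in the opencv-contrib build (cv2.xfeatures2d.SURF_create), so the snippet below uses SIFT from mainline OpenCV instead; the detect/match/filter structure is the same, and the file names are placeholders:

```python
import cv2
import numpy as np

img1 = cv2.imread("frame_0001.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical frames,
img2 = cv2.imread("frame_0002.jpg", cv2.IMREAD_GRAYSCALE)  # taken ~2-3 m apart

# Detect keypoints and compute descriptors in both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC: keep only the matches consistent with a single fundamental matrix.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
pts1, pts2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]
```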

Step 3. Triangulation

Having the matched points and the projection matrices, we can perform the triangulation algorithm and plot the 3D points. In the figures below you can see the original photo (fig. 9) and the cloud of points, shown from different perspectives, computed using both 2 photos (fig. 10) and 5 photos (fig. 11).

fig. 9 Original photo taken with a mobile phone camera
fig. 10 3D reconstruction from 2 photos
fig. 11 3D reconstruction from 5 photos
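
A minimal sketch of the triangulation with OpenCV, reusing P1 and P2 from the calibration step and the filtered pts1 / pts2 from the matching step (the output file name is illustrative):

```python
import cv2
import numpy as np

# cv2.triangulatePoints expects the two 3x4 projection matrices and 2xN
# arrays of matched pixel coordinates; it returns homogeneous 4xN points.
points_4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points_3d = (points_4d[:3] / points_4d[3]).T  # convert to Euclidean (N, 3)

# Save the cloud so it can be plotted from different perspectives.
np.savetxt("cloud.xyz", points_3d)
```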

What’s next?

As future work, we need to focus on:

  • Gather more precise data (orientation and position)
  • Remove lens distortions
  • Fit a surface on the cloud points to detect the road profile
  • Research other feature matching algorithms