Small Body Exploration
using Visual Methods

Jean-Yves Bouguet(1), Andrew Edie Johnson(2) and Pietro Perona(1)

(1) Department of Electrical Engineering
California Institute of Technology 136-93, Pasadena, CA, 91125

(2) Jet Propulsion Laboratory

Link to the project page at JPL

ABSTRACT:

We are developing techniques for building complete 3D models of small moving bodies from monocular visual input. The task consists of estimating the body motion as well as its three-dimensional shape using only an image sequence acquired by a single camera.


MOTIVATION & AIMS:

The purpose of this project is to build a system that enables accurate and autonomous position estimation near small bodies (such as comets or asteroids) using imaging sensors. The system consists of two onboard capabilities: local feature tracking for continuous estimates of the spacecraft motion, and landmark recognition and tracking for global position estimation. These capabilities must work during the orbit and descent phases of missions and should be general enough to handle variations among small bodies, including asteroids and comet nuclei, as well as potential changes in lighting conditions.


RESEARCH:

We are experimenting with different algorithms for motion and structure estimation from visual input (known as Structure-From-Motion algorithms in the vision community). This page collects experimental results achieved on several sequences: two orbital sequences and one descent sequence. It includes feature tracking results, 3D motion and trajectory estimates, and 3D structure reconstructions (with recovered 3D meshes). The original sequences, together with the calibration data, are available from the authors upon request.

The two orbital sequences were acquired using a turntable, rotating a rock by 2 degrees between consecutive images. For the first sequence, a wide field-of-view lens was used (approx. 40 degrees FOV). The second sequence was acquired using a TV-photo lens (approx. 10-15 degrees FOV), making the overall reconstruction more challenging. The final experiment is a descent sequence, acquired with the same TV-photo lens. These sequences simulate the orbital and descent motions of a spacecraft near a comet.




Experiment 1: Orbital sequence #1

Initial sequence (226 images)

mpeg movie (225 frames)

Feature tracking

mpeg movie (225 frames)

Tracked features

mpeg movie (225 frames)


Reconstructed 3D structure and camera trajectory
Method used: Three-frame algorithm for instantaneous motion estimation (using the trifocal tensor), with scale factor propagation.
To improve robustness, the trifocal tensor is parameterized with 11 DOF (the camera being pre-calibrated).
Camera positions and 3D structure are then iteratively optimized so as to minimize the reprojection error.
Note: the 3D triangulation accuracy improves by a factor of 2 after including the 3D structure in the optimization process.
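The joint refinement of camera positions and 3D structure by reprojection-error minimization can be sketched as follows. This is not the authors' implementation: the pinhole model, the translation-only camera parameterization, and all names are illustrative assumptions, chosen to keep the example small (a real system would also optimize camera rotations).

```python
import numpy as np
from scipy.optimize import least_squares

def project(points, cam_t, focal=500.0):
    # Hypothetical pinhole projection for a pre-calibrated camera;
    # each camera is parameterized by a translation only in this sketch.
    p = points + cam_t
    return focal * p[:, :2] / p[:, 2:3]

def residuals(params, n_cams, n_pts, obs):
    # Stack camera translations and 3D points into one parameter vector,
    # and return all reprojection residuals (predicted minus observed).
    cam_t = params[:n_cams * 3].reshape(n_cams, 3)
    pts = params[n_cams * 3:].reshape(n_pts, 3)
    res = [project(pts, cam_t[c]) - obs[c] for c in range(n_cams)]
    return np.concatenate(res).ravel()

# Synthetic scene: 20 points seen by 3 cameras.
rng = np.random.default_rng(0)
pts_true = rng.uniform(-1, 1, (20, 3)) + np.array([0.0, 0.0, 10.0])
cams_true = np.array([[0.0, 0, 0], [0.5, 0, 0], [-0.5, 0, 0]])
obs = np.array([project(pts_true, t) for t in cams_true])

# Start from perturbed estimates and jointly refine cameras + structure,
# minimizing the total reprojection error.
x0 = np.concatenate([
    (cams_true + 0.05 * rng.standard_normal(cams_true.shape)).ravel(),
    (pts_true + 0.05 * rng.standard_normal(pts_true.shape)).ravel(),
])
sol = least_squares(residuals, x0, args=(3, 20, obs))
```

Including the 3D points in the parameter vector (rather than fixing them from an initial triangulation) is what yields the accuracy gain reported above.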

mpeg movie (225 frames) - Top view

mpeg movie (225 frames) - Side view


Reconstructed 3D Mesh (from point features)
Method used: All the points are projected onto a cylinder whose principal axis is the main axis of rotation of the camera trajectory.
On this cylinder, connectivity is then established by Delaunay triangulation.
The result is a single mesh covering the entire object (no mesh registration necessary).
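The cylinder-projection meshing step can be sketched as below. The function name and parameterization are assumptions for illustration, not the authors' code; the axis is assumed given (in practice it would be estimated from the recovered camera trajectory), and the wrap-around seam at ±180 degrees would need duplicated vertices in a full implementation.

```python
import numpy as np
from scipy.spatial import Delaunay

def cylinder_mesh(points, axis_origin, axis_dir):
    """Connect 3D points into a single mesh by unrolling them onto a
    cylinder around the given axis, then running 2D Delaunay
    triangulation in (angle, height) coordinates."""
    d = axis_dir / np.linalg.norm(axis_dir)
    rel = points - axis_origin
    h = rel @ d                        # height along the cylinder axis
    radial = rel - np.outer(h, d)      # component orthogonal to the axis
    # Two orthonormal vectors spanning the plane normal to the axis.
    u = np.cross(d, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(d, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    theta = np.arctan2(radial @ v, radial @ u)   # unrolled angle
    # Connectivity from 2D Delaunay triangulation on the unrolled cylinder.
    tri = Delaunay(np.column_stack([theta, h]))
    return tri.simplices               # (n_triangles, 3) vertex indices
```

Because all points are unrolled into a single 2D domain, the triangulation directly yields one mesh over the whole object, which is why no mesh registration step is needed.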


mpeg movie of 3D model (original/smoothed) -- VRML mesh (original/smoothed) or Open Inventor mesh (original/smoothed)




Experiment 2: Orbital sequence #2 (with a tele-photo lens... challenging!)

Initial sequence (211 images)

mpeg movie (210 frames)

Feature tracking

mpeg movie (210 frames)

Tracked features

mpeg movie (210 frames)


Reconstructed 3D structure and camera trajectory
Shown after closure enforcement and refinement of the camera trajectory and 3D structure by minimizing the reprojection error.
The 3D reconstruction accuracy improves by a factor of 6 after including the 3D structure in the optimization!

mpeg movie (209 frames) - Top view

mpeg movie (209 frames) - Side view


Reconstructed 3D Mesh (from point features)
Same method as previously described: connectivity established on the cylinder defined by the overall camera trajectory.


mpeg movie of 3D model (original/smoothed) -- VRML mesh (original/smoothed) or Open Inventor mesh (original/smoothed)




Experiment 3: Descent sequence (almost planar scene with tele-photo lens... challenging!)

Initial sequence (26 images)

mpeg movie (25 frames)

Feature tracking

mpeg movie (25 frames)

Tracked features

mpeg movie (25 frames)


Reconstructed 3D structure and camera trajectory
Shown after closure enforcement and refinement of the camera trajectory and 3D structure by minimizing the reprojection error.
The 3D reconstruction accuracy improves by a factor of 1.7 after including the 3D structure in the optimization!

mpeg movie (25 frames) - Top view

mpeg movie (25 frames) - Side view


Reconstructed 3D Mesh (from point features)
Mesh connectivity is established from the projection of the points onto the image plane.
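For this near-planar descent scene, connectivity can be taken directly from a 2D Delaunay triangulation of the features' image-plane projections, as sketched below. The pinhole projection, focal length, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import Delaunay

def image_plane_mesh(points_3d, focal=500.0):
    """Mesh connectivity for a near-planar scene: project the 3D points
    onto the image plane (assumed pinhole model) and triangulate there."""
    uv = focal * points_3d[:, :2] / points_3d[:, 2:3]  # image coordinates
    return Delaunay(uv).simplices      # (n_triangles, 3) vertex indices
```

For a mostly planar surface seen from above, no projection of the points folds over in the image, so the image-plane triangulation carries over to a valid 3D mesh.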


mpeg movie of 3D mesh -- {VRML mesh (original/smoothed)+texture image} or {Open Inventor mesh (original/smoothed)+texture image}


ACHIEVEMENTS

We have demonstrated that accurate 3D models of small moving bodies can be reconstructed automatically using only a monocular sequence of images. The outcome is a dense 3D map of the body that can subsequently be used for positioning. The next step of the project is to build a landmark recognition system that would enable a spacecraft to estimate its full 3D position with respect to the body at any phase of the approach, from new images and the previously computed 3D model (built during orbital motions).


