Some previous attempts at segmenting independently moving objects via optical flow have focused on finding discontinuities in the flow field. While discontinuities do indicate a change in scene depth, they do not in general signal a boundary between two separate objects. We propose a method of segmenting the motion field into distinct rigidly moving objects, so that motion discontinuities due to self-occluding depth changes are distinguished from those due to separate objects. The algorithm assumes an affine camera, where perspective effects are limited to changes in overall scale. Each distinct object has a unique Fundamental matrix associated with its motion, which allows the individual motion parameters of each object to be determined and the relative depth of each point on the objects to be recovered. No camera calibration parameters are required. The problem is formulated as a scene-partitioning problem, and a statistic-based algorithm that uses only nearest-neighbor interactions and a finite number of iterations derives the partitioning.
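The key building block above — one affine Fundamental matrix per rigidly moving object — can be illustrated with a short numpy sketch. Under an affine camera, corresponding points (x, y) and (x', y') from a single rigid motion satisfy the affine epipolar constraint a·x' + b·y' + c·x + d·y + e = 0, which can be fit by least squares; points from a differently moving object then show large residuals against that fit. This is only a minimal illustration of the epipolar-constraint idea, not the paper's statistic-based nearest-neighbor partitioning algorithm, and the function names (`fit_affine_F`, `residuals`) are mine, not from the paper.

```python
import numpy as np

def fit_affine_F(pts, pts_prime):
    """Least-squares fit of the affine epipolar constraint
    a*x' + b*y' + c*x + d*y + e = 0 to corresponding points.
    pts, pts_prime: (N, 2) arrays of (x, y) and (x', y').
    Returns the 5-vector (a, b, c, d, e)."""
    M = np.hstack([pts_prime, pts])  # rows [x', y', x, y]
    mean = M.mean(axis=0)
    # For one rigid motion the centered joint-image matrix has rank 3;
    # the constraint normal is its smallest right singular vector.
    _, _, Vt = np.linalg.svd(M - mean)
    n = Vt[-1]
    e = -mean @ n  # recover the offset term from the centroid
    return np.append(n, e)

def residuals(F, pts, pts_prime):
    """Algebraic distance of each correspondence to the constraint;
    near zero for points moving with the object that produced F."""
    M = np.hstack([pts_prime, pts, np.ones((len(pts), 1))])
    return np.abs(M @ F)

if __name__ == "__main__":
    # Synthetic example: two objects imaged by the same affine camera
    # in view 1 but moving differently, i.e. seen under different
    # affine projections in view 2.
    rng = np.random.default_rng(0)
    X1 = rng.standard_normal((50, 3))  # 3D points on object 1
    X2 = rng.standard_normal((50, 3))  # 3D points on object 2
    M1, t1 = rng.standard_normal((2, 3)), rng.standard_normal(2)
    Ma, ta = rng.standard_normal((2, 3)), rng.standard_normal(2)
    Mb, tb = rng.standard_normal((2, 3)), rng.standard_normal(2)
    p1, p1p = X1 @ M1.T + t1, X1 @ Ma.T + ta
    p2, p2p = X2 @ M1.T + t1, X2 @ Mb.T + tb
    F = fit_affine_F(p1, p1p)
    print(residuals(F, p1, p1p).max())   # tiny: object 1 fits its own F
    print(residuals(F, p2, p2p).mean())  # large: object 2 moves differently
```

In a segmentation setting, such residuals give the per-point evidence for whether two neighboring points belong to the same rigid motion, which is the kind of local decision the nearest-neighbor partitioning algorithm operates on.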
A gzip'd PostScript version of the paper is available here (2.8 MB).