Synthetic Imagery and Truth Data for Evaluating Multi-view 3D Extraction Algorithms

This website accompanies the work entitled "Automated identification of voids in three-dimensional point clouds," presented at the 2013 SPIE Optics and Photonics conference in San Diego, California.

The dataset provided here was developed to test a variety of 3D reconstruction algorithms under evaluation and development at the Rochester Institute of Technology. One such workflow, for Dense Point Cloud Generation from aerial imagery, is presented here.

Automated scene reconstruction from imagery is an inherently under-determined problem, and there is a lack of benchmark datasets available for quantitatively comparing the performance and output of such algorithms. While datasets do exist with precisely determined camera positions and models of the imaged objects, they are often designed for close-range photogrammetry.

Aerial applications of these algorithms are becoming more commonplace and present unique problems of their own. Thus, it was determined that the community could benefit from an additional dataset. This dataset is composed of synthetic imagery generated using RIT’s Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. Each radiance image is accompanied by a corresponding truth image that contains the (X, Y, Z) location and surface normal vector for every pixel in the image. Camera location and pointing information is also provided.
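Because each truth image stores a 3D location and normal vector per pixel, it can be flattened directly into a reference point cloud against which a reconstructed cloud is scored. The sketch below illustrates that idea; the (H, W, 6) array layout with channels (X, Y, Z, nx, ny, nz), the function names, and the brute-force nearest-neighbor metric are illustrative assumptions, not the dataset's documented format or the paper's evaluation method.

```python
import numpy as np

def truth_to_point_cloud(truth, valid_mask=None):
    """Flatten a per-pixel truth image into (N, 3) points and normals.

    Assumes `truth` is an (H, W, 6) array whose last axis holds
    (X, Y, Z, nx, ny, nz) per pixel -- an illustrative layout, not
    the dataset's documented file format.
    """
    pts = truth[..., :3].reshape(-1, 3)
    normals = truth[..., 3:6].reshape(-1, 3)
    if valid_mask is not None:
        keep = valid_mask.reshape(-1)
        pts, normals = pts[keep], normals[keep]
    return pts, normals

def nearest_truth_error(reconstructed, truth_pts):
    """Distance from each reconstructed point to its nearest truth point.

    Brute-force O(N*M); adequate for small tiles, but a KD-tree
    (e.g. scipy.spatial.cKDTree) is preferable for full scenes.
    """
    d = np.linalg.norm(reconstructed[:, None, :] - truth_pts[None, :, :],
                       axis=-1)
    return d.min(axis=1)
```

A reconstructed cloud that perfectly matches the truth would yield zero error for every point; in practice the distribution of these distances gives one simple per-point accuracy measure.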