Software method for embedding spatial data into a visual image in tagged image file format

Abstract

A digital image captured by a camera provides only visual data (illumination and color); it lies in a flat plane and carries no depth or relative size information, so the scene it depicts cannot be modeled or printed in 3D. Building a 3D model requires both the visual data and each object's distance from the camera, and that distance information can be acquired with a LIDAR or RGB-D sensor. The goal of this research was therefore to merge the digital visual image with distance data from a LIDAR or RGB-D sensor, giving a more complete description of an object and its 3D shape. As a proof of concept, a set of software routines was written to implement the algorithm.
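The software routines themselves are not reproduced in this record. As an illustration only, the sketch below shows one way the merge described above could be expressed in Python with the tifffile library, assuming an RGB frame and a depth map already registered to the camera image; the variable names, the millimetre quantization, and the two-page TIFF layout are assumptions for the example, not the author's implementation.

    import numpy as np
    import tifffile

    # Hypothetical inputs: an 8-bit RGB camera frame and a per-pixel distance
    # map (in metres) from a LIDAR or RGB-D sensor, registered to the image.
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)         # placeholder visual data
    depth_m = np.full((480, 640), 1.5, dtype=np.float32)  # placeholder distances

    # Quantize depth to 16-bit millimetres so it fits a standard TIFF sample type.
    depth_mm = np.clip(depth_m * 1000.0, 0, 65535).astype(np.uint16)

    # Store both arrays in one multi-page TIFF: page 0 holds the visual image,
    # page 1 holds the spatial (depth) data, labeled via the description tag.
    with tifffile.TiffWriter("rgbd.tif") as tif:
        tif.write(rgb, photometric="rgb", description="visual image (RGB)")
        tif.write(depth_mm, photometric="minisblack",
                  description="object distance from camera, millimetres")

    # A consumer (e.g. a 3D modeling or printing pipeline) can read the pages
    # back independently and recombine color with distance per pixel.
    visual = tifffile.imread("rgbd.tif", key=0)
    distance_mm = tifffile.imread("rgbd.tif", key=1)

Keeping the depth data in a separate page of the same TIFF leaves the visual page readable by ordinary image viewers while still bundling the spatial information in a single file.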
