Students and researchers preparing for an archaeological dig in Jordan will be packing a modified version of Microsoft’s Kinect sensor. Jürgen Schulze, a researcher at the University of California, San Diego, aims to turn it into a hand-held scanner for archaeological finds.
The team has dubbed the tool ArKinect, having worked out a way to extract the data collected by the Kinect’s IR sensor and onboard colour camera and turn it into a 3D model of an object.
“We are hoping that by using the Kinect we can create a mobile scanning system that is accurate enough to get fairly realistic 3D models of ancient excavation sites,” remarked Schulze.
How it works
The Kinect projects a pattern of infrared dots onto an object; the dots reflect off its surface and are captured by the device’s infrared sensor, producing a 3D depth map. Nearby dots are linked together to form a triangular mesh of the object, and each triangle in the mesh is filled in with texture and colour information from the Kinect’s colour camera. A scan is taken 10 times per second, and data from thousands of scans are combined in real time to yield a 3D model of the object.
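To make that pipeline concrete, the sketch below shows one generic way a single depth frame could be turned into a coloured triangle mesh: back-project each depth pixel through pinhole camera intrinsics (fx, fy, cx, cy here are placeholder parameters, not values from the team) and link neighbouring pixels into triangles. It illustrates the general technique rather than the project’s actual code.

```python
import numpy as np

def depth_to_mesh(depth, color, fx, fy, cx, cy, max_depth=4.0):
    """Back-project a depth image into 3D vertices and connect neighbouring
    pixels into triangles, attaching a colour to each vertex.

    depth : (H, W) array of distances in metres (0 where there is no return)
    color : (H, W, 3) array of RGB values aligned with the depth image
    fx, fy, cx, cy : assumed pinhole intrinsics of the depth camera
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))

    # Pinhole back-projection: each valid depth pixel becomes a 3D vertex.
    z = depth
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    vertices = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colours = color.reshape(-1, 3)
    valid = (depth > 0) & (depth < max_depth)

    # Link each 2x2 block of neighbouring pixels into two triangles,
    # keeping only triangles whose three corners all have valid depth.
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1], idx[:-1, 1:]
    c, d = idx[1:, :-1], idx[1:, 1:]
    tris = np.concatenate([np.stack([a, b, c], -1).reshape(-1, 3),
                           np.stack([b, d, c], -1).reshape(-1, 3)])
    ok = valid.reshape(-1)[tris].all(axis=1)
    return vertices, colours, tris[ok]
```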
One challenge Schulze and his team faced was spatially aligning all the scans. Because the ArKinect scans are done freehand, each scan is taken at a slightly different position and orientation.
To overcome this challenge, master’s student Daniel Tenedorio outfitted the ArKinect with a five-pronged infrared sensor attached to its top surface. Overhead video cameras track this sensor in space, tagging each of the ArKinect’s scans with its exact position and orientation. That tracking makes it possible to seamlessly stitch the scans together into a stable 3D image.
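In outline, registering a freehand scan amounts to applying the tracked pose as a rigid transform so that every scan’s points land in one shared world frame. The sketch below assumes each pose is supplied as a rotation-plus-translation pair; the function and its inputs are illustrative, not drawn from the team’s software.

```python
import numpy as np

def stitch_scans(scans, poses):
    """Merge freehand scans into one point cloud by transforming each scan
    from its camera frame into a shared world frame using the tracked pose.

    scans : list of (N_i, 3) point arrays in the Kinect's camera frame
    poses : list of (R, t) pairs from the tracker, where R is a 3x3 rotation
            matrix and t a 3-vector giving the camera's position in the world
    """
    world_points = []
    for points, (R, t) in zip(scans, poses):
        # p_world = R @ p_camera + t, applied to every point in the scan
        world_points.append(points @ R.T + t)
    return np.vstack(world_points)
```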
The team is also working on a tracking algorithm that incorporates smartphone sensors such as an accelerometer, a gyroscope and GPS (Global Positioning System). In combination with the existing approach to stitching scan data together, the new algorithm would remove the need to obtain position and orientation information from the overhead tracking cameras, untethering the ArKinect completely.
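One common way to fuse a gyroscope and an accelerometer is a complementary filter: integrate the gyro for fast, smooth orientation updates and use the gravity direction measured by the accelerometer to correct long-term drift. The sketch below shows one generic step of such a filter as an illustration of the idea; the article does not describe the team’s actual algorithm, the axis conventions are assumed, and GPS would contribute coarse position separately.

```python
import math

def complementary_filter(roll, pitch, gyro, accel, dt, alpha=0.98):
    """One step of a simple complementary filter for orientation.

    roll, pitch : current orientation estimate in radians
    gyro        : (gx, gy, gz) angular rates in rad/s
    accel       : (ax, ay, az) accelerations in m/s^2
    dt          : time step in seconds
    alpha       : blend factor (close to 1 trusts the gyro at short timescales)
    Axis conventions here are assumed and depend on the actual device.
    """
    gx, gy, _ = gyro
    ax, ay, az = accel

    # Gyroscope prediction: integrate angular rate over the time step.
    roll_gyro = roll + gx * dt
    pitch_gyro = pitch + gy * dt

    # Accelerometer observation: the gravity vector gives absolute roll/pitch.
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))

    # Blend: gyro dominates short-term motion, accelerometer corrects drift.
    roll = alpha * roll_gyro + (1 - alpha) * roll_acc
    pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
    return roll, pitch
```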
A major advantage of the ArKinect is that scan progress can be assessed on a computer monitor in real time. Notes Schulze, “You can see right away what you are scanning. That allows you to find holes so that when there is occlusion, you can just move the Kinect over it and fill it in.”
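Spotting such holes can be as simple as flagging regions of the scan volume that contain no points yet. The sketch below takes that deliberately crude approach over a uniform voxel grid; a real viewer would restrict the check to the expected surface, and nothing here is taken from the team’s implementation.

```python
import numpy as np

def find_coverage_holes(world_points, bounds_min, bounds_max, cell_size=0.02):
    """Flag grid cells inside the scanned volume that contain no points,
    so unscanned or occluded regions can be highlighted on screen.

    world_points : (N, 3) accumulated point cloud in the world frame
    bounds_min, bounds_max : 3-vectors bounding the region of interest
    cell_size : edge length of each grid cell in metres
    """
    bounds_min = np.asarray(bounds_min, dtype=float)
    bounds_max = np.asarray(bounds_max, dtype=float)
    dims = np.ceil((bounds_max - bounds_min) / cell_size).astype(int)
    occupied = np.zeros(dims, dtype=bool)

    # Mark every cell that contains at least one scanned point.
    cells = np.floor((world_points - bounds_min) / cell_size).astype(int)
    inside = ((cells >= 0) & (cells < dims)).all(axis=1)
    cells = cells[inside]
    occupied[cells[:, 0], cells[:, 1], cells[:, 2]] = True

    # Empty cells are candidate holes that may need another scanning pass.
    return np.argwhere(~occupied)
```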