On behalf of the European Space Agency (ESA), space scientists from SINTEF, Thales Alenia Space and Terma are investigating the potential of 3D technologies to replace the current generation of cameras used by spacecraft and space rovers. It is hoped that 3D cameras will prove more reliable, less labour intensive and more cost effective than the 2D devices traditionally used during space missions.
The cameras currently used are essentially higher-quality versions of the devices that you might find in your local electrical store. In addition to taking photographs of our universe, these cameras are used during the docking of space stations, for navigation on foreign planets and in the landing of spacecraft. The successful performance of these operations, however, requires accurate, three-dimensional data; data that 2D devices cannot provide in isolation. To overcome this problem, numerous 2D cameras are mounted together in stereo configurations, and complex computations are conducted to measure the third dimension. Unfortunately, this solution isn't always reliable, and it also consumes considerable power.
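The "complex computations" behind stereo configurations come down to triangulation: a feature shifts position (its disparity) between the two images, and depth follows from the focal length and the baseline between the cameras. A minimal sketch of that relation, with illustrative numbers and names of my own (not taken from any mission hardware):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of a point via the pinhole relation Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: camera separation in
    metres; disparity_px: how far the feature shifts between images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature shifted by 8 px between two cameras 0.3 m apart,
# with a focal length of 1200 px, sits at 1200 * 0.3 / 8 = 45 m.
print(stereo_depth(1200, 0.3, 8))
```

Because depth varies inversely with disparity, small matching errors at long range translate into large depth errors, which is one reason such systems are error-prone in practice.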
To find out more about the benefits that might result from a new, 3D generation of cameras in space, I spoke to participating SINTEF scientist Henrik Schumann-Olsen. He began by explaining more about the drawbacks of the 2D cameras currently employed on space missions.
"When you are working in two dimensions, you only have a flat image of the world around you," he said. "At present, in order to obtain real 3D data, astronauts have to employ a system of markers. They are able to determine their angle and direction relative to specific markers. This is very important, for example, if you are planning to dock your craft with a space station. Accurate 3D information is required in order to align the relevant sections of the two vessels. The problem with this system is that markers must be placed on the target objects. In some cases, this just isn’t possible. For instance, if you are trying to collect a piece of space garbage such as a decommissioned satellite, you cannot place a marker on it. You need an active system to recover such objects. Also, when performing entry, descent and landing missions on the moon, Mars and other foreign bodies, we cannot rely on markers to evaluate the structure of the landing surface.
"Another method that is employed today is that of placing numerous 2D cameras together to form what is called stereovision. Data collected by these arrangements can be used to perform some pretty advanced computations, and from here you can create a 3D map of the surrounding area. This method, however, is processor intensive, prone to errors and dependent on the ambient light situation."
Participating European space scientists will deliver a report to ESA highlighting promising technologies that might be used to overcome problems such as these. Schumann-Olsen is confident that 3D cameras can provide a reliable and efficient solution.
"3D cameras have active light sources," he explained. "These sources emit light and then measure its reflection. A modern-day comparison might be made with Microsoft’s Kinect system. Kinect uses a similar principle to the 3D cameras that we are investigating. By positioning its light source slightly apart from its sensor, this device can calculate the distance between an object and itself. Kinect, however, is a spatial system which performs spatial coding of light; it projects patterns to retrieve 3D information. The systems that we are looking at are time based.
"We are interested in time-of-flight technologies," continued Schumann-Olsen. "This is because these technologies are suited to both long- and short-range operations. Spatial systems such as Kinect are only suitable for use over short distances. If possible, we would like to streamline the number of devices required to perform different tasks. One should keep in mind that every space mission includes several phases. The first phases deal with long distances whilst the latter ones are concerned with short-range tasks. For instance, NASA's Curiosity mission began by taking pictures of the Martian environment in order to locate suitably flat areas for landing. The scale of this process can range from several hundred metres right up to a kilometre. Subsequent tasks must be performed on smaller scales. Our team is interested in finding out whether or not these different tasks can be performed by the same camera. If we can develop sensors capable of carrying out more tasks, we can minimise the amount of equipment needed for space missions."
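The time-of-flight principle Schumann-Olsen describes is simple at heart: the camera emits a pulse, times its round trip to the target and back, and converts that time to distance using the speed of light. A back-of-envelope sketch (the function name and example timing are illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance (metres) to a reflecting target, given the measured
    round-trip time of an emitted light pulse. The factor of two
    accounts for the pulse travelling out and back."""
    return C * round_trip_s / 2.0

# A pulse returning after ~667 nanoseconds corresponds to roughly 100 m.
print(round(tof_distance(667e-9), 1))
```

Because the measurement scales linearly with time rather than depending on geometric baselines, the same principle serves at both kilometre and centimetre ranges, which is what makes a single camera for all mission phases plausible.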
Once they have delivered their report to ESA, the scientists will begin work on a prototype 3D camera incorporating the most promising technologies from their findings. Schumann-Olsen believes that these technologies could achieve practical applications far beyond the field of space science.
"3D cameras could be used to perform numerous industrial tasks," he concluded. "Obviously, many industrial tasks have to be performed outdoors. However, most of today’s 3D cameras are suitable for indoor use only. One of the most difficult problems that these cameras encounter when being used outside is that posed by sunlight. As I mentioned, this can be especially problematic in space. The sun will always be brighter than the active lighting systems used by 3D cameras. Most of the systems that are commercially available at the moment use a combination of weak illumination sources with long integration times. Whilst some adapt to sunlight by conducting differential signal detection and through the application of filters, we are also investigating other ways to circumvent these problems. One approach would be to employ very short laser pulses and integration times. Indeed, new sensors reaching the market are pursuing similar avenues."
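The trade-off Schumann-Olsen raises between weak sources with long integration times and short pulses with short gates can be sketched numerically. Sunlight contributes background photons in proportion to how long the sensor integrates, while a laser pulse delivers its photons in one fixed burst, so shortening the gate suppresses the solar background without losing signal. All numbers and names below are illustrative assumptions of mine, not measured values:

```python
def signal_to_background(pulse_photons: float,
                         solar_rate_photons_per_s: float,
                         gate_s: float) -> float:
    """Ratio of photons from the active pulse to solar background
    photons collected during one integration gate. The pulse energy is
    fixed; the background grows linearly with the gate length."""
    return pulse_photons / (solar_rate_photons_per_s * gate_s)

# Same pulse, same sunlight; only the gate length changes.
long_gate = signal_to_background(1e4, 1e12, 1e-6)   # 1 microsecond gate
short_gate = signal_to_background(1e4, 1e12, 1e-8)  # 10 nanosecond gate
print(long_gate, short_gate)  # the shorter gate improves the ratio 100x
```

This is why the short-pulse approach mentioned above is attractive outdoors and in space: the sun cannot be out-shone, but it can be out-gated.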