LONDON — Films made in 3D could become more impressive as a result of work being carried out by a De Montfort University Leicester (DMU) researcher.
Dr Cristian Serdean is exploring an alternative way of creating high-quality 3D from 2D stereoscopic images. Stereoscopic images are created by filming two sets of footage of the same subject from slightly different angles, corresponding to the viewer’s left and right eyes.
Dr Serdean has been awarded a £182,693 grant under the Engineering and Physical Sciences Research Council’s First Grant Scheme to fund the work.
Traditionally, the 3D effect is achieved by shooting stereoscopic images and then merging them for display purposes. The resulting film is seen in 3D with the aid of special glasses designed to pass the correct image to each eye, which the brain then processes into 3D information.
This method, Dr Serdean believes, is often inefficient, expensive and inconvenient: it means storing and transmitting two sets of footage, and it requires the viewer to wear special glasses. He hopes to perfect a different way of representing 3D data, built from a single set of footage containing the 2D view plus information about the depth of each pixel in the scene. This 3D data can then be viewed on autostereoscopic displays, which allow people to see the 3D effect without special glasses.
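A rough back-of-the-envelope illustration of why the 2D-plus-depth representation can be cheaper to store and transmit than two full video streams (the frame size and the 8-bit depth map are illustrative assumptions, not figures from the project):

```python
# Illustrative sizes for one 1920x1080 frame, uncompressed.
W, H = 1920, 1080
rgb_bytes = W * H * 3                  # one 24-bit colour frame

stereo_pair = 2 * rgb_bytes            # traditional: two full colour frames
two_d_plus_depth = rgb_bytes + W * H   # alternative: one frame + 8-bit depth map

# The 2D-plus-depth variant needs roughly a third less raw data.
print(stereo_pair, two_d_plus_depth)
```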
The research will look at how to improve the complex process of extracting depth information from 2D stereoscopic video frames, a key step in the production of this type of 3D film.
Pixels are first turned into frequency coefficients using a mathematical function known as a transform. The coefficients are then used to find corresponding points between the two sets of footage in order to estimate the correct depth for each pixel.
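The transform-then-match idea can be sketched in a few lines of Python/NumPy. Note the assumptions: a one-level Haar transform and a simple sum-of-squared-differences search stand in here for the far more sophisticated multiwavelet machinery the project will actually investigate, and the function names are illustrative.

```python
import numpy as np

def haar2d(block):
    """One level of a 2D Haar transform: pairwise averages (low-pass)
    and differences (high-pass), first along rows, then columns."""
    lo = (block[:, 0::2] + block[:, 1::2]) / 2.0
    hi = (block[:, 0::2] - block[:, 1::2]) / 2.0
    rows = np.hstack([lo, hi])
    lo = (rows[0::2, :] + rows[1::2, :]) / 2.0
    hi = (rows[0::2, :] - rows[1::2, :]) / 2.0
    return np.vstack([lo, hi])

def match_disparity(left, right, x, y, size=4, max_d=8):
    """Find the horizontal shift of the block at (x, y) in `left` that
    best matches a block in `right`, comparing transform coefficients
    rather than raw pixels."""
    ref = haar2d(left[y:y + size, x:x + size])
    best_d, best_err = 0, np.inf
    for d in range(0, max_d + 1):
        if x - d < 0:
            break
        cand = haar2d(right[y:y + size, x - d:x - d + size])
        err = np.sum((ref - cand) ** 2)
        if err < best_err:
            best_d, best_err = d, err
    return best_d
```

On a synthetic pair where the right view is the left view shifted by a few pixels, the coefficient match recovers that shift, which is exactly the per-pixel displacement the depth estimation step needs.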
Dr Serdean will investigate whether a particular type of mathematical transform, known as a multiwavelet, can find the correspondence points between the two sets of footage more accurately.
“After HDTV, the next big revolution in home cinema is going to be 3D television, where accurate stereo to 3D conversion is an important enabling technology,” said Dr Serdean. “Traditional mathematical transforms used in 2D to 3D processing do not retain information about the pixels’ relationship in space, meaning that when the coefficients are displayed as an image, they no longer bear any resemblance to the original picture.”
“This can be a significant disadvantage which multiresolution transforms such as the wavelets and the much newer and under-researched multiwavelets can overcome.
“Wavelets have been successfully used in stereo imaging for a number of years, but they still have some limitations. Multiwavelets are more versatile and offer perfect localisation in both frequency and space while also correcting some of the drawbacks of wavelets.”
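The localisation point Dr Serdean makes can be seen in a toy comparison (a one-level Haar average is used below as the simplest stand-in for a multiresolution transform; the multiwavelets under study are considerably more sophisticated). The low-pass Haar coefficients form a recognisable half-size copy of the image, so spatial relationships survive the transform, whereas each Fourier coefficient mixes contributions from every pixel in the picture.

```python
import numpy as np

# A simple structured "image": a horizontal brightness gradient.
img = np.tile(np.arange(16.0), (16, 1))

# One-level Haar low-pass: 2x2 block averages. The result is a
# half-size copy of the image, so spatial structure is preserved.
low = (img[0::2, 0::2] + img[0::2, 1::2]
       + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

# A 2D Fourier transform of the same image: every coefficient is a
# weighted sum over ALL pixels, so no single coefficient corresponds
# to a particular location in the picture.
F = np.fft.fft2(img)

print(low.shape)  # (8, 8) — still a left-to-right gradient, at half size
```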
The two-year project will test the capabilities of multiwavelets and whether they can produce higher quality 3D footage than other methods.
“Finding correspondence points accurately is a critical stage of 2D to 3D conversion and it’s by far the most difficult part of this process,” added Dr Serdean. “One point from the left image will have a corresponding point in the right image, but due to the slightly different angles at which the two images were captured, the location of this point will be slightly displaced compared with the location in the left image.
“If we can find the accurate location of the corresponding point in the right image, then using the distance between the camera and the scene and the distance between the two corresponding points in the two images, we can calculate the depth for that point via triangulation.
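The triangulation step Dr Serdean describes is, for the standard parallel-camera setup, a one-line formula: depth Z = f·B/d, where f is the focal length in pixels, B the baseline between the two camera centres, and d the disparity (the displacement, in pixels, between the two corresponding points). A minimal sketch, with illustrative numbers that are assumptions rather than project figures:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Parallel-camera stereo triangulation: Z = f * B / d.
    A larger disparity means the point is closer to the cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed setup: 700 px focal length, 10 cm camera baseline.
print(round(depth_from_disparity(700, 0.10, 14), 2))  # 5.0 (metres)
print(round(depth_from_disparity(700, 0.10, 28), 2))  # 2.5 — nearer points shift more
```

This is why the accuracy of the correspondence search matters so much: any error in d feeds directly into the computed depth.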
“Identifying these stereo correspondence points more accurately will mean a significant step forward in stereo imaging, leading to higher 3D footage quality and the development of algorithms and processing tools that are able to work accurately with minimal human input.
“Ultimately, identifying stereo correspondence points more accurately constitutes yet another small step towards demystifying and getting closer to the ultimate image processing system, the human brain.”