Time-of-flight technology promises enhanced accuracy - Embedded.com


Analog Devices (ADI) and Microsoft have teamed up to produce time-of-flight (ToF) 3D imaging solutions with the goal of providing greater accuracy regardless of scene conditions. ADI will leverage Microsoft’s Azure Kinect 3D ToF technology and add its IC and system expertise to create solutions that are easier to adopt. The goal is to reach a broad audience in sectors such as Industry 4.0, automotive, gaming, augmented reality, and computational photography and videography.

Industry market analysts estimate strong growth for 3D imaging systems used in challenging environments and in cutting-edge applications, such as collaborative robots, room mapping, and inventory management systems, that are required to bring Industry 4.0 to life. ToF applications are also needed to create safer automotive driving environments with occupancy detection and driver monitoring capabilities.

In an interview with EE Times, Tony Zarola, senior director of enhanced imaging and interpretation, and Carlos Calvo, strategic marketing manager at Analog Devices, highlighted the foundations of this collaboration. Zarola said, “Microsoft has become the benchmark for 3D ToF performance across image sensor manufacturers and is providing ADI with the core pixel technology that is the foundation of the sensors and solutions that ADI is building. Over decades, they have developed the expertise required for the best data capture and revolutionary algorithms they run at the Intelligent Edge or on the Intelligent Cloud. We look forward to combining the best of Microsoft’s and ADI’s capabilities in silicon, systems, software and optics.”

3D ToF design

Gesture recognition is the ability of a device to identify a series of movements of the human body. The electronic implementation relies on a camera and an IC to identify and scan the scene into a 2D or 3D profile. The time-of-flight technique consists of sending a laser beam to the target and analyzing the reflected signal.

3D time-of-flight, or 3D ToF, is a type of scannerless LIDAR (light detection and ranging) that uses high-power optical pulses with nanosecond durations to capture depth information (typically over short distances) from a scene. The various IC solutions, with the help of gesture recognition software algorithms, create a depth map of the images received, responding in real time to body movements. The main advantage of gesture recognition technology is that no physical contact is necessary between the individual and the control system.
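In the pulsed method, depth follows directly from the round-trip time of the light pulse. A minimal numeric sketch (function name is ours, for illustration only):

```python
# Speed of light in m/s.
C = 299_792_458.0

def pulsed_tof_distance(round_trip_time_s: float) -> float:
    """Distance from the round-trip time of a reflected light pulse."""
    # The pulse travels to the target and back, so divide by 2.
    return C * round_trip_time_s / 2.0

# A pulse that returns after 20 ns corresponds to a target ~3 m away,
# which is why nanosecond-scale pulses suit short-range depth sensing.
d = pulsed_tof_distance(20e-9)  # ~2.998 m
```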

A ToF camera measures distance by illuminating an object with modulated laser light and using a sensor sensitive to the laser’s wavelength to capture the reflected light. The sensor measures the time delay between the moment the light is emitted and the moment the reflected light is received by the camera. There are several methods for measuring this delay, two of which have become common: the continuous-wave (CW) method and the pulsed method. The vast majority of ToF sensors are CW and use CMOS sensors.
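In the CW method, the delay is measured as a phase shift of the modulation rather than a direct time interval. A sketch of the standard phase-to-distance relation (function name is ours):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cw_tof_distance(phase_rad: float, f_mod_hz: float) -> float:
    """Distance from the phase shift of CW-modulated light.

    A full 2*pi phase lag corresponds to a round trip of one
    modulation wavelength, c / f_mod; distance is half the round trip.
    """
    return (C / (4.0 * math.pi * f_mod_hz)) * phase_rad

# A pi/2 phase shift at 100 MHz modulation -> ~0.375 m
d = cw_tof_distance(math.pi / 2, 100e6)
```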

There are many confounding factors that make time-of-flight (ToF) measurement hard: interfering ambient light, multipath effects caused by light bouncing off objects in the scene and corrupting the true distance, temperature effects, and range ambiguity. “The challenges scale from the silicon development to the creation of a full system that performs in alignment with the theoretical sum of the parts,” said Calvo. “It is impossible to look at each component in isolation. For example, a ToF camera with the best sensor but fitted with a non-optimized lens will have poor overall system performance.”

“On the surface, ToF cameras have similarities with RGB cameras. One key distinction is that, depending on the application, the RGB camera’s image quality is judged with a degree of subjectivity; other applications are only enabled through advanced post-processing. A ToF camera measures an objective physical quantity (distance) upon which a user, depending on the application, may place significant reliance on the accuracy of the image. Some optical imperfections in RGB cameras, such as lens flare, can produce artifacts that are sometimes considered artistic (e.g., sunlight flare). In a ToF system, lens flare, if not drastically minimized, can cripple the performance of the entire system in the presence of strong reflections from objects,” Calvo added.

The CMOS sensor used for time-of-flight consists of both an emitter and a receiver; it enables distance calculation at the single-pixel level with performance close to 160 fps.

“At the silicon level, one must consider the key elements of the signal chain: the laser driver, the ToF image sensor with integrated readout and finally the depth compute engine. The challenge begins with the design of a pixel with high responsivity and high modulation contrast in the image sensor itself and ends with the formation of a 3D point cloud that can be interpreted by the next application layer.

“Aside from the components, key challenges stem from the design and production of a depth camera as opto-mechanical design, calibration, electrical design and software implementation are all time-consuming and challenging. Analog Devices (ADI) takes on these challenges for our customers to ease their design process,” said Calvo.


Figure 1. ToF block diagram. (source: ADI)

The pixels collect light from distinct parts of the scene, and their recombination constitutes the reconstructed image. All sensor pixels are controlled by a correlation between demodulation and modulation blocks. Each pixel can be approximated by the model shown in Figure 2.

During the integration time, the photocurrent is steered to Node A (DA) or Node B (DB) by activating the relevant control signals. Readout takes place with demodulation stopped, so that the system can read the entire bit sequence. ClkA and ClkB are modulated 180 degrees out of phase for time tInt1 at the selected modulation frequency. The phase of the received light relative to ClkA and ClkB determines the DA and DB signals. At the end of the integration, ClkA and ClkB are switched off, and the readout phase samples the integrated signal (BitlineAInt1–BitlineBInt1).
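A common way such differential correlation samples are turned into a phase estimate is the four-phase scheme, where the demodulation clock is shifted by 0, 90, 180 and 270 degrees across captures; this is a generic sketch under that assumed convention, not necessarily the exact scheme of the pixel in Figure 2:

```python
import math

def phase_from_samples(a0, a90, a180, a270):
    """Recover the light's phase from four correlation samples taken with
    the demodulation clock shifted by 0/90/180/270 degrees.

    The differential pairs (a0 - a180, a90 - a270) cancel the common
    ambient-light offset, leaving only the modulated component.
    """
    return math.atan2(a90 - a270, a0 - a180)

# Synthetic check: model each sample as offset + amp*cos(phase - clk_shift).
def sample(phase, clk_shift, amp=1.0, offset=5.0):
    return offset + amp * math.cos(phase - clk_shift)

phi = 1.0  # true phase, radians
samples = [sample(phi, t) for t in (0, math.pi / 2, math.pi, 3 * math.pi / 2)]
recovered = phase_from_samples(*samples)  # ~1.0, despite the ambient offset
```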

The conversion of photons into electrical current is governed by a quantum process with a Poisson distribution. The figure of merit here is the quantum efficiency: the ratio of the number of electrons produced to the number of photons striking the corresponding pixels. The number of electrons depends both on the modulated light itself and on the ambient light, which contributes noise. Another parameter to choose when designing a time-of-flight system is the field of view (FoV), which must be set according to the coverage requirements of the scene.
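Because the conversion is Poisson, the noise on a mean of N collected electrons is sqrt(N), so ambient light degrades the measurement even when its average level is subtracted out. A rough illustration (function name and the 0.4 quantum-efficiency default are our assumptions, not ADI figures):

```python
import math

def shot_noise_snr(signal_photons, ambient_photons, quantum_efficiency=0.4):
    """Shot-noise-limited SNR for a ToF pixel.

    Electrons are generated from photons with the given quantum
    efficiency; both signal and ambient electrons contribute Poisson
    shot noise, but only the signal electrons carry depth information.
    """
    n_sig = quantum_efficiency * signal_photons
    n_amb = quantum_efficiency * ambient_photons
    return n_sig / math.sqrt(n_sig + n_amb)

# Adding strong ambient light lowers the SNR even though the
# signal level is unchanged.
indoor = shot_noise_snr(10_000, 1_000)
sunlit = shot_noise_snr(10_000, 100_000)
```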


Figure 2. Electrical circuit and timing for a pixel. (Source: https://ieeexplore.ieee.org/document/6964815  )

In order to achieve high efficiency at high frequencies, the chip can be fabricated by using a 0.13 μm mixed-signal low-power CMOS process with minor modifications to support efficient Time-of-Flight operation.

ADI and Microsoft

The collaboration between Microsoft and ADI aims to enhance ToF technology: ADI is designing a new series of 3D ToF image sensors that will provide sub-millimeter accuracy and will be compatible with the Redmond giant’s ecosystem, built on Microsoft’s depth, Intelligent Cloud and Intelligent Edge platforms.

“We strongly believe that this collaboration will impact all the major industries – consumer, industrial, healthcare and automotive. Up until now, the technology that was developed by Microsoft was not broadly available for scaled commercial applications. We believe the ADI solutions, powered by Microsoft’s imager technology, will be a game-changer across the board,” said Zarola.

Obvious applications include security systems with enhanced facial recognition, along with advanced safety measures for more efficient factory automation. Industry 4.0 will be transformed by collaborative robots working safely alongside humans without being fenced off in a “no-human” area, and further optimization in logistics will be enabled by ToF accuracy for box and pallet dimensioning.

Zarola added, “More sophisticated occupancy detection will lead to improved energy efficiencies, safety systems and human-machine interactions. From the home to the car, our ToF collaboration with Microsoft will give new gaming experiences the ability to place and interact with virtual objects in the real world and change how we interact with our cars and increase safety by monitoring the driver and passengers alike. The potential use cases for ToF technology are broad and evolving, so the main applications of today are expected to be superseded by new ideas tomorrow.”

Zarola and Calvo said that their customers want millimeter depth resolution and fine spatial resolution over a wide range of temperatures. Achieving this kind of performance requires an extreme level of time synchronization at both the hardware and software levels. “A 10ps timing misalignment between the signals controlling the laser and the pixels in the sensor results in a 1.5mm error in the final distance estimate. If that was not difficult enough, add to that the need for keeping a Time-of-Flight system accurate over a wide range of temperatures that requires advanced processing and calibration algorithms that need to be designed jointly,” said Calvo.
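Calvo’s 10 ps figure follows directly from the speed of light: a timing error translates to half that interval’s light travel, because the light covers the round trip. A quick check (function name is ours):

```python
C = 299_792_458.0  # speed of light, m/s

def depth_error_from_timing(timing_error_s: float) -> float:
    """Depth error caused by a timing misalignment between the laser
    drive and the pixel demodulation clocks."""
    # Halved because the measured time covers the round trip.
    return C * timing_error_s / 2.0

# 10 ps of misalignment -> ~1.5 mm of depth error, matching the quoted figure.
err_m = depth_error_from_timing(10e-12)  # ~0.0015 m
```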

A specific limitation of ToF systems, and the area on which ADI is most concentrated, is the modulation frequency at which they can operate. Most sources of depth-estimation error tend to be “divided” by the modulation frequency. “We aim to raise the average modulation frequency of a ToF system, which will allow measurements with lower depth noise and reduce the adverse effects of multipath or ambient-light shot noise,” said Zarola.
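The trade-off behind raising the modulation frequency is range ambiguity: the CW phase wraps around every 2*pi, so the maximum unambiguous distance shrinks as the frequency rises. A sketch of that relation (function name is ours; the multi-frequency remark reflects common CW ToF practice, not a stated ADI design choice):

```python
C = 299_792_458.0  # speed of light, m/s

def ambiguity_range(f_mod_hz: float) -> float:
    """Maximum unambiguous distance for a CW ToF modulation frequency.

    Beyond this range the measured phase wraps past 2*pi and aliases
    back to a shorter apparent distance.
    """
    return C / (2.0 * f_mod_hz)

# Higher modulation frequency lowers depth noise but shortens the
# unambiguous range, which is why systems often combine several
# frequencies to disambiguate.
r_50mhz = ambiguity_range(50e6)    # ~3.0 m
r_200mhz = ambiguity_range(200e6)  # ~0.75 m
```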

Zarola added, “ADI is also looking to solve key challenges that make designing and producing depth cameras time-consuming and difficult. We are taking on the mechanical alignment, optical design, calibration, electrical design and software implementation along with the traditional obstacles in image capture.”

The combination of Microsoft’s time-of-flight (ToF) 3D technology, used in HoloLens mixed reality devices and the Azure Kinect development kit, with ADI’s custom solutions will enable a new generation of high-performance applications to be implemented and scaled, all while optimizing time-to-market.

>> This article was originally published on our sister site, EE Times.

 

