MADISON, Wis. — Capturing “events” in images by using a bio-inspired approach sounds not just cool but downright futuristic. But how many developers have actually witnessed event-based machine vision technology at work?
Most developers have heard or read about it, and they might be curious. But they're stuck on the sidelines without hands-on experience with this novel non-frame-based machine vision technology.
Prophesee, a Paris-based startup, wants these spectators in the game. The company is rolling out this week a first-of-its-kind reference system for vision system developers to try, test and understand how neuromorphic vision works.
Initial users targeted for Prophesee’s reference design will be “developers of industry automation machines and robotics,” Luca Verre, CEO of Prophesee, told EE Times. “Our customers in automotive and IoT systems will also find the reference system useful to characterize [Prophesee’s] sensor performance.” Researchers working at R&D labs and universities will also benefit. But the real key here, Verre explained, is that the reference system will trigger the creation of “an ecosystem” around Prophesee’s event-driven, non-frame-based vision systems. “Unless there is an ecosystem for it, there will not be a revolution.”
Prophesee’s Onboard reference system will contain a VGA-resolution camera integrated with Prophesee’s ASIC, Qualcomm’s quad-core Snapdragon processor running at 1.5GHz, a 6-axis inertial measurement unit, and a range of connectivity options including USB 3.0, Ethernet, micro-HDMI, WiFi (802.11ac) and MIPI CSI-2.
The sensor is in a ¾-inch optical format, featuring a 15μm optical pitch. According to Prophesee, the vision sensor offers not only extremely fast vision processing but also a high dynamic range of more than 120 dB. It can capture extremely fast motions, “thanks to its sub-millisecond temporal resolution,” the company said.
Prophesee claims high power efficiency and a system latency of “below 10ms.” The company previously told us that its sensor operates at less than 10 mW.
Different from traditional frame-based image sensors
For decades, machine-vision system designers have depended on their knowledge and experience of how conventional image sensors capture visual information. Traditional sensors are designed to function at a predetermined frame rate regardless of dynamic scene changes, while each frame conveys information from all pixels, uniformly sampling them at the same time.
Efforts to improve the performance of machine vision systems have therefore focused on higher frame rates and higher resolutions enabled by new image sensors.
Prophesee believes it’s time to rethink the paradigm.
Unlike traditional frame-based cameras, each pixel in Prophesee’s asynchronous time-based image sensor decides independently to sample parts of a scene at different rates. “Each pixel individually controls its sampling — with no clock involved — by reacting to light, or changes in the amount of incident light it receives,” Christoph Posch, Prophesee’s CTO, once explained to EE Times. As a result, Prophesee’s sensor selects only the most useful and relevant elements of a scene. This cuts power, latency and data processing demands imposed by traditional frame-based systems, according to the company.
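The change-detection principle Posch describes can be illustrated in code. The sketch below is a hypothetical frame-based simulation of an event pixel: each pixel fires an ON or OFF event only when its log intensity changes by more than a threshold (the threshold value and the `generate_events` function are illustrative assumptions, not Prophesee's actual implementation, which is clockless and operates per pixel in hardware).

```python
import numpy as np

# Assumed contrast threshold: minimum log-intensity change that fires an event.
CONTRAST_THRESHOLD = 0.15

def generate_events(prev_log, frame, t):
    """Emit (timestamp, x, y, polarity) events for pixels whose log
    intensity changed by at least CONTRAST_THRESHOLD since they last fired.
    prev_log is updated in place, mimicking each pixel's stored reference level."""
    log_i = np.log(frame.astype(np.float64) + 1e-6)
    diff = log_i - prev_log
    ys, xs = np.nonzero(np.abs(diff) >= CONTRAST_THRESHOLD)
    polarities = np.sign(diff[ys, xs]).astype(int)  # +1 = brighter, -1 = darker
    # Only pixels that fired update their reference level.
    prev_log[ys, xs] = log_i[ys, xs]
    return [(t, int(x), int(y), int(p)) for x, y, p in zip(xs, ys, polarities)]
```

A static scene produces no events at all, while a single changing pixel produces exactly one event — which is why event streams carry far less redundant data than full frames.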
Prophesee’s sensors are based on what the world has already learned about neuromorphic vision. Human eyes and brains do not record visual information in a series of frames. “Humans capture the stuff of interest — spatial and temporal changes — and send that information to the brain very efficiently,” Ryad Benosman, Prophesee’s co-founder, once told EE Times.
With an image sensor not bound by frames, Verre explained, “Our technology will not have to miss important events that might have happened between frames.”