Day 2 at the Embedded Vision Summit 2016

Good grief! I can’t believe that we're already deep into Day 2 of the Embedded Vision Summit (see also Day 1 at the Embedded Vision Summit). This morning's keynote — Using Vision to Enable Autonomous Land, Sea, Air, and Space Vehicles — was given by Larry Matthies from NASA's Jet Propulsion Laboratory.

All I can say is that this man could give my mother a run for her money in the high-speed speaking Olympics. I don’t think he drew a breath throughout his hour-long presentation. I'm still in a daze. It was like being hit in the brain with a high-pressure stream of information ranging from all-terrain autonomous vehicles to underwater autonomous vehicles to the automatic assembly of structures in space to autonomous rotorcraft flying around Mars and Titan — and that was just the first five minutes.

This was followed by Computer Vision in Cars: Status, Challenges, and Trends, which was presented by Marco Jacobs of videantis GmbH. Amongst all sorts of incredibly useful information was one nugget of knowledge that stuck in my mind — the fact that when 90% of cars become fully autonomous, one effect will be to double the road capacity, which means that a 2-lane road will be able to carry the same number of cars as a 4-lane road, while a 4-lane road will be able to carry the same number of cars as an 8-lane equivalent.

The next two sessions I attended were Fast Deployment of Low-Power Deep Learning on CEVA Vision Processors, which was given by Yair Siegel from CEVA, and Accelerating Deep Learning Using Altera FPGAs, which was given by Bill Jenkins from Intel.

In my spare time, I've been roaming around the Vision Technology Showcase chatting with the various exhibitors and exclaiming “Ooh, Shiny!” more times than I care to mention. I've seen Deep Neural Networks (DNNs) identifying objects like cars and bicycles and pedestrians in real-time before, but the technology seems to be getting more accurate and more powerful day by day.

As opposed to simply drawing rectangles around the identified objects, some systems also added distance annotations, where these distances may be derived from stereoscopic vision and/or by extracting structure from motion. Also, some systems colored all of the pixels identified as belonging to a particular object, like the image below where cars are colored green and pedestrians are colored red.
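For readers wondering how a stereoscopic system turns pixels into distances, the core relationship is simple: depth is proportional to the camera focal length times the baseline between the two cameras, divided by the disparity (how far the object shifts between the left and right views). Here's a minimal sketch of that calculation — all the numbers are made up for illustration and don't come from any of the demos on the show floor:

```python
# Illustrative sketch: estimating distance from stereo disparity.
# Standard pinhole stereo relationship: Z = (focal_length * baseline) / disparity.
# All values below are invented for demonstration purposes.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the distance (in meters) to a point seen by both cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return (focal_px * baseline_m) / disparity_px

# Example: a 700-pixel focal length, a 12 cm camera baseline, and an
# object whose image shifts 20 pixels between the left and right views:
distance = stereo_depth(focal_px=700.0, baseline_m=0.12, disparity_px=20.0)
print(f"{distance:.2f} m")  # prints "4.20 m"
```

Note that depth falls off as disparity shrinks, which is why stereo distance estimates get noisier for far-away objects.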

(Source: Max Maxfield)

This is really rather clever. Take a look at the image below (yours truly is the one on the right, taking the photograph). In this case, all of the pixels associated with people and their clothes are shaded red. Observe the guy on the left holding his backpack out. As he raised his arm, the system decided that the backpack was no longer associated with the person.

(Source: Max Maxfield)

As one further example, the system shown below didn’t recognize me because we hadn’t been formally introduced. It did, however, recognize and identify all the people working for that company when they came round to say hello.

(Source: Max Maxfield)

One of the most eye-catching demos, which can be seen in this video, is on the CEVA booth where they are demonstrating their CEVA Deep Neural Network (CDNN). This harnesses the power of the CEVA-XM4 imaging and vision DSP core to provide a low-power, low-memory-bandwidth, deep neural network solution (see also CEVA Accelerates Deep Neural Networks).

The way this works is that the small screen in the bottom right is displaying random images — the video camera in front of that screen is capturing these images and passing them to a Deep Learning Neural Network running on the CEVA-XM4 core, which — in this example — is implemented in a Xilinx FPGA. The Neural Network identifies each image and displays the result in the bottom of the larger screen in the upper left.
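In pseudocode terms, the loop described above is straightforward: grab a frame, run it through the network, display the winning label. Here's a toy sketch of that flow — the "network" below is just a random linear layer standing in for a trained model, and the labels are ones I've invented for illustration; the real CDNN runs a genuine deep network on the CEVA-XM4 core:

```python
import numpy as np

# Toy sketch of the classify-and-display loop: a frame arrives from a
# camera, a network produces per-class scores, and the top-scoring
# label is shown. The random weights here stand in for a trained model.

rng = np.random.default_rng(seed=42)
LABELS = ["car", "bicycle", "pedestrian", "traffic sign"]  # invented for illustration

# Stand-in "network": one linear layer mapping a flattened 64x64 frame
# to one score per label.
W = rng.standard_normal((len(LABELS), 64 * 64))

def classify(frame: np.ndarray) -> str:
    """Flatten the frame, score each class, and return the best label."""
    scores = W @ frame.reshape(-1)
    return LABELS[int(np.argmax(scores))]

frame = rng.random((64, 64))  # stand-in for a captured camera frame
print(classify(frame))        # prints one of the labels above
```

In the real demo, of course, the heavy lifting — the many layers of convolutions — happens on the dedicated vision DSP, which is what keeps the power and memory bandwidth down.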

As I've mentioned before, I don’t know whether to be excited or scared. The first Terminator movie came out in 1984, which is 32 years ago as I pen these words. At that time, the concept of a Skynet-type artificial intelligence taking over the world was purely in the realm of science fiction. Now I'm not so sure… What say you?
