Choosing solutions for edge AI

While it’s clear what’s pushing AI to the edge, there are a number of different ways to implement AI at the edge. For engineers, choices are always welcome, but when it comes to AI, it’s not always obvious which processors and software are optimal for different applications. The Embedded Vision Summit might be the right place to start for clarity.

The event is renowned for being a great place to see real-life demos, as well as for informative sessions on what is both possible and practical today with embedded computer vision and edge AI.

2020’s Embedded Vision Summit will be virtual, but as in previous years, the topics will span all kinds of systems that extract meaning from video and images. This sector has recently become dominated by AI and machine learning techniques, and as you’d expect, the program reflects that, featuring many AI chip makers and other AI industry experts.

The conference runs September 15-25, with presentations screened live on Tuesdays and Thursdays and other information and demos available in between. Here are a few highlights to look out for from among the 75+ sessions and 50+ exhibitors on the virtual hall floor.

David Patterson (Image: UC Berkeley)

Distinguished keynote

The show’s keynote speaker this year is David Patterson from UC Berkeley. Patterson, one of the original inventors of RISC computing and the RISC-V Foundation vice-chair, will speak about how the rise of AI is placing unprecedented demands on processors. These demands are creating an opportunity for domain-specific processors, which can be dramatically more efficient. This is achieved by tailoring the chip specifically for the nature of AI inference workloads: large-scale matrix multiplication of low-precision numbers.

Patterson has been working with Google on its tensor processing unit (TPU) design — used for both hyperscale and edge AI compute — and will use this to illustrate the performance and efficiency that domain-specific processing enables. Overall, this change in approach is allowing engineers to utilize deep neural networks in cost- and power-constrained systems like never before.
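To make concrete what “large-scale matrix multiplication of low-precision numbers” means, here is a minimal NumPy sketch of the int8-multiply, int32-accumulate pattern that accelerators such as the TPU implement in hardware. The shapes and quantization scales below are illustrative assumptions, not tied to any particular chip.

```python
import numpy as np

def quantize(x, scale):
    """Map float values to int8 using a simple symmetric scale."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

# Illustrative activation and weight matrices (shapes are arbitrary)
rng = np.random.default_rng(0)
acts = rng.standard_normal((4, 64)).astype(np.float32)
weights = rng.standard_normal((64, 8)).astype(np.float32)

a_scale, w_scale = 0.05, 0.02  # per-tensor scales (assumed values)
a_q = quantize(acts, a_scale)
w_q = quantize(weights, w_scale)

# The accelerator's core op: int8 x int8 multiplies, int32 accumulation
acc = a_q.astype(np.int32) @ w_q.astype(np.int32)

# Dequantize the int32 accumulator back to float
result = acc.astype(np.float32) * (a_scale * w_scale)

# Close to the float32 reference, at a fraction of the memory and energy cost
print(np.max(np.abs(result - acts @ weights)))
```

Because every operand is 8 bits rather than 32, a chip built around this pattern can pack far more multipliers into the same silicon area and power budget, which is the efficiency argument Patterson’s keynote makes.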

AI silicon

Indeed, several sessions will be presented by just a few of the dozens of companies that have developed ASIC designs for various edge AI niches.

The Hailo-8 achieves 26 TOPS with notable power efficiency of 2.8 TOPS/W (Image: Hailo)

Hailo launched its Hailo-8 deep learning processor at last year’s Embedded Vision Summit. This year, the company will present lessons learned from real-world applications of the Hailo-8, including video analytics, industrial inspection and smart cities.

Perceive’s chip, Ergo, is designed to allow audio and video processing on the same device while running at under a watt. The company’s presentation will cover running modern neural networks at speed on battery-powered hardware.

Aside from ASICs, many other types of compute are suited to edge AI for computer vision applications. These include DSPs, whose highly parallel nature is a good fit for matrix multiplication operations. While DSPs are most often used for audio AI such as voice recognition (where there is an obvious synergy), they also have some interesting applications in lower-resolution visual AI where the power constraints are very tight (people counting, building control, etc.), despite their modest performance.

At this year’s Embedded Vision Summit, Cadence will present its range of Tensilica edge AI processing IP, including its popular HiFi DSP IP, which now supports Google’s TensorFlow Lite. The company will also host an “expert bar” session where its experts will answer attendees’ questions about its various DSP IP products, as well as its specialized DNA processor IP, which is designed for AI processing.
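As a rough illustration of the workflow that TensorFlow Lite support targets, here is how a quantized .tflite model is run with the standard Python interpreter; on a HiFi DSP, the same model file would be executed by the vendor’s optimized kernels instead of on a host CPU. The model file name below is a placeholder, not a real artifact.

```python
import numpy as np
import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

# "model.tflite" is a placeholder for any quantized TensorFlow Lite model
interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy frame matching the model's expected shape and dtype
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()

print(interpreter.get_tensor(out["index"]))
```

The appeal of this format for chip vendors is that the .tflite file is the portable contract: the same model a developer validates on a PC can be handed to a DSP toolchain for deployment.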

Some types of FPGAs are also well matched to visual AI at the edge. FPGAs are particularly suited to 1-bit math (an FPGA’s lookup table is essentially a 1-bit MAC), and some areas of cutting-edge AI development are reducing precision to 1 bit to shrink memory footprint and power consumption (binarized neural networks).
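To see why lookup tables map so well onto binarized networks, here is a minimal Python sketch of a binarized dot product: with weights and activations constrained to ±1 and packed as bits, a whole multiply-accumulate collapses into an XOR (equivalently, an XNOR) and a popcount. The packing scheme and vector size are illustrative assumptions.

```python
import numpy as np

def pack_bits(x):
    """Encode a vector of +/-1 values as an integer bitmask (+1 -> 1, -1 -> 0)."""
    bits = 0
    for i, v in enumerate(x):
        if v > 0:
            bits |= 1 << i
    return bits

def binarized_dot(a_bits, b_bits, n):
    """Dot product of two +/-1 vectors from their bitmasks.
    Matching bits contribute +1 and differing bits -1, so
    dot = n - 2 * popcount(a XOR b)."""
    return n - 2 * bin(a_bits ^ b_bits).count("1")

rng = np.random.default_rng(0)
n = 64
a = rng.choice([-1, 1], n)
w = rng.choice([-1, 1], n)

# Matches the ordinary floating-point dot product exactly
assert binarized_dot(pack_bits(a), pack_bits(w), n) == int(np.dot(a, w))
print(binarized_dot(pack_bits(a), pack_bits(w), n))
```

On an FPGA, the XOR-and-popcount step is exactly the kind of wide bitwise operation lookup tables implement natively, which is why binarized networks can run there at very low power.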

Lattice will present its small, low-power, low-cost FPGAs at the Embedded Vision Summit. These devices are intended to be added to a system as co-processors to enable visual AI. The company will present hand gesture classification, human detection and counting, face identification and other applications implemented on one of its devices. Lattice has developed a whole software stack to abstract away all the tricky parts of implementing AI on its devices, which will also be discussed.

Lattice will also host an “expert bar” on low-power AI where they will answer questions about how much AI can be done on their FPGAs, and at what power budget.

TinyML

TinyML is the field dedicated to running AI on microcontrollers and other tiny compute devices. The opportunity presented by this field is massive, and the technology is developing quickly.

Don’t miss the panel discussion chaired by Google’s Pete Warden, a leading authority on this subject, with speakers from Microsoft, M12, Perceive, and OctoML. The panel will discuss the critical technology gaps that are still to be filled.


Eta Compute will demonstrate people counting at under 5 mW on the company’s ECM3532 SoC (Image: Eta Compute)

Eta Compute, one of the exhibitors, has developed an ultra-low-power SoC for AI in IoT devices such as smart sensors, which it will show off in several demonstrations.

The company will demo person detection and people counting at just a few milliwatts: it has developed several visual AI algorithms which, combined with its silicon, cut power consumption to tiny levels. Its demos will show power-efficient CIFAR-10 image classification, person detection at 3 mW, and people counting at less than 5 mW.

Sensors

Outside of AI and processing, sensor technologies that enable cutting-edge vision applications will also be shown off at the Embedded Vision Summit.

As CMOS image sensors become SoCs and their power consumption and cost steadily fall, a broader range of systems can be given the gift of sight. Action camera maker GoPro will present a practical guide to building a vision system using modern CMOS image sensors.

There will also be a panel discussion on the future of image sensors, chaired by industry expert Shung Chieh from Solidspac3 with participants from Aurora, OmniVision, Applied Materials and the University of Pittsburgh. The panel will share its views on the future of image sensors and explore some of the key trends in this area, such as integrating processors on sensor die, neuromorphic sensing and improvements in hyperspectral imaging.

Arrow Electronics will demonstrate an AI-based people-monitoring proof of concept using Analog Devices’ 3D time-of-flight (ToF) sensor dev kit; ToF sensors preserve privacy in social distancing detection and occupancy management applications. This session is a tutorial on getting started with the dev kit the system is based on. (Full disclosure: Arrow is the parent company of EE Times’ publisher, AspenCore Media.)

Applications

The Embedded Vision Summit will also showcase cutting-edge applications for today’s computer vision.

From an oceanographic institute presenting its method of removing the visual effects of water from underwater images to make them easier to process, to a computer-vision-based personal trainer that runs on your phone, there is something for everyone here.


John Deere’s presentation will explain how they use computer vision and AI to improve efficiency and quality in agriculture (Image: John Deere)

John Deere, for example, will talk about the opportunities agriculture presents for image processing and AI. The company will discuss how it uses computer vision at huge scale to improve efficiency and quality, and how it has commercialized its image processing systems, which demands a high degree of consistency across components. The presentation will cover the unusual requirements of agricultural vision systems, the challenges these create, and how John Deere has solved them.

You can register to attend the conference here.

>> This article was originally published on our sister site, EE Times.