Human-Like Intelligent Embedded Vision from CEVA

I honestly believe we are poised on the brink of a major evolution in embedded systems technology involving embedded vision and embedded speech capabilities. (It goes without saying that one of the best venues to learn about the state of the art in anything to do with the embedded space is — of course — one of the forthcoming Embedded Systems Conferences (ESC).)

One example I often use to describe what I'm talking about is that of a humble electric toaster in the kitchen. Actually, that reminds me of a rather amusing story that really happened to me last year…

Some time ago, I gave my dear 84-year-old mother an iPad. Initially she was a tad trepidatious, but she quickly took to it like a duck to water. On one of my subsequent visits to see her in the UK, I gave her an Amazon gift card and told her she could use it to order something with her iPad.

“What shall I buy?” she asked. “What do you want?” I replied. Well, it turned out that she was interested in having a matching electric kettle and electric toaster, so I showed her how to search for them on Amazon. She had a great time rooting around all the various offerings, and eventually she selected and ordered the toaster-kettle combo of her dreams.

After she'd placed her order, I asked: “What would grandma {my grandma; her mother} have thought about all of this — seeing you sitting there ordering an electric kettle and electric toaster over the Internet?”

Now, you have to remember that my mother didn’t even get electricity into her house until 1943, when she was 15 years old — prior to that, they had a coal fire for heating and gas mantles for lighting their little terraced home.

My mother's response really made me think when she said: “Your grandmother wouldn’t have understood anything about the iPad or the wireless network or the Internet — what would have really gotten her excited would have been the thought of an electric toaster and an electric kettle!”

It makes you think, doesn’t it? But we digress…

Returning to my example of a humble electric toaster in the kitchen, let's suppose the little scamp were equipped with embedded vision and embedded speech capabilities, and let's then envision the following scenario. It starts when I walk up to the toaster and insert two slices of wheat bread. If this is the first time I've used it, the toaster might enquire: “Good morning, it's my pleasure to serve you; can you please tell me your name?” To which I might reply: “You can call me Max the Magnificent,” or something of that ilk. The toaster then asks: “How would you like this to be prepared?” And I might reply: “Reasonably well done, if you please.”

When the toast subsequently emerges, the toaster might say: “How's that for you?” And I might reply: “Pretty good, but perhaps just a tad darker next time.”

Sometime later, my son, Joseph, meanders into the kitchen, drops two slices of bread into the toaster, and an equivalent dialog takes place; similarly for my wife (Gina the Gorgeous).

It may be that the following day we each wish to toast some bagels, or perhaps some English muffins, and — over time — the toaster will become acquainted with our varying preferences for each item.

The whole point of this is that, in the future, the toaster can use its embedded vision to determine (a) who is doing the toasting and (b) the type of food being toasted. Based on this information, it can give each user the toasting experience of their dreams.
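Just for fun, the preference-learning part of this scenario can be sketched in a few lines of code. Everything here is invented for illustration — a hypothetical `ToasterMemory` class that maps (user, food type) pairs, as identified by the embedded vision system, to a learned browning level, and nudges that level when the user gives feedback like “a tad darker next time”:

```python
# Hypothetical sketch only: how a vision-enabled toaster might remember
# each user's browning preference for each food type. All class and
# method names here are invented for illustration.

class ToasterMemory:
    def __init__(self, default_level=5):
        self.default_level = default_level  # mid-range browning on a 1-10 scale
        self.preferences = {}               # (user, food) -> learned browning level

    def toast_level(self, user, food):
        """Look up the learned level for this user/food pair, else the default."""
        return self.preferences.get((user, food), self.default_level)

    def feedback(self, user, food, adjustment):
        """Nudge the stored preference, e.g. +1 for 'a tad darker next time'."""
        level = self.toast_level(user, food) + adjustment
        self.preferences[(user, food)] = max(1, min(10, level))


memory = ToasterMemory()
memory.feedback("Max the Magnificent", "wheat bread", +1)  # "a tad darker"
print(memory.toast_level("Max the Magnificent", "wheat bread"))  # 6
print(memory.toast_level("Gina the Gorgeous", "bagel"))          # 5 (default)
```

In this sketch, the vision system supplies the `user` and `food` arguments; the dictionary lookup is all the "learning" a toaster would really need.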

Furthermore, should the occasion arise that my wife is making me breakfast in bed, for example (hey, it doesn’t hurt to dream), then — as opposed to giving me something inedible as is her wont — she could leave the tricky part up to the toaster and say something like: “Can you make this just the way Max likes it?”

I'm sorry… I got carried away dreaming of breakfast in bed in general, and one I actually wanted to eat in particular. O, what a frabjous day that would be! But, once again, we digress…

The reason I'm waffling on about this here is that the folks at CEVA have just announced the availability of their new CEVA-XM4 imaging and vision processor IP, which will enable real-time 3D depth map and point cloud generation; deep learning and neural network algorithms for object recognition and context awareness; and computational photography for image enhancement, including zoom, image stabilization, noise reduction, and improved low-light capabilities.

This fourth-generation imaging and vision IP is equipped with the functionality required to solve the most critical challenges associated with implementing energy-efficient human-like vision and visual perception capabilities in embedded systems.

The CEVA-XM4 boasts a programmable wide-vector architecture, with fixed- and floating-point processing, multiple simultaneous scalar units, and a vision-oriented low-power instruction set, resulting in tremendous performance coupled with extreme energy efficiency.

According to CEVA:

The new IP’s capabilities allow it to support real-time 3D depth map generation and point cloud processing for 3D scanning. In addition, it can analyze scene information using the most processing-intensive object detection and recognition algorithms, ranging from ORB, Haar, and LBP all the way to deep learning algorithms that use neural network technologies such as convolutional neural networks (CNNs). The architecture also features a number of unique mechanisms, such as parallel random memory access and a patented two-dimensional data processing scheme. These enable 4096-bit processing — in a single cycle — while keeping the memory bandwidth under 512 bits for optimum energy efficiency. In comparison to today’s most advanced GPU cluster, a single CEVA-XM4 core will complete a typical ‘object detection and tracking’ use-case scenario while consuming approximately 10% of the power and requiring approximately 5% of the die area.
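To give a feel for the kind of arithmetic that dominates the CNN-style algorithms CEVA mentions, here is the core operation — a 2D convolution — in plain NumPy. This is purely illustrative of the math, not of the XM4's implementation; a vision DSP would execute many of these multiply-accumulates per cycle in fixed point rather than looping in Python:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a kernel over an image ('valid' mode, i.e. no padding).

    Each output pixel is the sum of elementwise products between the
    kernel and the image window beneath it -- the multiply-accumulate
    pattern that convolutional neural networks repeat millions of times.
    """
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A simple vertical-edge kernel applied to a tiny synthetic "image":
# the response is large only where the brightness jumps left-to-right.
image = np.array([[0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9]], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
print(conv2d_valid(image, edge_kernel))
# [[ 0. 18.  0.]
#  [ 0. 18.  0.]]
```

The claimed single-cycle 4096-bit processing essentially means performing many of these window-times-kernel products in parallel instead of one at a time.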

Of course, having household appliances like toasters equipped with embedded vision and embedded speech capabilities might end up being a bit of a “two-edged sword,” as it were. Did you ever see the British science fiction TV comedy series Red Dwarf? There was a classic 'Does Anyone Want Any Toast?' scene that will live in my memory forever.

How about you? What do you think about the prospects of embedded systems that can spot you walking by and bring you up to date with what the washing machine said to the tumble dryer? Are we poised to enter an exciting new world… or a recluse's nightmare?

— Max Maxfield, Editor of All Things Fun & Interesting
Circle me on Google+


Join over 2,000 technical professionals and embedded systems hardware, software, and firmware developers at ESC Boston May 6-7, 2015, and learn about the latest techniques and tips for reducing time, cost, and complexity in the development process.

Passes for the ESC Boston 2015 Technical Conference are available at the conference's official site, with discounted advance pricing until May 1, 2015. Make sure to follow updates about ESC Boston's other talks, programs, and announcements via the Destination ESC blog on Embedded.com and social media accounts Twitter, Facebook, LinkedIn, and Google+.

The Embedded Systems Conference, EE Times, and Embedded.com are owned by UBM Canon.

10 thoughts on “Human-Like Intelligent Embedded Vision from CEVA”

  1. “@Max: I think the best use for intelligent embedded vision will be managing my water cannon to lock onto and knock down those pesky drones that all the visionaries seem to want to fly all over us in the future.”

  2. “@crusty: I may have a solution for you that doesn't entail any hi-tech at all. I was visiting my cousin in South Florida, who is a professional exterminator. I casually mentioned a problem with gnats being attracted to my kitchen compost bin. The next day,

  3. “Oh, and I forgot to mention my surprise that someone who chose the name “crusty” made no comment at all about toast or the toaster….”

  4. “That would be cool. You just need to arm the hexacopter with one of those autonomous nerf-gun turrets to protect against the bad hexacopters.”

  5. “Back on topic — I don't think the all-seeing / all-hearing toaster is very far off. When a few key components that today reside in a smart phone get down a bit in cost, it will be very doable in high-end toasters. I recently discovered the Google tr

  6. “I'm pretty sure I don't want my kitchen appliances to be anything like a human. I'm also pretty sure I might feel more than a little discomfort if I constantly think that something might be watching me. I might tolerate behaviour similar to Marvin the P

  7. “I agree, Duane — things really are moving very fast in this area — there's that company called Sensory that has always-on speech recognition technology that works even in a crowded room/noisy environment. I think things are poised to start changing ver

  8. “Re your point that you might feel more than a little discomfort if you constantly think that something is watching you: I have that feeling already — I'm married LOL. I agree that this is all going to take some getting used to. I don't like the ide
