The use of artificial intelligence (AI) in the form of artificial neural networks — in particular, deep neural networks (DNNs) — is poised to experience exponential growth in a wide variety of embedded systems, but who is going to define, create, and train these little scamps?
Before we plunge into the fray with gusto and abandon, it's worth noting that many people think of DNNs only in the context of computer/machine/embedded vision applications. In reality, however, these little rascals are applicable to a wide variety of tasks (see Deep learning hits a sweet note).
There are several steps involved in creating a DNN. The first is to define and implement the network architecture and topology. Next, the network undergoes a training stage, which is performed offline on a powerful computing platform using tens or hundreds of thousands of images (in the case of a machine vision application). The result is a floating-point representation of the network and its “weights” (coefficients). The final step is to take the floating-point representation of the network and its weights and transmogrify it into a fixed-point equivalent suitable for running on a target platform.
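That final float-to-fixed conversion step can be sketched in a few lines. The snippet below is a generic, illustrative example of symmetric Q1.15 quantization (a common 16-bit fixed-point format), not the actual conversion performed by any particular vendor's tool:

```python
# Illustrative sketch: converting trained floating-point weights to a
# 16-bit fixed-point (Q1.15) representation, as is done when deploying
# a network to an embedded target. Generic example only.

FRAC_BITS = 15                      # Q1.15: 1 sign bit, 15 fractional bits
SCALE = 1 << FRAC_BITS              # 32768

def to_fixed(w: float) -> int:
    """Quantize a float in [-1, 1) to a 16-bit signed integer."""
    q = int(round(w * SCALE))
    return max(-SCALE, min(SCALE - 1, q))   # saturate to the int16 range

def to_float(q: int) -> float:
    """Dequantize back to floating point (useful for error analysis)."""
    return q / SCALE

weights = [0.731, -0.482, 0.0059, -0.9997]   # hypothetical trained weights
fixed = [to_fixed(w) for w in weights]
recovered = [to_float(q) for q in fixed]

# The round-trip error is bounded by half an LSB (1/65536, about 1.5e-5),
# which is why a well-conditioned network survives the conversion.
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
print(fixed, max_err < 1 / SCALE)
```

Real conversion tools also have to pick per-layer scaling and requantize intermediate activations, but the core idea is this trade of precision for speed and memory.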
There have been a lot of interesting developments over the course of the past couple of years with regard to the sophistication of the networks, training the networks, and deploying the networks. For example, early deep learning frameworks supported only linear networks; by comparison, modern frameworks, like Google's TensorFlow, support more sophisticated topologies involving multiple layers per level and multiple inputs and outputs.
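To make the topology distinction concrete, here's a framework-agnostic sketch: a linear network is a simple chain, while a modern graph-style network can have branches that merge. The layer names and structure are hypothetical, purely for illustration:

```python
# Framework-agnostic sketch of the topology difference. A linear network
# is one path with no branches; a graph network is an arbitrary directed
# graph -- here, two input branches merging into one output. Hypothetical
# layer names, for illustration only.

linear_net = ["input", "conv", "relu", "fc", "output"]  # single chain

graph_net = {                       # node -> list of nodes feeding into it
    "camera_input": [],
    "sensor_input": [],
    "conv": ["camera_input"],
    "fc": ["sensor_input"],
    "merge": ["conv", "fc"],        # two branches join at one layer
    "output": ["merge"],
}

def input_count(net: dict) -> int:
    """Count source nodes (layers with no predecessors)."""
    return sum(1 for preds in net.values() if not preds)

print(input_count(graph_net))  # -> 2
```

An early linear-only framework simply couldn't express `graph_net`; frameworks like TensorFlow represent exactly this kind of dataflow graph.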
(Source: Max Maxfield / Embedded.com)
There have also been some interesting developments at the back-end of the process. For example, CEVA's Network Generator can take a floating-point representation of a network — Caffe-based or TensorFlow-based (any topology) — and transmogrify it into a small, fast, energy-efficient fixed-point equivalent targeted at the CEVA-XM4 intelligent vision processor (see Push-button generation of deep neural networks).
A big problem occurs at the front end of the process. Who is going to define and implement the architecture and topology of these networks? According to IDC Research, by 2018 at least 50% of developer teams are expected to want to embed cognitive services in their applications and systems, up from only 1% today. However, while there are more than 21 million developers around the world, there are currently estimated to be only around 18,000 data scientists who are capable of developing sophisticated AI technologies.
Working with AI needs to become far more accessible to the broader development community; developers shouldn't require an advanced degree in machine learning. The problem is that defining and implementing DNNs today — even when using sophisticated underlying systems like TensorFlow — is akin to programming in assembly language. What is required is the ability to raise the level of abstraction.
One innovative solution is the Bonsai Platform, which offers a fundamentally different approach for developers looking to build intelligent systems — no machine learning expertise is required. Bonsai abstracts away the low-level, inner workings of machine learning systems to empower more developers to integrate richer intelligence models into their work. The Bonsai AI Engine and special-purpose Inkling programming language enable developers to specify, generate, and train models that can be used to add intelligence into an application or system.
The Inkling programming language is designed to represent AI in terms of what you want to teach instead of the low-level mechanics of how it is learned. The Bonsai AI Engine abstracts away and automates the low-level mechanics of artificial intelligence. The Inkling program is fed into the AI Engine to generate and train appropriate models. The result, called a BRAIN (Basic Recurrent Artificial Intelligence Network), is the compiled and trained intelligence model produced, hosted, and managed within the AI Engine.
The best way to think about this is to consider the difference between laboriously capturing a complex programming task in assembly versus using a higher-level language like C and then compiling this higher-level representation into optimized machine code. Programming in assembly language is time-consuming and error-prone, plus you can quickly become locked into a less-than-optimal implementation. By comparison, raising the level of abstraction speeds development and facilitates your ability to explore different “what-if” scenarios.
As an example, consider the two images below. These depict alternative approaches to solving the game of Breakout. The first image reflects a traditional AI architecture that requires neural networks to be constructed by hand. The second demonstrates how Bonsai solved Breakout using just 37 lines of code.
Before: A traditional hand-crafted neural network architecture
(Source: Bonsai)
After: How Bonsai solved Breakout using just 37 lines of code
(Source: Bonsai)
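For a flavor of the low-level machinery that platforms like Bonsai abstract away, here is a hand-rolled tabular Q-learning loop — the kind of reinforcement-learning plumbing a developer would otherwise write and tune by hand. This toy 5-state corridor is not Breakout and not Bonsai's implementation; it's purely illustrative:

```python
import random

# Hand-rolled tabular Q-learning on a toy 5-state corridor: the agent
# starts in state 0 and earns a reward by reaching state 4. This is the
# sort of low-level machinery abstraction layers hide -- illustrative
# only, not Bonsai's (or anyone's) actual implementation.

N_STATES, ACTIONS = 5, (-1, +1)      # actions: step left or step right
ALPHA, GAMMA, EPISODES = 0.5, 0.9, 500
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(EPISODES):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection (explore 10% of the time)
        a = random.choice(ACTIONS) if random.random() < 0.1 \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # the Bellman update the developer otherwise maintains by hand
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Greedy policy per non-terminal state: +1 means "move right"
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # -> [1, 1, 1, 1]
```

Even this trivial example involves hand-chosen hyperparameters (learning rate, discount, exploration rate) and an explicit update rule; scaling the same ideas to pixels and deep networks is precisely the expertise gap that higher-level tools aim to close.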
It's important to note that Bonsai is agnostic with regard to the lower-level AI algorithms lurking “under the hood.” Bonsai currently realizes its AI implementations in the form of Google's TensorFlow neural networks, but it can make use of alternative technologies as and when they become available, thereby future-proofing your cognitive systems.
My belief is that the majority of embedded systems developers haven't really thought about the possibility of including cognitive (AI) services in their applications and systems, but that this situation is poised to change dramatically in the near-term future. The thing is that we live in a highly competitive marketplace. Having systems that can better understand what their users want to achieve, that are easier to interface with and use, and that can do things like predicting, recognizing, and alerting their operators to potential and/or real problems will provide differentiators that are hard to ignore.
Tools like the Bonsai Platform — that abstract away the low-level, inner workings of machine learning systems and empower developers to integrate richer intelligence models into their work without requiring them to become AI experts — could well prove to be game-changers. What about you? Can you think of any systems you have worked on, are currently working on, or are planning to work on that could benefit from being augmented with cognitive artificial intelligence abilities?