It seems you can’t swing a Turing Test these days without hitting an “AI”. And while a small number of projects are gunning for the Turing Test itself, the vast majority of AI in the air refers to neural networks and image (or other pattern) recognition. In fact, neural networks have pretty much crashed the party, and, for better or worse, are what most people mean by “AI” these days.
AI has been in the cloud for some years now: the accuracy of voice recognition and machine translation (e.g., Google’s) is way up. But what if it’s a sunny day? No cloud. That’s where today’s second-hottest buzzword comes in: The Edge.
Edge computing means that you do lots of crunching, neural networking, or what have you, locally, at the edge of the network. If you still need to call home, the amount of data involved is much less than if you were feeding raw video, say, over the network.
Of course, embedded engineers don’t always have the luxury of fast and guaranteed internet/cloud connectivity. For many products, it’s not available at all. But do not fear. Companies want you to have local AI capability too. It’s coming faster than you might realize, and it’ll work for you whether you’re the final AI arbiter, or just at the Edge.
Lattice is one company taking the AI plunge. Their just-introduced sensAI stack encompasses devkits, FPGAs, software, and IP. And while the learning curve for any new technology tends to be daunting, Lattice is trying to make it easy, whether by offering most of the technology for free, or by partnering with expert third parties who can help you with some of the newer, trickier bits, like neural-net training.
Simple nets can have ridiculously low power consumption too: single-digit milliwatts.