As artificial intelligence becomes more pervasive and technology requirements grow more sophisticated, companies looking to adopt edge AI in their products often find it a difficult challenge. But what makes it so difficult, and what solutions exist to solve this problem?
Perhaps the single biggest issue companies face in implementing edge AI is that most don’t have the in-house resources to develop these sophisticated, fast-changing technologies. A lack of trained personnel and little familiarity with the design flow often lead to delayed timelines and the added expense of training team members. In addition, there are so many choices that it is impractical for engineers to explore every option. And since every application is different, it may not be appropriate to replicate solutions based on past implementations. However, by asking a few key questions and finding the right partner to take your project from ideation to silicon, any enterprise can develop a roadmap to successfully deploy edge AI in its devices.
Defining Use Cases And Feasibility
It is important to define use cases before exploring implementation options. The first question any business should ask is: What would the customer find truly useful? After pinpointing the functionality the customer wants, your team needs to set development and production cost goals along with the acceptable time to market.
Now comes the challenging part: making technology-related decisions. Is it possible to implement that functionality within the cost, time, power and space tradeoffs you’re dealing with? Working with an experienced partner or consultant, or drawing on internal experience, is critical at this stage. You won’t have perfect data on which to base your decision, so real-world experience is essential in making these judgments.
There are several choices a team can make to implement the customer’s desired functionality in the product. Depending on your available resources and development time, here are some of the choices a team might consider:
- Software only on the existing embedded processor – This may require very carefully coded models in order to achieve the desired performance. Functionality may be limited, but it is generally the lowest cost solution if it works. Because this is a software-only solution, upgrades or bug fixes are more easily addressed.
- Upgrade/replace the existing processor – This can be a great solution if you can make it work and preserve the existing code base; like the option above, it is software-only and can be easily fixed or upgraded. However, it can start a project down a slippery path that requires extensive power and performance evaluation. Companies may be better off adding a neural network (NN) or similar accelerator.
- Add a fixed neural network accelerator – This is an optimum choice if there is a good match with the needs of the application, as evaluation and design may not be too difficult. It could very well provide excellent power/performance tradeoffs at a very reasonable cost.
- FPGA – This solution is flexible and upgradable, but typically comes with high cost and high power for the final product. Rarely is this a good choice for “edge” products.
- Dedicated SoC – Often this is the optimum choice for high volume, low cost and low power products where use cases are clearly defined.
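The software-only option above hinges on “very carefully coded models.” One common technique that phrase refers to is quantizing weights and activations to 8-bit fixed point so inference runs entirely in integer arithmetic on the existing processor. The sketch below is illustrative only: the Q0.7 format, the layer size and the values are assumptions, not anything specified in the article.

```python
# Sketch: 8-bit fixed-point (Q0.7) dense layer, the kind of hand-tuned
# integer kernel a software-only MCU port would use. Illustrative values.

SHIFT = 7  # Q0.7 fixed point: real value = int8 / 128

def quantize(x):
    """Clamp a float in [-1, 1) to a Q0.7 int8."""
    return max(-128, min(127, round(x * (1 << SHIFT))))

def dense_q7(inputs, weights, bias):
    """y = W.x + b in pure integer arithmetic, requantized back to Q0.7."""
    out = []
    for row, b in zip(weights, bias):
        acc = b << SHIFT                 # widen bias to accumulator (Q0.14) scale
        for x, w in zip(inputs, row):
            acc += x * w                 # int8 * int8 products summed in a wide int
        out.append(max(-128, min(127, acc >> SHIFT)))  # requantize with saturation
    return out

x = [quantize(v) for v in (0.5, -0.25)]      # two input activations
W = [[quantize(0.5), quantize(0.5)]]         # one output neuron
y = dense_q7(x, W, bias=[0])                 # 0.5*0.5 + 0.5*(-0.25) = 0.125 -> 16/128
```

On real silicon the same loop would be hand-written in C (or use vendor DSP intrinsics), which is exactly where the “carefully coded” effort and the performance risk of this option lie.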
How To Evaluate The Right Choice
It can be difficult to evaluate the right choice without expertise from trained professionals in the edge AI chip space. Evaluation of each option can often take a long time and require extensive knowledge. For example, evaluating a fixed accelerator versus an FPGA implementation can require engineers with different skill sets. With so many vendors and solutions making conflicting claims, making basic decisions can be overwhelming for most enterprises.
One of the most important steps one can take is to find the right partner who can help evaluate the technology tradeoffs and take the company from the initial research and evaluation stage to the design and implementation of the solution. Also, don’t get hung up on finding the solution with the “optimum” power/performance. If you can identify a solution that will work and has adequate software and technical support, that is likely your best choice. Don’t get caught chasing specs.
Building The Solution
Once functionality and technology have been chosen, the next step is implementation. The focus is often on implementing a neural network model; however, businesses also have to implement the logic (software/hardware) that handles the pipeline from sensor to final output, which requires unique algorithms at each step.
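That sensor-to-output pipeline can be sketched as a chain of replaceable stages. Everything here is an illustrative assumption: the stage names, the DC-offset preconditioning and the toy “model” stand in for whatever algorithms a real product needs at each step.

```python
# Illustrative sensor-to-output pipeline: each stage is a plain function so it
# can later be swapped for hand-written C, a DSP routine, or an accelerator call.

def acquire(raw):           # sensor read (stubbed with a supplied sample buffer)
    return raw

def precondition(samples):  # e.g. remove DC offset before the NN sees the data
    mean = sum(samples) / len(samples)
    return [s - mean for s in samples]

def infer(features):        # stand-in for the NN model / accelerator invocation
    return max(features)    # toy "model": a simple peak detector

def postprocess(score, threshold=0.5):
    return "event" if score > threshold else "no_event"

def pipeline(raw):
    return postprocess(infer(precondition(acquire(raw))))

result = pipeline([0.2, 0.1, 1.2, 0.3])   # the spike survives preconditioning
```

Structuring the stages this way lets a team answer the questions below one stage at a time, and move individual stages between software and hardware as the technology choice firms up.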
Questions that might come up include:
- What kind of signal conditioning/filtering do I need before passing the data to a NN accelerator?
- Which NN model should I use? Is there an existing model for my technology selection? Which version of which model is best in my application?
- How do I train my model? Where do I get my data and what biases are built into that data? What volume of data do I need?
- What is the cost and availability of the processing power for training models? Do we train in the cloud or on local servers?
- What level of accuracy is adequate? Is it better to have false positives or false negatives?
- What post processing is required and can I handle that workload?
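The accuracy question above usually reduces to a threshold choice: moving the decision threshold trades false negatives for false positives, and which direction is “better” depends on the application. A hedged sketch, with made-up scores and labels, of how a team might quantify that tradeoff:

```python
# Sketch: the false-positive vs. false-negative tradeoff as a threshold choice.
# Scores and labels are fabricated for illustration.

def confusion(scores, labels, threshold):
    """Return (true pos, false pos, false neg, true neg) at a given threshold."""
    tp = fp = fn = tn = 0
    for s, y in zip(scores, labels):
        pred = s >= threshold
        if pred and y:
            tp += 1
        elif pred and not y:
            fp += 1
        elif not pred and y:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2]   # hypothetical model outputs
labels = [1,   1,   0,   1,   0,   0]     # ground truth

strict = confusion(scores, labels, 0.7)    # fewer false alarms, misses an event
lenient = confusion(scores, labels, 0.35)  # catches every event, one false alarm
```

A wake-word detector might prefer the lenient threshold (a missed wake is worse than an occasional false trigger), while an industrial shutdown trigger might prefer the strict one; the point is that “adequate accuracy” is a product decision, not just a model metric.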
Final Words Of Advice
With so many vendors voicing conflicting claims, it is important for businesses looking to implement edge AI not to focus on finding the “best TOPS” or the “fastest” solution, as these are elusive goals. The best way to answer these questions of functionality, technology choice and implementation is to partner with a person or organization that has “been there, done that”: someone with the experience to quickly evaluate potential use cases, technical solutions and vendor offerings, and to help you narrow your choices as quickly and accurately as possible. Focus on vendors that offer the most complete solution: not only the engine to implement, but also the models, algorithms and even existing data to support your unique use case and create a solid proof of concept.
Douglas Fairbairn is a Silicon Valley veteran and currently director of business development for MegaChips, a $1 billion Japanese ASIC company expanding into the US. After graduating from Stanford with an MSEE, he spent 8 years at Xerox PARC. He then helped establish the ASIC business as cofounder of VLSI Technology and later of Redwood Design Automation, where he served as CEO until its acquisition by Cadence. He is now leveraging his ASIC and startup experience by helping establish MegaChips as a leading ASIC vendor in the US with special expertise in Edge AI technology.