Telink SoC uses RISC-V P-extension for AI/ML on edge devices - Embedded.com


Telink Semiconductor has introduced a new connectivity system-on-chip (SoC) for hearables, wearables, and high-performance internet of things (IoT) devices that uses the RISC-V P-extension to accelerate small-volume data computation and enable compact artificial intelligence/machine learning (AI/ML) on edge devices.

Its latest product line, the TLSR9 series, is based on a 32-bit AndesCore D25F from Andes Technology. Through a partnership with IAR Systems, it gives IoT designers access to IAR's Embedded Workbench development toolchain, which supports flexible product development.

The new SoC family is designed around the AndeStar V5 instruction set architecture (ISA), which complies with the latest RISC-V specifications. Featuring the D25F RISC-V processor, Telink and Andes said it is the world's first SoC to adopt the RISC-V DSP/SIMD P-extension, making it well suited to a variety of mainstream audio, wearable, and IoT development needs. The D25F has an efficient five-stage pipeline and delivers class-leading performance of 2.59 DMIPS/MHz and 3.54 CoreMark/MHz.

By supporting the RISC-V P-extension (RVP), the D25F increases efficiency for small-volume data computation and makes compact AI/ML applications possible on edge devices. Tests have shown that the D25F can run CIFAR-10 AI models (a common type of image classification model) 14.3 times faster and keyword-spotting workloads 8.9 times faster. The standard JTAG interface and Andes' two-wire serial debug port also help reduce pin count and cost.
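The speedups above come from packed-SIMD DSP instructions that operate on several narrow lanes per 32-bit register. As a minimal sketch of the idea, the draft P-extension defines a SMAQA instruction (signed multiply-accumulate of four 8-bit lanes), which maps directly onto the int8 dot products at the heart of CNN inference. The inline-assembly path below is illustrative and assumes a toolchain that accepts the draft mnemonic; the portable C fallback shows the equivalent computation.

```c
#include <stdint.h>

/* Dot product of packed int8 vectors, n a multiple of 4.
 * On a core with the draft RISC-V P-extension, SMAQA multiplies four
 * signed 8-bit lanes and accumulates all four products in one
 * instruction; elsewhere we fall back to plain scalar C. */
static int32_t dot_q7(const int8_t *a, const int8_t *b, int n, int32_t acc)
{
#if defined(__riscv) && defined(__riscv_dsp)
    const uint32_t *pa = (const uint32_t *)a;  /* 4 lanes per word */
    const uint32_t *pb = (const uint32_t *)b;
    for (int i = 0; i < n / 4; i++) {
        /* acc += a[4i]*b[4i] + ... + a[4i+3]*b[4i+3] */
        asm volatile("smaqa %0, %1, %2" : "+r"(acc) : "r"(pa[i]), "r"(pb[i]));
    }
#else
    for (int i = 0; i < n; i++)
        acc += (int32_t)a[i] * b[i];
#endif
    return acc;
}
```

A convolution kernel built on such an instruction retires four multiply-accumulates per cycle-equivalent instead of one, which is the kind of gain behind the CIFAR-10 and keyword-spotting figures quoted above.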

Telink Semiconductor CEO, Dr. Wenjun Sheng, said, “By partnering with Andes Technology and IAR Systems to provide a top-notch processor and integrated development environment for our new TLSR9 product line, we are reducing the difficulty of application development and improving efficiency. Telink will continue to provide quick-to-market, performance-enhanced, cost-efficient solutions to our customers.”

Andes Technology president, Frankwell Lin, added, “We believe the RVP is going to open a new era for data computation on MCUs. Andes contributed the first version of the RVP specification to RISC-V last year, and it is now at version 0.8. We are looking forward to the ratification of the RVP standard, which will open up more and more artificial intelligence of things (AIoT) markets for RISC-V.”

Hearables market looks for wider biometrics

Wearable technology is one of the key markets for this chip, but the scope is wider as users look for far more biometric information from their devices. According to the Pew Research Center, around one in five Americans now wears a fitness or activity tracker. These are most commonly worn on the wrist, but that market is beginning to see some saturation, so attention is turning to biosensors in or around the ear: hearable devices with more biometric sensing and intelligence built in to provide much richer data.

A recent report from IDTechEx, “Hearables 2020-2030: Technology, Players and Forecasts”, indicates that biometric hearables are set to make a substantial impact on the consumer, over-the-counter (OTC) health, and professional medical markets within a few years, with the market for hearables with biosensors set to reach US$5 bn by 2030. It suggests that unobtrusive hearables that are comfortable to wear could increase compliance with medical monitoring or studies (e.g. drug efficacy trials, or ambulatory recording of heart rate or blood pressure).

Hearables can also be used to monitor chronic conditions or general health and to track diet and fitness. They are ideal for measuring heart rate, blood oxygen levels, temperature and many more parameters. This means hearables may, in time, be integrated with other technologies including AI/ML and cloud computing to discern the physiological, physical, and emotional status of wearers – and trigger actions in response. In this scenario, the hearable ‘knows’ how stressed the wearer is, how best to calm them down (prompting their smart speaker to play a favorite music track, perhaps), which direction they are looking in and how much mental effort they are making. Voice recognition technology will ‘inform’ the hearable how often the wearer is speaking, in what tone, and even with what emotion.

Researchers at Cornell University are even researching the use of hearables to track and translate facial expressions. According to IDTechEx, while such capability may sound futuristic, a surprising amount of the knowledge and hardware/firmware required is either being developed or has already been integrated into devices.

