Can machine learning solve the challenges of complex 5G baseband?
The Mobile World Congress (MWC) in Barcelona this year was abuzz with exciting launches and announcements. While a handful of products were trying to rekindle the past, like the retro remake of the Nokia 8110 4G from the original Matrix film, the majority were looking to the future. Two technologies especially stood out as the embodiment of the future of the mobile world: Artificial Intelligence (AI) and 5G communication. But what is the connection between these two fields? Read on to find out.
5G + AI = the future (Source: Shutterstock)
AI was the most popular buzzword, but how much of it is real?
AI is a huge deal. According to KPMG, AI and machine learning garnered $12 billion in venture capital in 2017, doubling the amount from the year before. But, as with every hot trend, the term gets thrown around a lot more than it should. Don't get me wrong. From autonomous vehicles and robots to smart speakers and headphones, AI is changing the world. However, many products boast AI as a selling point when in fact they are based on hard-coded algorithms, so it's important to differentiate between the buzzword and the real thing.
Artificial intelligence need not mean a super-smart, super-strong humanoid like the synths in Humans or the hosts in Westworld. It can serve much narrower, more specialized applications. But to be deemed AI, a system should include a learning process and produce results that could not be achieved with straightforward, brute-force calculations. This type of AI is becoming very common, especially in deep learning and neural networks, which power technologies used by millions every day.
The challenge of 5G: With great speed comes great complexity
Another huge item at MWC was 5G. My colleague gave a superb overview of the status of 5G as of mid-2017, and it’s pretty impressive to see how far things have come since then. 5G is expected to be the enabler of revolutionary technologies like autonomous vehicles, smart homes and cities, mobile augmented and virtual reality and 4K video streaming. The anticipation has been building for a while, and now, the potential of 5G is beginning to crystallize.
One example of a recently announced SoC ready to tackle the challenges of 5G is Nokia's ReefShark, which includes the CEVA-XC DSP for wireless communication. According to Reuters, even President Donald Trump's security team is looking to expedite the launch of a 5G network in the U.S. as a countermeasure to cyber and economic security threats from abroad. So it's safe to say that 5G is now widely recognized as the future of communication.
There are many reasons that 5G is so desirable. Its bandwidth will be far greater than 4G's, supporting streaming content that was not feasible before. Its reliability will also be much greater, which promises to enable mission-critical and ultra-low-latency use cases like autonomous driving. And it will significantly increase the number of simultaneous connections, allowing a vastly larger number of devices to form the Internet of Things (IoT). So, from the smallest sensors in the home to the most sophisticated automated vehicles, everyone has something to gain from 5G.
The problem is the extremely complex calculations all this entails. Technologies such as millimeter-wave bands up to 60 GHz and massive MIMO (Multiple Input, Multiple Output, with 64-256 antennas), operating alongside low-band sub-6 GHz frequencies, introduce new challenges and complex, non-linear computations to the 5G physical layer (PHY). On top of these complexities, the 5G definition is still not fully standardized. 5G New Radio (NR) Release-15 is the latest from 3GPP, but any current SoC must remain flexible enough to support Release-16, as well as other versions of 5G that do not conform to the 3GPP standard. Meeting these demands requires a new approach, unlike anything before.
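To get a feel for why massive MIMO makes brute-force computation intractable, consider exhaustive maximum-likelihood (ML) detection, where the receiver tests every possible transmit vector. The toy sketch below (purely illustrative, not taken from any CEVA product) does this for a 2x2 link with QPSK; the search space is M**Nt candidates, so the same approach with 256QAM and 64 antennas would require 256^64 evaluations per received vector, which is why smarter shortcuts are needed.

```python
import itertools
import numpy as np

# QPSK constellation (a toy alphabet; 5G modulation goes up to 256QAM)
QPSK = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)

def ml_detect(y, H, constellation=QPSK):
    """Brute-force maximum-likelihood MIMO detection.

    Tries every candidate transmit vector and keeps the one minimizing
    ||y - H s||^2. The search space has M**Nt entries, so the cost grows
    exponentially with the number of transmit antennas: tractable for a
    2x2 toy, hopeless for massive MIMO with 64-256 antennas.
    """
    nt = H.shape[1]
    best, best_metric = None, np.inf
    for cand in itertools.product(constellation, repeat=nt):
        s = np.array(cand)
        metric = np.linalg.norm(y - H @ s) ** 2
        if metric < best_metric:
            best, best_metric = s, metric
    return best

# 2x2 random channel, noiseless for clarity
rng = np.random.default_rng(0)
H = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
s_tx = np.array([QPSK[0], QPSK[3]])
y = H @ s_tx
s_hat = ml_detect(y, H)
print(np.allclose(s_hat, s_tx))  # True: noiseless ML recovers the transmitted vector
# Search-space sizes: QPSK over 2 antennas -> 4**2 = 16 candidates;
# 256QAM over 64 antennas -> 256**64, far beyond any real-time budget.
```

The exponential blow-up in the candidate count, not the per-candidate arithmetic, is what rules out brute force at massive-MIMO scale.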
5G: with speed comes complexity (Source: iStock)
Taking on 5G challenges with machine learning and powerful DSPs
New platforms such as CEVA's PentaG address these challenges with an approach that combines communication DSPs with AI, specifically machine learning. The platform integrates an enhanced version of the CEVA-XC4500 DSP together with a cutting-edge AI processor, a powerful Vector MAC Unit (VMU) co-processor, a cluster of CEVA-X2 DSPs, and optimized hardware accelerators.
This type of platform is capable of handling the demanding calculations of 5G with extremely low power and the flexibility to update the software in the future. A demo at MWC showed how the PentaG handled complex Channel State Information (CSI) processing in challenging use cases, up to 4x4 MIMO with 256QAM. This innovative approach to 5G CSI processing using AI enables advanced 5G receivers (e.g. those based on a near-Maximum-Likelihood MIMO decoder) to achieve the highest throughput. Introducing AI helps close the performance gap and overcome the complexity barrier of currently available approaches.
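For contrast, the standard low-complexity alternative to near-ML detection is a linear receiver such as zero-forcing (ZF), which simply inverts the channel. The hedged sketch below (dimensions and names are illustrative; this is not the PentaG implementation) shows why ZF is cheap, and hints at the performance gap that near-ML and AI-assisted receivers aim to close:

```python
import numpy as np

QPSK = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)

def zf_detect(y, H, constellation):
    """Zero-forcing detection: invert the channel, then quantize.

    Complexity is polynomial (one pseudo-inverse plus a per-stream
    nearest-symbol search), versus M**Nt for exhaustive ML. The catch:
    inverting an ill-conditioned channel amplifies noise, which is the
    performance gap that near-ML receivers close.
    """
    s_eq = np.linalg.pinv(H) @ y                 # equalize: H^+ y
    idx = np.argmin(np.abs(s_eq[:, None] - constellation[None, :]), axis=1)
    return constellation[idx]

# 4x4 MIMO link, noiseless for clarity (noise is where ZF's weakness shows)
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
s_tx = QPSK[rng.integers(0, 4, size=4)]
y = H @ s_tx
print(np.allclose(zf_detect(y, H, QPSK), s_tx))  # True on a noiseless channel
```

The design trade-off this illustrates: linear receivers are cheap but lossy on hard channels, exhaustive ML is optimal but exponentially expensive, and near-ML or learning-assisted detectors sit between the two.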
Zeev Kaplan is a Communication Algorithms Team Leader in CEVA's Wireless BU. He has over 15 years of experience in communications engineering, with expertise in algorithms, DSP cores and system design. Zeev has broad experience with LTE, Wi-Fi and wired home-networking standards (HomePNA, HomePlug, G.hn). He holds a BSc (summa cum laude) and an MSc in Electrical Engineering from the Technion - Israel Institute of Technology.