There is a lot of excitement and anticipation around the rapid adoption of 5G, the next generation of mobile connectivity. Analysts project that the number of commercial 5G networks will quadruple by the end of 2020, that total 5G connections will grow from 5 million in 2019 to 2.8 billion by 2025, and that the global market for 5G technology will reach $667.90 billion by 2026. Unfortunately, reaching these ambitious coverage goals will not be straightforward; it will require a substantial overhaul of the existing mobile network infrastructure, particularly RF power applications.
To meet RF front-end power needs, OEMs have turned to Gallium Nitride (GaN), a semiconductor relatively new to commercial applications. Its power efficiency, power density, and ability to handle a wider range of frequencies have made it well suited for massive MIMO base stations. This four-part series will look at the factors driving GaN adoption, its value as a semiconductor, how embedded designers can best incorporate GaN into devices, and what we can expect from GaN innovations on the horizon.
Delivering the full potential of 5G’s multi-Gbps data speeds and ultra-low latency to customers requires mobile operators to improve performance across all network parameters. This means significant investments in spectrum acquisition, network infrastructure, and transmission technologies. No matter how it’s done, the national rollout of 5G will be very expensive for mobile network operators. Delivering 5G service without breaking the bank is the biggest obstacle preventing its widespread adoption. Despite all the attention around high-frequency mmWave, carriers are implementing Massive MIMO technology in the Sub-6GHz range to minimize costs and roll out 5G across national mobile networks.
MIMO, which stands for ‘multiple input, multiple output,’ is an antenna technology for wireless communications that uses multiple antennas to transmit and receive signals. Instead of the single antenna used in conventional wireless communications, MIMO sends the same data as multiple signals through different antennas. This allows for spatial multiplexing, where each channel carries independent information to the receiver – giving MIMO several advantages over classic single-antenna systems.
When an RF signal encounters an obstacle like a building, the signal scatters and takes different paths to reach the target receiver. This multipath propagation causes poor reception, dropped calls, and sharp reductions in data speed in single-antenna systems. MIMO radios receive and combine multiple streams of the same data, so they are able to use multipath propagation to improve signal quality and strength. If the environmental scattering is rich enough, many independent sub-channels can be created in the same allocated bandwidth, generating quality and signal gains without requiring additional bandwidth or power. Network operators can focus on building more antennas to meet demand, not more cell towers.
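To make the spatial-multiplexing gain concrete, the sketch below (an illustration of the textbook capacity formula, not something from this article) estimates the average Shannon capacity of a rich-scattering Rayleigh channel; the `mimo_capacity` helper and its parameters are assumptions chosen for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mimo_capacity(n_tx, n_rx, snr_linear, trials=500):
    """Average Shannon capacity (bits/s/Hz) of an n_rx x n_tx
    Rayleigh-fading MIMO channel with equal power per antenna."""
    total = 0.0
    for _ in range(trials):
        # Rich scattering modeled as i.i.d. complex Gaussian entries
        h = (rng.standard_normal((n_rx, n_tx)) +
             1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
        m = np.eye(n_rx) + (snr_linear / n_tx) * (h @ h.conj().T)
        total += np.real(np.log2(np.linalg.det(m)))
    return total / trials

snr = 10 ** (10 / 10)                 # 10 dB signal-to-noise ratio
print(mimo_capacity(1, 1, snr))       # single-antenna baseline
print(mimo_capacity(4, 4, snr))       # 4x4 array: roughly 4x the capacity
```

The capacity grows roughly linearly with the number of antenna pairs even though the bandwidth and total transmit power are unchanged, which is the effect the paragraph above describes.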
MIMO antenna arrays can also focus signals in the direction of individual users with beamforming and beam steering. A single antenna broadcasts a wireless signal in all directions, but with digital and analog methods, multiple antennas can focus a signal in a specific direction towards a receiver. This dramatically increases spectral and power efficiencies.
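As a rough sketch of the beam-steering idea (an assumed, simplified uniform-linear-array model, not taken from the article), applying a progressive phase shift across the elements concentrates energy toward one user and suppresses it elsewhere:

```python
import numpy as np

def steering_phases(n_elements, spacing_wl, theta_deg):
    """Per-element phase shifts (radians) that point a uniform
    linear array toward theta degrees from broadside."""
    n = np.arange(n_elements)
    return -2 * np.pi * spacing_wl * n * np.sin(np.radians(theta_deg))

def array_gain(phases, spacing_wl, look_deg):
    """Normalized power gain of the phased array in a given direction."""
    n = np.arange(len(phases))
    response = np.exp(1j * (phases + 2 * np.pi * spacing_wl * n *
                            np.sin(np.radians(look_deg))))
    return np.abs(response.sum()) ** 2 / len(phases) ** 2

ph = steering_phases(8, 0.5, 30)   # 8 elements, half-wavelength spacing
print(array_gain(ph, 0.5, 30))     # full gain toward the intended user
print(array_gain(ph, 0.5, -30))    # almost no energy off-target
```

The same transmit power that a single antenna would spray in all directions is delivered coherently in one direction, which is where the spectral and power efficiency gains come from.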
5G’s Massive MIMO
Past generations of wireless technology have used advances in MIMO antenna array technology to improve network speeds. 3G introduced single-user MIMO, which leverages multiple simultaneous data streams to transmit data from the base station to a single user. 4G systems use multi-user MIMO, which assigns different data streams to different users to achieve significant capacity and performance advantages. With the 5G New Radio standard, MIMO gets “massive.” 4G systems are often equipped with four transmit and four receive antennas: a 4×4 antenna array. 5G Massive MIMO uses many more transmit and receive antennas to increase transmission gain and spectral efficiency; some arrays are as large as 256×256.
Since Massive MIMO uses many more antennas, the signal beam transmitted to receivers is much narrower. It enables base stations to deliver RF energy to customers more precisely and efficiently. Each antenna’s phase and gain are controlled individually, and since the channel information will remain with the base station, mobile devices won’t need multiple receiver antennas. The large number of base station antennas increases the signal-to-noise ratio in the cell, which leads to higher cell site capacity and throughput.
Just as critical, 5G technology builds on 4G network infrastructure and can share spectrum with previous technologies using dynamic spectrum sharing. This gives mobile network operators the ability to increase network capacity, deliver high data rates, and conserve spectrum, all while minimizing operator expenses.
The Promise of mmWave, The Reality of Sub-6 GHz
Millimeter wave technology, or mmWave, and 5G are often mistakenly considered to be synonymous. mmWave is a band on the radio frequency spectrum between 24GHz and 100GHz that 5G networks use, along with ‘low band’ and ‘sub-6GHz’ frequencies. It was previously considered unsuitable for mobile communications since signals in this band suffer from high propagation losses and are blocked by buildings, foliage, rain, and the human body. However, these short wavelengths are able to carry much more data over short distances. It’s clear that to achieve 5G’s 20Gb/s data rate goals, it will eventually be necessary to use mmWave spectrum. While many in mobile communications are excited about its promise, not enough attention is given to the logistical challenge of rolling this out nationwide.
This becomes clear when examining mmWave through the lens of base stations. mmWave base stations have a much smaller range than the cell towers transmitting signals in lower frequencies. To achieve nationwide coverage, researchers estimate that US network operators will need to put up 13 million base stations. To put that number in context, today’s US mobile network is supported by approximately 300,000 cell towers. The capital expense of implementing all these mmWave base stations across the country is further compounded by mmWave’s costly power dissipation requirements. Outside of stadiums and urban hotspots, the national rollout of mmWave won’t be realistic for the next several years.
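A quick back-of-envelope calculation (an illustration using the standard Friis free-space formula, with frequencies assumed for the comparison) shows why mmWave cells cover so much less area than Sub-6GHz cells, even before blockage by buildings and foliage is considered:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (Friis formula)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

sub6 = fspl_db(500, 3.5e9)     # assumed mid-band 5G carrier at 500 m
mmwave = fspl_db(500, 28e9)    # assumed mmWave carrier, same distance
print(round(mmwave - sub6, 1)) # extra dB of loss at 28 GHz vs 3.5 GHz
```

The ~18 dB penalty at the same distance (a factor of about 64 in received power) is why so many more mmWave base stations are needed for the same footprint.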
While original equipment manufacturers work to drive down the costs of mmWave technology, Sub-6GHz bands will be what 5G network operators rely on. Lower-frequency signals penetrate further through obstacles like buildings and cover greater areas around towers before fading out, making them a fit for both rural and urban areas. This means that Sub-6 5G can also do more with fewer cell sites and use the towers a carrier already has in place.
Massive MIMO Infrastructure Requirements
Even though Sub-6 5G won’t deliver the huge speed improvements seen with mmWave, its Massive MIMO antenna arrays will enable more simultaneous connections, boost signal throughput, and provide the optimal balance between user coverage and capacity. It’s a more realistic implementation path. A rollout of Sub-6GHz 5G will improve the speed and consistency of mobile broadband much more quickly than mmWave deployment. It delivers an immediate improvement over current 4G systems while moving towards a fully integrated 5G network. That’s why many in the industry expect carriers to bid for lower spectrum ranges where they can utilize dynamic spectrum sharing to deliver 3G, 4G and 5G service in the same spectrum bands. We’re already seeing this approach used in international 5G implementations. Korea began its rollout of 5G in lower frequencies two years ago, and China will overhaul its entire network infrastructure to achieve nationwide 5G coverage in the next few years.
That’s not to say that Sub-6 5G will be simple to roll out; these new technologies come with significant system design challenges. To employ Massive MIMO technology on 5G base stations, designers are tasked with developing highly complex systems containing hundreds of antenna elements. Many utilize active phased array antennas to provide the ability to dynamically shape and steer beams to specific users. All of these additional antennas equate to better performance, but these large antenna arrays draw much more power and require dedicated RF front end (RFFE) chipsets and amplification.
Building the RF front end to support these new Sub-6 5G applications will be a challenge. RFFE circuitry was already critical to 4G systems’ power output, selectivity, and power consumption. 5G modulation schemes bring additional demands, so wireless infrastructure power amplifiers (PAs) will need to be highly efficient while maintaining the necessary linearity. In addition, the large difference between peak and minimum power requirements creates thermal issues for both power amplifiers and RF front ends.
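The peak-to-minimum power problem can be illustrated with a simulated OFDM symbol (a generic sketch with assumed QPSK subcarriers, not a description of any specific 5G numerology): the peak-to-average power ratio (PAPR) tells the designer how far the PA must back off from its peak rating, which directly hurts efficiency.

```python
import numpy as np

rng = np.random.default_rng(1)

def papr_db(n_subcarriers):
    """Peak-to-average power ratio (dB) of one OFDM symbol
    built from random QPSK subcarriers."""
    symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j],
                         n_subcarriers) / np.sqrt(2)
    x = np.fft.ifft(symbols) * np.sqrt(n_subcarriers)
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

samples = [papr_db(1024) for _ in range(200)]
print(np.mean(samples))  # average PAPR in dB across symbols
```

An average PAPR of several dB means the amplifier spends most of its time well below peak power, which is exactly the operating region where efficiency and linearity are hardest to achieve simultaneously.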
In the next article in this series, we explore how the industry is innovating to satisfy the new demands 5G puts on embedded systems, driving a shift from traditional LDMOS amplifiers to GaN-based solutions.
Roger Hall is the General Manager of High Performance Solutions at Qorvo, Inc., and leads program management and applications engineering for Wireless Infrastructure, Defense and Aerospace, and Power Management markets.
- 5G roll-out: a marathon not a sprint
- 5G’s biggest challenges for communications service providers
- How O-RAN will transform interoperability in 5G networks
- How 5G, IoT & AI will redefine the retail customer in-store experience
- 10 key trends in wireless technology
For more Embedded, subscribe to Embedded’s weekly email newsletter.