It’s easy to forget that GaN is still a relatively young technology. We’re still within the first few generations of development, with lots of potential for improvement and refinement. This article looks at some of the GaN innovations on the horizon and predicts their impact on powering base stations over the next several years.
Our expectation is that over the next three to five years, we’ll see improvements to GaN’s already substantial power density. There are ways to achieve higher power density with GaN today, but their costs have kept them from being commercially feasible. Putting GaN on diamond instead of silicon carbide, for example, is possible, but the expense makes it unrealistic for base stations. Still, other, more cost-effective processes under research will improve the material’s raw power density over the coming years.
The appeal within the 5G infrastructure market is obvious: cheaper, more efficient, wider bandwidth base stations. There is strong interest from other industries too. Radar applications in particular would benefit, as they are focused on generating as much power and efficiency as possible within a given space. As GaN proliferates in these submarkets, economies of scale will grow and the price point will continue to drop.
Without question, the GaN semiconductor industry’s biggest priority for base stations is increasing linear power. R&D efforts are focused on driving linear efficiency over the next couple of years.
At the same time, our expectation is that base station modulation schemes won’t significantly change in the next three to five years. It comes down to a simple calculation of bits per hertz. Whether a system runs 256 QAM or 1024 QAM, it gets a certain number of bits per hertz of bandwidth. If those numbers aren’t going to change significantly, the best way to get more out of a system is through linear efficiency improvements.
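The bits-per-hertz arithmetic behind this point is easy to check: an M-QAM constellation carries log2(M) raw bits per symbol. A minimal sketch (the function name is illustrative, not from the article):

```python
import math

def bits_per_symbol(qam_order: int) -> int:
    """Raw bits carried by one symbol of an M-QAM constellation."""
    return int(math.log2(qam_order))

# Jumping from 256 QAM to 1024 QAM adds only 2 bits per symbol (8 -> 10),
# which is why amplifier linear efficiency, not modulation order, is the
# bigger lever for squeezing more out of a base station.
print(bits_per_symbol(256))   # 8
print(bits_per_symbol(1024))  # 10
```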
That’s not to say that it can’t be addressed with increased power from the fundamental device. Even without linearity improvements, the overall power efficiency of the PA will still deliver signal improvements. It also helps designers shrink systems, since they need less system power and fewer antenna arrays. While additional power or second-level solutions work, the goal for GaN suppliers in the industry is to reduce the trapping effects so that systems become as simple as possible.
The temperature of base stations continues to rise over time. Five years ago, the standard was to spec devices to 85°C. OEMs have pushed that higher to 105°C, and the expectation is that base station designers will be asked to accommodate temperatures of 125°C. Most GaAs devices have a max temperature of 150°C, which leaves only 25°C of rise to work within. GaN suppliers will have to work closely with systems designers to find creative ways to keep embedded elements cooler. This pressure will be more acute in smaller, outdoor units with massive MIMO arrays. Creative solutions exist today, but not at a cost-effective price. We expect that to change over the next few years.
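The shrinking thermal budget described above is simple subtraction: the headroom is the device's maximum temperature minus the ambient spec. A short sketch using the figures from the article (function and constant names are illustrative):

```python
def thermal_headroom(device_max_c: float, ambient_spec_c: float) -> float:
    """Temperature-rise budget between the ambient spec and the device limit."""
    return device_max_c - ambient_spec_c

GAAS_MAX_C = 150.0  # typical GaAs max temperature cited in the article

# Past, current, and anticipated base station ambient specs.
for ambient in (85.0, 105.0, 125.0):
    headroom = thermal_headroom(GAAS_MAX_C, ambient)
    print(f"ambient {ambient:.0f} C -> {headroom:.0f} C of rise to work within")
```

At a 125°C spec, only 25°C of rise remains, which is the squeeze pushing suppliers toward the creative cooling approaches mentioned above.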
Every GaN supplier is fine-tuning GaN device physics to improve linear efficiency, power density, and reliability while reducing negative effects such as trapping, current collapse, and current drift. This can be done to some extent at the device level, but to achieve full potential, base station RFFE systems should be developed in tandem with the total architecture chain, and that’s where we see a lot of forward-looking activity today.
This is especially critical as the industry shifts from LDMOS to GaN solutions. The technology is fundamentally different. It’s not as simple as substituting in a GaN PA and expecting 10 points better efficiency. There are different system problems and solutions. A base station optimized for LDMOS may not be appropriate for a GaN PA, and vice versa. Optimizing base station systems for GaN should be done holistically.
We’re starting to see this trend now, and we expect wider adoption over the next several years as the performance results speak for themselves. The embedded designers that work with suppliers to bridge this holistic design gap will position themselves as industry leaders. OEMs would of course say that they are already using a systems-level approach. We wouldn’t argue that fact, but we believe there are further gains to be had, especially as the RF portions of the chain become smarter and more integrated.
Smart RF and Artificial Intelligence
Charge trapping has been a problem for every semiconductor material, and GaN is no exception. High-speed switching applications can create extremely challenging trapping environments for GaN power amplifiers. Solving these trapping effects can be complex, since PA behavior can be contingent on previous signals the PA received. The traditional approach would be to address it at the physical layer, all the way down into the substrate, to address what’s causing the problematic behavior. Current technology hasn’t been able to completely mitigate trapping in this way yet, but it remains an active area of R&D.
Another method could be to use software algorithms to predict the variations that lead to trapping. With smart RF controllers and a deep enough understanding of the pre-existing conditions, devices could potentially identify traffic patterns and predict the next spike in activity, or recognize a drop in activity and adjust at the controller level to reduce power consumption. This has been done for many years in base stations, but continuous efforts are ongoing to improve the techniques.
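One plausible shape for such a controller-level predictor is an exponentially weighted moving average of recent traffic load that flags sustained dips. This is purely a hypothetical sketch; the class, thresholds, and smoothing factor are illustrative, not anything from a real base station controller:

```python
class TrafficPredictor:
    """Hypothetical sketch of a smart RF controller heuristic: smooth recent
    traffic load and recommend a reduced-power state when the predicted load
    stays low. All names and values here are illustrative assumptions."""

    def __init__(self, alpha: float = 0.3, low_power_threshold: float = 0.2):
        self.alpha = alpha                    # smoothing factor for new samples
        self.threshold = low_power_threshold  # predicted load below this -> back off
        self.estimate = 0.0

    def update(self, load: float) -> float:
        # Blend the newest load sample (0.0..1.0) into the running estimate.
        self.estimate = self.alpha * load + (1 - self.alpha) * self.estimate
        return self.estimate

    def recommend_low_power(self) -> bool:
        # Suggest scaling PA bias down when predicted activity is low.
        return self.estimate < self.threshold

predictor = TrafficPredictor()
for load in (0.9, 0.7, 0.3, 0.1, 0.05, 0.05):  # traffic tapering off
    predictor.update(load)
print(predictor.recommend_low_power())
```

A real implementation would feed on much richer telemetry, but the idea is the same: let the controller anticipate activity instead of only reacting to it.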
This is why OEMs are considering implementing artificial intelligence at the radio level. RFFE systems could optimize themselves over time. Theoretically, if a radio out in the field generates a fault, it could self-identify the error and ‘learn’ from the mistake. The next time, it could prevent the series of events that created the fault, or potentially fix it. There wouldn’t be a need to flag the carrier, send out a truck, and put people in the tower to address minor issues. As you can imagine, this would prevent significant downtime and expense.
Even with 5G still in the beginning stages of its rollout, discussions of 6G are already starting. Early predictions suggest that 6G will be delivered in frequency bands well over 100 GHz, frequencies that we know GaN supports. Most likely that type of solution isn’t going to be a traditional cell tower deployment, but whatever form it takes, we believe GaN’s efficiency at high frequencies and over wide bandwidths makes it critical to turning 6G into a reality.
Roger Hall is the General Manager of High Performance Solutions at Qorvo, Inc., and leads program management and applications engineering for Wireless Infrastructure, Defense and Aerospace, and Power Management markets.