The rise of FPGAs?
With much fanfare Xilinx announced their Zynq line of FPGAs some months ago. Initial versions comprise a pair of hard-IP Cortex-A9 processors surrounded by a sea of programmable logic.
They’re sort of like a super-sized version of Microchip’s PIC10F32X family, an 8-bit microcontroller with a small amount of programmable logic (what Microchip calls a “puddle of gates”). Both companies are pushing a new kind of product: instead of an FPGA that happens to have a microprocessor or two, these parts are complete microcontrollers with some (in Xilinx’s case, rather a lot of) programmable logic.
My first reaction to the Zynq announcement was a yawn. Sure, CPUs and logic are nice, but what’s really new? We’ve been stuffing processors into FPGAs for years, both as hard IP and as soft cores loaded at boot time.
I met with Xilinx at the recent ESC DESIGN West and they opened my eyes. One of the product managers described how Linux has been ported to the Zynq (more on this in another column; there’s some fascinating data to be had). “What?” I exclaimed. “Having an FPGA CPU run Linux sounds like a horrible use of expensive cells.” My thinking wasn’t that Linux is bad, but that CPU IP burns a lot of FPGA cells. Run Linux on an external processor and save the valuable programmable logic for the tough parts of the problem.
They gently reminded me that the A9s are hard cores, so there’s no penalty for using them in a general-purpose computing role. Suddenly the scales fell from my eyes and the Zynq strategy made sense.
FPGAs have always been expensive, ranging from a handful of dollars for low-end parts to astronomical sums: one can buy a nice car for less than the cost of a high-end device. Literally. So they’ve occupied markets that can tolerate a bit, or a lot, more money in exchange for the convenience of programmability. The conventional thinking has always held that for high-volume applications one uses an ASIC.
That thinking is now obsolete.
ASICs have long been natural choices for high-volume products. But as process geometries shrink the design costs have skyrocketed. One analysis claims that at 28 nm only products that ship billions of units can profitably use an ASIC.
Zynq is built at the 28 nm node. The result is that Xilinx can stuff a ton of features onto a chip for a reasonable price. I couldn’t get real dollar figures, but one application is collision avoidance in cars. Apparently in Europe this feature will be required in the next few years, and vendors expect most of the logic will live in the rear-view mirror. If a Zynq is, as claimed, a cost-effective solution, that implies at least the smaller parts will go for just a handful of dollars.
So let’s unroll all of this. A pair of big honking CPUs, surrounded by tons of I/O. A bit of on-board memory. Gobs of fast logic – really fast, including DSP. At prices attractive even to the notoriously-stingy automotive market.
At 28 nm, power requirements are reasonable (the company cites a watt or two, depending on what resources are used, for the smaller family members). ASICs will still play a role, but I think this technology has marginalized them even more. And it’s reasonable to assume Zynq or Zynq-like FPGAs will follow Moore’s Law down to smaller process geometries, making ASICs even less attractive as their engineering costs spiral to dizzying heights.
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at email@example.com. His website is www.ganssle.com.