CEO interview: Envisioning low-cost, general-purpose FPGAs for the edge

EE Times sat down with Efinix co-founder and CEO Sammy Cheung to learn about Efinix’s progress in the first 10 years of building an FPGA company from scratch.

While the field-programmable gate array (FPGA) market was once dominated by just two players, a number of companies are now trying to fill the various gaps in what is today a $6 billion–plus market. One such company is Efinix, which just celebrated its 10th anniversary by opening a large new office in Cupertino, California, not far from Apple headquarters.

EE Times sat down with co-founder and CEO Sammy Cheung at the new offices to learn about the company's progress in the first 10 years of building an FPGA company from scratch, as well as its growth plans, especially with a potential Nasdaq listing in the cards sometime soon.


Left to right: Jay Schleicher, vice president of software engineering; Sammy Cheung, founder, president, and CEO; Tony Ngai, founder, CTO, and SVP of engineering.

Cheung and co-founder Tony Ngai are no strangers to the FPGA business: They have close to 50 man-years’ experience between them in FPGAs, having worked with Xilinx, Altera, and Lattice. Cheung believes they are using that market knowledge to fill the void for low-cost, general-purpose FPGAs for the edge, adding RISC-V cores to the mix to broaden the company's reach into the world of embedded systems.

The company, formed in 2012, now has 140 employees in the U.S., Malaysia, Hong Kong, China, Japan, and Germany. Efinix has shipped more than 10 million devices globally, broke the two-digit million-dollar revenue mark last year, and expects that to become three-digit millions this year. (Cheung didn’t want to be more specific and expects to break even in Q3 of 2022.)

He did, however, tell EE Times, “Also this quarter, we’re expecting we will get 1% market share for the FPGA. This is pretty small, but it’s a baby step. We are very excited about that: At least in a big pie chart, we may see our little color.”

So might he and Ngai be readying the company to go public? Find out below.

Sammy, I heard you’re looking to raise money with a listing on Nasdaq. What are the plans?

Yes, we have had discussions, and this is just the first step. They [Nasdaq] understand we have our 10-year anniversary and congratulated us, and they understand our potential. [In terms of timing,] I would say soon; I wouldn’t say in a year, but not 10 years, either. In terms of listing, we probably need to do a lot of housekeeping work before that, so we will spend this year getting to that stage. We are already profitable, so we should be in pretty good shape in that respect.

So should we expect a listing potentially in the next 18 months?

That’s a good guess. But it’s probably also on the earlier side, considering the world economy, where in the last few months, it’s also been very volatile. We are not in a rush in terms of chasing that capital, but we are also working really hard at looking at a growth plan that best uses capital to deploy our technology and products.

You have obviously started thinking about this because you need growth capital to scale up. So I’d like to ask, what is your ambition as a company?

It is to follow our founding philosophy: [to have] Efinix everywhere. That means building products either by ourselves or with our partners, replacing older technology, and making it efficient. One of the biggest things is, very simply, to produce an FPGA that can be small, low-power, and low-cost but scalable to high density and high performance.

We [the industry] haven’t had that combination in the past. Most people are already aware that it is very hard to build a general-purpose custom chip that can then go into an advanced fab like TSMC or Samsung. FPGA used to be the hope, but the incumbent players are really now more focused on the high-end market. So that leaves a huge void for general-purpose devices.

At the other end of the spectrum, why not keep using a microcontroller? The problem is, the world is looking for AI, ML, and data processing, and you have to customize your device, making it more expensive and less flexible, to achieve some of the performance requirements.

But what if someone like us could make the FPGA that much smaller and easier to make? That’s what we are working on. We have gone from 40 to 16 nm. And our next step is to go to 5 nm. The question [we are addressing] is: How can we make an expensive process mainstream-sellable and still get a return on investment?

We are not trying to be another Altera or Xilinx. They are great companies. We are looking at something different that addresses more than just the $6 billion FPGA market. We believe that, together with our RISC-V platform, we can conquer a much bigger market, including some of the general-purpose processor or ASIC markets. You can build ASICs, but it is just economically infeasible. So we believe we are falling into this big void [in the market].

You say you have 1,000 customers now, but to get Efinix everywhere, it has to be a much bigger number. So how will you do this and what is the business model?

I think the world is waiting for a different business model. The business models to date have been successful but have tended to be very captive. That means you hire 10,000 people or 100,000 people to work for you. We will probably use a little bit more of a hybrid approach. We expect in the long term, about 15% to 20% of our revenue will be “platform” revenue. Basically, this focuses on general-purpose FPGA devices, and most likely, we will have various partnerships for different applications, increasing our product matrix so it is big enough to serve bigger markets.

So it’s necessary to have a big-enough product matrix without spending 100 years to do it. How you do that is basically not to do a captive model like back in our Altera days, where everything is home-grown and so complicated. Our technology is very portable. It’s important to use a standard-recipe process. And our software structure allows us to build many different devices, cheaper, in less time, and more economically, allowing us to do more platform partnerships. It’s not licensing. Licensing doesn’t work. It’s like I build a whole house, and anyone can plug their special things into the house. This [approach] means we can, in a reasonable period of time—say five to 10 years—build a big matrix of products.

The second part [of our strategy] is even more interesting and based on RISC-V. If you go back and look at all the traditional FPGA companies, they have been using proprietary processors, proprietary IP. We have to change. We cannot hire another 20,000 employees to do something different from FPGA. We want to take a hybrid approach built around open source.

So as of today, our RISC-V platform is totally built from an open-source platform, and also our goal is not to try to use the old-school process to lock in the customer. Why? If customers use a proprietary processor, and three months later, you say, “I don’t have silicon supply,” then they are stuck. That’s why open source gives the customer so-called freedom. I think new businesses need this business model to provide that freedom instead of being locked down by one or two vendors.

So you are building your product matrix to expand your customer base. Does that mean you’ll be selling from the website, or how will you build your market?

We’ll use everything—directly, through channels—and it is a playbook that has been used for many years. There are a lot more people interested in working with us now. The FPGA part has to be done directly. However, going back to the embedded side of the equation, once again, it is RISC-V. You have a fully integrated accelerator within the FPGA already, and our way is not to try to complicate the problem. We provide a basic template reference for customers. I have customers coming in who are so happy to just use it off the shelf. They don’t need to scratch their heads to figure out how to port to another company’s processor.

But some customers are more sophisticated: if they want to swiftly swap their own RISC-V core into our platform, they can do so. It is essentially a soft core. For example, today, a 16-nm Titanium running at 350–400 MHz is probably more than enough to handle the control and the software programmability for 70% of applications. When there are specific functions they need to accelerate, they don’t need to build special hardware; they just run them in the FPGA fabric, either as a standard accelerator or as a custom-instruction acceleration.
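For readers less familiar with the custom-instruction model Cheung describes, the sketch below shows roughly what it can look like from the firmware side on a RISC-V soft core: a small kernel calls either a hypothetical packed 8-bit dot-product instruction implemented in the FPGA fabric or a plain-C fallback. The opcode, funct values, and instruction semantics here are assumptions made for illustration, not Efinix's documented interface.

```c
#include <stdint.h>

/* Hypothetical packed 8-bit dot-product instruction placed in the RISC-V
 * custom-0 opcode space (0x0B). The opcode, funct3/funct7 values, and the
 * instruction itself are illustrative assumptions, not a documented
 * Efinix interface. */
static inline int32_t dot4_accel(uint32_t packed_a, uint32_t packed_b)
{
    int32_t result;
    __asm__ volatile (".insn r 0x0B, 0x0, 0x0, %0, %1, %2"
                      : "=r"(result)
                      : "r"(packed_a), "r"(packed_b));
    return result;
}

/* Bit-exact software fallback so the same firmware also runs on a core
 * built without the custom instruction. */
static inline int32_t dot4_soft(uint32_t packed_a, uint32_t packed_b)
{
    int32_t sum = 0;
    for (int i = 0; i < 4; i++) {
        int8_t a = (int8_t)(packed_a >> (8 * i));
        int8_t b = (int8_t)(packed_b >> (8 * i));
        sum += (int32_t)a * (int32_t)b;
    }
    return sum;
}

/* Inner loop of a small ML kernel: accumulate dot products over vectors
 * of packed int8 samples and weights. */
int32_t dot_product_q7(const uint32_t *x, const uint32_t *w, int n_words)
{
    int32_t acc = 0;
    for (int i = 0; i < n_words; i++) {
#ifdef USE_CUSTOM_INSN
        acc += dot4_accel(x[i], w[i]);
#else
        acc += dot4_soft(x[i], w[i]);
#endif
    }
    return acc;
}
```

In this model, whether the inner dot product executes as one fabric-implemented instruction or as the software loop is a build-time choice; the application code above it does not change.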

Can you explain in a little more detail how you’ll go to market with this strategy?

Let’s look at it in a different dimension: With RISC-V, we start getting into the embedded world, so our salesmanship is going to be very different. That market is addressing system software engineers, so the pull is already different [to the traditional FPGA market sell]. We need to have more references, more libraries for kernels, and build more cores to make them easy to use. Hence, more software and more partnerships are needed. It’s a totally different layer, which has a lot more customers than the FPGA.

Ultimately, I think the market size is probably 2× that of standard FPGAs, or even more. And it will turn much faster because it is software-based.

I come from a hybrid world, and I believe what we’ve tried to do is minimize the difficulty on the RTL [register transfer level] side by building a much bigger ecosystem of libraries for acceleration and custom instructions. It’s much easier than having to customize the full FPGA every time. So as long as we build it up, there should be an ecosystem that lets the system software person pick the chip and build their own system. They even have the flexibility to reconfigure their SoC.

What would you say is your competitive advantage as far as product is concerned?

I think the basic FPGA architecture over the last 30 to 40 years is all standard, where logic and routing are two different things. They have been optimized differently, and when the market needed to grow the density, the FPGA chip needed to ensure routability. So they had to grow the routing switches as big as possible everywhere. And then for more complex logic, they had to do more hardening on the logic side. Overall, that just makes things bigger.

That means continually having to shrink the process, and a few years later, as that gets more expensive, it can end up being an expensive elephant. That’s one thing I tell my group: “My business is simple. Don’t build an expensive elephant.” What we are doing is creating a more efficient, fine-grained architecture where the basic cell can be reconfigured as either logic or routing. We have rolled out a 40-nm part as a test, which is now running pretty well. It’s already very competitive compared with 28 nm for general-purpose use. So when we roll out the 16 nm, like the upcoming Ti180, it’s going to blow people away. They will never have seen a device this small, with such low power, but running at the performance of a top-performance AMD Xilinx part.

Because it’s small, the architecture uses less power and is more economical, but on top of that, we make the methodology easy to integrate in software and in the periphery. One of the good things we have, and that my foundry told me, is that we use a standard-process recipe. It’s all about economics. First of all, an advanced foundry may not like to run old processes. You use the same water and electricity, and you can probably make more money with the advanced process.

And then when you run a new process, they don’t like a special recipe. Traditionally, for FPGAs to get their performance, they must use steroids. In other words, they need special recipes, such as many metals, special metal stacks, and special transistors, and anytime you insert an FPGA run, you need to do a lot of work to change it. The big foundries can do anything. But for other foundries, it may not be so easy to provide those special recipes. That means it is very difficult for other companies to move to another foundry and still be competitive. But we can.

What’s the process for a customer to engage with you?

Most users know how to use our tools, so the whole process is the same as with other FPGAs. Tool-wise, engagement is the same. And for RISC-V, it’s even easier. We have the SDK [software development kit] they can download, and it’s pretty easy to set up, plus we give it away free; we don’t charge. We only charge when they come and say, “Can I build a special SoC platform?” which is fine. We have a customer, Sony, for whom we do the whole sensor as well as the FPGA integration.


Efinix recently revealed that its Trion T20 FPGA is being used in Sony Semiconductor Solutions Corporation’s SPRESENSE HDR camera board, which is part of a development platform designed as an open-source environment for edge and IoT applications. (Source: Sony)

What about the 10 million chips you said you have shipped?

Predominantly, it is in industrial. You have to have a place to start, and our first trials started with industrial. We do see, though, that our expansion will move pretty quickly into a few areas, especially automotive and high-end consumer (tied to AR/VR and mixed reality). Then, a little further down the line, it is communications and data computing, which I think we will see more of when we start rolling out our higher-density devices. Right now, the largest one we have available is a 180k LE [logic element] device.

Delving further into the industrial side, a key part of what we are seeing is imaging. Imaging is done in many different cameras: thermal cameras, video cameras, ToF [time-of-flight] cameras, printing, and now LiDAR. The common thing in all of these is very parallel data processing and, more so, flexible parallel data processing. The traditional FPGA is too expensive and too power-hungry, so it is not suitable for the general-purpose market.

And regarding RISC-V, it’s interesting. Once you get to about 120k LE devices, over 70% to 80% of those customers are using our RISC-V core. It’s a simple control plane and provides software portability and system integration. Some customers may be sophisticated in their use of it, but others just pick up what we have already, and they love it.

It’s interesting that you’re actually making a success of RISC-V, which maybe people don’t know about.

We are always showing up in RISC-V analyst reports. But we are just so small, and from an immediate business perspective, we have focused so much on the traditional FPGA market. RISC-V has created an embedded dimension for us to sell into, but even more powerful is the upcoming story, which we don’t have on the website yet, about rolling out the first tryout of TinyML running on a RISC-V core with acceleration. That will be another dimension to sell into the edge, when people try to insert AI or machine learning into the infrastructure.

A typical specialized AI chip won’t help, because at the edge, they want to integrate more functions, so with us, they can flexibly insert the AI function alongside the existing non-AI functions. That part we don’t want to be proprietary. So TinyML is one of the things that we’ll try to roll out, because a lot of processors and controllers are using TinyML, but mostly, they’re going to face latency and performance problems.

That’s another answer to a previous question about how we’re going to augment the market. It’s not by hiring more people; it’s by building software, which is cheaper than continually building chips.
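As a concrete, if simplified, illustration of that software-first pitch, the sketch below shows how firmware on the RISC-V core might drive a small inference accelerator built in the FPGA fabric and exposed over a memory-mapped bus. The base address, register layout, and polling protocol are invented for this example; they are not Efinix's actual SDK or register map.

```c
#include <stdint.h>

/* Hypothetical register map for a small inference accelerator built in the
 * FPGA fabric and exposed to the RISC-V core over a memory-mapped bus.
 * Base address and register layout are invented for this example. */
#define ACCEL_BASE        0x80000000u
#define ACCEL_SRC_ADDR    (*(volatile uint32_t *)(ACCEL_BASE + 0x00u))
#define ACCEL_DST_ADDR    (*(volatile uint32_t *)(ACCEL_BASE + 0x04u))
#define ACCEL_LENGTH      (*(volatile uint32_t *)(ACCEL_BASE + 0x08u))
#define ACCEL_CTRL        (*(volatile uint32_t *)(ACCEL_BASE + 0x0Cu))
#define ACCEL_STATUS      (*(volatile uint32_t *)(ACCEL_BASE + 0x10u))

#define ACCEL_CTRL_START  0x1u
#define ACCEL_STATUS_DONE 0x1u

/* Run one accelerated layer: point the fabric block at the input and output
 * buffers, start it, and poll until it signals completion. A production
 * driver would likely use an interrupt instead of polling. */
void accel_run_layer(const int8_t *src, int8_t *dst, uint32_t len)
{
    ACCEL_SRC_ADDR = (uint32_t)(uintptr_t)src;
    ACCEL_DST_ADDR = (uint32_t)(uintptr_t)dst;
    ACCEL_LENGTH   = len;
    ACCEL_CTRL     = ACCEL_CTRL_START;

    while ((ACCEL_STATUS & ACCEL_STATUS_DONE) == 0) {
        /* Busy-wait; ordinary non-AI control code runs on the same core
         * before and after this call. */
    }
}
```

The point of the arrangement is that the non-AI control code keeps running on the same core, and adding or changing the accelerated function becomes a fabric and firmware update rather than a new chip.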

>> This article was originally published on our sister site, EE Times.

