Embedded design with FPGAs: Development process - Embedded.com

Editor’s Note: As advanced algorithms continue to emerge for smart product designs, developers often find themselves struggling to implement embedded systems able to meet the associated processing demands of these algorithms. FPGAs can deliver the required performance, but designing with FPGAs has long been considered limited to the purview of FPGA programming experts. Today, however, the availability of more powerful FPGAs and more effective development environments has made FPGA development broadly accessible. In this excerpt, Chapter 4 from the book Architecting High-Performance Embedded Systems, the author offers a comprehensive review of FPGA devices, implementation languages, and the FPGA development process as well as a detailed walkthrough of how to get started implementing FPGAs in your own design. The complete excerpt is presented in the following series of installments:
1: Hardware resources
2: Implementation languages
3: Development process (this article)
4: Building a project
5: Implementation

Adapted from Architecting High-Performance Embedded Systems, by Jim Ledin.

The FPGA development process

While FPGAs are used across a wide variety of disparate application domains, it is possible to identify a set of development steps that are broadly applicable to any FPGA development project. This section discusses the usual FPGA development steps in the sequence they normally occur during a project.

Defining system requirements

The first step in developing a new system, or when initiating a significant upgrade to an existing system, is to develop a clear and detailed understanding of what the system is supposed to do. The requirements definition process begins with a general description of the system’s intended functionality, operating modes, and key features. This information should be written out in clear and unambiguous language and shared with all parties having a stake in the success of the development effort. The goal of sharing the system requirements is to achieve consensus among all of the parties as to the completeness and correctness of the descriptions.

Requirement descriptions must be fleshed out to include specifications for the required level of system performance in terms such as sampling rates of input signals and update rates for actuator output commands. Additional details such as physical size constraints, minimum battery lifetime, and tolerable environmental temperature ranges will guide the design process. In general, a comprehensive set of specifications must be developed that describes the minimum performance thresholds for all system parameters that are judged to be relevant to overall system success.

The full set of system requirements must be complete to the extent that any design solution that complies with all of the stated specifications must be an adequate solution. If it turns out that a design that satisfies all of the specifications is deemed unacceptable for some unrelated reason, this represents a failure to fully state the system requirements.

For example, if a technically adequate solution is determined to be too expensive to produce, the source of the problem is likely to be a failure to fully define cost containment constraints during the requirements development process.

After the top-level system requirements have been defined and agreed upon, it is generally necessary to divide the overall system configuration into a collection of subsystems, each with a cohesive purpose and its own set of descriptive requirements and technical specifications. In a real-time embedded system architecture, the digital processing capability is likely to be represented as a subsystem with a corresponding collection of requirements.

Allocating functionality to the FPGA

If the requirements for digital processing in a system architecture exceed the capabilities of microcontrollers and microprocessors that would otherwise be suitable for use in the system, it may be appropriate to consider incorporating an FPGA in the design. Some system architectures, particularly those that benefit from high-speed digital hardware performing parallel operations, are natural candidates for FPGA implementation. Other system architectures may be capable of adequate performance with traditional digital processing, but there may be valuable opportunities to take advantage of the flexibility and extensibility offered by an FPGA implementation over a planned lifetime that envisions substantial system upgrades in the future.

After the decision has been made to incorporate an FPGA in the design, the next step is to allocate the portions of overall system digital processing requirements to the FPGA device. This typically includes the specification of the FPGA input and output signals, the update rates of inputs and outputs, and the identification of components with which the FPGA must interact, including parts such as ADCs and RAM devices.

Identifying required FPGA features

Having defined the functions to be performed by the FPGA, and with knowledge of the interfaces to other devices that the FPGA must support, it becomes possible to develop a list of features that candidate FPGA devices must provide.

Some FPGA families are designed for low-cost, less-complex applications and thus offer a limited set of resources for implementing digital logic. These devices might operate from battery power and require only passive cooling. Other, more powerful, FPGA families support large-scale, full-featured digital designs, are intended to operate at peak performance, and may require continuous active cooling.

The system requirements associated with the embedded application will guide the selection of an appropriate FPGA family for the application. At this point, it is likely not possible to identify a specific FPGA model within the preferred family because the resource requirements of the FPGA implementation have not been fully defined. However, with experience, it is possible to identify a small number of FPGA models that appear suitable for the design.

In addition to the FPGA resources for digital circuit implementation, many FPGA models include additional features that may be important for the system design. For example, a built-in ADC may be useful for minimizing the system parts count. The list of required and desired FPGA features will help further narrow the selection of appropriate FPGA devices for the system.

Implementing the FPGA design

Having identified a candidate FPGA model, and with the detailed definition of the functionality allocated to the FPGA in hand, it is time to begin the implementation of the FPGA design. This will generally involve the use of the FPGA development tool suite and usually consists largely of developing HDL code in the preferred language for the project.

If appropriate, the FPGA implementation might begin with a block diagram representation of the top-level FPGA design. As necessary, components developed in HDL or C/C++ can be incorporated into the block design to complete the full system implementation.

Alternatively, it is also common for entire system designs to be developed directly in HDL. For developers familiar with the language and with a full understanding of the features and constraints of the FPGA model in use, this may lead to the most resource-efficient and highest-performing design outcome.

FPGA development proceeds in phases in which the initial design is specified in progressively greater detail until a programming file for the FPGA device is produced. It is common to iterate through these phases several times for a large project, developing a small portion of the total design during each pass through the steps. These phases are described in the following sections.

Design entry

Design entry is the phase where the system developer defines system functionality using HDL code, block diagrams, and/or C/C++ code. The code and other artifacts, such as block diagrams, define the logical functionality of the system in abstract terms. In other words, the design artifacts define a logic circuit, but they don’t define how it is integrated with the rest of the system.
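For instance, a design entry artifact might be a VHDL description as simple as the following 4-bit counter. This is an illustrative sketch only; the entity and port names are hypothetical and are not taken from the chapter's example project. Note that the code defines the logic circuit itself but says nothing about how it will be placed in a physical device:

```vhdl
-- A minimal design entry example: a 4-bit counter with synchronous
-- reset. Entity and signal names are illustrative only.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity counter4 is
  port (
    clk   : in  std_logic;
    reset : in  std_logic;
    count : out std_logic_vector(3 downto 0)
  );
end entity counter4;

architecture rtl of counter4 is
  signal count_reg : unsigned(3 downto 0) := (others => '0');
begin
  process (clk) is
  begin
    if rising_edge(clk) then
      if reset = '1' then
        count_reg <= (others => '0');
      else
        count_reg <= count_reg + 1;  -- increment once per clock cycle
      end if;
    end if;
  end process;

  count <= std_logic_vector(count_reg);
end architecture rtl;
```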

I/O planning

FPGA I/O planning is the process of identifying the pins assigned to perform particular I/O functions and associating device features, such as the I/O signal standard, with each signal. As part of the I/O planning process, it may be important to consider where on the physical device package the I/O pins are located. This step is important to minimize the printed circuit board trace lengths for high-speed signals and to avoid forcing circuit signal traces to cross over one another unnecessarily.
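In the Vivado tool suite, pin assignments of this kind are captured in an XDC constraint file. The fragment below is a hedged sketch of what such entries might look like; the port names and package pin locations are hypothetical and must be taken from the schematic of the actual board in use:

```tcl
## Illustrative Vivado XDC pin constraints. Port names (clk, led[0])
## and pin locations (E3, H5) are hypothetical examples only.
set_property PACKAGE_PIN E3 [get_ports clk]
set_property IOSTANDARD LVCMOS33 [get_ports clk]

set_property PACKAGE_PIN H5 [get_ports {led[0]}]
set_property IOSTANDARD LVCMOS33 [get_ports {led[0]}]
```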

The definition of I/O signal requirements is one form of constraint in the FPGA development process. The other primary constraint category consists of timing requirements that determine the FPGA solution’s performance. The FPGA synthesis process uses the HDL code and the project constraints to develop a functionally correct FPGA solution that satisfies all of the defined constraints. If the tool cannot satisfy all of the constraints, synthesis will fail.
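Timing constraints are expressed in the same constraint file. A minimal sketch in Vivado XDC syntax, assuming a hypothetical 100 MHz input clock on a port named clk, might look like this:

```tcl
## Declare a 100 MHz (10.000 ns period) clock on the clk input port.
## The port name and frequency are illustrative assumptions.
create_clock -name sys_clk -period 10.000 [get_ports clk]
```

This single constraint obligates the tools to verify that every synchronous path driven by this clock meets the 10 ns period during implementation.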


Synthesis

Synthesis transforms the source code into a circuit design called a netlist, which represents the circuit constructed from the resources of the target FPGA model. The netlist is a logical, or schematic, version of the circuit: it does not define how the circuit will be implemented in the physical FPGA device. That occurs in the next step.

Place and route

The place process takes the FPGA resources defined in the netlist and assigns them to specific logic elements within the selected FPGA. The resulting resource placements must satisfy any constraints that restrict the allocation of these elements, including I/O constraints and timing constraints.

After the logic elements have been assigned physical locations during the place process, a set of connections among the logic elements is configured during the route process. Routing implements all of the connections between the logic elements and enables the circuit to function as described in the HDL code. After the place and route operations have completed, the configuration of the FPGA is fully determined.

Bitstream generation

The final step in the FPGA development process is the production of a bitstream file. To achieve the highest performance, most modern FPGA devices store their configuration internally using static RAM (SRAM).

You can think of the FPGA configuration SRAM as a very large shift register, containing perhaps millions of bits. The contents of this shift register fully specify all aspects of FPGA device configuration and operation. The bitstream file produced during FPGA development represents the settings for the shift register that cause the device to perform the intended functions specified by the HDL and the constraints. In terms of traditional software development processes, the bitstream file is analogous to an executable program produced by a linker.

SRAM is volatile and loses its contents each time device power is removed. The real-time embedded system architecture must provide a means for loading the bitstream file into the FPGA each time power is applied. Typically, the bitstream is either loaded from flash memory located within the device or from an external source, such as a PC, connected to the device during each power-on cycle.

Having completed the compilation of the FPGA bitstream, the next step is to test the implementation to verify that it operates correctly. This step is no different than the testing required at the end of a traditional software build process.

Testing the implementation

FPGA development is susceptible to all of the types of bugs that bedevil traditional software development efforts. During FPGA development, you will likely be presented with many error messages related to incorrect syntax, attempts to use resources not currently accessible, and many other types of violations. As in any programming endeavor, you will need to identify the source of each error and fix the problem.

Even after the FPGA application successfully proceeds through all of the stages to bitstream generation, there is no guarantee that the design will perform as intended. To achieve a successful design on a reasonable timetable, it is absolutely critical to perform adequate testing at each stage of development.

The first phase of testing should thoroughly exercise the behavior of the HDL code to demonstrate that it performs as intended. The example project at the end of this chapter will demonstrate the use of the Vivado tool suite to perform a thorough test of the HDL logic in the design.
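Exercising HDL behavior is normally done with a testbench that applies stimulus to the design and checks its outputs in simulation. The sketch below illustrates the general shape of such a testbench in VHDL; the unit under test (a counter4 entity assumed to increment once per rising clock edge) and all names are hypothetical, not the chapter's example project:

```vhdl
-- A minimal VHDL testbench sketch. The unit under test (counter4)
-- is a hypothetical 4-bit counter that increments once per rising
-- clock edge while reset is deasserted.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity counter4_tb is
end entity counter4_tb;

architecture sim of counter4_tb is
  signal clk   : std_logic := '0';
  signal reset : std_logic := '1';
  signal count : std_logic_vector(3 downto 0);
begin
  -- Free-running 100 MHz clock (10 ns period)
  clk <= not clk after 5 ns;

  uut : entity work.counter4
    port map (clk => clk, reset => reset, count => count);

  stimulus : process is
  begin
    wait for 20 ns;           -- hold reset through two clock edges
    reset <= '0';
    wait for 100 ns;          -- allow ten rising edges to occur
    assert unsigned(count) = 10
      report "Unexpected count value" severity error;
    wait;                     -- end of test
  end process stimulus;
end architecture sim;
```

A simulator such as the one built into Vivado runs the testbench and reports any assertion failures, giving confidence in the logic before any hardware is involved.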

After the bitstream has been generated, there is no substitute for comprehensive testing of the FPGA as implemented in the final system configuration. This testing must thoroughly exercise all features and modes of the FPGA, including its response to out-of-range and error conditions.

At each step of the design, development, and testing process, project personnel must remain attuned to the possibility of implementing system features that are susceptible to improper behavior in unlikely or rare situations. The occurrence of these kinds of issues can represent bugs that are extremely difficult to duplicate and that can forever tarnish the perception of the embedded system design and the organization that produced it. If you do an excellent job of testing, the likelihood of this outcome will be reduced substantially.

The next section provides a detailed description of the steps in the development, testing, and implementation of a simple FPGA project using the Arty A7 development board and the Xilinx Vivado tool suite.

Reprinted with permission from Packt Publishing. Copyright © 2021 Packt Publishing

Jim Ledin is the CEO of Ledin Engineering, Inc. Jim is an expert in embedded software and hardware design, development, and testing. He is also accomplished in embedded system cybersecurity assessment and penetration testing. He has a B.S. degree in aerospace engineering from Iowa State University and an M.S. degree in electrical and computer engineering from Georgia Institute of Technology. Jim is a registered professional electrical engineer in California, a Certified Information System Security Professional (CISSP), a Certified Ethical Hacker (CEH), and a Certified Penetration Tester (CPT).
