
Reliable and power-aware architectures: Microbenchmark generation

Augusto Vega, Pradip Bose, and Alper Buyuktosunoglu

October 02, 2017


Editor's Note: Embedded designers must contend with a host of challenges in creating systems for harsh environments. Harsh environments present unique characteristics not only in terms of temperature extremes but also in areas including availability, security, very limited power budget, and more. In Rugged Embedded Systems, the authors present a series of papers by experts in each of the areas that can present unusually demanding requirements. In Chapter 2 of the book, the authors address fundamental concerns in reliability and system resiliency. This series excerpts that chapter in a series of installments including:
- Reliable and power-aware architectures: Sustaining system resiliency
- Reliable and power-aware architectures: Measuring resiliency
- Reliable and power-aware architectures: Soft-error vulnerabilities
- Reliable and power-aware architectures: Microbenchmark generation (this article)
- Reliable and power-aware architectures: Measurement and modeling


Adapted from Rugged Embedded Systems: Computing in Harsh Environments, by Augusto Vega, Pradip Bose, and Alper Buyuktosunoglu.

 

CHAPTER 2. Reliable and power-aware architectures: Fundamentals and modeling (Continued)

7 MICROBENCHMARK GENERATION

A systematic set of microbenchmarks is needed to serve as specific stressors that sensitize a targeted processor chip to targeted failure modes under harsh environments. The idea is to run such specialized microbenchmarks and observe the onset of vulnerabilities to robust operation. Microbenchmarks are targeted at the processor so that deficiencies can be identified or diagnosed in (a simple hand-written example follows the list below):

  1. traditional architecture performance (e.g., instructions per cycle, cache misses);

  2. power or energy related metrics;

  3. temperature and lifetime reliability metrics;

  4. resilience under transient errors induced by high-energy particle strikes, voltage noise events, thermal hot spots, and so on.
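
As a concrete illustration of item 1, the following is a minimal hand-written cache stressor in C. The buffer size, iteration count, and overall structure are our own illustrative choices, not taken from this chapter: the kernel chases pointers through a randomly linked ring sized well beyond a typical last-level cache, so nearly every load misses.

/* Minimal cache stressor: pointer chasing over a randomly linked
 * ring that is sized to exceed the last-level cache. */
#include <stdio.h>
#include <stdlib.h>

#define ELEMS (8 * 1024 * 1024)   /* 8M pointers = 64 MB on a 64-bit machine */
#define ITERS (64 * 1024 * 1024)

int main(void) {
    void **ring = malloc(ELEMS * sizeof *ring);
    size_t *idx = malloc(ELEMS * sizeof *idx);
    if (!ring || !idx) return 1;

    /* Fisher-Yates shuffle of the element order; linking elements in
     * this shuffled order forms one random cycle through the buffer,
     * which defeats both spatial locality and hardware prefetching. */
    for (size_t i = 0; i < ELEMS; i++) idx[i] = i;
    for (size_t i = ELEMS - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < ELEMS; i++)
        ring[idx[i]] = &ring[idx[(i + 1) % ELEMS]];

    /* The stressor kernel: every load depends on the previous load's
     * result, so latency cannot be hidden and most accesses miss. */
    void *p = ring[idx[0]];
    for (size_t i = 0; i < ITERS; i++)
        p = *(void **)p;

    printf("%p\n", p);            /* keep the chase from being optimized away */
    free(idx);
    free(ring);
    return 0;
}

Run long enough, the same kernel also exercises the memory subsystem's power and thermal behavior, which is why even a simple stressor like this can be relevant across several of the metric categories above.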

Microbenchmarks can certainly be developed manually with detailed knowledge of the target architecture. From a productivity viewpoint, however, there is clear value in automating the generation of microbenchmarks through a framework: one that allows developers to quickly convert the idea of a microbenchmark into a working benchmark that performs the desired action. The following sections take a deep dive into the microbenchmark generation process and the capabilities of an automated framework. We will also present a simple example to make the concepts clear.

7.1 OVERVIEW

In a microbenchmark generation framework, flexibility and generality are the main design constraints, since the range of situations in which microbenchmarks can be useful is vast. We want the user to fully control the generated code at the assembly level. In addition, we want the user to be able to specify high-level properties (e.g., loads per instruction) or dynamic properties (e.g., instructions-per-cycle ratio) that the microbenchmark should exhibit. Moreover, we want the user to be able to search the design space quickly when looking for a solution that meets the specifications.

In a microbenchmark generation process, microbenchmarks can have two kinds of properties: static and dynamic. Microbenchmarks that fulfill a set of static properties can be generated directly, since static properties do not depend on the environment in which the microbenchmark is deployed. Such properties include instruction distribution, code and data footprint, dependency distance, branch patterns, and data access patterns. In contrast, generating microbenchmarks with a given set of dynamic properties is a complex task: dynamic properties are affected both by the static properties of the microbenchmark and by the architecture on which it runs. Examples of dynamic properties include instructions per cycle, memory hit/miss ratios, power, and temperature.
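
To make the distinction concrete, a generation framework might accept a static-property specification along the following lines. This C sketch is purely illustrative; the struct, field names, and values are hypothetical and do not correspond to any particular framework's interface.

#include <stddef.h>

/* Hypothetical static-property specification for a generated
 * microbenchmark -- only properties that do not depend on the
 * execution environment appear here. */
typedef struct {
    /* Instruction distribution (fractions should sum to 1.0). */
    double frac_int_alu;      /* integer ALU operations   */
    double frac_fp;           /* floating-point operations */
    double frac_loads;        /* memory loads             */
    double frac_stores;       /* memory stores            */
    double frac_branches;     /* branch instructions      */

    /* Footprints, in bytes. */
    size_t code_footprint;
    size_t data_footprint;

    /* Average register dependency distance, in instructions. */
    unsigned dep_distance;

    /* Branch pattern: taken ratio for conditional branches. */
    double branch_taken_ratio;

    /* Data access pattern: stride in bytes (0 = random). */
    size_t access_stride;
} static_properties;

/* Example: a load-heavy spec with a large, randomly accessed data
 * set -- the kind of specification that yields a cache stressor. */
static const static_properties cache_stressor_spec = {
    .frac_int_alu = 0.30, .frac_fp = 0.00,
    .frac_loads = 0.50, .frac_stores = 0.10, .frac_branches = 0.10,
    .code_footprint = 4 * 1024,
    .data_footprint = 64 * 1024 * 1024,
    .dep_distance = 2,
    .branch_taken_ratio = 0.9,
    .access_stride = 0,
};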

In general, it is hard to statically ensure the dynamic properties of a microbenchmark. In some situations, deep knowledge of the underlying architecture and a constrained execution environment make it possible to guarantee dynamic properties statically. Otherwise, checking whether the dynamic properties are satisfied requires simulation on a simulator or measurement on a real setup. In that scenario, since the user can only control the static properties of a microbenchmark, a search of the design space is needed to find a solution.
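
As a sketch of the measurement path, the following C program times a candidate kernel on a real machine and derives a rough IPC estimate. The kernel, the assumed 2 GHz clock, and the assumed four instructions per iteration are all illustrative; a real setup would read hardware performance counters (e.g., via Linux perf) rather than inferring cycles from wall-clock time, and the code should be compiled without aggressive optimization so the loop survives as written.

#include <stdio.h>
#include <time.h>

#define CPU_HZ 2.0e9              /* assumption: 2 GHz nominal clock        */
#define KERNEL_ITERS 100000000UL
#define INSTS_PER_ITER 4UL        /* assumption about the generated code:
                                     add, xor, increment, compare-branch    */

/* Candidate kernel with a serial dependency chain. */
static unsigned long kernel(unsigned long n) {
    unsigned long a = 1, b = 2;
    for (unsigned long i = 0; i < n; i++) {
        a += b;
        b ^= a;
    }
    return a + b;
}

int main(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    unsigned long r = kernel(KERNEL_ITERS);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double insts = (double)KERNEL_ITERS * INSTS_PER_ITER;
    double ipc = insts / (secs * CPU_HZ);

    /* The estimated IPC would then be compared against the target
     * dynamic property to accept or reject this candidate. */
    printf("result=%lu  time=%.3fs  estimated IPC=%.2f\n", r, secs, ipc);
    return 0;
}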

Fig. 9 shows the high-level picture of a microbenchmark generation process. In the first step, the user provides a set of properties. In the second step, if the properties are abstract (e.g., integer unit at 70% utilization), they are translated into architectural properties by the property driver; if not, they are forwarded directly to the next step, the microbenchmark synthesizer. The synthesizer takes the properties and generates an abstract representation of the microbenchmark with the properties that can be statically defined. Other parameters required to generate the microbenchmark are assigned using the models implemented in the architecture back-end. In this step, the call flow graph and basic blocks are created, and instructions, dependencies, memory patterns, and branch patterns are assigned.

The architecture back-end consists of three components: (a) a definition of the instruction set architecture, via an opcode syntax table, together with a high-level parametric definition of the processor microarchitecture; (b) an analytical reference model that can calculate (precisely or within specified bounds) the performance and unit-level utilization of a candidate loop microbenchmark; and (c) a (micro)architecture translator segment responsible for the final (micro)architecture-specific consolidation and integration of the microbenchmark program.

Finally, in the fourth step the property evaluator checks whether the microbenchmark fulfills the required properties. For that purpose, the framework can rely on a simulator, on real execution on a machine, or on the analytical models provided by the architecture back-end. If the microbenchmark fulfills the target properties, the final code is generated (step 6). Otherwise, the property evaluator modifies the input parameters of the code generator, and the process iterates until the search finds the desired solution (step 5).


FIG. 9 High-level description of a microbenchmark generation process.
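
The following C skeleton sketches the iterative loop just described. Every type and function is a hypothetical stand-in for the corresponding framework component (property driver, synthesizer, architecture back-end, property evaluator); the toy analytical model exists only so the example runs and converges.

#include <stdbool.h>
#include <stdio.h>

typedef struct { double ipc; } properties;                       /* target dynamic property */
typedef struct { double load_frac; unsigned dep_dist; } params;  /* static generation knobs */
typedef struct { params p; double measured_ipc; } benchmark;

/* Step 2: property driver -- map abstract targets to initial knobs. */
static params derive_initial_params(properties target) {
    (void)target;
    return (params){ .load_frac = 0.3, .dep_dist = 4 };
}

/* Step 3: synthesizer + architecture back-end (stubbed: a toy
 * analytical model stands in for synthesis and evaluation). */
static benchmark synthesize(params p) {
    benchmark b = { .p = p };
    b.measured_ipc = 4.0 / (1.0 + 2.0 * p.load_frac);   /* toy model */
    return b;
}

/* Step 4: property evaluator -- simulator, real run, or model. */
static bool evaluate(benchmark b, properties target) {
    double err = b.measured_ipc - target.ipc;
    return err < 0.05 && err > -0.05;
}

/* Step 5: adjust the static knobs and try again. */
static params refine(benchmark b, properties target) {
    params p = b.p;
    p.load_frac += (b.measured_ipc > target.ipc) ? 0.05 : -0.05;
    return p;
}

int main(void) {
    properties target = { .ipc = 2.0 };
    params p = derive_initial_params(target);            /* steps 1-2 */
    for (int i = 0; i < 100; i++) {
        benchmark b = synthesize(p);                     /* step 3 */
        if (evaluate(b, target)) {                       /* step 4 */
            printf("step 6: emit code (load_frac=%.2f, IPC=%.2f)\n",
                   b.p.load_frac, b.measured_ipc);
            return 0;
        }
        p = refine(b, target);                           /* step 5 */
    }
    puts("search budget exhausted");
    return 1;
}

With the toy model above, the search converges in a few iterations: the evaluator rejects candidates whose modeled IPC is more than 0.05 away from the 2.0 target, and the refinement step nudges the load fraction until the model lands on target.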
