Reducing tester-based silicon debug effort & time: Part 1 – Testing modes

Neha Srivastava, Nitin Goel, and Aashish Mittal - Freescale Semiconductor

January 07, 2013


Editor's note: In the first of a two part series the authors review the pros and cons of the silicon test and debug phases an SoC must go through and the techniques and tools available to the test engineer.

Rob Rutenbar, professor of electrical and computer engineering at Carnegie Mellon University, calls post-silicon debug a "dirty little secret" that can cost an embedded system-on-chip design project $15 million to $20 million and take six months to complete, and even then there is a possibility the chips will not work perfectly. No designer can be absolutely sure that all components of the design will work seamlessly at the first trial by the customer and subsequently under all operating conditions. A chip may fail at particular frequencies and operating conditions; design functionality may be perfect stand-alone but things can start failing the moment it is plugged into a bigger and more complex cluster of chips at the customer’s end; a customer may configure the chip in an unforeseen scenario, making the chip fail long after it has been launched in the market.

On average, tester activities consume up to 40% of the total time and cost of a modern chip project, and they are among the least predictable processes.

As feature sizes shrink, operating speeds and design complexity increase. A customer may request a new feature or a change to an existing one, requiring new patterns and techniques to test these new but critical features. As a result, testing methods become more challenging, and the time and effort spent on testing grow.

Unfortunately, this rise in testing complexity has not been matched by an equivalent investment in dedicated test development on a scale comparable to the rest of the commercial chip business. Depending on hit-and-miss methods to make up the difference - doing just enough to convince customers of the device's functionality - can prove to be a major oversight in the long run. What is needed instead is a robust, rigorous, and - as much as possible - infallible test program.

To maintain high quality and customer confidence, current generation submicron integrated circuits need to pass a high quality and fairly exhaustive testing program before being shipped off to customers or released into the market. Complicating the problem, each part of the testing process has distinctive characteristics due to differences in frequency, IO timing, voltage levels, and other features. In spite of these complexities, it is crucial to develop a set of general practices and techniques that can address generic issues that plague the current generation of submicron-geometry devices.

The number of possible test scenarios for a chip can be quite high, and so, as a first step, there needs to be ongoing discussions among the verification engineer, design team, tester engineer, production team, and the customers themselves to arrive at a concise and feasible list of patterns that are capable of testing all critical functional scenarios with minimum time and effort.
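Arriving at a concise pattern list that still covers every critical scenario is essentially a minimal-cover problem, and a greedy selection is one common way to approximate it. The sketch below assumes per-pattern scenario coverage is already known from verification; the pattern names and coverage data are invented for illustration:

```python
# Greedy selection of a small pattern set that covers all critical
# scenarios. Pattern names and their scenario coverage are hypothetical;
# real coverage data would come from the verification environment.
def select_patterns(coverage):
    """coverage: dict mapping pattern name -> set of scenarios it tests."""
    needed = set().union(*coverage.values())   # every scenario to be covered
    chosen = []
    while needed:
        # Pick the pattern covering the most still-uncovered scenarios.
        best = max(coverage, key=lambda p: len(coverage[p] & needed))
        if not coverage[best] & needed:
            break                              # remaining scenarios uncoverable
        chosen.append(best)
        needed -= coverage[best]
    return chosen

coverage = {
    "pat_reset":  {"por", "soft_reset"},
    "pat_lowpwr": {"stop_mode", "wakeup", "por"},
    "pat_clocks": {"pll_lock", "clock_switch"},
    "pat_all_io": {"wakeup", "clock_switch"},
}
picked = select_patterns(coverage)   # three patterns cover all six scenarios
```

Greedy selection is not guaranteed to be optimal, but it gives the discussion between verification, design, and test teams a defensible starting list to prune or extend.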

Pre-silicon verification for testing (VFT)
Pre-silicon verification engineers must handle tester pattern generation and simulation (VFT) to deliver high-quality, fool-proof pattern suites to be tried on the testers.

Pre-silicon VFT is a super set of functional verification activities, and involves developing a testbench environment capable of simulating tester conditions and silicon behavior as closely as possible before silicon is available, with device-specific startup routines and generating functional tester patterns.

Design, verification, and testing can no longer remain exclusive domains. They must converge and complement each other to detect potential issues at an early stage.

Once the tester pattern suite is finalized, each pattern needs to be planned in as much detail as possible because it will be the foundation upon which the targeted code with incremental and iterative modifications will run. Dealing with potential issues on the tester requires intelligent and conscious analysis of the design behavior and waveforms.

Some features of verification provide better debug capability compared to the later testing phase:

  • Since the environment is simulated, there is flexibility in setting up test cases at the block and gate level.
  • Verification allows full visibility into design waveforms and internal signals.
  • Inputs can be injected and outputs probed and logged from virtually anywhere in the design.
  • Individual blocks can be verified before they are integrated into the SoC model, something only possible at the VFT level. In silicon, the blocks cannot be physically separated, so the entire SoC must be dealt with as one unit.
  • The ability to inject faults and errors is also something that can be done only in verification.
  • In simulation, designers have long enjoyed many advanced capabilities such as transaction-level modeling and assertion-based verification.
  • Verification allows much more detailed analysis and the ability to find the point of failure, provided the issue can be reproduced in the VFT environment.

Some limitations of VFT compared to testing include:
  • A problem with design simulation and pre-silicon testing is that they take a long time to execute when compared to actual silicon. What takes seconds in silicon could take hours or days in a simulated environment. This limits the amount of testing that can be performed.
  • No matter how high the quality of behavioral models associated with analog blocks, they can never mimic their full behavior. So there always remains an area of uncertainty in fully verifying analog behavior in VFT.
  • Because debugging is far easier in VFT than on the tester, it is imperative to build and use a reliable testbench infrastructure with well-defined startup and shutdown sequences, so that patterns reach the tester with a minimum of failure and debug events.
  • The patterns delivered for testing functional behavior of silicon must be correct by construction, both in terms of the pattern behavior itself and in terms of the full infrastructure involving linkers, the compile-and-make process, and startup/shutdown sequences. This eliminates unnecessary iterations that add to the already high tester time, cost, and debug effort. In addition, the reduced visibility of design behavior on the tester forces engineers to rely on port toggles, making the debugging of failures all the more tedious and time-consuming.

Once the VFT engineer is sure of the quality of the generated patterns and VCDs, they can be sent to the tester engineer to be tried out on real silicon under device-specific PVT (process, voltage, temperature) corner conditions.
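The hand-off from simulation to tester boils down to turning sampled signal activity into per-cycle pin vectors. The toy sketch below assumes the simulation results have already been sampled once per tester cycle; the pin names, sample data, and one-character-per-pin vector format are invented stand-ins for real flows, which emit standard formats such as STIL or WGL from the VCD:

```python
# Convert per-cycle sampled signal values into simple tester vector
# strings, one character per pin per cycle. The pin order, sample data,
# and output format are hypothetical simplifications of a STIL/WGL flow.
def to_vectors(pins, samples):
    """pins: ordered pin names; samples: dict pin -> list of '0'/'1'/'X'."""
    cycles = len(next(iter(samples.values())))
    assert all(len(v) == cycles for v in samples.values()), "unequal lengths"
    return ["".join(samples[p][c] for p in pins) for c in range(cycles)]

pins = ["clk", "reset_b", "tdo"]
samples = {
    "clk":     ["0", "1", "0", "1"],
    "reset_b": ["0", "0", "1", "1"],
    "tdo":     ["X", "X", "0", "1"],
}
vectors = to_vectors(pins, samples)   # first cycle: "00X"
```

The point of the sketch is the contract it encodes: every pin must be defined for every cycle, which is exactly the "correct by construction" discipline the pattern infrastructure has to enforce before anything reaches the tester.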

The role of functional testing
Once the first silicon is available, the full pattern suite is now exercised on sample chips to test the critical functionality using automated test equipment (ATE).

The objective is to find all possible flaws in the chip and to debug extensively. If a chip fails a test, you should not assume there is a design bug in the chip. Multiple issues can contribute to the failure: a faulty configuration of the chip, wrong voltages applied during testing, test software that does not behave as expected, or wording errors in the document being referred to.

Once the flaws are fixed, or signed off as non-critical and amenable to workarounds, the design can be sent for mass production.

Production Testing
Every chip fabricated during the mass production phase is subjected to a customized test program that is loaded into the testers with tests proceeding in a go/no-go format, binning out the ICs that don’t pass the program.

The critical focus is the time taken for testing: every extra time increment adds to the cost of production. The ATE must therefore be capable of testing chips in the least possible time, maximizing throughput while keeping the cost as low as possible.
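The go/no-go flow can be sketched as a short bin-out loop in which tests run in order and the first failure stops further (more expensive) testing of that part, which is how tester time is kept down. The test names, limits, and measurements below are hypothetical:

```python
# Go/no-go binning: run tests in order and stop at the first failure,
# so tester time is not wasted on an already-failed part. Test names,
# limits, and measured values are invented for illustration.
TESTS = [   # (test name, lower limit, upper limit)
    ("continuity_uA",  -50.0, 50.0),
    ("idd_standby_mA",   0.0,  2.5),
    ("vdd_min_V",        2.7,  3.6),
]

def bin_part(measurements):
    """measurements: dict test name -> value. Returns (bin, failing test)."""
    for name, lo, hi in TESTS:
        if not (lo <= measurements[name] <= hi):
            return ("fail", name)     # binned out at the first failing test
    return ("pass", None)

good = bin_part({"continuity_uA": 3.1, "idd_standby_mA": 1.8, "vdd_min_V": 3.0})
bad  = bin_part({"continuity_uA": 3.1, "idd_standby_mA": 4.0, "vdd_min_V": 3.0})
```

Ordering the cheapest, most-discriminating tests first is a common way to shave average test time, since failing parts exit the flow as early as possible.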

Part 2: A check-list of best practices

Neha Srivastava is a lead design engineer at Freescale Semiconductor (Noida, India Design Centre), where she has worked in the Automotive and Industrial Solution Group (AISG) for over 5 years. She holds a Bachelor of Engineering (B.E.) degree from Birla Institute of Technology and has worked on multiple SoCs in front-end verification and the Verification for Testing domain, with interests in low-power designs, safety architectures, and high-performance systems. She can be reached at Neha.Srivastava@freescale.com.

Aashish Mittal is a principal design engineer at Freescale Semiconductor (Noida, India Design Centre), where he has worked in the Automotive and Industrial Solution Group (AISG) for over 12 years. He holds a Master of Technology degree from Banaras Hindu University and has worked on multiple SoCs in front-end verification, testbench integration, and the Verification for Testing domain, with interests in dual-core, security, debug, and low-power architectures. He can be reached at aashish@freescale.com.

Nitin Goel is a senior design engineer at Freescale Semiconductor India Pvt. Ltd., where he has worked in the Automotive Microcontroller Group (AMCG) for over 6 years. He graduated from Netaji Subhash Institute of Technology, Delhi, in 2006 and has since worked in the front-end verification domain. Along with experience in IP-level, SoC-level, and core verification, he has been working on tester pattern development and debugging.
