Test time is a significant component of ASIC cost. It must be minimized, yet coverage has to be maximized to ensure a zero-defect scenario for automotive applications. These requirements are usually addressed by a memory built-in self-test (MBIST) mode, which exercises all the bit-cells of all memory banks in a design.
Depending on the implementation of the BIST module (Figure 1, below), we may have parallel and serial access capabilities for testing the memories. This test is performed at both wafer level and package level. An SoC is usually offered in multiple packages, each with a different number of power pads available.
Figure 1: MBIST controllers have capabilities of controlling multiple memory banks in parallel or in a serial fashion.
Nowadays, MBIST controllers are capable of controlling multiple memory banks either in parallel or in a serial fashion. These capabilities can be exploited to build a customized pattern suite for each package, and for wafer level, separately (Figure 2, below).
Figure 2: Shown are memory arrays and test wrapper configuration.
An SoC is configured for multiple package configurations, which offer different functionality to the customer. This means that the number of power pads available in each package is different.
A 48-pin package may have only two pairs of VDD and VSS, whereas the same die going into a 100-pin package can provide four or five pairs of power and ground pins. The dies (created for multiple package configurations) are therefore signed off against the worst-case IR drop, i.e., the drop seen with the lowest number of power and ground pads.
This means that the power grid is designed assuming that the lowest number of power sources is available, which in turn limits the amount of logic that can be switched simultaneously in these limited configurations.
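As a rough illustration of this budgeting, the sketch below estimates how many memory banks could switch simultaneously within a package's power-delivery budget. The function and all its numbers (current per bank, budget per VDD/VSS pair) are hypothetical assumptions for illustration, not data from the design discussed here.

```python
# Hypothetical sketch: how many memory banks can switch simultaneously
# within the power-delivery budget of a given package. All numbers are
# illustrative assumptions, not taken from the article.

def max_parallel_banks(num_power_pairs, current_per_bank_ma=50.0,
                       budget_per_pair_ma=200.0):
    """Assume each VDD/VSS pair can supply budget_per_pair_ma of switching
    current, and each bank under MBIST draws current_per_bank_ma."""
    total_budget_ma = num_power_pairs * budget_per_pair_ma
    return int(total_budget_ma // current_per_bank_ma)

# A 48-pin package with 2 power pairs vs. a 100-pin package with 5 pairs:
print(max_parallel_banks(2))  # 8 banks
print(max_parallel_banks(5))  # 20 banks
```

With these assumed figures, the larger package can afford 2.5x the MBIST parallelism of the smaller one, which is exactly the headroom a single worst-case sign-off leaves on the table.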
Today’s SoCs are full of memories, which may be used as video RAM, as L1/L2/L3 cache, or as system RAM. The required capacity of volatile memory keeps increasing; at the same time, frequency requirements are rising.
Array-selection complexity therefore grows, since larger arrays need longer access times. This means that if we need high-capacity memories working at a high frequency, we must implement the complete memory as multiple smaller memory blocks.
This works because the access time of a smaller memory block does not need multiple CPU clock cycles for read/write operations, and therefore supports higher frequencies without resorting to CPU wait states. In addition, these multiple memory cuts require built-in self-test (BIST) logic, which acts as DFT for the memory arrays.
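To make the partitioning argument concrete, here is a minimal sketch that picks the largest memory cut whose access time still fits in one clock cycle. The cut-size-to-access-time table is made-up, stand-in data; in practice these numbers would come from the memory compiler.

```python
import math

# Assumed memory-compiler data (illustrative only): larger cuts have
# longer access times.  Keys are cut sizes in kbit, values are ns.
CUT_ACCESS_NS = {1024: 12.0, 512: 8.0, 256: 5.0, 128: 3.5}

def partition(total_kbits, clock_period_ns):
    """Pick the largest cut whose access time fits in one clock cycle,
    and return (cut_size_kbits, number_of_cuts)."""
    for size in sorted(CUT_ACCESS_NS, reverse=True):
        if CUT_ACCESS_NS[size] <= clock_period_ns:
            return size, math.ceil(total_kbits / size)
    raise ValueError("no available cut fits the clock period")

# An 8-Mbit (1-Mbyte) RAM at 200 MHz (5 ns period):
print(partition(8192, 5.0))  # (256, 32): 32 cuts of 256 kbit, i.e. 8K x 32
```

With these assumed numbers the result happens to match the case study below: 32 cuts of 8K x 32 each.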
Normally, because IR-drop closure is done for the worst case (i.e., the case with the lowest number of power and ground sources), the number of memory array blocks that can be tested in parallel is severely limited.
This means that the test strategy for all the packages would be pessimistic, being based on a single analysis that assumed the power sources of the smallest package only. There is therefore a need to split the test strategy per package (Figure 3, below) and to increase the number of arrays tested in parallel through MBIST in the higher package configurations.
Figure 3: Array-selection complexity grows because larger array sizes need longer access times.
Implementing the strategy
We would like to discuss the case of a 90nm automotive design with 63mm2 of area, which had 1Mbyte of video RAM. Due to constraints on maximum access time, it had to be implemented as 32 separate blocks, each cut being of size 8Kx32.
The BIST engines can therefore be configured to test from 1 to 32 blocks in parallel, depending on the current drawn and on whether the IR-drop budget can be met with all the selected memory banks switching in parallel.
This SoC was configured to go into three different packages: 176-pin LQFP, 208-pin LQFP and a 324-pin BGA package. Hence, the same SoC was configured to have a different number of memory banks tested in parallel through MBIST, as shown in Table 1, below.
Table 1: The same SoC was configured to have a different number of memory banks being tested in parallel through MBIST.
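The benefit can be estimated with a back-of-the-envelope calculation. The parallelism values and the per-bank test time below are illustrative assumptions, not the actual configuration from Table 1; the point is simply that total MBIST time scales with the number of sequential passes.

```python
import math

# Back-of-the-envelope MBIST test-time comparison across packages.
# Parallelism figures and per-bank time are assumed for illustration.
NUM_BANKS = 32               # 32 memory cuts, as in the case study
PER_BANK_TEST_MS = 10.0      # assumed MBIST run time for one bank

# Assumed banks-tested-in-parallel per package (illustrative):
PARALLELISM = {"176-LQFP": 4, "208-LQFP": 8, "324-BGA": 16}

for pkg, par in PARALLELISM.items():
    passes = math.ceil(NUM_BANKS / par)           # sequential MBIST passes
    total_ms = passes * PER_BANK_TEST_MS          # total test time
    print(f"{pkg}: {passes} passes, {total_ms:.0f} ms")
```

Under these assumptions, the 324-pin BGA finishes memory test in a quarter of the time the 176-pin LQFP needs, which is the test-cost saving a package-based strategy recovers.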
Another noteworthy point is that the position of the power pads relative to the placement of the memory banks is also a deciding factor for the maximum number of banks that can be switched simultaneously.
Observing this scenario, we find that a package-based test strategy offers scope for reducing test time and, ultimately, the cost of shipping the product to the customer.
(Sunit Bansai is a senior design engineer at Freescale Semiconductor. Sumeet Aggarwal, who now works at Intel Corp., was previously a design engineer at Freescale, where he was responsible for backend design of digital and mixed-signal SoCs.)