In the mid-90s, Intel’s decision to integrate SRAM into its processors spelled doom for standalone SRAM suppliers across the world. Overnight the biggest market for SRAMs (PC cache) vanished, leaving only a few niche applications. The SRAM value proposition of a high-performance memory (low access time, low standby power consumption) was undercut by its higher price and density limitations (the highest density available today is 288 Mb). Since SRAMs have four to six transistors per cell, they cannot compete with DRAMs and Flash memories (both have one transistor per cell); fewer transistors per cell translates to higher density and lower cost. Thus, for traditional storage applications – which comprise 98% of the market – SRAM is an impractical solution.
Since Intel started embedding SRAMs, most SRAM suppliers have moved on, either by shutting shop or by diversifying their product portfolios beyond SRAMs. SRAM use shifted to specific applications requiring high performance, mostly in the industrial, automotive, and defense spaces. The overall market for SRAMs declined at a -13% CAGR (compound annual growth rate) from 2002 to 2013. However, it would be incorrect to assume that the technology is set to die. In fact, in the coming years we expect to see a revival of the good ol' standalone SRAM, driven by a variety of factors. In this article, we will discuss the technology advances that necessitate SRAMs, as well as the evolving trends in SRAM technology that make it ready to serve the needs of the future.
The return of SRAMs to mainstream embedded design
The irony of the return to SRAMs is that it’s being driven by a reversal of the very trend that sought to replace it. When Intel decided to embed SRAM, this was an intelligent course to take. Apart from being more cost-effective, it was also technologically a superior solution – embedded SRAMs have better access time than external SRAMs, and access time is the most important factor for cache memories.
Between then and now, processors have become more powerful and have shrunk in size. As processors become more powerful, they require commensurate improvements in cache memory. But at the same time, increasing the embedded cache memory becomes a growing challenge with every new process node. SRAM has a six-transistor architecture (logic typically requires four transistors per cell). This means that with smaller process nodes, the number of transistors per square centimeter becomes extremely high (Figure 1). Such high transistor density can cause many problems, including:
Increased susceptibility to soft errors: Soft error rates are expected to increase seven-fold as process technology scales from 130nm to 22nm (Figure 1).
Lower yields: Due to the shrinking of the bit-cells coupled with higher transistor density, the SRAM area is more susceptible to defects due to process variation. Such defects reduce the overall yield of the processor chip.
Increased power consumption: If the SRAM bit cells have to be the same size as the logic cells, then the SRAM transistors must be scaled smaller than the logic transistors. The smaller transistors cause an increase in leakage current, which in turn increases standby power consumption.
There are two ways to deal with this issue. One would be to have different process technology nodes for the SRAM area and the logic area in a processor or system on chip. However, this would lead to a situation where most of the area of a processor would be its SRAM. In such a case, the very reason for shrinking the processor chip would be defeated. The other option would be to separate the SRAM from the processor or controller. There are some technological innovations that are in fact speeding up this alternative.
SRAMs in wearable electronics
In today’s world, microcontrollers (MCUs) are found in a wide array of devices. A major electronics boom that we’re experiencing today is in wearable electronics (Figure 2). For wearables such as smart watches and health bands, size and power are critical factors. Due to limited board size, the MCU has to be very small and able to run on the frugal power provided by portable batteries.
To fulfill the above requirements, on-chip cache is limited. In future generations, we can expect more functionality to be associated with wearables. In such a case, the on-chip cache will fall short and the need will arise for an external cache. Of all the memory options available, SRAMs would be the most fitting option to act as an external cache. This arises from their lower standby current consumption compared to DRAM, and lower access time than both DRAM and Flash.
However, to fit into the tiny wearable boards, SRAMs will need to evolve. The problems with existing parallel SRAMs include:
- too many pins required for communicating with the MCU
- too large to fit on the PCB.
The Internet of Things and SRAMs
For the last few decades, the SRAM space has been divided between two distinct product families – fast and low-power, each with its own set of features, applications, and price. The devices where SRAMs are used need it for either its high-speed or its low power consumption, but not both. However, there is an increasing demand for high-performance devices with low power consumption to perform complex operations while running on portable power. This demand is driven by a new generation of medical devices, handheld devices, consumer electronics products, communication systems, and industrial controllers, all driven by the Internet of Things (IoT).
The growth of IoT is headed in two distinct directions – smart wearables and automation. Wearables, as we discussed earlier, will be serviced best by SRAMs that have a small footprint and low power consumption. At the same time, the impact of the Internet of Things will be felt in industrial, commercial, and large-scale operations, automating everything from individual houses to vast factories and entire cities. SRAMs that can retain high-speed performance while reducing power consumption in a small package will offer significant value in IoT applications.
Microcontrollers from many of the major players have already adapted to the changing demand for such crossover devices, through special low-power modes like Deep Power-Down and Deep-Sleep. During these modes, the peripherals and memory modules are also expected to save power. Thus, to be a preferred choice for IoT designs, SRAMs will have to evolve in such a way that a customer need not worry about a trade-off between performance and power.
How SRAMs are evolving
It is evident that exciting times are ahead for standalone SRAM manufacturers, provided they innovate to align their products with the new-age application requirements. The key areas of innovation for SRAMs include:
- Smaller sized chips: This calls for advancement in process technology as well as innovation in packaging
- Lower pin count: Currently, most SRAMs have a parallel interface. Serial SRAMs on the market are available only in low densities. The need is to manufacture higher-density serial SRAMs
- High performance chips that consume less power
- On-chip soft-error correction
In the following sections, we describe some of the key innovations in the design of SRAMs that are driving developers to consider their use in wearable, IoT, and other embedded systems applications.
Chip scale packaging
Chip scale packaging (CSP) is a powerful technique to reduce the size of chips. Per the J-STD-012 specification, to qualify as ‘chip scale’ the overall packaged part must have an area no more than 1.5 times that of the die and a linear dimension no more than 1.2 times that of the die. In contrast, for a standard packaged die, the overall chip area can be as much as ten times that of the die. Thus chip scale packaging can reduce the size of a chip manifold. A similar size reduction could be achieved by shrinking the process node; however, in the case of SRAMs, migrating to a smaller process node is fraught with risks, as already explained.
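The J-STD-012 criteria above can be expressed as a quick check. This is an illustrative sketch; the dimensions are made up, not taken from any real datasheet.

```python
# Sketch: checking the 'chip scale' criteria described above.
# A package qualifies if its area is at most 1.5x the die area and
# each linear dimension is at most 1.2x the corresponding die dimension.

def is_chip_scale(die_w_mm, die_h_mm, pkg_w_mm, pkg_h_mm):
    area_ok = pkg_w_mm * pkg_h_mm <= 1.5 * die_w_mm * die_h_mm
    linear_ok = pkg_w_mm <= 1.2 * die_w_mm and pkg_h_mm <= 1.2 * die_h_mm
    return area_ok and linear_ok

# A 3mm x 3mm die in a 3.3mm x 3.3mm package: 1.21x area, 1.1x linear
print(is_chip_scale(3.0, 3.0, 3.3, 3.3))   # True
# The same die in a 5mm x 5mm conventional package fails both criteria
print(is_chip_scale(3.0, 3.0, 5.0, 5.0))   # False
```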
This reduction in area is achieved by eliminating the first-level packaging – lead frame, die attach, wire bonds, and mold compound. CSP chips are mostly packaged at the wafer level, where the packaging material is deposited directly on the wafer. The pinout is similar to BGA (ball grid array) packaging, with solder bumps on the package acting as pins.
A CSP SRAM would be an excellent fit for space-constrained boards in wearable applications. It is much easier to design in than the next best alternative: buying an SRAM die and packaging it along with the MCU die using sophisticated MCP (multi-chip packaging) techniques. Currently, CSP SRAMs are not in mass production (some suppliers offer them as a made-to-order option), possibly because the target market (wearables) has yet to move beyond the embedded niche. However, most of the key players in the SRAM market offer a CSP option for many of their other products. Cypress Semiconductor, for example, already has CSP versions available for product families such as PSoC. Thus, it should not be difficult for manufacturers to extend the same capability to SRAMs.
Lower pin count
While SRAMs consume less power than Flash and DRAM, a key problem with using SRAMs for memory expansion is the parallel interface. While a parallel interface allows faster read-write times, too many I/Os are required for interfacing. For example, consider interfacing a 1Mb SRAM (64K x 16) with an MCU. The number of I/Os required would be 32 (16 address, 16 data). Multiplexing could bring that down to 24. And with every subsequent doubling of density (2Mb, 4Mb, 8Mb, etc.), the number of pins increases by one.
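The pin-count arithmetic above can be sketched directly: a parallel SRAM organized as words x data-width needs log2(words) address pins plus the data pins, so each doubling of density costs one more address pin.

```python
# Sketch of the parallel-SRAM I/O count described above.
from math import log2

def parallel_io_count(density_bits, data_width):
    words = density_bits // data_width
    address_pins = int(log2(words))  # one more pin per doubling of words
    return address_pins + data_width

# 1Mb SRAM organized as 64K x 16: 16 address + 16 data = 32 I/Os
print(parallel_io_count(1 * 1024 * 1024, 16))  # 32
# Each doubling of density adds one address pin
print(parallel_io_count(2 * 1024 * 1024, 16))  # 33
print(parallel_io_count(8 * 1024 * 1024, 16))  # 35
```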
The number of I/Os available to interface with SRAMs on a tiny wearable board is limited because small MCUs come in low pin count packages. To connect with these MCUs, SRAMs will have to move beyond the traditional parallel interface. The success of serial Flash, serial EEPROM, and similar devices reinforces the market’s need for a serial memory option. Since MCUs have been using embedded cache for years, the need for serial SRAM was not felt until recently. Serial SRAMs make interfacing simpler and less pin consuming (two data lines for single SPI, two for dual SPI, and four for quad SPI, plus clock and chip select). In addition, the number of I/Os required does not increase with density.
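To see why pin count stays flat with density, consider the command frame a serial SRAM read uses. Many serial SRAMs borrow the serial-Flash convention of a READ opcode (often 0x03) followed by a 24-bit address clocked over the same few lines; the opcode and address width here are assumptions, so check the specific part's datasheet. A minimal sketch:

```python
# Sketch of building a serial SRAM read command frame.
# Opcode and 24-bit address width are assumed conventions, not a
# specific device's datasheet values.

READ_OPCODE = 0x03  # common but not universal; verify per part

def build_read_frame(address):
    """Byte sequence an SPI master shifts out to start a read at a
    24-bit address; data bytes are clocked in afterwards."""
    if not 0 <= address < (1 << 24):
        raise ValueError("address must fit in 24 bits")
    return bytes([READ_OPCODE,
                  (address >> 16) & 0xFF,
                  (address >> 8) & 0xFF,
                  address & 0xFF])

# The whole transaction uses the same few SPI lines at any density:
print(build_read_frame(0x01A2B3).hex())  # '0301a2b3'
```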
As of today we have serial SRAMs only at low density and comparatively lower access speed (up to 25ns access time and 1Mb density). In the near future, we can expect improvements in both parameters. As wearable products enter subsequent generations, MCUs will be required to perform more complex operations. In such cases, it would be useful to have a higher density cache/scratchpad memory with higher throughput. Thus the evolution of serial SRAMs toward higher speed and density will be valuable to the market. A size reduction using CSP packaging, coupled with a serial interface, will make SRAMs a powerful option for both cache and scratchpad memory in wearables.
High performance with low power
Today there are two distinct families of asynchronous SRAMs: fast SRAMs (high access speed) and low-power SRAMs (low power consumption). From a technological standpoint, this trade-off is justifiable. In low-power SRAMs, special GIDL (gate-induced drain leakage) control techniques are employed to control standby current and thus standby power consumption. These techniques involve adding extra transistors in the pull-up or pull-down path, which increases access time. In fast SRAMs, where access time is the priority, such techniques cannot be used. Moreover, to reduce propagation delay, die size is increased. This increase in die size increases leakage and, in the process, the overall standby power consumption.
So far this trade-off has been acceptable for typical SRAM applications: battery-backed applications used low-power SRAMs (compromising performance), while wired industrial high-performance applications used fast SRAMs. However, for IoT and many other advanced applications, such a trade-off will not serve well. For most of these applications, high performance is important while standby power consumption must also be limited, since they will be operating on battery power. Fortunately, SRAMs are evolving to bridge the gap between these two families toward a single chip with the benefits of both.
Microcontrollers long ago introduced the deep-sleep mode of operation. This mode saves power in applications that sit in a standby state most of the time: the controller runs at full speed during normal operation but drops into a low-power mode afterwards. It is important that a similar capability be available for interfaced SRAMs too. Asynchronous fast SRAMs with a deep-sleep mode of operation are an ideal choice for such applications (see http://www.cypress.com/?docID=48906). These SRAM chips have an additional input pin that lets the user toggle between modes of operation (normal, standby, and deep-sleep). Thus, effective power consumption can be managed without compromising performance.
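A back-of-the-envelope calculation shows why a deep-sleep pin matters for duty-cycled IoT devices: average current is the duty-cycle-weighted mix of active and idle currents. The current figures below are illustrative assumptions, not datasheet values.

```python
# Sketch: duty-cycle-weighted average current for an SRAM that is
# active only a small fraction of the time. All figures are assumed.

def average_current_ua(active_ua, idle_ua, active_fraction):
    return active_fraction * active_ua + (1 - active_fraction) * idle_ua

active_ua = 20_000   # assumed active current (20 mA, in uA)
standby_ua = 1_000   # assumed ordinary standby current
deepsleep_ua = 10    # assumed deep-sleep current

# For a device active 1% of the time, idle current dominates:
print(average_current_ua(active_ua, standby_ua, 0.01))    # 1190.0
print(average_current_ua(active_ua, deepsleep_ua, 0.01))  # 209.9
```

With the same performance during the active window, the deep-sleep mode cuts the average draw several-fold, which is exactly the crossover behavior the text describes.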
On chip error correction capabilities
As memory process technology scales for improved performance and power, reduced voltage and shrinking node capacitance make these devices more susceptible to soft errors. Today, CMOS technology has shrunk to such a size that extraterrestrial radiation as well as chip packaging cause failures at an increasing rate. Traditionally, soft errors have been dealt with through ECC (error correcting code) software or through redundancy (i.e., multiple SRAMs storing the same data), especially in systems where reliability is paramount, such as medical, automotive, and military systems. However, this is expensive and requires extra board space.
Major SRAM manufacturers have started implementing error correction features directly on-chip. To limit the effects of soft errors on modern semiconductor memories at the chip level, two architectural enhancements are used: on-chip ECC and bit interleaving. With on-chip ECC, the logic that detects and corrects single-bit errors is built into the SRAM itself. Some manufacturers even offer the option of an extra error pin to indicate the detection and correction of single-bit errors.
Bit interleaving, on the other hand, is used to limit the effect of multi-bit errors (i.e., a single energetic particle flipping multiple bits). Bit interleaving works by assigning physically adjacent bit lines to different logical words. This converts a multi-bit error into multiple single-bit errors, which can then be corrected by the on-chip ECC. (Learn more about how soft errors are mitigated and corrected.)
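The interleaving idea can be demonstrated with a toy model: bits from four logical words are interleaved along one physical row, so a particle strike flipping several adjacent physical cells flips at most one bit in any single word, leaving each word correctable by single-bit ECC. The layout below is illustrative only, not any vendor's actual array organization.

```python
# Toy model of bit interleaving: physical cell i holds
# bit (i // WORDS) of logical word (i % WORDS).

WORDS = 4       # interleave factor
WORD_BITS = 8

def interleave(words):
    row = []
    for bit in range(WORD_BITS):
        for w in range(WORDS):
            row.append((words[w] >> bit) & 1)
    return row

def deinterleave(row):
    words = [0] * WORDS
    for i, cell in enumerate(row):
        w, bit = i % WORDS, i // WORDS
        words[w] |= cell << bit
    return words

data = [0b10110010, 0b01011100, 0b11110000, 0b00001111]
row = interleave(data)
for i in (5, 6, 7):      # a particle strike flips 3 adjacent cells
    row[i] ^= 1
corrupted = deinterleave(row)
errors_per_word = [bin(a ^ b).count("1") for a, b in zip(data, corrupted)]
print(errors_per_word)   # [0, 1, 1, 1] -- at most one flip per word
```

Because no word sees more than one flipped bit, a per-word single-error-correcting code recovers all the data, which is exactly how interleaving turns a multi-bit upset into several correctable single-bit upsets.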
SRAMs and the Future
Exciting times are ahead for SRAM technology. The technological trends and advancements favor a grand comeback for a technology whose adoption has been declining for years. ECC-enabled chips are already in production. Fast SRAMs with on-chip power management are also available. Serial SRAMs are in production, but mostly for very low density applications, and so are currently not comparable in speed to their parallel counterparts. However, the existing players in the serial market (Microchip and ON Semiconductor) are primarily MCU manufacturers. None of the traditional SRAM companies have launched serial SRAMs yet. With more players entering this market, we can expect innovation to happen rapidly.
Traditional marketing wisdom about product life cycles says that maturity is followed by decline and then the death of a product. The negative CAGR of SRAMs, along with the fact that most suppliers have quit the business, would have classified this product as “declining.” The revival of SRAMs that we are witnessing today, and foresee for the future, perhaps calls for a revision of the traditional concept of the product life cycle.
1. Wikipedia: Semiconductor Device Fabrication
2. Scaling Effects on Neutron-Induced Soft Error in SRAMs Down to 22nm Process, by Eishi Ibe, Hitoshi Taniguchi, Yasuo Yahagi, Ken-ichi Shimbo, and Tadanobu Toba
3. Leakage Current: Moore’s Law Meets Static Power, IEEE Computer, January 2009
4. Application Note AN69601: Guidelines for Cypress Wafer Level Chip Scale Packages
5. Application Note AN89371: Power saving with Cypress Asynchronous PowerSnooze SRAM
6. Application Note AN88889: Mitigating single event upsets using Cypress Asynchronous SRAM
Reuben George works in Product Marketing for the Memory Products Division of Cypress Semiconductor. He holds a BE in Electrical & Electronics Engineering from the Birla Institute of Technology and Science (BITS), Pilani, in Rajasthan, India.
Anirban Sengupta works as a pricing manager at Cypress Semiconductor. He holds a BE in Electrical Engineering from the National Institute of Technology, India, and an MBA in Marketing from Symbiosis Centre for Management and Human Resource Development (SCMHRD), Pune, India.