Since its inception, the embedded systems industry has been waiting for an off-stage semiconductor character named "universal memory" to come along and replace the memory hierarchy inherited from mainframes, minicomputers, and desktop computers: nonvolatile hard disk drives for long-term mass storage and backup, dynamic RAM for fast local memory access, and SRAM and ROM for extremely fast access and code storage.
This desire for the perfect memory that is erasable, randomly accessible, able to execute code and access data directly, and capable of high storage densities remains largely a dream. So the traditional hierarchy is still in place despite the many alternatives that have emerged: NOR- and NAND-based flash, ferroelectric RAMs, MRAMs, and resistive RAMs, among others.
Of all of them, NAND-based flash memory has come closest to that dreamed-of universal memory ideal. In many segments of embedded design, such as routers and switches, NAND-based solid state storage is replacing traditional hard drives. And while these drives require defect management as they move to higher densities, they are becoming the mass storage option of choice in much of portable computing, from laptops and tablets to smartphones. But as some recent articles in this week's Tech Focus Newsletter illustrate, even there NAND flash is facing significant technical and economic challenges:
The Bleak Future of NAND Flash Memory
An analysis of 45 flash-based devices for use in solid state drives for mobile computing and data centers indicates that while SSDs are making substantial improvements relative to disks, cost is limiting their adoption in higher end applications.
Revisiting Storage for Smartphones
Storage performance on mobile devices, important for end-user experience, can be expected to grow in importance due to higher network throughput to mobile devices from such wireless technologies as 802.11n (600 Mbps peak) and 802.11ad (7 Gbps peak).
Ensuring signal integrity in high speed flash memory systems
Next-generation flash memory features data transfer rates as much as 10 times faster than currently available parts. Proper design strategies can help deliver reliable, high-performance systems despite increasing distortions in the data-carrying digital signals that can cause data transfer failures.
Looking better and better in many new designs is NOR flash, which still plays a vital role in embedded systems, where it is used for both code and data storage. Its main advantage is that code is executed directly (execute-in-place) from the NOR flash array. NOR flash can also interface directly with the host processor, which enables easy design-in and fast time-to-market.
Now driven by the needs of a newly emerging Internet of Things, NOR flash is finding new markets in a variety of wireless sensor and machine-to-machine applications, working in tandem with small amounts of SRAM and NAND to accomplish the various tasks these highly constrained applications require. Even in many consumer IoT and wearable applications, NOR is finding a role, as is pointed out in “IoT and wearable devices mean rethinking memory design,” by Howard Sian of Micron Technology.
“The space, power and application requirements of wearables and other mobile connected devices that collectively make up the growing IoT ecosystem require a fresh approach to system design, emphasizing integration, faster boot times, and lower standby power,” he writes.
He believes NOR flash in next-generation, high-density, 0.4mm-pitch packages and MCP modules is a great fit for most wearable applications, offering more than adequate local storage capacity, execute-in-place convenience for application code, and the low standby power needed to extend device battery life.
“With IoT systems automatically offloading collected data to cloud-based repositories,” Sian writes, “the need for local storage on wearables is minimal, meaning the benefits of NOR flash far outweigh any compromises in memory capacity.”
Eventually, though, traditional NAND and NOR memories, in both established embedded applications and newer mobile and wireless designs, will run into the same memory latency wall that DRAMs and SRAMs are already facing in backbone network switches and routers. At that level, there is roughly a five-fold difference between the data rate within the CPU and that between the processor and external memory.
In network router systems, impressive improvements in reducing such bottlenecks have been possible, as illustrated in “Achieving 200-400GE network buffer speeds with a serial-memory coprocessor architecture.” Right now embedded network designs are running into severe memory performance problems at the high end. But as the demand for faster network performance increases, similar disparities are likely to occur in what are now low-end applications.
For now, most embedded developers exploring these new untethered network applications are not coming close to this wall, and traditional tools and methodologies still apply, although they will require imaginative adaptation. For example, as illustrated in "Achieving better software performance through memory-oriented code optimization," there is still a lot that can be squeezed out of your code and your compilers if you are clever about it.
What new ways of using traditional memory alternatives have you come up with? Is that enough or are you looking at some of the newer nonvolatile approaches? Which ones do you find most promising?
Embedded.com Site Editor Bernard Cole is also editor of the twice-a-week Embedded.com newsletters as well as a partner in the TechRite Associates editorial services consultancy. He welcomes your feedback. Send an email to , or call 928-525-9087.