Flash-based Solid State Drives (SSDs) have enabled a revolution in mobile computing and are making deep inroads into data centers and high-performance computing. SSDs offer substantial performance improvements relative to disk, but cost is limiting adoption in cost-sensitive applications and reliability is limiting adoption in higher-end machines.
The hope of SSD manufacturers is that improvements in flash density through silicon feature size scaling (shrinking the size of a transistor) and storing more bits per storage cell will drive down costs and increase their adoption. Unfortunately, trends in flash technology suggest that this is unlikely.
While flash density in terms of bits/mm² and feature size scaling continues to increase rapidly, all other figures of merit for flash – performance, program/erase endurance, energy efficiency, and data retention time – decline steeply as density rises.
For example, our data show each additional bit per cell increases write latency by 4× and reduces program/erase lifetime by 10× to 20×, while providing diminishing returns in density (2×, 1.5×, and 1.3× between 1-, 2-, 3-, and 4-bit cells, respectively).
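The compounding effect of these per-bit penalties can be sketched in a few lines of Python. This is an illustrative calculation using the multipliers quoted above (taking the optimistic 10× end of the endurance range), not a model of any specific chip:

```python
# Figures from the trends above, relative to an SLC (1-bit) baseline.
DENSITY_GAIN = {2: 2.0, 3: 1.5, 4: 1.3}  # capacity multiplier per added bit/cell
LATENCY_GROWTH = 4                        # write-latency multiplier per added bit/cell
ENDURANCE_LOSS = 10                       # P/E lifetime divisor (10x-20x; optimistic end)

def tradeoff(bits_per_cell: int) -> tuple[float, float, float]:
    """Relative density, write latency, and endurance vs. a 1-bit cell."""
    density = latency = endurance = 1.0
    for b in range(2, bits_per_cell + 1):
        density *= DENSITY_GAIN[b]
        latency *= LATENCY_GROWTH
        endurance /= ENDURANCE_LOSS
    return density, latency, endurance

for bits in (1, 2, 3, 4):
    d, l, e = tradeoff(bits)
    print(f"{bits}-bit cell: {d:.2f}x density, {l:.0f}x latency, {e:.0e}x endurance")
```

The calculation makes the asymmetry plain: moving from 1 to 3 bits per cell triples density but costs 16× in write latency and 100× in endurance.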
As a result, we are reaching the limit of what current flash management techniques can deliver in terms of usable capacity – we may be able to build more spacious SSDs, but they may be too slow and unreliable to be competitive against disks of similar cost in enterprise applications.
This paper uses empirical data from 45 flash chips manufactured by six different companies to identify trends in flash technology scaling. We then use those trends to make projections about the performance and cost of future SSDs.
We construct an idealized SSD model that makes optimistic assumptions about the efficiency of the flash translation layer (FTL). The model shows that as flash continues to scale, it will be extremely difficult to design SSDs that reduce cost per bit without becoming so slow or so unreliable (or both) as to be unusable in enterprise settings. We conclude that the cost per bit for enterprise-class SSDs targeting general-purpose applications will stagnate.
SSDs offer moderate gains in bandwidth relative to disks, but very large improvements in random IOP performance. However, increases in operation latency will drive down IOPs and bandwidth. Operation latency also causes write bandwidth to decrease with capacity.
SSDs provide the largest gains relative to disks for small, random IOPs. We looked at two access sizes – the historically standard disk block size of 512 B and the most common flash page size and modern disk access size of 4 kB. When using the smaller, unaligned 512 B accesses, SLC and MLC chips must access 4 kB of data, and the SSD must discard 88% of the accessed data. For TLC, there is even more wasted bandwidth because the page size is 8 kB.
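The wasted-bandwidth figures follow directly from the ratio of access size to page size; a minimal sketch in Python (the 88% quoted above is 1 − 512/4096 = 87.5%, rounded):

```python
def wasted_fraction(access_bytes: int, page_bytes: int) -> float:
    """Fraction of a fetched flash page that is discarded when the
    requested access is smaller than the page that must be read."""
    return 1 - access_bytes / page_bytes

# 512 B access against a 4 kB SLC/MLC page: 7/8 of the fetched data is discarded.
print(f"{wasted_fraction(512, 4096):.1%}")  # 87.5%
# TLC pages are 8 kB, so the same 512 B access wastes even more bandwidth.
print(f"{wasted_fraction(512, 8192):.1%}")  # 93.8%
```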
When using 4 kB accesses, MLC IOPs drop as density increases, falling by 18% between the 64 GB and 1024 GB configurations. Despite this drop, the data suggest that SSDs will maintain an enormous (but slowly shrinking) advantage relative to disk in terms of IOPs. Even the fastest hard drives can sustain no more than 200 IOPs, and the slowest SSD configuration we consider achieves over 32,000 IOPs.
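To make the comparison concrete, a small sketch using the figures above (the 40,000 IOPs starting point is a hypothetical 64 GB configuration chosen for illustration, not a measurement from the paper):

```python
def iops_after_drop(baseline_iops: float, drop_fraction: float) -> float:
    """IOPs remaining after a fractional decline, e.g. the 18% MLC drop
    between the 64 GB and 1024 GB configurations."""
    return baseline_iops * (1 - drop_fraction)

def disk_advantage(ssd_iops: float, disk_iops: float = 200) -> float:
    """Ratio of SSD IOPs to the ~200 IOPs the fastest hard drives sustain."""
    return ssd_iops / disk_iops

# Even after an 18% decline, a hypothetical 40,000-IOPs configuration
# retains a roughly 164x advantage over the fastest disks.
print(disk_advantage(iops_after_drop(40_000, 0.18)))
```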
The technology trends we have described put SSDs in an unusual position for a cutting-edge technology: SSDs will continue to improve by some metrics (notably density and cost per bit), but everything else about them is poised to get worse.
This makes the future of SSDs cloudy: While the growing capacity of SSDs and high IOP rates will make them attractive in many applications, the reduction in performance that is necessary to increase capacity while keeping costs in check may make it difficult for SSDs to scale as a viable technology for some applications.
To read more, download the complete paper from the open, online archives at Usenix.org: https://www.usenix.org/legacy/events/fast12/tech/full_papers/Grupp2-6-12.pdf