Mobile phones, tablets, and ultra-portable laptops are no longer viewed as the wimpy siblings of the personal computer; for many users they have become the dominant computing device for a wide variety of applications. According to a recent Gartner report, within the next three years, mobile devices will surpass the PC as the most common web access device worldwide. By the end of 2013, over 40% of the enhanced phone installed-base will be equipped with advanced browsers.
Research pertaining to mobile devices can be broadly split into applications and services, device architecture, and operating systems. From a systems perspective, research has tackled many important aspects: understanding and improving energy management, network middleware, application execution models, security and privacy, and usability.
However, one important component is conspicuously missing from the mobile research landscape: storage performance.
Storage has traditionally not been viewed as a critical component of phones, tablets, and PDAs, at least in terms of the expected performance. Despite the impetus to provide faster mobile access to content locally and through cloud services, the performance of the underlying storage subsystem on mobile devices is not well understood.
Our work started with a simple motivating question: does storage affect the performance of popular mobile applications? Conventional wisdom suggests the answer is no, as long as storage performance exceeds that of the network subsystem. We find evidence to the contrary: even interactive applications like web browsing slow down with slower storage.
Storage performance on mobile devices is important for end-user experience today, and its impact is expected to grow for several reasons. First, emerging wireless technologies such as 802.11n (600 Mbps peak throughput) and 802.11ad (or "60 GHz", 7 Gbps peak throughput) offer the potential for significantly higher network throughput to mobile devices.
Second, local-area networks are not necessarily the de-facto bottleneck on modern mobile devices. While network throughput is increasing phenomenally, latency is not. As a result, access to several cloud services benefits from a split of functionality between the cloud and the device, placing a greater burden on local resources including storage.
Third, mobile devices are increasingly being used as the primary computing device, running more performance-intensive tasks than previously imagined. Smartphone usage is on the rise; smartphones and tablet computers are becoming a popular replacement for laptops. In developing economies, a mobile or enhanced phone is often the only computing device available to a user for a variety of needs.
In this paper, we present a detailed analysis of the I/O behavior of mobile applications on Android-based smartphones and flash storage drives. We particularly focus on popular applications used by the majority of mobile users, such as web browsing, app install, Google Maps, Facebook, and email. Not only are these activities available on almost all smartphones, but they are performed frequently enough that performance problems with them negatively impact user experience. Through our analysis and design we make several observations:
1) Storage affects application performance: often in unanticipated ways, storage affects performance of applications that are traditionally thought of as CPU or network bound. For example, we found web browsing to be severely affected by the choice of the underlying storage; just by varying the underlying flash storage, performance of web browsing over WiFi varied by 187% and over a faster network (setup over USB) by 220%. In the case of a particularly poor flash device, the variation exceeded 2000% for WiFi and 2450% for USB.
2) Speed class considered irrelevant: our benchmarking reveals that the "speed class" marking on SD cards is not necessarily indicative of application performance; although the class rating is meant for sequential performance, we find several cases in which higher-grade SD cards performed worse than lower-grade ones overall.
3) Slower storage consumes more CPU: we observe higher total CPU consumption for the same application when using slower cards; the cause can be attributed to deficiencies in the network subsystem, the storage subsystem, or both. Unless resolved, lower-performing storage not only makes the application run slower, it also increases the energy consumption of the device.
4) Application knowledge enables efficient solutions: leveraging a small amount of domain or application knowledge provides efficiency, as in the case of our pilot solutions; hardware and software solutions can both benefit from a better understanding of how applications use the underlying storage.
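The mismatch noted in observation 2 stems from the fact that speed-class ratings guarantee only sequential-write throughput, while many application workloads are dominated by small random writes. A simple microbenchmark can expose the gap; the sketch below is our own illustration of the idea (not the instrumentation used in the experiments described here), and on flash media the two numbers it prints can differ by an order of magnitude.

```python
import os
import random
import tempfile
import time


def write_throughput_mb_s(path, block_size, total_bytes, sequential=True):
    """Write total_bytes to path in block_size chunks, either in order or
    at shuffled offsets, and return the achieved throughput in MB/s."""
    num_blocks = total_bytes // block_size
    offsets = list(range(num_blocks))
    if not sequential:
        random.shuffle(offsets)          # random-write pattern
    buf = os.urandom(block_size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        os.ftruncate(fd, total_bytes)    # pre-allocate so seeks are valid
        start = time.monotonic()
        for i in offsets:
            os.lseek(fd, i * block_size, os.SEEK_SET)
            os.write(fd, buf)
        os.fsync(fd)                     # include flush-to-device time
        elapsed = time.monotonic() - start
    finally:
        os.close(fd)
    return (total_bytes / (1024 * 1024)) / elapsed


if __name__ == "__main__":
    # Point this at a file on the storage device under test (e.g. an SD
    # card mount point) to compare sequential vs. random write speed.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        target = f.name
    try:
        seq = write_throughput_mb_s(target, 4096, 4 * 1024 * 1024, True)
        rnd = write_throughput_mb_s(target, 4096, 4 * 1024 * 1024, False)
        print(f"sequential: {seq:.1f} MB/s, random: {rnd:.1f} MB/s")
    finally:
        os.remove(target)
```

A card whose class rating is earned on the sequential number alone can still perform poorly on the random number, which is why a higher-grade card may lose to a lower-grade one on real application workloads.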
Based on our experimental findings and observations we believe improvements in the mobile storage stack can be made along multiple dimensions to keep up with the increasing demands placed on mobile devices. Storage device improvements alone can account for significant improvements to application performance.
Mobile I/O and memory bus technology needs to evolve as well to sustain higher throughput to the devices. Limitations in the systems software stack can, however, prevent applications from realizing the full potential of hardware improvements; we believe changes are also warranted in the mobile software stack to complement the hardware.