High definition media consumption is undergoing two-fold growth – an increase in the number of consumers and a transition toward even higher-definition content. This is driven by more widespread, faster Internet access combined with an explosion of mobile devices (cell phones, tablets, wearable devices, etc.). As a consequence, many wearable devices now come equipped to handle HD media consumption.
Demand for Internet of Things (IoT) and wearable devices is estimated to grow three-fold by 2020, even by the most conservative estimates, putting the total at 50 billion devices. This will create demand for a new family of display drivers and frame buffers – a memory option unlike those used in legacy displays. While embedded RAM could suffice in early-generation wearable devices, today's high-definition, larger wearable displays require significantly larger frame buffer memories. These requirements differ from those of the traditional large displays of PCs and televisions because wearables are battery operated and built with power efficiency as a primary design constraint. The majority of the latest wearable devices will need to be so space- and power-efficient that they can run for days, possibly even weeks, on a single charge, all the while performing complex operations. This is why a new family of display drivers is required.
To understand frame buffer requirements for wearable devices, let us first explore the architecture of graphics systems. Every graphics system consists of three components – the hardware, a graphics library, and an application that utilizes them.
While the library and the application are software-controlled, the hardware is controlled by a frame buffer, a contiguous, high-throughput memory. Each element of the frame buffer corresponds to a single pixel on the screen; the intensity of that pixel is decided by its voltage.
A display’s resolution is determined by:
Number of scan lines
Number of pixels per line
Number of bits per pixel
For example, consider a 1024×768 24-bit image, the most widely used screen resolution for PCs.
1024 × 768 × 24 = 18.9 Mb
This is the minimum size of frame buffer required to support such a display. However, simply having a memory of this size won’t suffice if it is a dynamic display with video capabilities. This brings up the throughput requirement for frame buffers.
For a 30 frame per second (fps) video at this resolution, the required throughput would be: 18.9 Mb × 30 = 566 Mbps
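The two formulas above can be sketched as a short calculation; the helper names here are illustrative, not part of any real API.

```python
# Minimum frame buffer size and refresh throughput for a display,
# following the resolution formula in the text.

def framebuffer_bits(scan_lines, pixels_per_line, bits_per_pixel):
    """Minimum frame buffer size in bits: one n-bit cell per pixel."""
    return scan_lines * pixels_per_line * bits_per_pixel

def throughput_bps(buffer_bits, fps):
    """Bits that must move per second to refresh the full buffer fps times."""
    return buffer_bits * fps

size = framebuffer_bits(768, 1024, 24)
print(f"Frame buffer: {size / 1e6:.1f} Mb")                        # ~18.9 Mb
print(f"Throughput at 30 fps: {throughput_bps(size, 30) / 1e6:.0f} Mbps")  # ~566 Mbps
```

Running this reproduces the article's figures of 18.9 Mb and 566 Mbps for a 1024×768, 24-bit, 30 fps display.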
As described earlier, every memory cell in a frame buffer corresponds to a single pixel. In the case of an n-bit color display, each of these n bits belongs to a separate bit plane (e.g. 24-bit color has 24 bit planes), so n cells store the state of each pixel. The binary values from the n bit planes are loaded into corresponding positions in a register, and the resulting binary number is interpreted as an intensity level between 0 and 2^n − 1. This is then converted into an analog voltage between 0 and the maximum voltage by a digital-to-analog converter, hence enabling 2^n intensity levels.
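A minimal sketch of this bit-plane read-out, with a toy 3-bit display; the function names and the tiny 2×2 frame are invented for illustration only.

```python
# The n bits for one pixel are gathered from the n bit planes into a
# register, interpreted as an intensity in [0, 2**n - 1], then scaled
# to an analog voltage by the DAC.

def pixel_intensity(bit_planes, x, y):
    """Assemble one pixel's bits (plane 0 = least significant) into a level."""
    level = 0
    for i, plane in enumerate(bit_planes):
        level |= plane[y][x] << i
    return level

def dac_voltage(level, n_bits, v_max):
    """Map an intensity level to a voltage between 0 and v_max."""
    return v_max * level / (2 ** n_bits - 1)

# A 3-bit display: three 2x2 bit planes, one bit per pixel per plane.
planes = [
    [[1, 0], [1, 1]],  # plane 0 (LSB)
    [[0, 1], [1, 0]],  # plane 1
    [[1, 0], [0, 1]],  # plane 2 (MSB)
]
level = pixel_intensity(planes, 0, 0)        # bits 1,0,1 -> level 5
print(level, dac_voltage(level, 3, 1.0))     # level 5 of 7, ~0.714 V
```

With n = 3 there are 2^3 = 8 intensity levels, exactly as the 2^n formula above predicts.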
Two factors decide the type of frame buffer used for a display – size and throughput. Increasing the resolution of an image requires more memory, while increasing the fps of the video requires higher throughput. There are two ways to meet a given requirement: minimize frame buffer size and maximize throughput, or maximize frame buffer size and minimize throughput (e.g. one doubled while the other is halved). By increasing the frame buffer size – essentially having multiple frame buffers within a single chip – we can reduce throughput because the chip goes through the input-output cycle fewer times. For example, by doubling the size, two frames can be stored simultaneously in a single buffer, so the buffer is referenced half as many times in a given timeframe, thus allowing for lower throughput. Memory options are hence split into two types – high density and high throughput. This aspect is discussed later in this article.
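This trade-off can be sketched using the article's own accounting, in which a buffer holding k frames is referenced 1/k as often, so the required throughput shrinks by the same factor that the size grows (the helper function is hypothetical):

```python
# Size vs. throughput trade-off: store more frames per buffer and the
# buffer is cycled proportionally less often.

def buffer_option(frame_bits, fps, frames_per_buffer):
    """Return (buffer size in bits, required throughput in bits/s)."""
    size = frame_bits * frames_per_buffer
    throughput = frame_bits * fps / frames_per_buffer
    return size, throughput

frame = 1024 * 768 * 24                  # the 18.9 Mb frame from the example above
for k in (1, 2):
    size, tput = buffer_option(frame, 30, k)
    print(f"{k} frame(s) per buffer: {size / 1e6:.1f} Mb, {tput / 1e6:.0f} Mbps")
```

Doubling the buffer from one frame to two halves the required throughput (566 Mbps down to 283 Mbps), matching the "one doubled while the other is halved" rule of thumb.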
A closer inspection of the specs of the latest generation of computer Graphics Processing Units (GPUs) from Nvidia and AMD shows significantly larger memory, often gigabytes. This is because most modern GPUs are built for gaming and HD rendering, with a host of extra features that consume memory – MSAA (which multiplies the size of the buffer by the sample count), pre-fetching, shadow buffers, deferred rendering, and special effects. Even features we take for granted, like windowed scrolling, require additional buffer space. Most gaming buffers use triple buffering (which translates to three buffers for every frame) and HDR (usually 64 bits deep instead of 24). Many of these high-end GPUs also support multiple high-definition displays, which means a dedicated internal buffer for each of those displays.
However, this host of features isn't yet required for most wearable and portable devices, thanks to their smaller displays. The ideal approach is to use the MCU's embedded memory resources as a frame buffer: this has the highest throughput and is the simplest to implement. But most MCUs ship with insufficient memory for the latest generation of wearable displays. Moreover, increasing program complexity demands more of the embedded memory for use as the MCU's L1 cache. For the current generation of wearable devices, display resolution is on the order of QVGA (Quarter Video Graphics Array), and for such displays the following specification will suffice – 24-bit color, 480 × 360, 30 fps. This translates to about 300 pixels per inch (ppi) for wearable-sized displays. The memory requirement for such a display is 4 Mb with a throughput of ~120 Mbps. However, future devices will have significantly higher-resolution displays, crossing 400 ppi like many of the latest generation of cell phones. Increasing ppi at the same display size means the frame buffer size increases accordingly. As explained earlier, there are two ways of implementing a frame buffer of this size: a 4 Mb buffer with ~120 Mbps throughput, or a 16 Mb buffer with ~30 Mbps throughput. Of the two alternatives, the small-buffer option has manifold benefits – smaller footprint (in the case of die or CSP), lower power consumption, lower cost, and more options (as you go up the density ladder, the number of manufacturers and variants decreases). With wearables, footprint, power consumption, and cost are often the most important deciding criteria for any device component.
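The two wearable options above follow directly from the stated display (24-bit, 480 × 360, 30 fps); the article rounds the results to 4 Mb / ~120 Mbps and 16 Mb / ~30 Mbps.

```python
# The two frame buffer options for a 24-bit, 480x360, 30 fps wearable display.

frame_bits = 480 * 360 * 24        # one frame: ~4.1 Mb
refresh_bps = frame_bits * 30      # total refresh traffic: ~124 Mbps

# Option A: small buffer, high throughput (holds one frame).
small = (frame_bits, refresh_bps)
# Option B: large buffer, low throughput (holds four frames, cycled 1/4 as often).
large = (frame_bits * 4, refresh_bps / 4)

for name, (size, tput) in (("small", small), ("large", large)):
    print(f"{name}: {size / 1e6:.1f} Mb buffer, {tput / 1e6:.1f} Mbps")
```

The exact numbers come out to about 4.1 Mb at 124 Mbps versus 16.6 Mb at 31 Mbps, consistent with the rounded figures in the text.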
The most widely used memory for frame buffering is dynamic RAM (DRAM), despite the fact that the highest-performance widely available memory today is static RAM (SRAM); DRAMs have higher power consumption and lower throughput than SRAMs. Even though SRAMs have better performance – ideal for the latest generation of high-performance portable devices – they aren't used in most portable battery-backed devices. This is because of a smaller portfolio: SRAMs are only available in low-density options, peaking at 128Mb. An SRAM has a more complex memory cell structure, comprising six transistors versus one transistor and one capacitor for a DRAM. That is why SRAMs are restricted from moving to higher densities, which has proved to be their biggest limitation. Though this limitation has prevented SRAMs from being used in legacy consumer devices (PCs, televisions, cell phones, etc.), it isn't as much of a deal breaker for wearable devices, given the smaller memory size required for frame buffering. Furthermore, their higher performance (higher throughput at lower power) is an advantage in favor of SRAMs in these devices.
SRAMs, once considered a defunct memory type, look set to make a return thanks to a renewed need for high performance and, especially, low power consumption. You can read this article to get a better understanding of how SRAMs are used in the latest generation of wearable and IoT devices; it explores uses of SRAMs beyond frame buffering, from memory expansion to data logging.
Many leading SRAM manufacturers have come out with a slew of innovations catering specifically to demand from wearable systems – from higher reliability to newer packages. For high-definition video recording and processing, Cypress also has a distinct family of HD frame buffers. To learn more about how frame buffer memory is configured and about the family of HD frame buffers, refer to the following application note: Using High-Density Programmable FIFO in Video & Imaging Applications