Data storage in non-volatile memory

Colin Walls, November 23, 2014

Although flash and other non-volatile memory technologies are widely used to implement embedded file systems, a full file system may be too complex for some embedded applications. In many cases the memory is most efficiently used to hold pre-initialized data structures. This approach requires some management of data integrity. This article introduces the challenges and offers some simple solutions for using NVRAM.

Introduction to NVRAM
In a modern computer system, there is a large amount of memory. Most of it is the anachronistically named random access memory (RAM). The name makes little sense, as all memory is random access nowadays. When engineers talk about RAM, they mean volatile semiconductor memory, which can be written to and read from indefinitely so long as power is applied. It was not always like this. In the early days of computers, the most common form of program/data storage was “core memory”. This was, by modern standards, bulky and heavy (not to mention expensive!), but had a useful characteristic: it was non-volatile. Power was required to read or write data, but was not needed to retain it. With the core memory powered down, data would remain unchanged for indefinite periods of time. Interestingly, dropping or vibrating core memory could corrupt its contents, but this was rarely a cause for concern (except in earthquake zones) because computers were far from portable.

Although the working memory of modern computers and most embedded systems is predominantly RAM, it can still be useful to have a quantity of non-volatile RAM (NVRAM) available. This may be implemented using flash memory or some other memory technology that features non-volatility (like MRAM), or it may be regular RAM with a protected power supply (i.e., a battery). There are a number of possible uses for NVRAM in an embedded system:

  • Storage of program code and constant data that is copied into RAM on start-up. Although execution out of NVRAM is generally an option, the speed (access time) of some NVRAM technologies makes this unattractive.
  • Retention of device set-up parameters between power cycles. Many devices are user configurable; this information needs to be stored somewhere.
  • Buffering of acquired data over extended periods, with immunity from power failure. An easy example might be the storage of photos in a digital camera.

NVRAM Management

Broadly speaking, NVRAM may be utilized in one of two ways:
  • A file system, similar to that used on a hard drive, may be implemented in NVRAM. It would need to be implemented in a way that optimizes the use of the medium (like flash) and is resilient to power failure occurring during the writing of data. It may also be prudent to implement security features such as data encryption.
  • Data structures may simply be located in, and accessed from, the NVRAM directly. This requires specific accommodation for the non-volatility.

Many vendors supply off-the-shelf file system software designed for use in NVRAM. It would be economically questionable for a developer to design their own unless some specialized capability were required. The storage of data structures is more application-specific, so it is addressed in more detail below.

Accommodating non-volatility
Using normal volatile RAM is straightforward. It must be initialized to a known value on power up and thereafter may be written to and read from as required. With NVRAM there are two new challenges:
  • On power up, the software needs to recognize whether the NVRAM has been initialized and if not, perform that initialization.
  • The integrity of the data, particularly after being powered down for a while, needs to be verifiable.
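
Before looking at these challenges in turn, it may help to picture a concrete layout. The following is a minimal sketch in C, assuming battery-backed RAM or a similarly byte-addressable NVRAM mapped at a hypothetical address NVRAM_BASE; the structure and all the names are illustrative, not from any standard. The signature and checksum fields are explained in the sections that follow.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define NVRAM_BASE 0x40000000u   /* hypothetical memory-mapped address */

    /* Hypothetical layout of an NVRAM region: a signature to show that
       initialization has been done, a checksum to verify data integrity,
       and the application data itself. */
    typedef struct
    {
        uint8_t  signature[4];   /* shows initialization has been done */
        uint32_t checksum;       /* covers the data[] field only */
        uint8_t  data[1024];     /* application-specific contents */
    } nvram_t;

    static volatile nvram_t * const nvram = (volatile nvram_t *)NVRAM_BASE;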


NVRAM Initialization
When NVRAM is powered up for the first time, like ordinary RAM it contains indeterminate data and needs to be initialized. On subsequent occasions the software needs to recognize that the NVRAM has been initialized and not overwrite this saved data.

The easiest way to effect this recognition is to use a signature: a quickly recognizable sequence of bytes that cannot occur randomly. Of course, this ideal is impossible, as any sequence of bytes, however long, could occur randomly. It is just a matter of minimizing that possibility, whilst still keeping the check quick and easy. If the signature is just 4 bytes, there are 2^32 (roughly 4.3 billion) possible values, so there is about a 4-billion-to-1 chance of it occurring randomly. That is good enough for almost any imaginable application, and a 32-bit value may be checked quickly.

By careful choice of the signature values, the chances of an accidental occurrence may be reduced. Intuitively, a sequence of consecutive numbers (say 1, 2, 3, 4) feels more unlikely than a “random” set. After all, when did the lottery last yield a consecutive sequence of numbers? In truth, any one sequence is exactly as likely as any other. However, by thinking about how memory behaves, a signature can be chosen that is even less likely to occur by accident. What values might memory hold when it is first powered up? There are broadly four possibilities:
  1. totally random
  2. all zeros
  3. all ones
  4. some regular pattern reflecting the chip architecture (like alternating 1s and 0s)

If it is (1), then any signature gives us the 4-billion-to-1 chance. Any of the others can be detected by the right choice of signature. One possibility is the sequence 0x00, 0xff, 0xaa, 0x55: no all-zeros, all-ones, or alternating-bit pattern can match it, so it covers (2), (3) and (4) while still being just 32 bits.
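
Continuing the earlier sketch, that signature might be declared and checked like this (nvram_signature and nvram_signature_ok are illustrative names):

    /* The suggested signature, stored byte-by-byte so that the check
       does not depend on the endianness of the processor. */
    static const uint8_t nvram_signature[4] = { 0x00, 0xff, 0xaa, 0x55 };

    /* Returns 1 if the NVRAM signature is present, 0 otherwise. */
    static int nvram_signature_ok(void)
    {
        return memcmp((const void *)nvram->signature,
                      nvram_signature, sizeof nvram_signature) == 0;
    }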

Some care is needed with the initialization sequence. It is essential to set up valid data first and to write the signature as the very last step: if power fails part-way through initialization, the signature will be absent, so the incomplete data cannot be mistaken for valid data on the next start-up.
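
A possible initialization routine, continuing the sketch above; the zeroed default data and the nvram_checksum() helper (sketched in the next section) are assumptions for illustration:

    /* Computes the checksum over the data area; see the next section. */
    static uint32_t nvram_checksum(void);

    /* One-time initialization of the NVRAM area. Note the order: default
       data first, then its checksum, and the signature only as the very
       last step. A power failure at any earlier point leaves the
       signature invalid, so the whole procedure is simply repeated on
       the next start-up. */
    static void nvram_initialize(void)
    {
        memset((void *)nvram->data, 0, sizeof nvram->data);  /* defaults */
        nvram->checksum = nvram_checksum();
        memcpy((void *)nvram->signature, nvram_signature,
               sizeof nvram_signature);                      /* last step */
    }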

NVRAM integrity
Of course, the presence of a signature does not guarantee the integrity of the data; a failing battery or a worn flash cell could corrupt the data while leaving the signature intact. It may be wise to use a checksum or CRC for error checking, or even a mechanism for self-correction of the data, such as an error-correcting code.
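
As a sketch of the idea, a simple additive checksum over the data area might look like the following; a real design would more likely use a CRC:

    /* Simple additive checksum over the data area. A proper CRC (CRC-32,
       for example) detects far more error patterns; this just shows the
       principle. */
    static uint32_t nvram_checksum(void)
    {
        uint32_t sum = 0;
        size_t   i;

        for (i = 0; i < sizeof nvram->data; i++)
            sum += nvram->data[i];
        return sum;
    }

    /* Returns 1 if the stored checksum matches the current data. */
    static int nvram_data_ok(void)
    {
        return nvram->checksum == nvram_checksum();
    }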

System start-up with NVRAM
When NVRAM is in use, the start-up logic needs to accommodate both signature verification and data integrity checking.
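
One possible shape for that logic, a minimal sketch using the illustrative functions from the earlier sections:

    /* Start-up handling of NVRAM: if the signature is absent, or the
       data fails its integrity check, (re)initialize the area;
       otherwise the stored data is trusted and used as-is. */
    static void nvram_startup(void)
    {
        if (!nvram_signature_ok() || !nvram_data_ok())
            nvram_initialize();
    }

Note that if the signature check fails, the stored checksum is meaningless anyway; the short-circuit || ensures it is only consulted when the signature is valid.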

Conclusions
Using NVRAM in an embedded design is straightforward, but its non-volatility does need to be carefully accommodated, as described here. The approach of using a single global signature and error check is suitable for many applications. For very large databases, a separate check on each block of data might be more efficient. It might also be worthwhile using C++ to hide the NVRAM management from application code developers and thus minimize the opportunities for programmer error.

Colin Walls has over thirty years experience in the electronics industry, largely dedicated to embedded software. A frequent presenter at conferences and seminars and author of numerous technical articles and two books on embedded software, Colin is an embedded software technologist with Mentor Embedded (the Mentor Graphics Embedded Software Division), and is based in the UK. His regular blog is located at mentor.com/colinwalls. He may be reached by email at colin_walls@mentor.com.
