The yin and yang of dynamic allocation

Dan Saks - May 04, 2008

A few months back, I explained that each object in C and C++ has one of the following three storage durations: static, automatic, and dynamic:1

• An object with static storage duration has storage that is allocated at program startup and remains allocated at the same location until the program terminates.

• An object with automatic storage duration has storage that's allocated upon entry into a block (typically a function body) and deallocated upon exit from that block.

• An object with dynamic storage duration has storage that's allocated when the program calls an allocation function, such as malloc in C or an operator new in C++. The object's storage lasts until the program passes the address of that object to a corresponding deallocation function, such as free in C or an operator delete in C++.
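
To make the three durations concrete, here's a minimal sketch in C++ (the names counter, f, temp, and p are placeholders chosen purely for illustration):

    int counter;                 // static storage duration: allocated at program
                                 // startup, released only when the program terminates

    void f()
    {
        int temp = 0;            // automatic storage duration: allocated on entry
                                 // into f's body, deallocated on exit

        int *p = new int(42);    // dynamic storage duration: allocated by operator new...
        // ... use *p ...
        delete p;                // ...and deallocated only when its address is passed
    }                            // to operator delete (or to free, for malloc'd storage)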

While it's nearly impossible to write a useful program without using both static and automatic allocation, programs can--and do--manage to get by without any dynamic allocation. This month, I'll look at the case for and against using dynamic allocation.

Quick takes
Static storage allocation typically has no run-time cost. The initialization of statically allocated objects may take place at run time, but the allocation itself usually takes no time. (Of course, the allocation still takes up memory space.) Unfortunately, in applications that juggle diverse collections of objects, big and small, using static storage squanders memory and imposes rather arbitrary restrictions on program behavior.

For example, suppose your application uses memory regions of two different sizes, say, buffers and blocks. Your target system might have enough memory to statically allocate 100 buffers or 250 blocks. Should you carve up the space as 50 buffers and 125 blocks, or is a typical application more likely to need up to 60 buffers but only 100 blocks? And what about those occasions when the program needs 80 buffers but only 50 blocks? Tough luck?
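
A static carve-up along those lines might look like the sketch below. The sizes are assumptions picked only so that 100 buffers occupy the same memory as 250 blocks; the point is that the 50/125 split is frozen at build time:

    enum { BUFFER_SIZE = 640, BLOCK_SIZE = 256 };   // assumed sizes, for illustration only

    // The split is fixed at build time: 50 buffers and 125 blocks, no matter
    // what any particular run of the program actually needs.
    static unsigned char buffers[50][BUFFER_SIZE];
    static unsigned char blocks[125][BLOCK_SIZE];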

The run-time cost of automatic storage allocation is usually pretty low--often only an instruction or two on each function entry and exit. On microcontrollers with limited stack space, the cost can be higher. Automatic storage uses memory very efficiently, but it's useless for objects with lifetimes that persist beyond function calls.
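
The classic symptom of that limitation is a function that hands out the address of one of its own automatic objects, as in this sketch (the names are made up):

    // Returns the address of an automatic object. By the time the caller
    // dereferences the pointer, the object's storage has been deallocated,
    // so using it produces undefined behavior.
    int *broken()
    {
        int result = 42;         // automatic: gone when broken returns
        return &result;          // dangling pointer
    }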

By comparison, dynamic storage allocation is much slower than either static or automatic allocation. A call to malloc (in C or C++) or operator new (in C++) may execute tens and occasionally hundreds of instructions in a quest for storage of the requested size. Deallocating storage by calling free (in C or C++) or operator delete (in C++) often requires comparable effort.

You do get something for that effort: dynamic storage allocation uses memory much more flexibly than static storage allocation. Using dynamic storage may add a little overhead to the storage for each object, so instead of 100 statically allocated buffers, you might have room for only 99 or 98 dynamically allocated buffers. But now any run of the program can have up to 98 buffers and no blocks, or nearly 250 blocks (again, a few less due to overhead) and no buffers, or any balance of buffers and blocks within those limits that doesn't exceed total available memory.
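
With dynamic allocation, each run requests only the mix it actually needs. Here's a minimal sketch, reusing the assumed buffer and block sizes from earlier:

    #include <cstdlib>

    enum { BUFFER_SIZE = 640, BLOCK_SIZE = 256 };   // assumed sizes, as before

    void *acquire_buffer() { return std::malloc(BUFFER_SIZE); }  // may return a null
    void *acquire_block()  { return std::malloc(BLOCK_SIZE); }   // pointer if memory
                                                                  // is exhausted
    void release(void *p)  { std::free(p); }   // storage goes back to the heap for reuse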

Risky business
Dynamic memory allocation brings with it a number of risks. Some developers, especially those working on embedded systems, find these risks too great to accept, and so they follow the advice of the MISRA-C guidelines. (MISRA is the Motor Industry Software Reliability Association in the UK.) According to their guidelines:2

Rule 20.4 (required): Dynamic heap memory allocation shall not be used.

This precludes the use of the functions calloc, malloc, realloc, and free.

There is a whole range of unspecified, undefined and implementation-defined behaviour associated with dynamic memory allocation, as well as a number of other potential pitfalls. Dynamic heap memory allocation may lead to memory leaks, data inconsistency, memory exhaustion, non-deterministic behaviour.

Although I have some quibbles with the wording, it covers most of the major concerns. These concerns also apply to operator new and operator delete in C++. Let's look at them (and my quibbles) in some detail.

Undefined behavior is what a program exhibits when it does something erroneous that the compiler and run-time system aren't required to detect (often because they simply can't). For example, freeing a pointer to storage that's already been freed, or that was never allocated, produces undefined behavior. The undefined behavior typically corrupts the heap--the data structures that manage dynamically allocated memory. The heap corruption usually spills over into other data and run-time structures, and the program eventually crashes.
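
For example, here's a sketch of the double-free error just described:

    #include <cstdlib>

    void oops()
    {
        char *p = (char *)std::malloc(64);
        std::free(p);            // fine: first and only legitimate release
        std::free(p);            // undefined behavior: p was already freed; this
                                 // typically corrupts the heap's bookkeeping data
    }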
