
Flushing Out Memory Leaks

February 28, 2002

Before you can plug memory leaks, you have to find them. These tools can help

I am dedicating this month's column to my ex-plumber, Patrick. We christened him Paddy Puddles on account of the number of leaks that had to be found and fixed after he left. The leak that caused a large damp patch in the middle of the kitchen was forgivable, since the damage was visible and could be addressed quickly. The worst leaks were the ones under the floor, evidence of which appeared later on in the form of damp patches on the paintwork of a different room. Because the damage was hidden for a while, identifying and fixing the original leak was made more difficult and expensive.

This story will be familiar to those of you who have tracked down memory leaks, and we all have worked with a Programmer Puddles at some point. (Okay, to be fair, some of the leaks were my fault too.) In this column and the next, I am going to discuss puddle hunting and leak location.

What's a leak?

I've had the misfortune of missing some of the recent Embedded Systems Conferences, but at the last ones I did attend, I gave a talk on memory management. The attendance at these sessions far exceeded the attendance at my other talks, which suggests that a lot of programmers worry about dynamic memory management. My talk was mostly concerned with implementing the heap to minimize fragmentation; if you are interested in this topic, you should read "Safe Memory Utilization," an article I wrote for the April 2000 issue of Embedded Systems Programming (p. 110).

In this column and the next, however, I'll examine the difference between fragmentation and leaks, and then investigate a simple technique that I have found useful in detecting and eliminating leaks in my programs.

The standard C library functions malloc() and free() allow blocks of arbitrary size to be allocated to an application for an arbitrary period of time. As blocks are used and freed, the area of storage dedicated to the heap becomes a jumble of used and free blocks. Any small patches of free space that remain might become unusable. For example, an application might request 30 bytes from the heap. If the heap's free space consists of 20 small blocks of three bytes each (a total of 60 bytes), the heap still cannot satisfy the request, since the 30 bytes requested have to be contiguous.
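The following contrived sketch (mine, not the column's) shows how interleaved allocation and freeing scatters free space. On a small enough heap, the final 30-byte request can fail even though roughly 60 bytes are free in total.


#include <stdio.h>
#include <stdlib.h>

void fragmentation_demo(void)
{
	char *blocks[40];
	char *big;
	int i;

	for (i = 0; i < 40; i++)
		blocks[i] = malloc(3);		/* many small neighboring blocks */

	for (i = 0; i < 40; i += 2)
	{
		free(blocks[i]);		/* free every other block, leaving */
		blocks[i] = NULL;		/* roughly 20 separate 3-byte holes */
	}

	big = malloc(30);			/* needs 30 contiguous bytes */
	if (big == NULL)
		printf("Free space exists, but not in one contiguous piece\n");

	free(big);				/* tidy up the demonstration */
	for (i = 1; i < 40; i += 2)
		free(blocks[i]);
}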

In a long-running program, fragmentation can cause a system to run out of memory even though the total amount of memory allocated does not exceed the total available. The article I mentioned previously discusses fragmentation further, describes why it is not as much of a threat as once thought, and investigates heap implementations that minimize the loss of space due to fragmentation.

The amount of fragmentation depends on how the heap is implemented. Most programmers use the heap via the malloc() and free() provided by their compiler, so the fragmentation is outside their control. This is very different from a leak.

A leak is always a bug in the application. More specifically, a leak is a block of memory that was allocated but will never be freed. If all pointers to that block have gone out of scope or have been assigned to point elsewhere, the application can never free that piece of memory. A small leak may be acceptable in a desktop application that will exit at some point, since an exiting process returns all of its memory to the operating system. But in a long-running embedded system, it is often necessary to guarantee that absolutely no leaks exist, which is a non-trivial challenge.
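As a minimal illustration of my own (not code from the column), the second assignment below discards the only pointer to the first block, so that block can never be freed:


#include <stdlib.h>

void leak_example(void)
{
	char *p = malloc(64);	/* only p refers to this 64-byte block */

	p = malloc(128);	/* p now points elsewhere; the 64-byte block is
				   unreachable and can never be freed: a leak */
	free(p);		/* releases the 128-byte block only */
}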

Avoiding leaks is difficult. To ensure that all allocated memory is subsequently freed, we must establish a clear set of rules about who owns the memory. To track memory, we could use a class, an array of pointers, or a linked list. Lists are commonly used since, by the nature of dynamic memory allocation, a programmer doesn't know ahead of time how many blocks will be allocated at any given time.

For example, imagine that one task is receiving messages from a communications channel. It allocates space for each message, and that space will not be freed until the message is completely processed. Since the messages may not be processed in the order in which they were received, some may live longer than others. All pending messages live on a list whose length varies according to the number of messages being processed at any given time. Let's say that this embedded system must forward the messages to another device, and a message cannot be deleted until delivery is confirmed. Since the messages are going to many different destinations, and some destinations may have downtime that causes retries, it would not be feasible to process the messages in a first-in, first-out manner.
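A sketch of the kind of structure this implies follows; the names and fields are my own illustrative assumptions rather than code from the column.


#include <stdlib.h>
#include <string.h>

/* One pending message; it stays on the list until delivery is confirmed. */
typedef struct message
{
	struct message *next;		/* next message awaiting confirmation */
	size_t length;			/* payload length in bytes */
	unsigned char *payload;		/* payload allocated separately */
} message_t;

static message_t *G_pending;		/* head of the list of pending messages */

message_t *message_add(const unsigned char *data, size_t length)
{
	message_t *m = malloc(sizeof(message_t));

	if (m == NULL)
		return NULL;
	m->payload = malloc(length);
	if (m->payload == NULL)
	{
		free(m);
		return NULL;
	}
	memcpy(m->payload, data, length);
	m->length = length;
	m->next = G_pending;		/* list length varies with traffic */
	G_pending = m;
	return m;
}


A matching routine that unlinks a confirmed message must remember to free both the node and its payload; forgetting either is exactly the kind of leak this column is about.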

In problem areas such as these, dynamic memory management enables more efficient use of the available RAM than a predefined number of buffers. When the memory is not being used by one message queue, it can be used by another queue, or by a completely different part of the program.

One particular problem often arises when there is more than one pointer to a specific block. If the first entity owns the memory and wants to free it, then it must first consider whether any other pointers point at that location. If any do, and the first entity frees the memory, those other pointers become dangling pointers: pointers that point to space that is no longer valid. When the dangling pointer is used, you may happen to get the correct data, but eventually the memory will be reused (via another call to malloc()), leading to unpleasant interactions between the dangling pointer and the new owner of that piece of memory. A dangling pointer is the opposite of a leak.

A leak occurs when you fail to free something; a dangling pointer occurs when you free something that was not yet ready to be freed.
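For contrast, here is a minimal dangling-pointer illustration (again mine, not the column's):


#include <stdlib.h>
#include <string.h>

void dangling_example(void)
{
	char *p = malloc(32);
	char *q = p;		/* two pointers now refer to the same block */

	free(p);		/* the block goes back to the heap... */

	strcpy(q, "oops");	/* ...but q still points there: a dangling
				   pointer, and this write can corrupt whatever
				   the heap hands out next */
}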

Memory leaks are similar to race conditions in a number of ways. The misbehavior they cause may appear far from the code that actually contains the bug. As a result, these problems are difficult to resolve by stepping through the code with a debugger. In both cases, a code inspection will sometimes catch the problem more quickly than any technical solution.

Adding debug code that generates output is often a better alternative to a source code debugger, but in the case of race conditions, it could alter the behavior enough to disguise the problem. With memory leaks, adding debug code can change the shape of the memory layout, which means that dangling-pointer bugs may exhibit different behavior. Another drawback is that if the debugging code consumes memory, you may run out of RAM sooner in the debug version than you would in the production version. Still, a leak is a leak, and it should remain detectable regardless of these side effects of the debug code.

Drive an automatic

Java has automatic memory management in the form of garbage collection, so Java programmers do not have to worry about freeing allocated memory. If you ever program in Java, you will appreciate just how much time and effort other languages require to keep track of memory.

In Java, you pay a runtime cost in exchange for programming simplicity, because manual memory management, done well, produces a more efficient implementation. When programs become large, however, manual management breaks down. While good manual management will always minimize total heap size, it is not always easy to implement. In a large program involving dozens of programmers, human error will introduce enough leaks to tip the performance scales in favor of an automatic solution.

I read a report years ago about how bar code readers in supermarkets identified the wrong product 1% of the time. The article seemed to imply that this meant that the supermarkets were ripping us off. What the article failed to point out was a comparison with the alternative, where the person on the till has to manually type in the price of each item, which, I am guessing, would have had a higher error rate. The other consideration is that shop assistants, like programmers, vary widely in their skill levels. If the automatic system has a measurable error level, then at least management can allow for it in their plans. If the volume of customers and groceries is high, the supermarket and the customer are better off in the long run with an automatic system.

A similar logic applies to automatic garbage collection. An excellent individual programmer will do far better than an automatic collector, but in a large project, with a large number of programmers, it may be impossible to find and plug all of the leaks. Choosing an automatic system may mean that you compromise performance. You also have to accept that the automatic collector will occasionally mess up. "Java Q&A" in the February 2001 issue of Dr. Dobb's Journal (www.ddj.com) includes a list of ways to trick Java garbage collectors.

While automatic collection is attractive for very large programs, most embedded developers are not developing systems of that complexity. Of those who are, only the minority have access to a programming environment, such as Perl, Smalltalk, or Java, that includes automatic garbage collection. So most of us need to know how to track down leaks in C programs that use malloc() and free(), or C++ programs that use new and delete.

Plumbing tools

A number of tools help in the hunt for memory leaks. The most popular free ones are dmalloc and mpatrol [1, 2]. These tools provide debugging versions of the heap that record and check all allocations to facilitate analysis of leaks and dangling pointers. dmalloc and a number of similar libraries provide drop-in replacements for malloc() and free().

On a number of projects, I was responsible for tracking down memory leaks and providing some evidence that all such leaks had been removed. Initially, I assumed that one of the tools mentioned above would solve my problems. However, I found that a complete replacement for malloc() and free() was not always appropriate for embedded systems. For instance, I could be perfectly happy with my current implementation and simply want to add some monitoring. If I choose a library that replaces malloc() and free(), I have to port those routines.

Since the free libraries available are Unix-oriented, they use the sbrk() call to obtain blocks of memory from the operating system. I might not have that call available in my operating system; in fact, I might not even have an operating system. When porting, I would also have to address issues such as pointer size and the memory alignment requirements of the specific processor. These issues have already been addressed by the version of malloc() and free() provided with my compiler library, which has obviously been ported to the processor I am using, so I want to avoid repeating that work. The other challenge of porting one of these debugging versions of malloc() is that they assume they can store their analysis data in a file. Alas, the systems I work on generally do not have a file system available, and I may have to restrict how much data is stored on a device with minimal resources.

(If any readers have had success in an embedded environment with one of these tools, I would be interested in hearing about it, since fully featured tools such as these offer much more than the simple techniques I will describe in this space.)

I also considered running a portion of the code on a desktop computer using an off-the-shelf tool. Bounds Checker from Compuware was available and already being used in-house for other purposes. This tool is Windows-specific, but in one case my code was straight ANSI C, so I simply compiled it on my PC and ran it with the Bounds Checker libraries. Bounds Checker also checks many parts of the Win32 API, but I was only interested in heap allocation.

I was disappointed with the results. Large amounts of data were available, but it was necessary to eliminate numerous red herrings to discern the important information. The biggest obstacle was that Bounds Checker delivers its report after the program has exited. Any data that was not freed up before exit is considered a leak. While this is a reasonable definition, it was not a good match for my application. Unlike PC applications, embedded programs often have no need to exit.

My code contained one large loop that ran continuously. It was possible to add some check at the end of the loop that would artificially cause it to exit for the purposes of this test. However, stopping without freeing up all memory resources would cause Bounds Checker to indicate that all memory in use at that point was a leak. Most of this memory was simply waiting to be used again on the next iteration of the loop. I could have made Bounds Checker behave better by writing an exit routine that freed up all the memory that I could access, but such a routine would never run in the real system, and could disguise real problems.

I came to the conclusion that Bounds Checker is more suitable for identifying the exact source of a particular leak, once you know which line of code is under suspicion. I was more interested in the overall picture.

Lists of problems

The tools are ineffective in at least one other scenario. Imagine a linked list of buffers that may grow and shrink as the program runs. Since following the links allows the program to find every buffer, it is possible to free all of them at any time. Now consider a bug that causes a buffer that should be removed and freed to remain on the list. The list will grow indefinitely. If the program erases the complete list, all evidence of the bug will vanish, but the list will start growing again. In a system that runs for a long time, the list will eventually get so large that all memory will be used up.

A bug like this would be a problem even if an automatic garbage collector were managing memory because, strictly speaking, the extra buffers are not leaks; they can still be retrieved.

To solve this type of problem, we want to establish whether total memory usage is increasing over time, regardless of whether the memory is ever freed and regardless of whether it would be possible to free it.

Measuring usage

If we are going to monitor malloc(), ideally we should replace all calls to malloc() with calls to a function with a different name. I will call it mmalloc(), for "measured malloc." This allows us to write a function that does a little extra work and then calls the normal malloc(). (The same effect can be achieved in other ways, such as using a #define to replace malloc() or using the linker to rename the malloc() function in the compiler library.)
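To give a flavor of the #define route, a project-wide header along the following lines could redirect the calls. This is a sketch only; the header name memdebug.h is my own, and the file that implements mmalloc() and mfree() must not include it, or their internal calls to malloc() and free() would be redirected back to themselves.


/* memdebug.h -- include in every source file whose heap use should be
   measured. Do NOT include it in the file that implements mmalloc()
   and mfree(), or their internal calls would recurse. */
#ifndef MEMDEBUG_H
#define MEMDEBUG_H

#include <stddef.h>		/* for size_t */

void *mmalloc(size_t size);
void  mfree(void *p);

#define malloc(size)	mmalloc(size)
#define free(p)		mfree(p)

#endif /* MEMDEBUG_H */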

One weakness here is that calls to malloc() from functions in libraries that I cannot change or recompile will not be monitored. For example, the library supplied with most compilers contains a function called strdup(), which in turn calls malloc(). I cannot replace that call to malloc() unless I have the source to the library I am using.

The first pass at measuring usage is simply to add up how much memory is allocated and subtract whatever is freed. For malloc(), this is trivial. Assuming we have a static variable G_inUse, the following code will track allocations:


void *mmalloc(size_t size)
{
	void *p = malloc(size);

	if (p != NULL)			/* count only allocations that succeed */
		G_inUse += size;
	return p;
}

mfree() is a bit trickier, since free() is passed only a pointer to the block, not its size. Usually the size is hidden in a header just before the block pointed to, so something like this might work:


void mfree(void *p)
{
	/* Assumes the heap stores the block size in a size_t immediately
	   before the block it returns; many allocators do not. */
	size_t *sizePtr = ((size_t *) p) - 1;

	G_inUse -= *sizePtr;
	free(p);
}

This is not portable, since your implementation of malloc() and free() might not use this convention, or might store the size at a slightly different offset.

The size freed may not match the size requested. Some implementations of malloc() will round the size up; if you request 11 bytes, you might actually receive 12, and 12 is the value stored in the header. Allocating and freeing that block would then add 11 but subtract 12, leaving G_inUse with a balance of -1.
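One portable workaround, sketched here as my own assumption rather than a preview of next month's code, is for mmalloc() to over-allocate slightly and record the requested size in its own prefix, so that mfree() never needs to know how the underlying allocator stores sizes. The same G_inUse counter is assumed, and the prefix may need padding to preserve the alignment guarantees that malloc() normally provides.


#include <stdlib.h>

typedef struct
{
	size_t size;			/* the size the caller originally requested */
} mheader_t;

void *mmalloc(size_t size)
{
	mheader_t *h = malloc(sizeof(mheader_t) + size);

	if (h == NULL)
		return NULL;
	h->size = size;
	G_inUse += size;
	return h + 1;			/* hand back the bytes after the prefix */
}

void mfree(void *p)
{
	mheader_t *h;

	if (p == NULL)
		return;
	h = ((mheader_t *) p) - 1;	/* step back to our own prefix */
	G_inUse -= h->size;		/* subtract exactly what was added */
	free(h);
}


Because the same requested size is added and later subtracted, the rounding mismatch described above also disappears, at the cost of a few extra bytes per block.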

We have to do more work to make our monitoring code portable. When we return to this topic next month, I will add some more code to keep track of sizes and then look at what other data is worth collecting. In the meantime, keep a mop and bucket handy.

Niall Murphy has been writing software for user interfaces and medical systems for ten years. He is the author of Front Panel: Designing Software for Embedded User Interfaces. Murphy welcomes feedback and can be reached at nmurphy@panelsoft.com. Reader feedback to this column can be found at www.panelsoft.com/murphyslaw.

References

  1. www.dmalloc.com
  2. www.cbmamiga.demon.co.uk/mpatrol/
