To an old hand at writing embedded code, the idea of having parts of the software shared between execution threads is second nature. To someone new to the specialism, it can be a hard concept to grasp. This article looks at how shared code is implemented and describes a couple of examples where it solved an apparently intractable problem.
The knowledge gap
When teaching or mentoring people, I have found it a challenge to avoid making assumptions about what they know. It is so easy to assume that because something is obvious to me, it is apparent to everyone else. On numerous occasions I have discovered that this is not the case.
Of course, the best response to this realization is not to treat everyone else as stupid, but to try to explain things clearly and then listen carefully to how the explanation is played back. I should say that, when the boot is on the other foot, I often find that people talking to me make invalid assumptions about my knowledge.
In developing software – embedded software in particular – there are certain things that are fundamental, particularly issues around the conservation of resources. More than once I have been surprised by engineers' inability to focus on this issue.
It is common for embedded code to be multithreaded – in other words, there appears to be more than one stream of execution. This may simply be a main loop, which runs alongside interrupt service routines (ISRs). Or it may be multiple, independent tasks (threads) executed using a kernel to give the effect that they are running simultaneously.
Each execution thread has its own machine registers (or appears to – they may be stored while another thread is actually executing) and maybe its own stack.
Thinking of a multitasking system, where the tasks appear to be running concurrently, it is possible that two tasks may call the same function at the same time. (This is also possible, of course, with a main loop and an ISR.) There appear to be two “copies” of the function executing simultaneously. There is no need to actually have two real copies of the instructions – that would waste memory – since each thread has its own program counter and other registers. It is only necessary to ensure that the code is not written in a way that makes concurrent execution problematic – it must be reentrant. We will look at what this means in a moment.
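To make the hazard concrete, here is a minimal sketch of a routine that is not reentrant (the function name and buffer size are invented for illustration). Its result lives in a single static buffer, so a second caller – another task or an ISR – overwrites the first caller's result:

```c
#include <string.h>

/* Non-reentrant by design: every caller shares this one static buffer,
   so if a second thread (or an ISR) calls the function before the first
   caller has finished using the result, the result is overwritten. */
char *format_value(int value)
{
    static char buffer[16];              /* shared by ALL callers */
    char *p = buffer + sizeof buffer - 1;
    int negative = value < 0;
    unsigned v = negative ? -(unsigned)value : (unsigned)value;

    *p = '\0';
    do {
        *--p = (char)('0' + v % 10);     /* build digits from the right */
        v /= 10;
    } while (v != 0);
    if (negative)
        *--p = '-';
    return p;                            /* points into the shared buffer */
}
```

A second call silently invalidates the first: `format_value(1)` and `format_value(2)` return pointers into the same storage, so the earlier result changes under the first caller's feet.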
The sharing of code may be more comprehensive: the entire code of two or more tasks may be identical. Obviously this only makes sense if the different tasks operate on different data – maybe each is assigned to a particular “input channel”, whatever that might mean for the device in question. Most real-time kernels provide a means whereby a task may be passed some parameters on startup.
Writing reentrant code in C/C++ or similar languages is not difficult. The key is to avoid statically allocated data – global variables and any local variables declared with the static keyword. All data should be held in registers, on the stack, or on the heap. If you need storage that is shared between tasks, a real-time kernel is likely to provide a “safe” facility to accommodate this requirement.
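That rule can be sketched like this (the function name and message format are invented for illustration): a reentrant function keeps all of its working state in its arguments and on the calling task's stack, so any number of tasks can execute it at once.

```c
#include <stdio.h>

/* Reentrant: no static data. All working state lives in the arguments
   and on the calling task's stack, so any number of tasks (or an ISR)
   can be executing this function simultaneously without interference. */
void make_reply(const char *cmd, char *out, size_t out_len)
{
    /* snprintf writes only into the caller-supplied buffer */
    snprintf(out, out_len, "ACK:%s", cmd);
}
```

Each caller passes its own buffer – typically a local array on its own stack – so there is nothing for concurrent calls to fight over.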
Real world examples
Many years ago, I was supervising an inexperienced engineer who was developing the software for a simple telecommunications device. I do not recall all the details, but broadly it managed two separate flows of data. The code was written in assembly language – that is what we did in those days. The engineer had a problem. Although the basic functionality of the device was in place, she still needed to add more features and was running out of program memory (ROM). She sought my help. I was surprised by the problem, as the software did not need to be complex and, by the standards of the day, the available memory seemed to be more than enough.
I looked at the code and quickly saw the problem. She had written code to deal with one communications channel (a UART driver) and then made a second copy of the code to handle the other channel. Nobody had ever explained the idea of shared code to her. All that was required was to ensure that the code was reentrant and a single copy would suffice. We both learned a lesson that day. The engineer learned about how shared code worked and I learned that this was not obvious to everyone.
Much more recently, I was involved in the technical support of a customer whose code exhibited a similar-sounding problem. This time, the code was much more complex and written in C. The device was handling ten channels of data. The customer wanted to enhance the code and had run out of memory. We gave some advice about compiler optimizations, which helped, but not enough. A more detailed look at the code, followed by a few tweaks, reduced its memory footprint by nearly 90%. I imagine you can guess what happened. He had ten copies of the same code, which we could reduce to a single one. Of course, ensuring that the C code was reentrant was straightforward.
However, we also caused a problem for this customer. Prior to our “fix”, he had found debugging easy: just a matter of putting a breakpoint in the code for the relevant channel. Now the shared code would still stop on a breakpoint, but it would do so for whichever channel happened to use the code next, which was a little confusing. We demonstrated a debugger that could provide task-aware breakpoints and the problem was solved. After such a spectacular solution to his problem, he was happy to buy the tool. It was a win for all concerned.
Colin Walls has over thirty years experience in the electronics industry, largely dedicated to embedded software. A frequent presenter at conferences and seminars and author of numerous technical articles and two books on embedded software, Colin is an embedded software technologist with the Mentor Graphics Embedded Software Division, and is based in the UK. His regular blog is located at: blogs.mentor.com/colinwalls. He may be reached by email at