In Part 1 of this article series, we saw the basics of a state machine-based multitasking system through a very simple example. In this second part, we will study the design methodology for building the software of our system. We shall discuss designing tasks, intertask communication, prioritizing, etc., by building an MP3 player system that integrates USB, a file system, an LCD and capacitive touch sensing.
Designing Your System Software
Here is a recap of the basic principles discussed in Part 1 :
1. Each function is split into small subtasks or states that can be completed within a short amount of time.
2. Saving and restoring the context of a task is achieved by the state machine and state variables.
3. Each task performs a small portion of its job and relinquishes control back to the scheduler. The system is built by many such tasks/functions, which are called repeatedly.
4. This interleaved execution of different states of multiple tasks appears like simultaneous execution of all the state machines.
5. The scheduler is an infinite loop that repeatedly sifts through all the tasks and executes the tasks that are scheduled for the current iteration.
6. Simple language constructs are used as much as possible to keep overhead to a minimum.
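The principles above can be condensed into a minimal sketch. The task and variable names here (`Task_Blink`, `g_blinkState`, `Scheduler_Run`) are illustrative, not from the original FM player example, and the loop is bounded only so the sketch can be exercised; real firmware would use an infinite loop.

```c
#include <stdint.h>

/* Each task is a small state machine; its state variable is its "context". */
static uint8_t  g_blinkState = 0;
static uint32_t g_blinkCount = 0;

/* One short subtask per call, then control returns to the scheduler. */
void Task_Blink(void)
{
    switch (g_blinkState)
    {
        case 0:               /* e.g. turn an LED on */
            g_blinkCount++;
            g_blinkState = 1;
            break;
        case 1:               /* e.g. turn the LED off */
            g_blinkState = 0;
            break;
    }
}

/* The scheduler sifts through all tasks, calling each in turn.
   Bounded here for demonstration; in firmware it would be for(;;). */
void Scheduler_Run(uint32_t iterations)
{
    while (iterations--)
    {
        Task_Blink();
        /* ...other tasks would be called here... */
    }
}
```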
The design of a state machine-based multitasking system involves a top-down approach. While the scheduler itself sits at the top of the hierarchy, it is discussed in the final section of this article. First, let us see what approach should be used in designing the system’s software.
The system software design can be divided into the following three subcategories:
1. Subsystem definition and task break up
2. Task interactions and priority
3. Task scheduling
As with any top-down design approach, we first break our system down into its major subsystems. The subsystems are the logical sets of jobs that belong to a single technology or functionality. As hinted above, a subsystem need not be a single function, but can be a set of functions that interact with each other. In the first step of the design, we should divide our system into top-level subsystems. Then, we can move on to identify each independent subfunction within a subsystem.
Let us now consider a system that is more complex than the FM player discussed in part 1 of this article—a USB MP3 player that has an LCD display and touch-sense input. Let us assume that the system has the following hardware:
1. MP3 decoder ASIC
2. Capacitive touch-sense input pads
3. Segmented LCD display
4. A USB host microcontroller
The top-level break up of the system would look like Figure 1 below.
Figure 1: Block Diagram of a Hypothetical USB MP3 Player System
In this USB MP3 player system that we have constructed, we can identify various subsystems and their inner components:
1. USB subsystem
2. File system
3. MP3 subsystem
   a. MP3 audio manager
      i) Buffer manager
      ii) File manager
      iii) Playback manager
   b. MP3 ASIC driver
4. Touch-sense input subsystem
   a. Capacitive touch-sense driver
   b. Touch-detection algorithms
5. Display subsystem
   a. Display manager
   b. Display driver
6. System monitor and housekeeping
From here, we move on to break up each subsystem component into functions and subfunctions. For example, the MP3 audio manager needs to be broken up into an MP3 ASIC buffer manager, a file manager and an MP3 playback manager, to handle modes such as fast forward, rewind, etc.
Any function that takes more than a fraction of a millisecond to execute must be made into a state machine, as illustrated in the first part of this series. Thus, at the end of the system breakdown, we have a collection of simple functions and many state machines.
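As a sketch of this breakdown, consider writing a string to the segmented LCD. Done in one call, the whole write would hog the CPU; as a state machine it emits one character per call. All names here (`LCD_Write`, `Task_LcdDriver`, the `g_lcdPanel` stand-in for the real display) are hypothetical.

```c
#include <stdint.h>
#include <string.h>

#define LCD_WIDTH 16

static char    g_lcdBuffer[LCD_WIDTH + 1];
static uint8_t g_lcdIndex = 0;
static uint8_t g_lcdBusy  = 0;

static char    g_lcdPanel[LCD_WIDTH + 1];   /* stands in for the real display */

/* Request a full-buffer write; the state machine does the slow work. */
void LCD_Write(const char *text)
{
    strncpy(g_lcdBuffer, text, LCD_WIDTH);
    g_lcdBuffer[LCD_WIDTH] = '\0';
    g_lcdIndex = 0;
    g_lcdBusy  = 1;
}

/* One character per call: a sub-millisecond slice of the whole job. */
void Task_LcdDriver(void)
{
    if (!g_lcdBusy)
        return;

    g_lcdPanel[g_lcdIndex] = g_lcdBuffer[g_lcdIndex];
    g_lcdIndex++;

    if (g_lcdBuffer[g_lcdIndex] == '\0' || g_lcdIndex >= LCD_WIDTH)
    {
        g_lcdPanel[g_lcdIndex] = '\0';
        g_lcdBusy = 0;          /* whole string written: job complete */
    }
}
```

The state variables `g_lcdIndex` and `g_lcdBusy` are the saved context: the task resumes exactly where it left off on the next scheduler pass.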
Once all independent functions are identified, they must be grouped together into different tasks for scheduling. For example, the MP3 subsystem requires four state machines.
The “ASIC driver” and “buffer manager” need to run at top priority and consume the bulk of the throughput. The “file manager” and “playback manager” state machines can run more leisurely. It is also possible that the file manager and playback manager are grouped together in a single task.
Therefore, a task in our case is a set of function calls grouped together for scheduling. It is possible that there is no formal function with a task name at all in our system, but instead just a list of calls to our top-level state machines in our scheduler.
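In code, such a task can be nothing more than a group of state-machine calls, sketched below with illustrative names for the two low-urgency MP3 state machines mentioned above.

```c
/* A "task" need not be a formal function at all: it can simply be a group
   of state-machine calls placed together in the scheduler. It is written
   as a function here for clarity; all names are illustrative. */
static int g_fileMgrRuns     = 0;
static int g_playbackMgrRuns = 0;

/* Each state machine does one small state's worth of work per call. */
void SM_FileManager(void)     { g_fileMgrRuns++; }
void SM_PlaybackManager(void) { g_playbackMgrRuns++; }

/* Two related, low-urgency state machines grouped into one task. */
void Task_Mp3Management(void)
{
    SM_FileManager();
    SM_PlaybackManager();
}
```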
It is important to take the time to put together a good breakdown of tasks. Too many tasks result in wasted code space and more scheduling overhead. Too few tasks, and a single task consumes too much throughput, reducing the multitasking effect.
The following rules apply:
1. Group two related state machines that depend upon each other in the same task.
2. If such functions take a long time to execute, then place them in different tasks and stagger their execution in the scheduler loop (more on staggering, later).
3. Schedule unrelated tasks together, as long as throughput balancing is not hampered.
4. Create separate tasks for slow-response functions (e.g. input processing) even though they belong with other functions (e.g. touch-sense detection). Schedule these tasks leisurely.
Task Interactions and Priority
The subsystems in our system interact in various ways. Some subsystems depend upon others for data, timing and output. How the different tasks communicate with each other determines their priority and frequency, as well as the intertask communication strategies. In this section, we will first see how to determine the execution frequency and priority, followed by a review of methods for intertask communication.
Execution Periodicity and Order
Now that we have identified the different state machines and functions within the tasks, let us identify the execution order of the tasks. First, let us define these terms.
Execution periodicity is the frequency with which a task is called from the scheduler. Some tasks may need more frequent calls than others. This also means that all the tasks need not be scheduled in every one of the scheduler loop’s iterations.
This saves a lot of CPU throughput. In the MP3 player example above, the LCD driver is a state machine that writes a single character from the buffer to the display and returns control back to the scheduler. This task will have to be called with the highest frequency to get a complete display written.
The USB-driver task will be called frequently, to send out the USB host signals and keep the connected device alive, but not at the top frequency. The touch-sense driver task should be called frequently to capture the capacitance data. However, the touch-sense detection algorithm task can be called leisurely, since it requires many samples of capacitance data before it can do anything useful.
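These differing call frequencies can be implemented with simple divider tests on the scheduler's loop counter. The specific ratios (every pass, every 2nd pass, every 8th pass) and the task names below are illustrative assumptions, not figures from the article.

```c
#include <stdint.h>

static uint32_t g_lcdCalls   = 0;
static uint32_t g_usbCalls   = 0;
static uint32_t g_touchCalls = 0;

/* Stubs standing in for the real state machines. */
void Task_LcdDriver(void)   { g_lcdCalls++; }
void Task_UsbDriver(void)   { g_usbCalls++; }
void Task_TouchDetect(void) { g_touchCalls++; }

/* Execution periodicity via modulo tests on the loop counter:
   LCD every pass, USB every 2nd pass, touch detection every 8th pass. */
void Scheduler_Run(uint32_t iterations)
{
    for (uint32_t tick = 0; tick < iterations; tick++)
    {
        Task_LcdDriver();                     /* highest frequency */
        if ((tick % 2) == 0) Task_UsbDriver();
        if ((tick % 8) == 0) Task_TouchDetect();
    }
}
```

Tasks skipped on a given pass cost almost nothing, which is how the slow touch-detection algorithm avoids wasting CPU throughput.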
Execution order is the order in which functions are grouped within a task, and also the order in which tasks are called from the scheduler. Task priority determines task order. The execution order follows the natural order of data flow.
Figure 2 below is a partial top-level data-flow diagram of our MP3 player system showing the data flow when the “next track” touch pad is touched. Note how the “playback manager” task depends on the “system monitor” task which, in turn, depends upon the “touch-sense detection algorithm” task for its inputs. It makes sense to schedule the “playback manager” task after the system-monitor and touch-sense detection tasks at the same execution frequency, provided the throughput criteria are met.
Figure 2: A Partial Data-Flow Diagram of our USB MP3 Player System
The tasks can be prioritized by staggering their execution across two passes of the scheduler loop. The higher-priority task should be scheduled in the first pass, and the lower-priority task later. This aspect will become clearer in the “Task Scheduling” section of this article.
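A minimal sketch of such staggering, with hypothetical task names: both tasks run at the same overall frequency, but each pass stays short because only one of them executes per pass, and the higher-priority task always takes the first (even) pass.

```c
#include <stdint.h>

static uint32_t g_highRuns = 0;
static uint32_t g_lowRuns  = 0;

/* Stubs for a higher- and a lower-priority task. */
void Task_HighPriority(void) { g_highRuns++; }
void Task_LowPriority(void)  { g_lowRuns++;  }

/* Staggering: alternate passes of the loop run alternate tasks,
   balancing the CPU load across passes. */
void Scheduler_Run(uint32_t passes)
{
    for (uint32_t pass = 0; pass < passes; pass++)
    {
        if ((pass & 1) == 0)
            Task_HighPriority();   /* first pass of each pair */
        else
            Task_LowPriority();    /* second pass of each pair */
    }
}
```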
Intertask Communication
Let us recap here that our logical unit of functionality is a state machine, and thus we are interested in the communication between two independently scheduled state machines. A task is only a set of function calls, related or unrelated.
Communication between state machines in cooperative multitasking is handled exclusively by global variables. This is because all task switching is deterministic, so there is no risk of simultaneous access to a shared resource. Since we have this robustness by design, global variables are used for intertask communication.
There are, of course, many types of data communication that need to be handled in a multitasking system. Since many dependent tasks run asynchronously, we need “flags” to indicate the availability of inputs and the population of data structures.
We need “error flags/codes” to communicate anomalies in one subsystem to another. We need “buffers” that hold data collected by one state machine but can be used by another task. “State variables” hold information about tasks’ states, which is handy for other tasks that delegate jobs to the former.
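These constructs can be sketched with a producer/consumer pair built from a global buffer and a ready flag. The names and the 8-sample averaging step are illustrative assumptions; because task switching only happens between calls, the consumer can never observe the buffer half-filled.

```c
#include <stdint.h>

#define ADC_SAMPLES 8

/* Globals are safe here: control only transfers between task calls,
   so a flag and its buffer are never seen half-updated. */
static uint16_t g_adcBuffer[ADC_SAMPLES];
static uint8_t  g_adcCount     = 0;
static uint8_t  g_adcReadyFlag = 0;        /* producer -> consumer */
static uint16_t g_lastAverage  = 0;

/* Producer task: stores one hypothetical ADC sample per call. */
void Task_TouchDriver(uint16_t sample)
{
    g_adcBuffer[g_adcCount++] = sample;
    if (g_adcCount == ADC_SAMPLES)
    {
        g_adcCount     = 0;
        g_adcReadyFlag = 1;                /* signal: buffer populated */
    }
}

/* Consumer task: does nothing until the producer raises the flag. */
void Task_TouchDetect(void)
{
    if (!g_adcReadyFlag)
        return;

    uint32_t sum = 0;
    for (uint8_t i = 0; i < ADC_SAMPLES; i++)
        sum += g_adcBuffer[i];
    g_lastAverage  = (uint16_t)(sum / ADC_SAMPLES);
    g_adcReadyFlag = 0;                    /* acknowledge the data */
}
```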
In our MP3 player example, the touch-sense subsystem reads the output of the capacitive pads through our “capacitive touch-sense driver.” This is done by using a buffer for each touch pad to store the Analog-to-Digital Converter (ADC) values. The “touch-sense detection” state machine shares a set of flags for each capacitive touch pad, and these are constantly read by the “system monitor.”
Let us say the user presses the “next track” touch pad. The touch-sense detection state machine analyses the ADC buffers from the lower layers of the subsystem and sets the touch-pad flags to be used for the upper layer, e.g. system monitor.
The system monitor, in turn, sets the “open next file” flag for the playback manager, as well as the “set new filename” flag to be used by the “display manager.” We see that the display manager gets the currently played file name in the “updated file name” buffer.
The display manager depends upon the playback manager for this data and needs to follow the playback manager in execution; it is thus of lower priority. The file content is again pumped out to the MP3 ASIC through a buffer. This is the gross data flow on a single button press. We also see some hardware action executed by the system monitor, perhaps to enable an analog switch or multiplexer.
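The flag chain just described can be sketched as follows. All flag, buffer, and file names are hypothetical; note that calling the tasks in data-flow order (system monitor, then playback manager, then display manager) lets the whole chain complete in one scheduler pass.

```c
#include <string.h>

/* Flags and a buffer along the "next track" chain (names illustrative). */
static int  g_nextPadTouched   = 0;   /* touch detection -> system monitor */
static int  g_openNextFileFlag = 0;   /* system monitor  -> playback mgr   */
static int  g_newFilenameFlag  = 0;   /* playback mgr    -> display mgr    */
static char g_filenameBuffer[16];

void Task_SystemMonitor(void)
{
    if (g_nextPadTouched)
    {
        g_nextPadTouched   = 0;
        g_openNextFileFlag = 1;       /* delegate to the playback manager */
    }
}

void Task_PlaybackManager(void)
{
    if (g_openNextFileFlag)
    {
        g_openNextFileFlag = 0;
        strcpy(g_filenameBuffer, "TRACK02.MP3");  /* hypothetical next file */
        g_newFilenameFlag = 1;        /* tell the display manager */
    }
}

void Task_DisplayManager(void)
{
    if (g_newFilenameFlag)
        g_newFilenameFlag = 0;        /* would now write g_filenameBuffer out */
}
```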
In a well-designed cooperative multitasking system, all transitions of control are predetermined, and we therefore do not need to worry about synchronization issues. Thus, we do not have to employ costly synchronization constructs like pipes, messages and semaphores. These complex features are realized using inexpensive and simple flags, buffers, and queues.
Task Scheduling
Now that all our tasks are identified and all the communication variables are in place, it is time to schedule the tasks and execute them. The purpose of the scheduler is to execute a task at a known moment in time with a known periodicity. At the basic level, a scheduler is an infinite loop calling each task one after another. However, when we need to balance the CPU load and prioritize our tasks, things get interesting.
In this second part of the three-part article series, we studied how a system is broken down into its component subsystems, as well as how we identify the periodicity and order of execution of the components. We also saw how the components communicate with each other.
In the next and final Part 3 of this series, we will tie all of this together with a scheduler. We will also look at some methods to analyze our system and derive the CPU throughput used, deal with task/state machine scheduling and diagnostics, and provide some tips and tricks.
To read Part 1, go to “Building a simple FM player.”
As Senior Applications Engineer for Microchip Technology's Advanced Microcontroller Architecture Division, Ganesh Krishna is group leader for Microchip's graphics product portfolio, including display drivers, graphics and display libraries; manages peripheral libraries for PIC18 and PIC24F microcontrollers; and develops reference designs and performs benchmarking. Ganesh earned his Bachelor of Engineering degree in Electronics and Communication from Sri Jayachamarajendra College of Engineering, belonging to Vishveshwaraiah Technological University (VTU).