The Lego NXT Mindstorms kit has been extremely popular; many schools and individuals own kits and third-party components. Even though the latest product is more capable, the ubiquity and low cost of the NXT Mindstorms components make them an attractive basis for experimental upgrades. In particular, we can replace the on-board computer with a new, powerful, very low-cost microcontroller, and we can replace the programming language with one used in the most demanding high-integrity applications, such as the flight management systems and full-authority digital engine controllers (FADECs) on commercial and military aircraft.
The NXT “brick” is the embedded computer system controlling the robots. It consists of an ARM microcontroller and an AVR co-processor. The brick enclosure provides an LCD screen, a speaker, Bluetooth, and four user buttons, combined with the electronics required to interface to the external world. The result is a convenient, effective package (Figure 1):
Figure 1. NXT Brick (Source: Lego.com)
Bricks are programmed with a graphical language intended for children learning to program a computer, but the lack of expressive power quickly becomes limiting for more sophisticated programs. Multiple mainstream programming languages are available from third-party providers and are widely used.
The brick’s microcontrollers present another limitation. The main processor is a 48 MHz ARM7, with 256 KB of FLASH and 64 KB of RAM. The 8 MHz AVR has 4 KB of FLASH and 512 bytes of RAM. In contrast, very inexpensive ARM evaluation boards, available from many vendors, are considerably more powerful and have more memory. The STM32F4 Discovery Kit from STMicroelectronics, for example, provides an ARM Cortex-M4 running at up to 168 MHz, with a megabyte of FLASH and 192 KB of RAM, for approximately $15. There are several Discovery Kits available, with varying amounts of memory and devices on-board. Other ARM vendors offer evaluation boards with similar capabilities.
In addition to a modern processor, the STM Discovery Kits include many on-package devices for interfacing to the external world, including A/D and D/A converters, timers, UARTs, DMA controllers, I2C and SPI communication, and others. Sophisticated external components are also included. For example, the STM32F4 Discovery Kit includes an accelerometer, an audio DAC and speaker driver, a digital microphone, four user LEDs, and a user pushbutton. The STM32F429I Discovery has a gyroscope and an LCD screen, as well as much more FLASH and RAM. All kits include extension headers to connect these devices to external hardware.
The Ada programming language is currently used in many safety-critical and high-integrity applications. If you have flown on a modern commercial airliner the chances are excellent that there was Ada in the critical software on-board. For that matter, Ada is used to develop the air traffic management systems controlling use of the airspace itself in both the United States and much of Europe.
Ada was available in the past as an alternative language for the Mindstorms brick, and at present GNAT implements Ada for newer ARM targets, including a freely available GPL edition that supports some STM Discovery Kits out of the box. Furthermore, the Mindstorms port implemented Ada 2005, whereas Ada 2012 is available now. The free toolchain and IDE can be downloaded from http://libre.adacore.com/.
Such attractive alternatives make a compelling case for replacing the NXT brick with a modern processor and programming language. This article is the first in a series exploring how to do just that. We will replace the brick with an STM Discovery Kit evaluation board and will use Ada 2012 to interact with the sensors and control the effectors.
We start the series by showing how to interface to the NXT touch sensor, including both the electronic circuit and the software device driver. Subsequent articles will show how to drive the NXT motors and the more complicated sensors, including third-party sensors and those requiring more advanced communication mechanisms. The source code and building instructions will be available on-line via a GitHub repository.
Replacing the Brick
With GNAT we can write application code for the target boards but we need drivers for the timers, UARTs, A/D and D/A converters, and other devices required to replace the NXT brick. The Ada Drivers Library (ADL) provided by AdaCore and the Ada community supplies many of these device drivers. The ADL supports a variety of development platforms from various vendors, although the initial – and currently most extensive – support is for the STM32 series boards. The ADL is available on GitHub for both non-proprietary and commercial use here: https://github.com/AdaCore/Ada_Drivers_Library.
Replacing the brick will also require drivers for the NXT sensors and motors – software that is not included in the ADL. We will illustrate development of these drivers in this article series.
NXT Sensor Support
The first sensor we will explore is the NXT Touch Sensor because it is the simplest to implement and yet highlights a number of issues. Other sensors require A/D conversion or advanced communication mechanisms that, although supported by the ADL, are more complex and therefore not ideal starting points. We will explore those in later articles.
External Circuit Required
The NXT Touch Sensor is simply an on-off momentary switch. A two-state input is handled as a “discrete” input to the processor, as opposed to an analog input that must be quantized. The General Purpose IO (GPIO) hardware on the board provides these inputs and outputs (both discrete and analog), and they are connected to the extension headers on the board.
An external electronic circuit is required to connect the sensor to the board. We can select one of the GPIO pins and connect the sensor to the corresponding header pin with a jumper wire. We will also require 3V power and a ground connection, both of which the headers supply.
The circuit will supply power to the sensor switch, with the MCU sensing the relative presence or absence of a voltage on the GPIO pin when the switch is pressed. When the sensed voltage is below a given threshold the signal is said to be “low” and the GPIO bit representing the discrete input is zero. A sensed voltage above the threshold is said to be “high” and in that case the representative bit is a one. The values “high” and “low” are known as “logic levels,” a term we will use in our software later.
Either level can be used to signal the discrete input being “on.” If the input is considered “on” when the signal is high, the input is said to be “active high.” On the other hand, if it is considered “on” when a low voltage is sensed, the input is said to be “active low.” Whether it is active high or active low depends on the circuit connecting the discrete input to the MCU, but the choice must be known to the software.
As you can see, the details of the circuit matter to the software interpreting the state of the input pin: merely detecting whether the pin is high or low is not sufficient to know whether the input is active. The details also matter for proper function and safety. Done wrong, the circuit will either yield an unreliable input or potentially damage the power supply. Let’s examine some possible circuits.
We could simply connect the input pin to ground with the switch in between, as shown in figure 2:
Figure 2. Noisy Circuit (Source: Pat Rogers)
Here, the GPIO input pin is connected to the normally-open switch S1. When S1 is pressed the input will be connected to ground and will show a low voltage. If we have specified that this input pin is active low, it will be considered active. Unfortunately, there will be a problem when the switch is not pressed. In that case there is no voltage at all on the input pin, neither high nor low, and so electrical noise in the system may be sensed on the input pin. As a result, the input could fluctuate between high and low, hence between active and inactive. This is clearly an unacceptable result.
We might tie the input pin to the supply voltage instead of ground. This circuit is shown in figure 3:
Figure 3. Short Circuit (Source: Pat Rogers)
Now the problem occurs when the switch is pressed. When S1 closes, the power supply will be shorted directly to ground. “Bad Things” can happen in that case, including burning out the power supply.
To prevent either of the above situations we must alter the circuit to prevent noise from affecting the input readings and to avoid short-circuiting the power supply. A properly sized resistor will do this. The choice is then whether we want the input to be active high or active low.
If we want the input to be active high, we put the resistor next to ground, with the switch next to the power supply. This circuit is shown in figure 4. The input pin will always be connected between the switch and resistor in these figures.
Figure 4. Active High Circuit (Source: Pat Rogers)
Now when the switch is pressed, the input will sense the supply voltage and will be active when configured as an active high input. The resistor will limit the current flow so that the power supply is not potentially damaged. When the switch is not pressed, the pin will be tied to ground so no stray noise will be sensed.
If we want to configure the input to be active low, we swap the positions of the switch and resistor, as shown in figure 5:
Figure 5. Active Low Circuit (Source: Pat Rogers)
In this case, when the switch is pressed the input will sense low voltage, so an active low input would be active as far as the software is concerned. When the switch is not pressed the input will sense the supply voltage. As an active low input it will then be considered inactive.
When the resistor is tied to ground it is known as a “pull-down” resistor. It “pulls” the input’s sensed voltage low. In contrast, when the resistor is tied to the supply voltage it is known as a “pull-up” resistor. It “pulls” the sensed voltage high. The discrete input circuitry within the MCU typically has internal pull-up and pull-down resistors for each pin, but these may not always suffice. The resistor selection is part of the software configuration of a GPIO pin.
The Lego NXT brick conveniently contains the required interfacing electronics within the brick itself. We are replacing the entire brick so we will have to provide the circuit, but for the touch sensor that means only that we must connect a resistor in a configuration that matches the logic level specified for the discrete input.
That covers the external circuit. Now let’s examine the software that implements the touch sensor.
Device Drivers Required
Before we look at the device drivers that will be required to work with the sensors, we should first introduce the constructs of the Ada language that are going to be referenced in that examination. This introduction will not be comprehensive by any means, but will serve to make the discussion understandable to those unfamiliar with Ada. Note that free learning material is available on the AdaCore website. See http://university.adacore.com/ for this material.
In Ada, “packages” represent static modules. Ada packages contain declarations for anything required: types, constants, variables, procedures, functions, tasks (threads), and so on. Packages support strong separation of interfaces from implementations because there are two distinct textual parts to them: the package declaration, known colloquially as the “spec,” and the package body. The package spec contains the declarations for entities intended to be made available to client code (i.e., other code making use of the code in question), whereas the body contains the implementations corresponding to those declarations. Moreover, the package spec and body provide compile-time visibility control: the compiler will not allow client access to the entities located in the package body.
In a very real sense the package body is like the private part of a C++ class. Similarly, the package spec is like the public part of a C++ class, except that it can be further divided into a part at the end into which clients do not have visibility. In Ada terms, this part at the end of the spec is known as the “private” part, but in C++ terms it is more like the protected part of a class because under certain circumstances other, directly related units do have visibility into it.
Recall that we said that packages are static modules. A C++ class serves that purpose but is also used to define types. In Ada we use explicit type declarations, distinct from the package construct, to define types. There are many kinds of types allowed: enumeration types, numeric types, array types, task types, among others. Perhaps the most important kind of type is the so-called “private type.” A private type in Ada is one in which clients do not have compile-time visibility to the representation and, therefore, cannot invoke representation-dependent operations. This is the classic “abstract data type” (ADT) that has been so influential in programming language design, including that of Ada, C++, and Java.
In Ada, to achieve compile-time visibility control for a type’s representation we combine the package construct with the type construct. Specifically, we declare the type in the visible part of the package spec but only give the full representation of the type in the package private part. Doing so allows clients to use the type, e.g., to declare variables, but does not allow them to access the internal representation of those variables. For example, consider the GPIO facility on the typical ARM board. There will usually be a number of GPIO “ports” defined, each containing sixteen pins. The GPIO port can be represented in Ada as a private type (simplified greatly for the sake of illustration):
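A simplified spec along those lines might look like the following sketch. The package name GPIO and type name GPIO_Port come from the discussion below; the pin type and the record contents are illustrative placeholders of our own, and the line numbers discussed next count every line of this listing, including blank ones.

```ada
package GPIO is

   type GPIO_Port is limited private;

   type GPIO_Pin is range 0 .. 15;

   --  Operations to drive and query the pins of a port:

   procedure Set (This : in out GPIO_Port;  Pin : GPIO_Pin);

   procedure Clear (This : in out GPIO_Port;  Pin : GPIO_Pin);

   function Set (This : GPIO_Port;  Pin : GPIO_Pin) return Boolean;

private

   type GPIO_Port is record
      Output_Data : Natural;  --  stand-in for the memory-mapped registers
   end record;

end GPIO;
```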
Line one contains the start of the package spec, for a package named “GPIO” in this case. (The actual package in the driver library has a slightly different name, as will be seen shortly.) Line three contains the partial declaration for the “GPIO_Port” private type. The declaration is located in the visible part of the package so clients have compile-time visibility to the type name. They can use that name in their client code to do whatever is defined for the type. However, clients do not have visibility to the type’s representation because that part occurs on lines seventeen through nineteen, in the private part of the package (which starts on line fifteen). There, the full type for GPIO_Port is a “record type” that corresponds closely to a struct in C++. Clients cannot treat an object of the type as a record, nor can they access the components within it, due to the visibility control afforded by the package.
In this case, and in many cases, allowing copying via the assignment operation does not make sense. In C++ we would hide the assignment operator declaration to prevent client calls. In Ada, we achieve that effect by adding the “limited” property to the type declaration on line three. The type is both limited and private.
The operations defined for the ADT are the procedures and functions declared in the visible part of the same package, in this case on lines nine and eleven (and thirteen). Along with many other capabilities, the package allows us to “set” and “clear” a port’s pins, i.e., to drive them high or low. Of course, we can also query whether they are set or cleared, configure them, and so forth.
Using Device Drivers
Now that we have established the basics of abstract data types in Ada, we can talk about the device drivers.
On any board there are often several instances of a given hardware device. For example, an MCU likely provides several hardware timers, a number of ADC and DAC devices, DMA controllers, UARTs, and so on. Each device typically consists of a collection of registers located at addresses defined by the MCU vendor. Abstract data types are ideal for representing hardware devices. In particular, we can define the representation and operations once, and then have as many variables of the type as there are hardware instances provided by the vendor. Each individual variable can be placed precisely at the address corresponding to the device’s location. As an ADT, the compiler ensures that clients do not access the registers directly. That way, if the driver is not working correctly, the only place the bug could be is in the package that defines the driver ADT, because only that package has the necessary compile-time visibility to operate on the registers.
Therefore, the Ada Drivers Library provides a large number of device drivers in the form of abstract data types. There are many other packages provided as well, at higher levels than the device drivers, but we will focus on the drivers.
The ADL provides a complete interface for the GPIO input/output pins as an ADT declared in package STM32.GPIO. (The “STM32” part of the package name indicates that the ADT is for devices in that MCU family by STMicroelectronics.) The term “pin” is used loosely here and throughout the literature. The term refers both to the general purpose input/output line seen by the MCU internally, and to the physical header pin connected to that I/O line. Moreover, these I/O lines are aggregated into GPIO ports, each port containing sixteen I/O lines. Different boards offer different numbers of GPIO ports, thus different numbers of total I/O lines. The combination of a port and an associated I/O line is represented by the type GPIO_Point in the ADL package. We simply refer to such a pair as a “pin” and are more precise only when necessary.
Given the GPIO package, we can sense whether a discrete input is high or low. However, we don’t have a notion for them being “active high” or “active low.” As we showed earlier, merely sensing the pin as high or low does not mean the switch is activated; the circuit determines that. In addition, mechanical switches typically “bounce” when actuated, meaning the sensed value fluctuates over some short interval before settling at the final new state. We will define a type that represents active high and active low discrete inputs, with “debouncing” routines included. The package declaration is as follows:
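Since the listing is not reproduced in this extract, here is a sketch consistent with the description that follows. The type and subprogram names are those used in the discussion; the context clauses, the expression-function bodies, and the Default_Debounce_Time variable are our reconstruction, and GPIO_Point and the Set query come from the ADL’s STM32.GPIO package.

```ada
with STM32.GPIO;     use STM32.GPIO;
with Ada.Real_Time;  use Ada.Real_Time;

package Discrete_Inputs is

   type Logic_Level is (Low, High);

   type Discrete_Input (Pin : access GPIO_Point;  Active : Logic_Level) is
      tagged limited private;

   procedure Initialize_Hardware (This : in out Discrete_Input);

   function Active_Indicated (This : Discrete_Input) return Boolean is
     ((This.Active = High) = Set (This.Pin.all));

   function Inactive_Indicated (This : Discrete_Input) return Boolean is
     (not Active_Indicated (This));

   Default_Debounce_Time : Time_Span := Milliseconds (75);

   procedure Await_Active
     (This          : in out Discrete_Input;
      Debounce_Time : Time_Span := Default_Debounce_Time);
   --  Polls until the input is continuously active for Debounce_Time

   procedure Await_Inactive
     (This          : in out Discrete_Input;
      Debounce_Time : Time_Span := Default_Debounce_Time);
   --  Same as Await_Active, but waits for the inactive state to
   --  persist for the full Debounce_Time interval

private

   type Discrete_Input (Pin : access GPIO_Point;  Active : Logic_Level) is
      tagged limited null record;

end Discrete_Inputs;
```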
This code adds a couple of wrinkles to the ADT concept described earlier. The first difference is that the type Discrete_Input is not only a private type, it is also a “tagged” type. That additional property indicates that full object-oriented programming (OOP) semantics are supported for the type, especially dynamic dispatching on the operations, full inheritance with extension, and run-time polymorphism. Absent the “tagged” property the type is just an ADT, without the run-time flexibility. Whether or not to make a type tagged is one of many design choices made when declaring types. The device driver ADTs are typically not tagged types because, in general, there is no need to support full OOP at that level. In contrast, the code that uses the drivers to provide higher-level abstractions typically does define ADTs as tagged types. For example, the pulse-width modulation (PWM) package defines a tagged ADT utilizing a low-level hardware timer. The timer ADT is not tagged. In our discrete input example we will utilize inheritance so we make the type tagged.
The second difference is that type Discrete_Input has two discriminants: one (an access value, essentially a pointer) for the GPIO_Point and one for the associated logic level (see line 8). Discriminants are a means to “parameterize” individual objects of a single type. When users of the package declare an object of the type, specific values for the discriminants must be specified. In that way we can share one type but have each object of that type be different in some way. To understand the rationale, consider a bounded buffer type: we want to define one buffer type but be able to have each buffer object have a different capacity. We don’t want the type definition to hard-code the capacity, we want to allow the users to choose, per object. Discriminants make that simple to achieve. Thus, when declaring a Discrete_Input object, clients can specify the pin and the logic level for that specific object, like so:
Foo : Discrete_Input (Pin => PB4’Access, Active => High);
GPIO ports are referenced with letters starting with ‘A’ and continuing up through the number of ports provided by the vendor. Pins are referenced numerically, ranging from 0 through 15. The port/pin pairs are commonly referenced with names that start with ‘P’ followed by the port letter and the pin number. Thus, for example, pin 4 on port B is referred to as “PB4.” The variable Foo is an active high discrete input reading GPIO “pin” PB4. The expression PB4’Access is analogous to &PB4 in C++ because it creates an “access value” designating the object PB4, but with checks preventing misuse (e.g., dangling references). An access value corresponds roughly to a pointer value in other languages, but at a higher level of abstraction. In particular, access values are safer because they are never invalid unless the programmer explicitly circumvents the language-defined checks.
GPIO pins must be initialized before use. We must set various configuration parameters because the GPIO pins are truly general: they can be inputs or outputs, analog or discrete, have differing speeds, and so on. The procedure Initialize_Hardware does all that configuration for us.
Next comes the function Active_Indicated. As the name suggests, this function returns a Boolean value showing whether or not the input is currently active. It is an “expression function” so the implementing expression is provided immediately. As you can see, it simply compares the pin’s logic level with the state of the pin. We should emphasize that this function is not debounced. The result could fluctuate over successive calls if the input is connected to a mechanical switch (as it will be for the touch sensor). That is why the function name includes the word “indicated” rather than suggesting a steady, continuous state. The complementary function Inactive_Indicated is identical except that it returns the opposite state indication.
To debounce inputs we define two procedures. One waits for the pin to be in the active state for the specified amount of time. The other does the same thing but waits for the inactive state. Both routines take a parameter specifying which discrete input to handle and a second parameter specifying the debounce time interval. This interval is the time in which the pin must be continuously in the given state in order to be considered settled. Both routines use polling, so they are not always appropriate. A similar approach using interrupts is possible but polling is simpler to illustrate.
The combination of a GPIO pin with a logic level is sufficient for implementing the semantics of discrete inputs. Therefore, in the private part of the package (lines 32 through 36) the full declaration of the type does not have any additional components (lines 34 and 35). Remember that in Ada, the private part of a package declaration controls compile-time visibility in a manner similar to the protected part of a class in C++.
Let’s examine the implementation of the two debouncing routines. In both cases the idea is to iteratively query the state of the pin to see if it is in the desired active or inactive state. Not only must the pin be in the target state, it must be in that state continuously throughout the specified interval before we consider the bouncing to have stopped. These requirements can be implemented as follows. This is actual source code, but for the moment consider it pseudocode and focus on the algorithm. We’ve left the comments in place this time because, in combination with the code, the algorithm becomes self-explanatory.
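The listing is not reproduced in this extract; a body implementing that algorithm might look like the following sketch. The generic unit’s name, Await_Stable_State, and the Settled flag are our own; In_Target_State and the Input formal are the names used in the discussion, and the timing services come from Ada.Real_Time.

```ada
procedure Await_Stable_State
  (This          : in out Input;
   Debounce_Time : Time_Span)
is
   Deadline : Time;
   Settled  : Boolean;
begin
   loop
      while not In_Target_State (This) loop
         null;  --  wait for the input to first indicate the target state
      end loop;
      --  The state must now persist, continuously, for the full interval
      Deadline := Clock + Debounce_Time;
      Settled  := True;
      while Clock < Deadline loop
         if not In_Target_State (This) then
            Settled := False;  --  the input bounced, so start over
            exit;
         end if;
      end loop;
      exit when Settled;
   end loop;
end Await_Stable_State;
```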
Even if one does not know Ada the intent should be clear. It is not a coincidence that Ada looks like pseudocode.
Note how the algorithm is independent of a specific target state, e.g., active high or active low. We could call either Active_Indicated or Inactive_Indicated on line 9. The code uses function In_Target_State to express that independence. Similarly, it doesn’t really matter what input type is involved (line 2). It need not be a discrete input. The type can be anything that has distinguishable states. Therefore, the code above is actually the body of a “generic procedure” that defines the algorithm but does not force specific types or functions. (Generic units serve the same general purpose as templates in C++ or generics in Java.)
Here is the declaration of the generic procedure. It has “generic formal parameters” to represent the specific type used (line 4), and the function to call (line 5) to determine when the current state is the required target state:
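A declaration matching that description might look like this sketch (the generic’s name is our own; the formal type, the formal function, and the two parameters are as described in the surrounding text):

```ada
with Ada.Real_Time;  use Ada.Real_Time;

generic
   type Input (<>) is tagged limited private;
   with function In_Target_State (This : Input) return Boolean;
procedure Await_Stable_State
  (This          : in out Input;
   Debounce_Time : Time_Span);
```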
This generic unit is a template for a procedure. Once made concrete, that procedure will take two parameters: one named “This” (line 7) of the type described on line 4, and the other a value of type Time_Span specifying the required debouncing interval (line 8). Thus, whatever type is specified to match line 4 is used as the type for the procedure parameter on line 7, whereas the type of the parameter on line 8 is fixed to a pre-existing type from package Ada.Real_Time.
As templates, generic units are not usable until “instantiated” into concrete units. That step specifies the client’s actual parameters for the generic formals. For example:
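An instantiation matching the description in the next paragraph might look like this sketch (the generic’s name is our own; Poll_For_Active, Discrete_Input, and Active_Indicated are the names used in the text):

```ada
procedure Poll_For_Active is new Await_Stable_State
  (Input           => Discrete_Input,
   In_Target_State => Active_Indicated);
```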
This instantiation creates a concrete procedure named Poll_For_Active (line 1) that works on type Discrete_Input (line 2) and calls the function Active_Indicated (line 3) within the procedure body. Within the generic template, in other words, the function is referred to as In_Target_State, but in the concrete instance it is the function Active_Indicated. The result is a procedure with a declaration like so:
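That is, the instance behaves as if it had been declared like this (our sketch of the resulting profile):

```ada
procedure Poll_For_Active
  (This          : in out Discrete_Input;
   Debounce_Time : Time_Span);
```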
Note that the procedure’s first formal parameter is of type Discrete_Input (line 7 in the generic unit declaration). Therefore, the body of Await_Active simply calls Poll_For_Active, passing the parameters through to the inner call:
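Such a call-through body might look like this sketch (Default_Debounce_Time is an assumed default from the package spec):

```ada
procedure Await_Active
  (This          : in out Discrete_Input;
   Debounce_Time : Time_Span := Default_Debounce_Time)
is
begin
   Poll_For_Active (This, Debounce_Time);
end Await_Active;
```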
Alternatively, you may have noticed that the parameters for the instantiated procedure are almost identical to those of Await_Active. Because of that similarity we can avoid having the body of Await_Active call Poll_For_Active and can, instead, have the entire implementation be a simple renaming of this new instance:
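A “renaming-as-body” achieving that might look like this sketch:

```ada
procedure Await_Active
  (This          : in out Discrete_Input;
   Debounce_Time : Time_Span := Default_Debounce_Time)
renames Poll_For_Active;
```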
This renaming in the package body completes the earlier declaration of Await_Active in the package spec, doing what a body would do otherwise. The names of the formal parameters are different and the second formal parameter has a default initial value not defined by the generic, but these differences do not cause difficulties. The names of the formal parameters don’t (necessarily) appear at the point of calls, and the default input value, by definition, does not appear there either. Thus, when a client calls Await_Active they are really calling Poll_For_Active, directly.
The body of Await_Inactive is exactly the same except for the target state, so it requires a separate generic unit instantiation in order to call the other function, i.e., Inactive_Indicated:
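That second instantiation might look like this sketch (again, the generic’s name is our own):

```ada
procedure Poll_For_Inactive is new Await_Stable_State
  (Input           => Discrete_Input,
   In_Target_State => Inactive_Indicated);
```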
The body of Await_Inactive is similarly a simple renaming of that instance of the generic.
We could have replicated the generic unit’s polling algorithm in each procedure, rather than using a generic, but this approach makes the implementation simpler. In addition to the productivity advantage, this approach is more robust as well: we only have to get the algorithm right once.
Finally, there is the implementation for procedure Initialize_Hardware that clients call to initialize the hardware pin connecting the switch to the MCU. That pin is a discriminant of the “This” parameter, specifically This.Pin, which is a reference to a GPIO_Point port/pin pair the user specified when declaring the Discrete_Input object. To repeat the object declaration example:
Foo : Discrete_Input (Pin => PB4’Access, Active => High);
Foo is an object that could be passed to Initialize_Hardware. We will also access the Active discriminant in the procedure body, as follows:
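A body consistent with the walk-through in the next two paragraphs might look like this sketch. The routines and configuration names (Enable_Clock, GPIO_Port_Configuration, Mode_In, Pull_Up, Pull_Down, Speed_100MHz, Configure_IO) are drawn from the ADL’s STM32 packages; the exact speed choice is illustrative.

```ada
procedure Initialize_Hardware (This : in out Discrete_Input) is
   Config : GPIO_Port_Configuration;
begin
   Enable_Clock (This.Pin.all);

   Config.Mode      := Mode_In;
   Config.Resistors := (if This.Active = High then Pull_Down else Pull_Up);
   Config.Speed     := Speed_100MHz;
   Configure_IO (This.Pin.all, Config);
end Initialize_Hardware;
```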
First, we must enable the clock for the pin because it is not powered unless explicitly used (line 4). The “.all” in the parameter denotes dereferencing, so the effect is to pass the designated GPIO_Point value to the procedure. For the object Foo, that would be PB4.
Next, we use a type named GPIO_Port_Configuration which is declared in the STM32.GPIO package (line 2). This type is a record type with member components specifying the required configuration. Config is an object of that type. On line 6 we specify that this pin is an input to the MCU. On line 7 we choose between the internal pull-up and pull-down resistors dedicated to each pin. The discriminant This.Active controls the choice. On line 8 we arbitrarily chose a relatively high speed for accessing the pin. Finally, on line 9, we call the routine that performs the configuration for This.Pin using the selections in Config.
Now we have an ADT representing discrete inputs with “active high” and “active low” semantics, along with operations that manipulate objects of the type. Our implementation of the NXT touch sensor is based on inheritance from that ADT.
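A spec consistent with the description in the following paragraphs might look like this sketch. The package name NXT_Touch_Sensors is our own; Touch_Sensor and the operation names are those used in the discussion, and the Await_* routines carry the same assumed default debounce interval as the parent package.

```ada
with STM32.GPIO;       use STM32.GPIO;
with Ada.Real_Time;    use Ada.Real_Time;
with Discrete_Inputs;  use Discrete_Inputs;

package NXT_Touch_Sensors is
   type Touch_Sensor (Pin : access GPIO_Point;  Active : Logic_Level) is
      tagged limited private;

   --  Debounced operations:
   procedure Await_Pressed
     (This          : in out Touch_Sensor;
      Debounce_Time : Time_Span := Default_Debounce_Time);

   procedure Await_Released
     (This          : in out Touch_Sensor;
      Debounce_Time : Time_Span := Default_Debounce_Time);

   procedure Await_Toggle
     (This          : in out Touch_Sensor;
      Debounce_Time : Time_Span := Default_Debounce_Time);
   --  Waits for a complete press/release sequence

   procedure Initialize_Hardware (This : in out Touch_Sensor);

private

   type Touch_Sensor (Pin : access GPIO_Point;  Active : Logic_Level) is
      new Discrete_Input (Pin, Active) with null record;

end NXT_Touch_Sensors;
```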
The type Touch_Sensor has the same two discriminants (line 6) as type Discrete_Input, for the same reasons, and clients will be required to specify their values when declaring objects.
The package provides three procedures to manipulate Touch_Sensor objects (lines 10 through 21). The package also provides a procedure to initialize the hardware that interfaces a touch sensor device to the MCU (line 23). The implementation of this procedure (shown in the package body) merely calls through to the routine from the Discrete_Inputs package.
As we said earlier, a touch sensor is just a switch and, therefore, uses a discrete input to interface to the MCU. We said we would use inheritance, from the Discrete_Input type to a new Touch_Sensor type, but there is more than one way to use inheritance. If we use “interface inheritance,” clients will be able to take advantage of the types’ relationship by applying the parent type’s operations to objects of the new type. That may or may not be appropriate. On the other hand, if we use “implementation inheritance,” clients cannot take advantage of the relationship. Only the implementation of the new type will be able to invoke the parent type’s operations. (We are using the term “parent type” informally here. It is the same notionally as “superclass” or “base class” in other languages.)
The determining question, then, is whether clients should be able to apply Discrete_Input operations to Touch_Sensor objects. We think not. A touch sensor is a physical thing that can be pressed or released; the fact that it interacts with the MCU via discrete inputs doesn’t imply that clients should be able to manipulate them as such. That means we want to use implementation inheritance. (Minimal interface content is always a good thing, to the extent possible, because clients will depend on it. In practice there is no such thing as a temporary interface.)
In Ada, we select the kind of inheritance by choosing where in the package we express the inheritance relationship from the parent type to the new type. If we express the inheritance relationship in the visible part of the package, the inherited operations will be visible too. The parent type’s operations are part of the new type’s interface. That would achieve interface inheritance. Alternatively, if we only express the inheritance relationship in the private part of the package, the inherited operations are not visible to clients. Only the private part and the package body will have visibility to the parent type’s operations. That would achieve implementation inheritance.
Therefore, type Touch_Sensor is not visibly derived from type Discrete_Input (lines 6 and 7); there is no expressed inheritance relationship visible to clients. Only the full declaration (lines 27 and 28), in the private part hidden from clients, expresses the relationship between the two types.
Declarations within the package body do have visibility to the inherited operations, so the procedure bodies can call them, as shown below on lines 5, 11, and 24:
The body of Await_Toggle is a convenience routine: it simply calls the other two procedures defined in the same package. On line 24, the parent version of the initialization routine is called.
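A matching sketch of the package body, under the same naming assumptions, might look like this. The inherited function Active is a placeholder for whatever Discrete_Input operation reports the input's state, and the parent's initialization routine is reached through a view conversion to the parent type:

```ada
package body NXT.Touch_Sensors is

   procedure Await_Pressed (This : in out Touch_Sensor) is
   begin
      loop  --  simplified polling; the real code would also debounce
         exit when Active (This);  --  hypothetical inherited function
      end loop;
   end Await_Pressed;

   procedure Await_Released (This : in out Touch_Sensor) is
   begin
      loop
         exit when not Active (This);
      end loop;
   end Await_Released;

   procedure Await_Toggle (This : in out Touch_Sensor) is
   begin
      --  A convenience routine: it simply calls the other two procedures.
      Await_Pressed (This);
      Await_Released (This);
   end Await_Toggle;

   procedure Initialize (This : in out Touch_Sensor) is
   begin
      --  The package body has visibility to the inherited operations, so
      --  we can call the parent version of the initialization routine.
      NXT.Discrete_Inputs.Initialize
        (NXT.Discrete_Inputs.Discrete_Input (This));
      --  ... touch-sensor-specific setup, if any, would follow here
   end Initialize;

end NXT.Touch_Sensors;
```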
That is all that is required for a touch sensor interface. This overall approach provides a number of advantages, including code reuse, separation of concerns, and especially simplicity. Each unit and its implementation is small and simple, and can be understood individually in terms of the other units’ interfaces.
Now let’s put it all together into a demonstration program, shown below. We will use the STM32F429I Discovery Kit for this program because it has a convenient LCD. The package LCD_Std_Out (referenced in the “context clause” on line 4) provides routines to write text to the LCD as if it were a console. Programs using the Ada Drivers Library refer to the supported target boards via a package named “STM32.Device,” so that is the package referenced in the context clause on line 3. These context clauses make the specified units available for reference within the unit applying the clauses. They are somewhat like import clauses in Java and, to a lesser extent, #include directives in C and C++.
After the program clears the LCD and initializes the external button interfacing hardware (lines 16 and 17), it reduces the default debounce time (line 19), which is referenced indirectly via the body of the Toggle routine. That reduction is just to make the button a little more responsive, and to illustrate the intended usage. We print a prompt on the LCD (line 22) and then print how many times the touch sensor has been toggled, looping forever. The short blank string appended to the end of the output (line 26) ensures we clear the last few letters of the prompt.
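The main procedure just described might look roughly like the following sketch. The discriminants on the Button declaration and the debounce routine’s name are assumptions for illustration; the line numbers cited in the text refer to the actual listing, not to this sketch:

```ada
with Ada.Real_Time;      use Ada.Real_Time;
with LCD_Std_Out;        use LCD_Std_Out;
with NXT.Touch_Sensors;  use NXT.Touch_Sensors;
with STM32.Device;       use STM32.Device;

procedure Demo_Touch_Sensor is
   --  PB4 and the active-low level match the circuit of Figure 5;
   --  the discriminant names here are hypothetical.
   Button : Touch_Sensor (Pin => PB4'Access, Active => Low);
   Count  : Natural := 0;
begin
   Clear_Screen;
   Button.Initialize;
   Button.Set_Debounce_Time (Milliseconds (50));  --  hypothetical routine
   Put_Line ("Toggle the sensor");
   loop
      Button.Await_Toggle;
      Count := Count + 1;
      --  the trailing blanks overwrite the last letters of the prompt
      Put_Line (Natural'Image (Count) & "   ");
   end loop;
end Demo_Touch_Sensor;
```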
To run the demonstration program you will need both a target board and one of the NXT touch sensors, as well as one of the interfacing circuits discussed above. All you really need for the circuit is a breadboard, a suitable resistor, and some jumper wires. Use the white wire coming out of the sensor connector for the sensor output, and either the black or red wire for connecting the sensor to ground. Figure 6 illustrates the connections, the sensor, and the target board running the demonstration program.
Figure 6. Breadboard and System (Source: Pat Rogers)
In Figure 6 we have attached jumper wires from the board’s 3V and ground connection header pins. These are the red and black wires, respectively. The green wire is the discrete input connection to PB4. Current flows into the resistor and then to the sensor switch, with the discrete input connected in between. Hence this is an active-low circuit (i.e., the one depicted in Figure 5), and the source code reflects that fact in the declaration of the Touch_Sensor object (line 11). If we change the circuit on the breadboard to be active-high, we must also change the logic level specified on line 11; otherwise the button will not behave as expected.
Building and Running the Program
You can build and run the executable on the command line via these commands:
- gprbuild -P demo.gpr -p
- arm-eabi-objcopy -O binary obj/debug/demo_touch_sensor obj/debug/demo_touch_sensor.bin
- st-flash write obj/debug/demo_touch_sensor.bin 0x8000000
Line one builds the entire project. Line two converts the executable to a binary image suitable for downloading to the board. Line three downloads the converted image starting at the address indicated. Note that the generated executable will be located in a subdirectory of the “obj” directory. By default that subdirectory is “debug,” but a “production” subdirectory is also possible. The choice reflects the builder switches applied. You control those switches, and the corresponding subdirectory, via a “scenario variable” named “PLATFORM_BUILD” defined by the project files. To override the default, specify the value of the scenario variable when building:
gprbuild -P demo.gpr -XPLATFORM_BUILD=Production -p
Now steps two and three above would reference the files in the “obj/production” subdirectory instead. The “-p” switch tells the builder to create any missing directories.
Doing all this is much easier using the IDE, as is debugging. The GPL toolchain includes the IDE (named “GPS”) and we have provided the full demonstration project. There is a tutorial for using GPS on the same website used to download the toolchain itself, but the gist of it is to invoke GPS on the specified project:

gps -P demo.gpr
Then, within GPS, use the icon to download to the board. That icon will invoke an action that first builds the project, if necessary, and automatically converts the image format. (If you get a warning in GPS saying that a directory is missing, it is one of these “obj” subdirectories. You can ignore it. A build will create it, depending on the value of the scenario variable.)
Changing the Target Board
We mentioned that the package “STM32.Device” represents the target board. To be precise, the package represents the MCU. It is necessary here because it provides access to the GPIO pin. There is another package in the ADL, “STM32.Board,” that represents the actual target board in use. That package makes the external devices directly available as well, but this main procedure does not require that access.
The level of indirection provided by these package names makes it easy to use different MCUs and boards. For example, we could change the demonstration program to use the STM32F4 Discovery Kit but would still refer to it as “STM32.Device.” If we did change to that target board we could not use the LCD because that board does not have one attached. It does have several LEDs though, so we could toggle an LED whenever the touch sensor is toggled, or indicate it in some other way.
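For instance, a minimal F4 version might look like this sketch. Initialize_LEDs, Green_LED, and Toggle follow the Ada Drivers Library’s STM32.Board conventions for that kit, and the Touch_Sensor declaration is assumed to be configured as before:

```ada
with STM32.Board;        use STM32.Board;  --  the F4 Discovery's user LEDs
with NXT.Touch_Sensors;  use NXT.Touch_Sensors;

procedure Demo_Touch_Sensor_F4 is
   Button : Touch_Sensor;  --  assume PB4, active-low, as in the LCD version
begin
   Initialize_LEDs;
   Button.Initialize;
   loop
      Button.Await_Toggle;
      Toggle (Green_LED);  --  indicate each toggle on an LED instead of the LCD
   end loop;
end Demo_Touch_Sensor_F4;
```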
Source Code Availability
The full, buildable project for this article is available here: https://github.com/AdaCore/Robotics_with_Ada. The next article explores how to drive the Lego NXT motors, among other things. We will update the project content for each article as each becomes available.
Dr. Patrick Rogers has been a computing professional since 1975, primarily working on microprocessor-based real-time applications in Ada, C, C++ and other languages, including high-fidelity flight simulators and Supervisory Control and Data Acquisition (SCADA) systems controlling hazardous materials. Having first learned Ada in 1980, he was director of the Ada9X Laboratory for the U.S. Air Force’s Joint Advanced Strike Technology Program, Principal Investigator in distributed systems and fault tolerance research projects using Ada for the U.S. Air Force and Army, and Associate Director for Research at the NASA Software Engineering Research Center. Dr. Rogers is the head of the U.S. Technical Advisory Group (TAG) to ISO/IEC JTC1/SC22 Working Group 9, the group that is responsible for the definition and evolution of the Ada language. He has B.S. and M.S. degrees in computer systems design and computer science from the University of Houston and a Ph.D. in computer science from the University of York, England. As a member of the Senior Technical Staff at AdaCore, he specializes in supporting real-time/embedded systems developers, creates and provides training courses, and is a developer of the bare-board products for Ada.