This “Product How-To” article focuses on how to use a certain product in an embedded system and is written by a company representative.
PC-compatible industrial computers are increasing in computing power at a rapid rate thanks to the availability of multi-core microprocessor chips, and Microsoft Windows has become the de facto software platform for implementing human-machine interfaces (HMIs).
PCs are also becoming more reliable. With these trends, the practice of building robotic systems as complex multi-architecture, multi-platform systems is being challenged. It is now becoming possible to integrate all the functions of machine control and HMI into a single platform, without sacrificing performance and reliability of processing.
Through new developments in software, we are seeing industrial systems evolving to better integrate Windows with real-time functionality such as machine vision and motion control. Software support to simplify motion control algorithm implementation already exists for the Intel processor architecture.
Motion control algorithms
Motion control algorithms are generally implemented using (from simplest to most complex): PID (Proportional Integral Derivative) equations, IIR (Infinite Impulse Response) filters, or MRAC (Model Reference Adaptive Control) algorithms. PID and IIR are probably the most common, due to their relative simplicity in analysis and implementation.
A single-axis controller will probably be implemented using one of these filters. MRAC (aka MRAS) algorithms are far more complex; they are generally used to deal with difficult-to-characterize systems and may employ PID or IIR algorithms for the actual controller portion of the system.
In the simplest case, the math is one-dimensional, meaning simple “difference equations” that involve performing a series of multiplications and additions (and possibly divides and subtracts) on a sequence of sampled data inputs and, depending on the algorithm, prior outputs to produce a control output value that is sent to a control actuator (thus closing the control loop).
The number of data points, or “taps,” processed by the algorithm is a function of the “order” of the controller, or the complexity of the filter, and might involve a small array of current and prior input and output samples (where the inputs are typically the measured positions and/or velocities, etc.) or even 10s or 100s of taps.
For example, an application may need to “pre-filter” the sampled input data stream to remove unwanted noise, requiring that the control algorithm work with many data points (i.e., a large input vector) in a pre-filter to generate a decimated sequence of “smoothed” input data points for the controller, or perhaps to synthesize a velocity signal from sampled position data.
In complex controllers, especially coordinated multi-axis systems, the control calculations can become more cumbersome than a few simple multiplications and additions; the math can turn into matrix arithmetic, involving matrix multiplications and inversions.
Architecture Support for the Algorithms
In the past, the high-speed math that is required to implement the above described algorithms quickly enough to generate smooth, complex motion profiles has often required a special processor, typically a digital signal processor (DSP).
Algorithms such as those described above, which perform repetitive operations on large amounts of data, are particularly well-suited to DSPs (Figure 1, below). There are drawbacks to using DSPs for motion control, however, including hardware cost and system/software complexity.
|Figure 1. Many legacy industrial systems are built from multiple processing platforms, each with its own processor architecture, sometimes including expensive, hard-to-program DSPs|
A DSP is poorly suited to general purpose computing tasks. Since DSPs are generally not supported by general purpose operating system (GPOS) functions, they typically cannot support a user interface or easy access to file and network I/O, without expensive custom programming and interfacing.
A DSP's instruction repertoire is focused on providing functions designed to quickly execute basic mathematical functions on multiple operands. DSP instruction sets typically include a set of very fast multiply-and-accumulate (MAC) instructions or Single Instruction Multiple Data (SIMD) instructions that perform matrix math evaluations for algorithms that are used frequently in machine vision and complex motion control applications.
Because DSPs don't do general purpose computing well, typically they are used as imaging or motion control co-processors connected to general purpose processors in hybrid-architecture systems.
A more cost effective way of building systems, enabled by the advent of multi-core general purpose processors that incorporate SIMD instructions with DSP-like functionality in their instruction sets, is to build industrial control systems using a single architecture.
The SIMD instructions that comprise much of the code in DSP applications have an analog in many modern general-purpose processors; on Intel Architecture processors these instructions are known as the Intel Streaming SIMD Extensions (Intel SSE) instructions.
Like a DSP, these instructions perform mathematical operations very efficiently on large arrays of data. Unlike with a DSP in a co-processor arrangement, a general-purpose processor that supports SIMD is capable of integrating general application algorithms with the complex mathematical algorithms as part of a unified logical execution stream in a single processor.
A Library for Industrial Control
Some general purpose processor vendors go beyond simply providing support for complex math functions in the silicon. For example, the Intel Integrated Performance Primitives (Intel IPP) library (Table 1, below) is a collection of functions optimized for Intel Architecture SIMD instructions. The library takes full advantage of the latest processors' support for the Intel SSE instructions.
Segments of the Intel IPP library that are directly applicable to robotics applications, for example, include the library's signal processing functions and matrix math operations as well as other arithmetic functions.
Motion systems might also make use of the FFT (Fast Fourier Transform) functions or other transform functions to perform on-line analysis of the system under control, in order to modify and adapt the control strategy as the system runs.
|Table 1. Some of the function groups found in the Intel IPP library that are directly applicable to robotic and industrial control applications|
In essence, the Intel IPP library gives developers a way to use the specialized SIMD instructions found in Intel Architecture processors to improve the performance of their vector and matrix math operations without having to learn how to use these instructions directly.
Programmers can delve into the world of Intel SSE and use the instructions directly, but for maximum performance these instructions require programming in assembly language, making applications more difficult to implement and maintain.
The Intel IPP library also provides a way to easily adapt application software to newer platforms. Successive generations of processors continue to incorporate new SIMD instructions (first there was MMX, followed by multiple generations of Intel SSE; next up are the wider Intel Advanced Vector Extensions, or Intel AVX, SIMD instructions).
The library is designed to automatically adapt to the specific architecture of the CPU that the application code is running on. It can automatically determine which processor is being used, at run time, and then use the optimal set of SIMD instructions for that processor, which makes moving forward with successive generations of processors easier.
In order to maximize utility and performance, the Intel IPP library is “thread safe,” meaning that it can be safely called from multi-threaded applications. Even more important, the library does not use mutexes or software locks to achieve this thread safety, so it is well-suited to applications that run on a real-time operating system (RTOS). Generally, the best multi-threading performance is achieved by managing threading within the application and operating system, rather than relying on the library to do it.
Real-Time OS Support Evolves with the Architecture
Though the PC is becoming the de facto standard architecture for industrial control systems, PCs running the Windows OS alone do not make ideal core platforms for building industrial control systems. Windows must be augmented by software that enforces the integrity of real-time control functions.
A real-time operating system is required for deterministic control. In essence, determinism means that a computer program will always execute at the precise time that it is needed by the application, regardless of what else is going on. For example, a machine that does closed-loop control of a motion operation, say a robot arm, must know at all times where the arm is located.
If the control program needs precise position data and it's not immediately available, it might tell the arm to move when it shouldn't. Though the speed of the events being controlled is a critical factor in selecting a real-time solution, speed of the process is not the main factor that drives the need for determinism. The key requirement for determinism is reliability: a machine must respond within an absolute, bounded response time to events in its environment or productivity and safety will be compromised.
Real-time operating systems are sometimes called “machine-directed” because they give a prioritized structure to responding to real-world events. In contrast, desktop computer operating systems such as Windows are sometimes called “human-directed” because the computer's response time to things that happen at human speeds (such as mouse clicks) is not as time critical.
Whereas machine-directed real-time operating systems are ideal for coordinating industrial control functions, human-directed operating systems are often best for implementing operator interfaces or diagnostic displays. Windows, for example, is supported by a wealth of software that can be used to develop HMIs (Human Machine Interfaces).
The optimal platform for factory systems is one that combines support for real-time functions with support for human-directed application software. Often this means running two (or more) operating systems at the same time on the same computer, for example a real-time operating system such as TenAsys' INtime alongside Microsoft Windows.
Hosting multiple operating systems on a single platform poses many challenges for the system software developer. In the world of office servers, the solution is to employ a symmetric multiprocessing (SMP) operating system, a single operating system environment that runs over multiple processors on a single platform.
In industrial control applications, the problem is more complex, because operating systems of different types (real-time and human-directed) need to be combined onto a single platform. The deterministic response of the RTOS must be assured. The solution is to employ “embedded virtualization.”
Embedded virtualization involves selective virtualization of a processor's I/O resources to ensure that the response to time-critical events is not delayed by the task-scheduling latency of one of the OS environments. For example, in TenAsys' new eVM (embedded Virtual Machine Manager), each RTOS is given exclusive control of its own I/O as required by the real-time application(s), and the Intel Architecture's built-in hardware mechanisms prevent Windows task scheduling from impacting the response to time-critical events.
Cost Savings from Combining Platforms
So, using the appropriate hardware/software combination, one multi-core platform can replace what would otherwise require multiple platforms dedicated to different pieces of the application (e.g., a vision subsystem, a motion subsystem, and a control/HMI subsystem, all with their own processors, memory, disks, power supplies, etc.).
Combining these multiple platforms into one saves costs and maximizes application performance by allowing machine builders to dedicate real-time and human-directed portions of the application to their own processor cores.
A trend in this regard is for multi-core industrial system platforms to run the real-time OS on separate processor cores alongside the Windows OS. By combining real-time and non-real-time elements on the same computing platform, using multi-core processors, machine builders can decrease system costs by reducing the number of computers in the system. Figure 2 below shows a typical robotic system incorporating machine vision. The motion control algorithms are run on one core, the machine vision algorithms on another, and the HMI and data logging and process control functions are supported by other cores.
|Figure 2. Using new multi-core processors with real-time operating systems and support for embedded virtualization enables the PC platform to support the performance and determinism requirements of high-end robotic systems.|
Such a system (i.e., a computing platform with Windows and a real-time OS) gives machine builders the ultimate platform to build anything they want. It's optimized for performance, price and scalability.
Because the PC platform is well-supported with software and development tools, designers can easily implement motion control, vision, and HMI functionality with off-the-shelf software packages such as Intel's IPP library, Microsoft's Visual Studio, and control software packages such as ProConOS, ISaGRAF, CoDeSys, and LabVIEW, combined with the designers' own proprietary code, to provide functionality beyond what is supported by dedicated PC add-in cards.
Paul Fischer is a technical consulting engineer with Intel Corporation for the Intel IPP library. He has nearly 30 years' experience applying software to real-time and embedded systems in a variety of engineering and marketing roles. Fischer has an MS in Engineering from UC Berkeley and a BSME from the University of Minnesota.
Kim Hartman is VP of Sales & Marketing at TenAsys, which has served the embedded market with hardware analysis tools and RTOS products for 25 years. Kim has recently been a featured speaker for Intel and Microsoft on the topic of embedded virtualization. He is a Computer Engineering graduate of the University of Illinois, Urbana-Champaign and holds an MBA from Northern Illinois University.