
Sympathetic Algorithms

Don Morgan

March 15, 2001


Controlling a motor without sensors sounds hard, but it can be done. One approach is to run a sympathetic software model of the system.

Last month, I wrote about sensorless motion control as it applied to stepper motors, as well as to more complicated applications that make use of permanent magnet motors. Recall from my previous column that a stepper motor requires no position sensor. Its rotation is divided into a multiplicity of poles (and complete electrical cycles), and the windings are energized in sequence so that the rotor steps incrementally from pole to pole and electrical cycle to electrical cycle. Because of this structure, we always know where a stepper motor is relative to its starting point, as long as it misses no steps. However, the stepper motor is a specialized motor whose torque, velocity, and acceleration may not meet the needs of every application.

As an alternative, we can make use of a three-phase brushless permanent magnet motor's back electromotive force (back-EMF). As I wrote last month, back-EMF is a voltage generated by the interaction between the rotor flux and the stator windings when a motor's shaft is turned. To achieve rotation in this manner, we detect when the motor's back-EMF crosses zero and wait for a delay corresponding to a fixed electrical angle at the current speed of rotation before commutating. Unfortunately, this scheme only works well while the motor is turning. If it isn't turning, we can't detect back-EMF because none is being generated. Each of these techniques involves specialized and, to some extent, expensive motors. But what if we wanted to use the induction motor, the cheap and rugged workhorse of the industry?

Field-oriented control

Several issues back ("Field-Oriented Control," September 2000, p. 179), I presented an algorithmic technique known as field-oriented control. This technique allows us to view the currents that control the motor torque from a frame of reference fixed to the rotor rather than to the stator, thus avoiding some very complex math and control issues. From the point of view of this frame, we have two current vectors: Id and Iq. Id is the direct-axis component, which produces the flux, while Iq is the quadrature-axis component, which produces the torque. These two vectors are at 90 degrees to one another, with the magnitude of the Iq vector determining the torque in the system.

Of course, the flux in a permanent magnet motor is fixed, so it is really only necessary to control Iq to generate accelerations and velocities. In an induction motor, Id must be created as well, because the rotor is an electromagnet and needs to be driven. By means of some relatively simple mathematical manipulations, the currents are translated between the constantly moving stator frame and the rotor frame, so that commutation cycles can be generated, as well as the current necessary to drive the three phases. In a common amplitude-invariant form, the general equation is:

$$\begin{bmatrix} I_d \\ I_q \end{bmatrix} =
\begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} \tfrac{2}{3} & -\tfrac{1}{3} & -\tfrac{1}{3} \\ 0 & \tfrac{1}{\sqrt{3}} & -\tfrac{1}{\sqrt{3}} \end{bmatrix}
\begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix}$$

The formula above is the product of two others:

$$\begin{bmatrix} i_\alpha \\ i_\beta \end{bmatrix} =
\begin{bmatrix} \tfrac{2}{3} & -\tfrac{1}{3} & -\tfrac{1}{3} \\ 0 & \tfrac{1}{\sqrt{3}} & -\tfrac{1}{\sqrt{3}} \end{bmatrix}
\begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix} \qquad (1)$$

and:

$$\begin{bmatrix} I_d \\ I_q \end{bmatrix} =
\begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} i_\alpha \\ i_\beta \end{bmatrix} \qquad (2)$$

Equation 1 converts the currents in the individual phases to an equivalent two-axis system, but leaves them changing rapidly over time. Equation 2 is a rotation matrix that uses the angle θ fed back from the encoder, thereby moving the currents from the frame in which they rotate to the frame in which they appear fixed.
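In C, these two transforms amount to only a few lines. The sketch below assumes the amplitude-invariant scaling used above and hypothetical names for the phase currents (ia, ib, ic) and the encoder angle (theta); it illustrates the technique rather than any particular product's implementation.

```c
#include <math.h>

/* Clarke transform (Equation 1): three phase currents to the
   two-axis (alpha-beta) stationary frame, amplitude-invariant scaling. */
static void clarke(double ia, double ib, double ic,
                   double *i_alpha, double *i_beta)
{
    *i_alpha = (2.0 / 3.0) * ia - (1.0 / 3.0) * (ib + ic);
    *i_beta  = (ib - ic) / sqrt(3.0);
}

/* Park transform (Equation 2): rotate alpha-beta into the rotor frame
   using the electrical angle theta fed back from the encoder.          */
static void park(double i_alpha, double i_beta, double theta,
                 double *id, double *iq)
{
    *id =  i_alpha * cos(theta) + i_beta * sin(theta);
    *iq = -i_alpha * sin(theta) + i_beta * cos(theta);
}
```

In a field-oriented control loop, these calls run once per current-loop update; the resulting Id and Iq are nearly constant values that ordinary PI controllers can regulate.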

This system simplifies brushless and induction motor control. In fact, it is one of the most popular methods of motor control in use today; it would be difficult to find a major motion control card that did not have some version of this technique (and PID) available. As simple as field-oriented control is, though, it depends heavily on the feedback from the encoder. What if that feedback is missing? In some applications, certain sensors are either undesirable or impractical. In motion control, this could mean that no reliable mechanism exists to get the data to the controller because the motor is in a problematic location, or that the cost of adding the sensor would be unreasonably high. With the application of certain techniques, we can still make such a system work. What I will briefly describe can be applied to any motor we have discussed (as well as to numerous other applications), and it will work at standstill as well as at full velocity and acceleration.

An observer

Say we have a known system, but we don't have access to the internal data we need to control it. Intuitively, we might ask: if we can't measure those internal values directly, perhaps we can calculate them? Sure, if the system is linear, why not?

We can set up a sympathetic system, or algorithm, alongside the real one to accept the same inputs and generate an output based on our knowledge of how the real system works, and then use the calculated variables to control the real system. This sympathetic system is called an observer.

In most systems, many internal states are of interest. If some internal state variables are not available, you must calculate them.

The observability of a set of unknown variables depends on whether or not their values can be uniquely determined from a given set of constraints, which can be expressed as equations involving functions of those variables. If they can, the unknown variables are said to be observable.

The observer receives the same data as the real system and calculates the internal states based on its model of that system. The starting condition of the real system is usually unknown, but we can compare a calculated output with the measured output vector and use the difference to correct the model. This particular form of observer is called a Luenberger observer.
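As a sketch of the idea, suppose the model is a discrete-time state-space system x[k+1] = A·x[k] + B·u[k] with output y[k] = C·x[k]. The observer runs the same model on the same input and adds a correction proportional to the difference between the measured and the predicted output. The two-state system and the gain L below are illustrative assumptions; in practice L is chosen by the designer (by pole placement, for example).

```c
#define NSTATES 2

/* One update of a Luenberger observer for a two-state system:
   xhat <- A*xhat + B*u + L*(y_measured - C*xhat)               */
static void observer_update(const double A[NSTATES][NSTATES],
                            const double B[NSTATES],
                            const double C[NSTATES],
                            const double L[NSTATES],
                            double u, double y_measured,
                            double xhat[NSTATES])
{
    /* Predicted output from the current estimate. */
    double yhat = C[0] * xhat[0] + C[1] * xhat[1];
    double err  = y_measured - yhat;

    double xnew[NSTATES];
    for (int i = 0; i < NSTATES; i++) {
        xnew[i] = A[i][0] * xhat[0] + A[i][1] * xhat[1]   /* model dynamics    */
                + B[i] * u                                /* same input        */
                + L[i] * err;                             /* output correction */
    }
    xhat[0] = xnew[0];
    xhat[1] = xnew[1];
}
```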

The Luenberger observer, however, has a major drawback that can make it impractical for a particular system: it depends heavily on the precise setting of the parameters and the precise measurement of the output vector. Any disturbances (noise) in the measurements, parameter differences, or internal noises (which can include such things as different timing on the power stage of the amplifier as opposed to the model) can make the observer unusable. This is where the Kalman filter comes in.

Kalman filters

Kalman filters employ statistics to estimate a system's internal states from its inputs and its measured outputs, even in the presence of data they consider noise.

Many people believe that the Kalman filter is one of the greatest discoveries of the twentieth century. It is used to control complex dynamic systems such as continuous manufacturing processes, aircraft, ships, and spacecraft. It has even been used for predicting the likely courses of dynamic systems that people are unable to control, such as the flow of rivers during floods, the trajectories of celestial bodies, or the prices of traded commodities.

The Kalman filter provides a means for inferring the missing information from indirect (and noisy) measurements. It can be shown that no other linear function of the inputs and outputs gives a smaller mean square estimation error; if all the noises (random variables) are Gaussian processes, no estimator of any kind does better. The optimal estimate from noisy data can be obtained by the method of least squares.
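A one-state (scalar) filter shows the predict-and-correct cycle with a minimum of clutter. The model assumed below is x[k+1] = a·x[k] + b·u[k] plus process noise of variance q, measured directly with noise of variance r; the structure and names are mine, chosen for illustration.

```c
/* State of a one-dimensional Kalman filter. */
typedef struct {
    double x;   /* state estimate               */
    double p;   /* estimate error variance      */
    double a;   /* state transition coefficient */
    double b;   /* input coefficient            */
    double q;   /* process noise variance       */
    double r;   /* measurement noise variance   */
} kalman1;

/* One predict/correct cycle given the input u and the noisy measurement y. */
static double kalman1_update(kalman1 *k, double u, double y)
{
    /* Predict: propagate the estimate and its uncertainty through the model. */
    double x_pred = k->a * k->x + k->b * u;
    double p_pred = k->a * k->p * k->a + k->q;

    /* Correct: weight the measurement by how much we trust it (the gain). */
    double gain = p_pred / (p_pred + k->r);
    k->x = x_pred + gain * (y - x_pred);
    k->p = (1.0 - gain) * p_pred;

    return k->x;
}
```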

Method of least squares

We often want to approximate a process or system without knowing all the details of the black box that we're examining. What we usually have are discrete points generated by the system. These are measurements. To approximate the actual system, we come up with an equation or system of equations that can generate the same results based on the same input. Of course, without knowing all the details of the system we are analyzing, a high-quality approximation is not likely.

As so often happens in signal processing and engineering, the basis for our assumptions will be the Weierstrass approximation theorem: let f(x) be any function continuous on the closed interval [a,b]. Then, for any ε > 0 there exists an integer n = n(ε) and a polynomial Pn(x) of degree at most n such that |f(x) − Pn(x)| < ε for all x in [a,b]. In words, we can achieve any desired maximum error if we're willing to approximate the actual system with a polynomial of sufficiently high order, n.

The simplest statistical approximation is probably the straight line. It is useful to start there in order to review some of the necessary terminology. Suppose we have a data set consisting of measurements, made at regular intervals (all offset from zero). Now we want to generate a polynomial to approximate these same points, but we know that our effort will not be exact. Even if the polynomial is allowed to extend to any order we might choose, some differences will always exist. These differences are called residuals.

Now, by definition, the principle of least squares states that, of all polynomials of a given degree, we should select the one for which the sum of the squares of the residuals is least. We organize our data set into a set of points (tm, xm), where tm represents our sampling instants and xm the data taken at those instants. Figure 1 shows our example data set. We will begin by fitting a straight line to the data. A straight line is given by:

$$x(t) = a + b\,t$$

Rewriting that to capture the differences between its results and the actual measurements, and summing their squares, gives:

$$S = \sum_{m}\bigl(x_m - (a + b\,t_m)\bigr)^2$$
In this formulation we are taking the sum of the squares of the differences between the results of our straight-line approximation and the real, measured events. If our straight-line model is a good fit, the differences will be minimal, especially if we are neglecting any noise that might arise from processing and measurement. Unfortunately, unless our data set happens to be linear, a straight line will not be adequate to approximate it. That would mean we had chosen the wrong model for our data set. This brings us to one of the major challenges of observers: developing a good model.
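Minimizing that sum with respect to a and b leads to the familiar normal equations, which a small C routine can solve directly. The function and variable names below are illustrative.

```c
/* Fit x(t) = a + b*t to n samples (t[m], x[m]) by least squares.
   Returns 0 on success, -1 if the points do not determine a line. */
static int fit_line(const double t[], const double x[], int n,
                    double *a, double *b)
{
    double st = 0.0, sx = 0.0, stt = 0.0, stx = 0.0;

    for (int m = 0; m < n; m++) {
        st  += t[m];
        sx  += x[m];
        stt += t[m] * t[m];
        stx += t[m] * x[m];
    }

    double denom = n * stt - st * st;   /* zero if all t[m] are identical */
    if (n < 2 || denom == 0.0)
        return -1;

    *b = (n * stx - st * sx) / denom;   /* slope     */
    *a = (sx - *b * st) / n;            /* intercept */
    return 0;
}
```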

System models

In order to control a dynamic system, we need to know what it is doing. When we know what it is doing and can describe it, we can model it. This usually involves describing the physical plant or system using one or more differential equations. These differential equations lead to the transfer function of the system.

In many cases, it is possible to derive a closed form for our description. In this case, however, we wish to use this model to estimate, predict, or smooth. So it is to our advantage to create a model that we can use iteratively.

In our previous example, we had data points. It might be possible to accumulate an infinite number of data points, but it is often more reasonable to construct a means of representing the data so that we get the same results with a minimum of overhead. As a result, we build models that we can easily cast in recursive form. What does that mean?

If we start accumulating data to compute a mean, we could do this:

$$\bar{x}_1 = x_1, \qquad \bar{x}_2 = \frac{x_1 + x_2}{2}, \qquad \bar{x}_n = \frac{1}{n}\sum_{i=1}^{n} x_i$$

We could continue this way forever, storing every sample and summing them all again at each step. But we get the same results if we do this:

$$\bar{x}_n = \bar{x}_{n-1} + \frac{1}{n}\left(x_n - \bar{x}_{n-1}\right)$$
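In C, the difference between the two forms is easy to see: the batch version must keep (and re-add) every sample, while the recursive version needs only the previous mean and a count. The names below are illustrative.

```c
/* Batch form: recompute the mean from every stored sample. */
static double mean_batch(const double x[], int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += x[i];
    return sum / n;
}

/* Recursive form: fold the newest sample into the previous mean.
   'n' is the total number of samples including x_new.            */
static double mean_recursive(double prev_mean, double x_new, int n)
{
    return prev_mean + (x_new - prev_mean) / n;
}
```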
Let's see how this works with a simple model. We will try an RC circuit. An example network is shown in Figure 2.

By inspection, we can see that the current flowing in the capacitor is going to be equal to the voltage across the resistor divided by the resistance:

$$i_C = \frac{V_{in} - x_0}{R}$$

where Vin is the applied input voltage and x0 is the capacitor voltage. The capacitor voltage will, therefore, be the integral of the capacitor current divided by the capacitance:

$$x_0 = \frac{1}{C}\int i_C \, dt$$

We can model this circuit, then, with the differential equation:

$$\frac{dx_0}{dt} = \frac{V_{in} - x_0}{RC}$$
We calculate this model at regular intervals and integrate to produce x0. We can simulate the action of this RC network, but with a bonus: we find that we have access to the system variable x0.
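A minimal sketch of that model in C might integrate the differential equation with a forward-Euler step at each sample interval. The names (vin, r, c, dt) and the choice of Euler integration are my assumptions; any fixed-step method would do.

```c
/* Sympathetic model of the RC network: advance the capacitor voltage x0
   by one sample interval dt, given the applied input voltage vin.       */
static double rc_model_step(double x0, double vin,
                            double r, double c, double dt)
{
    double i_cap = (vin - x0) / r;   /* current into the capacitor      */
    x0 += (i_cap / c) * dt;          /* forward-Euler integration step  */
    return x0;                       /* the internal state is available */
}
```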

Next month, we will pursue modeling and recursive treatments of systems, with some ideas as to how observers and modeling might be applied to other forms of signal processing.

Don Morgan is a senior engineer at Ultra Stereo Labs and a consultant with 25 years experience in signal processing, embedded systems, hardware, and software. Morgan wrote a book about numerical methods, featuring multi-rate signal processing and wavelets, called Numerical Methods for DSP Systems in C. He is also the author of Practical DSP Modeling, Techniques, and Programming in C and Numerical Methods for Embedded Systems. Don's e-mail address is dgm@baykitty.com.

