Sensor fusion: Of sea captains, gyroscopes, and thermal cameras

September 29, 2015

agvaniya

We can think of a naive model where we integrate each axis along the time domain, thus obtaining a single angle for each gyro. Knowing three orthogonal angles provides us with full three-dimensional orientation. Tracking this orientation can help us keep the drone stable by detecting when the orientation begins to change in an unwanted direction and fixing it by adjusting power distribution in the motors to get back to the orientation we want to maintain.
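The naive model above can be sketched in a few lines of Python. This is a hypothetical illustration rather than flight code: the 100 Hz sample rate and the per-axis degree units are assumptions.

```python
DT = 0.01  # assumed sample period in seconds (100 Hz)

def integrate_gyro(samples, dt=DT):
    """Naive model: integrate each axis's angular velocity over time,
    obtaining a single accumulated angle per gyro axis.

    samples: iterable of (wx, wy, wz) angular velocities in deg/s.
    Returns [roll, pitch, yaw] angles in degrees, starting from zero.
    """
    angles = [0.0, 0.0, 0.0]
    for wx, wy, wz in samples:
        angles[0] += wx * dt  # rotation around x
        angles[1] += wy * dt  # rotation around y
        angles[2] += wz * dt  # rotation around z
    return angles
```

One second of a constant 10 deg/s roll rate, for example, integrates to roughly a 10-degree roll angle.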

To summarize:

Assuming θ to be an orientation angle, ωθ to be the angular velocity around the corresponding axis, and t to be time, in our simple model

ωθ(t) = dθ(t)/dt
therefore making our “tracking model” the good old motion equation

x(t) = x0 + ∫₀ᵗ v(τ) dτ
or in our case (since we deal with angles)

θ(t) = θ0 + ∫₀ᵗ ωθ(τ) dτ
This model is very simplistic and does not account for second-order effects. As one example of a serious problem: the gyro is not a precise instrument, so each and every sample we read from it contains a tiny error e(t). The error is negligible in itself, but when something negligible gets integrated, bad things happen. Assuming the true angular velocity remains constant over a small period of time, the equation now becomes:

θ(t) = θ0 + ∫₀ᵗ (ωθ + e(τ)) dτ = θ0 + ωθ·t + ∫₀ᵗ e(τ) dτ
In other (human) words, we integrate the error function e(t) or (simply put) accumulate the error over time! In this way, a small error becomes a big error and is responsible for a property of a gyro known as a “random walk.”
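We can watch this happen by simulating a gyro that is perfectly still (true angular velocity of zero) but whose samples carry small Gaussian noise. The noise level, sample rate, and seed below are arbitrary assumptions for illustration:

```python
import random

def drift_after(n_samples, dt=0.01, noise_std=0.05, seed=42):
    """Integrate pure measurement noise: the true angular velocity is zero,
    yet the accumulated angle wanders away from zero -- a random walk."""
    rng = random.Random(seed)
    angle = 0.0
    for _ in range(n_samples):
        angle += rng.gauss(0.0, noise_std) * dt  # each sample's tiny error
    return angle
```

Even though every individual sample error is tiny, the integrated angle is nonzero and its expected magnitude keeps growing with integration time.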

To recap, a gyro can be a very precise mechanism over a short period of time, but it becomes increasingly unreliable as time goes by. Several methods can be used to overcome this. One (which I will only mention briefly, because it deviates from our subject) is to use the raw angular velocity without integrating it at all, simply trying to keep it at zero across all axes and thus keeping the drone stable. Another, more interesting method compensates for the random walk periodically by introducing a secondary source of information, one that is less precise but more stable over long periods of time; in other words, it relies on sensor fusion.

Depending on the sophistication of the drone (I’ll pretend the boy in our story got his hands on a very serious piece of equipment for his amusement, so I hope you don’t mind), we can easily think of at least two very stable sources of information which could be harnessed here. One is our good old friend gravity, measured in the form of a three-dimensional vector: a shiny arrow that always points downwards and has an approximate magnitude of 9.8 meters per second squared. The other is Earth’s magnetic field, which (unlike gravity) tends to shift over time, but so slowly that for our purposes we can consider it stationary; this second friend is essentially the red and blue arrow of a compass pointing to magnetic north. (To be fair, relying on a compass is fairly difficult because of its sensitivity to any sort of magnetic disturbance or to the presence of ferromagnetic substances.) Harnessing one or both of these sources gives us a reference against which to reduce the accumulated error. The device which measures linear acceleration is known as an accelerometer; our little drone will have three of those as well, one for each Cartesian axis.

Now, assume that in a stable horizontal position the x and y axes read zero acceleration and the z axis reads 9.8 meters per second squared downwards [Earth’s gravitational acceleration, or (0, 0, -9.8) in vector form]. We can then calculate the angular deviation from that value whenever the drone assumes a different orientation. If we were to use the magnetic compass to supplement our measurement system, we would read the angle at which our drone is oriented towards magnetic north and find the angular deviation from that initial orientation whenever the drone starts to drift.
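As a sketch of that calculation, roll and pitch can be recovered from a static accelerometer reading with two arctangents. The signs below are chosen to match the article's convention that level flight reads (0, 0, -9.8); the exact axis convention of a real sensor is an assumption here.

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate roll and pitch (radians) from a static accelerometer
    reading (m/s^2), assuming a level drone reads (0, 0, -9.8).
    Yaw cannot be recovered from gravity alone -- that is what the
    compass is for."""
    roll = math.atan2(ay, -az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch
```

A level reading yields zero roll and pitch, while a drone rolled by 0.5 rad reads gravity partly on its y axis and the formula recovers that angle.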

Together, these three inputs can be fused in several different ways—catching moments where angular velocity (the one that is unaffected by summation error) measured by the gyro is approximately zero, solving orientation equations based on accelerometer and compass readings, and finally resetting the nominal orientation of the drone to the freshly calculated angles (the new θ0, if you will). Of course, this is all a very simplistic approach, but it is enough for our demonstration purposes. (A much more complicated and robust fusion system would be based on the famous Kalman filter—introducing all the sensor readings together with different weights and trying to perform an optimization based on the covariance matrix.)
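The fusion loop just described (wait for a quiet gyro, take the orientation solved from the reference sensors, reset θ0 to it) can be sketched minimally. The quiet threshold and the single-axis treatment are illustrative assumptions; a real system would handle all three axes and typically blend rather than hard-reset.

```python
GYRO_QUIET = 0.02  # rad/s; assumed threshold below which we treat the drone as not rotating

class FusedAngle:
    """Minimal sketch of reset-style fusion for one axis: integrate the gyro
    normally, but when the gyro reads ~zero, snap the angle to the
    accelerometer/compass-derived reference (the new theta_0)."""

    def __init__(self):
        self.theta = 0.0

    def update(self, omega, dt, reference_angle):
        if abs(omega) < GYRO_QUIET:
            self.theta = reference_angle  # discard accumulated random-walk error
        else:
            self.theta += omega * dt      # ordinary gyro integration
        return self.theta
```

A Kalman filter replaces the hard reset with a weighted blend of all readings, using the covariance matrix to decide how much to trust each sensor at each step.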

Page 2 of 4
