The real world is messier than most engineers like to admit. When we work exclusively on the digital, either in hardware or software, we sometimes forget that the analog world never behaves in exactly the same way twice. I was taught this lesson the hard way, while trying to model the behavior of a mechanical brake for a piece of precision physical therapy equipment.
A brake is nothing more than a couple of metal slabs connected to a shaft: one fixed slab presses on the other, limiting its movement (and thereby that of the shaft). The amount of force applied during braking can be controlled fairly precisely; in our system, it is set by an electrical signal. However, the resulting torque on the shaft is difficult to predict accurately.
In the real world, friction and other physical effects count for a lot. Individual brakes have large frictional differences; they heat up as they are used, which changes surface characteristics; and the surfaces wear with use as well.
Building a software model to allow us to control the amount of torque required to “slip” any brake to within the required 0.5 lb-in turned out to be an elusive goal. Even with all the variables at your disposal, the analog world is unlikely to produce a consistent result every time. (Even when we controlled all the variables in our system, we found variations in the slip torque up to 10 times greater than our required precision.) In the end, a far more sensible use of engineering time was to develop a closed-loop control algorithm for the electrical signal driving the brake, based on the torque applied by the user and the shaft's velocity at each instant.
The digital is merely a model for the analog, and models are never perfect.
Keeping these crucial lessons in mind, we learn to solve such difficult problems by designing the software to adapt to the real world as it changes. Then all the designers need to know at the outset is the range of values the system may encounter and what decision to make for each.
Artificial intelligence takes this approach to its extreme. Far from enabling robots to laugh and cry, artificial intelligence is simply a system for making dynamic decisions. We provide the system with a database of known facts, and it learns other facts as it operates. By applying a priori rules, the system can make decisions based on its current knowledge and environment. Embedding artificial intelligence is the subject of an article we'll feature next month. In the meantime, consider this…
When we simulate a complex system, imperfections in the model are accepted as a trade-off. Some systems, like the helicopter described in Jim Ledin's article, are simply too expensive or dangerous to use during the early stages of software development. The purpose of the model is then to get us through those early stages, until the software is “safe” to test on the real hardware. The model, therefore, need not always be 100% correct to be useful.
We can best deal with differences between the digital and analog worlds by paying close attention to them. If we don't, our systems may fail to adapt correctly to their environment.