Engineers can slip into the habit of using the same algorithms to tackle whatever problems are presented to them. A bit of lateral thinking can lead to more interesting and sometimes better solutions. It's not always easy to look at something in a different way, but exercising your brain in this manner can be fruitful for your project (and keep your brain young, too). Read on and see if these anecdotes help you to think outside the proverbial box.
Spin like a top
In a Dr. Dobb's Journal article on rotating graphical images, Robert Grappel discussed the challenge of producing an image that is a rotation of a map already in memory.1 A little mathematical magic with sine and cosine lets us map the position of a pixel to its new position, and once the new position has been established the color from the original pixel can be copied there. Repeat this operation for all pixels in the image and you've rotated the image.
There's a catch, however. Rotating a pixel position by some angle may result in a new position that's not a whole number. But physical displays are made up of pixels in discrete positions, so any fractional result must be rounded to the nearest pixel in both the horizontal and vertical axes. Most of the time this works fine, but occasionally two adjacent pixels will map to the same new pixel after rounding. That's not a big problem: whichever color is copied last remains in the final image, and the pixels are so close together that the effect won't be visible on a high-resolution display. The bigger problem is that when two pixels map to one, some nearby pixel in the resulting image gets no color information at all. These “holes” can give the final image a speckled appearance.
You can deal with this in a number of ways. One is to find any such holes and fill them by giving them the average color of their surrounding pixels. Another is to perform the entire mapping at double the resolution of the physical screen and then derive every dot from the average of the four dots in the memory map. Either approach is complex and processor-intensive.
Let's look at the problem from the other direction. Instead of rotating each original pixel by +R degrees, let's rotate each final pixel by –R degrees. We know the complete set of final pixels, since they all lie in the area of the resulting image. In most cases, every pixel in the display area of the image will be part of the result. If we take each of these pixels and rotate it backwards, we'll find the pixel that would have been its originator, and we can copy the color from that pixel. We still have to round the floating-point result of the rotation to the nearest pixel, and two adjacent pixels of the result may occasionally map to the same pixel of the original image. Again, on a high-resolution image that doubling up won't be visible to the eye.
Because we've performed our operation for every pixel of the result, we know that all pixels in the result will now be populated. Our result will have no holes.
That was a perfect example of solving a problem backwards. Ask yourself, “If I had the answer, would I be able to discover the question?” Other mathematical challenges are amenable to this sort of approach. Finding the square root of a number is fiendishly complex. If you already had the answer, however, checking it by squaring it is a simple case of multiplying it by itself. So if we guess at the answer we can check it. If the guess is too large, the result of squaring it is larger than the original value. We can iteratively improve our guesses as we get the result of each trial.
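That guess-and-check loop fits in a few lines. Here's a sketch in C using bisection; the function name and iteration count are my own choices for illustration:

```c
/* Working backwards: computing sqrt(v) directly is hard, but checking a
   candidate answer is just one multiply. Keep a bracket [lo, hi] around
   the true root and halve it on each trial. */
double sqrt_by_guessing(double v)
{
    double lo = 0.0, hi = (v > 1.0) ? v : 1.0;

    for (int i = 0; i < 60; i++) {      /* 60 halvings: ample precision */
        double guess = (lo + hi) / 2.0;
        if (guess * guess > v)          /* guess too large: square overshoots */
            hi = guess;
        else
            lo = guess;
    }
    return (lo + hi) / 2.0;
}
```

Each squaring tells us only whether the guess is too big or too small, yet that's enough to converge on the answer.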
In an earlier article on watchdogs I discussed the appropriate approach to picking the timeout period for a watchdog timer in an embedded system.2 Most programmers start with an approximation of the time the system takes to go around its main loop. They multiply this approximation by some factor to allow for a variation in the loop time, and the result is the watchdog timeout period. The engineer is starting with the shortest possible timeout and working out from that.
My advice is to stay away from this approach and instead consider what would happen with a very long timeout. Some systems need to reset if they hang, but a delay of several minutes might not be a problem. Other systems have a shorter requirement because they have to reset before the system has time to do physical damage. On a robot arm the time to swing the arm far enough to cause damage may be just a few dozen milliseconds. The timeout, therefore, should be chosen based on that period to ensure the system will reset and enter a safe state before the arm damages the goods on the assembly line–or the assembly line workers. It's better to choose the timeout based on the external impact of the system, rather than starting with the system's internal performance. Engineers tend to be technology-driven and that makes them more inclined to think from the inside out.
Whenever you're trying to establish a design limit, as well as asking how small that limit can be, remember to look in the other direction, in case the answer at the opposite extreme is more useful.
Spell it backwards
In my misspent youth writing games for the ZX Spectrum I recall reading about the developers at Psion in the UK. They wrote a version of the board game Scrabble for a machine that had only 48K of memory for the code and the large dictionary required for the game.
Most human players look at the letters in their set and, seeing a word or a selection of letters that is almost a word, search the board for a home for that word. At Psion they tried this approach for the algorithm that plays against the human player, but later turned it on its head. Instead of starting with the letters available to the player, the algorithm started with the board and looked for every word that could possibly fit anywhere. Once that relatively small list was established, it checked whether any of those words could be completed with the letters in the player's set.
This sounds like a backwards way of doing things, but it has its advantages. As letters are used and replaced, the set available to the player changes after every turn. This means any data calculated for the letters in the player's possession is useless for the following turn. Only a small portion of the board changes at each turn, however, so the majority of possible words found in the unchanged board area remain valid for the next turn. Moreover, information about legal words that fit the board is useful to both the human and computer players, so it can be used for the “suggest-a-move” feature for the human player.
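The final filtering step can be sketched as follows. This is my own illustrative C, not Psion's code; assume each board slot has been flattened into a pattern string in which '.' marks an empty square and lowercase letters are tiles already played:

```c
#include <string.h>

/* A candidate word is playable in a slot if every fixed square matches it
   and every empty square can be filled from the player's rack. */
int can_play(const char *word, const char *pattern, const char *rack)
{
    char avail[26] = {0};
    for (const char *r = rack; *r; r++)
        avail[*r - 'a']++;                   /* count rack letters */

    if (strlen(word) != strlen(pattern))
        return 0;
    for (size_t i = 0; word[i]; i++) {
        if (pattern[i] != '.') {
            if (pattern[i] != word[i])
                return 0;                    /* clashes with a board tile */
        } else if (avail[word[i] - 'a'] > 0) {
            avail[word[i] - 'a']--;          /* spend a rack letter */
        } else {
            return 0;                        /* rack can't supply this letter */
        }
    }
    return 1;
}
```

The expensive work of finding words that fit the board survives from turn to turn; only this cheap rack check has to be redone.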
In a similar spirit of starting from the wrong end, some new e-mail spam filters may be based on algorithms that detect mail that is genuine rather than applying rules to detect spam. Given the proportion of e-mail traffic on the Internet that is now spam, that might be an advantageous way to approach the problem.
If you calculate a checksum for a block of data, you can then check the data for validity at the other end of a transmission. The receiver, however, needs a copy of the checksum to validate the data. The checksum could be sent separately, but then how do you know the checksum itself was sent without errors? You'd need another checksum to check the checksum, and the infinite recursion of this approach makes the original message vanish in a puff of Cartesian logic.
The alternative is to place the checksum inside the original packet. As long as it's always in the same position, the receiver can extract it and compare that value to the newly calculated checksum.
A far more elegant solution is encapsulated in the Internet checksum algorithm.3 If we could decide in advance what the receiver's checksum calculation should produce, and make that answer the same for all packets, the receiver would never need to locate the checksum within a specific packet. The Internet checksum algorithm does just that. The complement of the one's complement sum of all the 16-bit words in the packet is placed into a predefined location within the packet. Placing that value there modifies the packet in such a way that the one's complement sum of the whole packet is guaranteed to be all ones, or 0xFFFF in hexadecimal.
So the receiver has only to calculate the one's complement sum, and if it's equal to 0xFFFF the packet is valid. Making the answer constant and varying the “question” means the receiver doesn't even need to know which position within the received packet contains the checksum.
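Here's a sketch of that scheme in C, following RFC 1071. Even-length packets are assumed for brevity, and the helper names are mine:

```c
#include <stdint.h>
#include <stddef.h>

/* One's complement sum of 16-bit words, per RFC 1071. */
static uint16_t ones_sum(const uint16_t *words, size_t n)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < n; i++) {
        sum += words[i];
        sum = (sum & 0xFFFF) + (sum >> 16);   /* fold carry back in */
    }
    return (uint16_t)sum;
}

/* Sender: zero the checksum slot, sum the packet, store the complement. */
void add_checksum(uint16_t *pkt, size_t n, size_t slot)
{
    pkt[slot] = 0;
    pkt[slot] = (uint16_t)~ones_sum(pkt, n);
}

/* Receiver: sum everything, checksum included. A valid packet always
   sums to 0xFFFF, so the receiver never needs to know where the
   checksum field lives. */
int packet_ok(const uint16_t *pkt, size_t n)
{
    return ones_sum(pkt, n) == 0xFFFF;
}
```

The sender varies the "question" (the checksum field) so that the "answer" (the receiver's sum) is always the same constant.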
In some cases starting with the answer can lead to innovative cheating. I won't comment on whether the following techniques are ethically justifiable, but they do show that the engineers were exercising their right brain.
Compiler vendors know that they're going to be tested against certain benchmarks, in particular the Whetstone test, so they use those same benchmarks to test their compilers in-house. Some have taken this a step further by putting code in the compiler that recognizes the source of the Whetstone test and optimizes it based on foreknowledge of the exact nature of the benchmark. Precalculated results can be output without performing the calculations the benchmark was meant to measure. An external tester using the official Whetstone source code might not realize he was being duped and was getting the correct result a lot faster than he should.
A similar trick has been used to pass a water-ingress test required for the CE mark. The test involves allowing a predefined amount of water to drip slowly onto the product, which passes if it still functions correctly after the water has landed on it and, presumably, rolled down the front, rear, or sides of the unit. One design shaped the top lid of the device as a recessed tray, deep enough to hold exactly the volume of water specified by the test. This product would pass the test, but the response of the system might have been very different had the test run longer and the tray overflowed. This is definitely a case of obeying the letter, but not the spirit, of the law.
Every fault is a fashion
Engineers spend a lot of time trying to get rid of unwelcome artifacts in their data. Consider noise on the signal from a sensor. We can use ferrites and twisted-pair wiring to reduce that noise, but at some point you don't want to reduce it any further. A low level of noise can actually improve the accuracy of the final result: with a little noise present, the signal averaged over a period of time is more accurate than a noise-free signal would be.
Consider a voltage that corresponds to a value of 127.2 when converted to a digital value by an ADC. An 8-bit converter will only ever provide the value 127, since the fractional part cannot be represented. In a noise-free environment every reading will be 127. But if there's random noise, the signal will vary while still averaging 127.2. Successive readings will be 127 most of the time, but 128 will appear often enough to raise the average above 127. A single reading has no fractional part, but the average of multiple readings does. So while we want to eliminate most noise, a little of it is good for us.
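A quick illustration in C. The sample values are contrived for the example: eight readings of 127 and two of 128, as a noisy 127.2-count input might plausibly produce:

```c
/* Averaging dithered integer ADC readings recovers resolution beyond a
   single 8-bit sample: the fractional part reappears in the mean. */
double average_counts(const int *readings, int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += readings[i];
    return (double)sum / n;
}
```

Feeding it { 127, 127, 128, 127, 127, 127, 127, 128, 127, 127 } yields 127.2, a fraction of a count that no single reading could deliver.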
We had an analogous situation in a text-messaging application I was working on. Registered users who paid a monthly subscription on top of their cell phone bill could send text messages from their phones to a server and receive information in reply. In some scenarios, unregistered users were able to get access to the server and send unauthorized messages. There was much head scratching over how to block those requests. Finally it dawned on us that we shouldn't treat this unauthorized use as hacking; we should simply treat these users as customers. We already had a billing arrangement with them, since they were cell phone subscribers. They just hadn't registered to pay the extra monthly fee for the information services. If they somehow sneaked into our information service, all we really had to do was charge them a premium rate per message. They'd either realize that registering for the monthly service was more economical or they'd continue to contribute to our bottom line.
By defining these users as customers rather than nuisances, we saved ourselves a security headache and even made a few dollars. The lesson here is that sometimes the enemy can be morphed into a friend.
Wheels of fire
One of my first lessons in thinking differently came at university. We were provided with a set of raw materials with which to build a car that could be propelled by rubber bands. Winding up the rubber bands to drive the wheels was an obvious source of power, but building axles that let the wheels turn smoothly was a challenge. All the energy of the rubber bands wanted to be released at once, and the wheels couldn't get enough traction on the smooth gymnasium floor to avoid spinning. Mechanisms to make the rubber bands unwind more slowly were not easy to design or build.
The lateral thinkers in the class realized that nothing in the specification said that the miniature car had to be propelled by its wheels–or even have wheels. By replacing the wheels with four simple spikes, the car could be sprung from a catapult built from the rubber bands. The car slid instead of rolled, but it still satisfied the objective of completing a straight-line course in the minimum time. My only complaint about the idea of eliminating the wheels is that I wasn't the one who thought of it!
Back to basics
The reason the wheel-free cars worked is not because they were a more efficient way of transferring energy from the rubber bands to the vehicle, but because they were simple. Fewer parts and less complexity meant that it was possible to build the solution in the time available. Occam's Razor says that given alternative equivalent solutions, choose the simpler one.
Simplicity was also a key contributor to my final example. A few years back I worked on a telephone answering machine that could forward messages to a server, which would then relay them as e-mail to the owner of the phone, with the message attached as a .wav file. While the market for such a device never materialized, there were nevertheless some interesting technical challenges along the way. Margins are low in the telephone market and volumes are high; if you want to add more than 10 cents to the bill of materials, you probably have to get it signed off by your boss's boss. To send digitized sound files down the phone wires we'd need a modem, a pricey addition to an answering machine that would have restricted us to the very high end of the market. Many chips that generate DTMF touch tones from the numeric keypad can also function as 2400-bps modems, but at that speed the file transfer would have been unmercifully slow.
Eventually I had an Occam's Razor moment and realized that the best way to send a voice message over a phone line was . . . as a voice. Even though the messages were stored digitally on the answering machine, the quickest way to transmit them was to convert them back into sound and play them down the phone line. A lot of quality was lost by redigitizing the messages at the receiving end, but the main goal was to be able to make out the words. A technique that would have been completely unacceptable for music worked just fine for speech. The small amount of extra information required was provided by a combination of caller-ID and sounding a few touch-tones at the start of the message.
There's no guaranteed way to generate new ideas but, as you can see from these anecdotes, it's often a case of starting at a place other than the obvious or conventional start point. It's sometimes necessary to throw away the current way of doing things in order to see that alternatives are possible. Finally remember Einstein's advice: “Everything should be made as simple as possible, but no simpler.”
Niall Murphy has been designing software for fourteen years. He is the author of Front Panel: Designing Software for Embedded User Interfaces. Niall teaches and consults on building better user interfaces. He welcomes feedback and can be reached at . Reader feedback to his articles can be found at .
1. Grappel, Robert. “Rotating a Weather Map,” Dr. Dobb's Journal, June 1999. www.ddj.com/184410967
2. Murphy, Niall. “Watchdog Timers,” Embedded Systems Programming, November 2000. www.embedded.com/2000/0011/0011feat4.htm
3. “RFC 1071–Computing the Internet Checksum,” Network Working Group, The Internet Engineering Task Force. www.ietf.org/rfc/rfc1071.txt