# Jack Crenshaw

Research & Development Engineering Management

Jack Crenshaw is a systems engineer and the author of Math Toolkit for Real-Time Programming. He holds a PhD in physics from Auburn University. E-mail him at jcrens@earthlink.net.

## Jack Crenshaw's contributions

Articles
• Crenshaw is back with gambling tips in Part 4 of his series on parameter estimation, which should land us somewhere near the Kalman filter.

• Understanding the intimate relationship between vectors and reality means understanding vectors, their properties, and how to manipulate them.

• In the February 1992 issue of Embedded Systems Programming, a new column, Programmer's Toolbox, was introduced. The name was not chosen at random: the intent of the column is to provide useful tools and techniques that readers can apply to their own problems in embedded systems development. The idea is to have a library of reusable modules, written in several languages, that others can use to avoid re-inventing the wheel and to improve their productivity and software quality. In this paper, I'll describe the thoughts behind the column, the software tools that have already been presented, the ones yet to come, and the ultimate goal of the effort. A successful effort will require support from the user community. Part of the message of this paper is a plea for help and feedback.

• If you've studied calculus at all, you learned early on a sad but true fact of life: there are many more calculus problems that can't be solved than there are problems that can be. Getting a neat, closed-form solution to a tough problem is always satisfying, and makes a mathematician or physicist feel like a real hero. But in the real world, no closed-form solution ever seems to exist for the problem that's facing us at the moment. So when an integral or derivative can't be found ... when that elusive closed-form solution doesn't exist ... what do we do?
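
When the closed-form solution doesn't exist, numerical methods are the usual way out. As a minimal sketch (my own, not the column's actual routine), composite Simpson's rule handles an integral with no elementary antiderivative:

```python
import math

def simpson(f, a, b, n=100):
    """Composite Simpson's rule on [a, b] with n panels (n must be even)."""
    if n % 2:
        n += 1                      # Simpson's rule needs an even panel count
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# The Gaussian integral exp(-x^2) over [0, 1] has no elementary
# antiderivative, yet a hundred panels pin it down to many digits.
area = simpson(lambda x: math.exp(-x * x), 0.0, 1.0)
```

The error of the composite rule shrinks like h^4, so even this naive loop is accurate enough for most embedded work.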

• Understand how to cut down on the noise in your system, using the math behind the Kalman filter.

• You need math to estimate a curve. Here's part two in Jack's series on state estimation.

• On the road to the Kalman filter: The job of the least squares fit is to give an estimate of the unknown coefficients of the mental model.

• Jack Crenshaw tells you how to calculate trajectories to get your spacecraft to the Moon and back.

• Jack Crenshaw continues his discussion of strategies for improving software quality: does quality software come only from quality people, or can you get quality software from mediocre people?

• Test early and often: Jack describes his testing techniques and other survival lessons from Apollo missions to today.

• Here are two of the longer comments from readers who were inspired to write after reading Jack Crenshaw's column "How to write software."

• Jack describes his personal method for software development and offers tips for staying successful in software.

• Jack Crenshaw's column features algorithms and plug-and-play routines, along with explanations of how they work.

• Jack Crenshaw reveals his latest on how to translate dynamics, whose math is defined by Newton's laws of motion, into usable code for applications such as trajectories and triangulation.

• Software folks have their own ideas on how to get things done: some good, some brilliant, others crushingly bad. The trick is to discern the good from the bad and adopt the brilliant ones as your own.

• These ripping yarns from an old-timer embedded systems developer will make other old-timers smile and new-timers thank their lucky stars.

• Jack Crenshaw describes what he and team members did to research trajectories for the Apollo missions.

• The company that can get the same algorithms working in its own software, royalty-free, is going to be able to sell its product at a lower cost, and thereby eat your lunch.

• In engineering, all problems are simple. You just have to know how to look at them.

• Back by popular demand, Jack Crenshaw's "Integer Square Roots" column from 1998 is still useful, according to readers. Here's a refreshed version.

• SimpleVec.zip code goes with a series on matrix and vector mathematics from Jack Crenshaw's Programmer's Toolbox column in ESD magazine.

• Jack Crenshaw's SimpleMat.cpp is a very simple set of matrix operations done in the old Fortran style. Its value lies in its efficiency.

• Jack Crenshaw's MatTestA.cpp is the code that defines the entry point for the console application.

• What is reusable code? Is it a template to base projects on or a single piece of code used in multiple programs? Microsoft seems confused.

• If you doubt that we ever landed on the moon, here's how things were done before the minicomputer, as told by a NASA number cruncher who helped land a man on the moon.

• Why do people continue to use Fortran? Because it can handle conformal arrays when no other high-order language can.

• Is it really too much trouble to use the right tools?

• If you can toss around matrices as you might a Frisbee, you're one of the superheroes. If not, you're on the sidelines, watching.

• Engineers and mathematicians have been encountering the zero vector for as long as vectors themselves have existed. Here's how to handle it when writing a computer program.

• Jack adds more functions to the vector class and either admits high treason or finally sees the light, depending on your point of view.

• Avoid thrashing memory and eating clock cycles when using complex objects, such as vectors and matrices, in your code.

• Jack wraps up his discussion of SimpleVec.cpp and demonstrates some useful things you can do with vectors.

• When making a library of C++ classes for vector and matrix math, style still puts the art in programming. Jack gives his implementation du jour.

• A vector means many things to many people. Pilot, mathematician, physicist: all of their definitions can help you do your job.

• Using vectors and matrices simplifies math and code tremendously and reduces the chances you'll make programming errors.

• In this final installment, Jack shows you how to solve what's probably the hardest problem he can think of: to numerically integrate a function.

• We take a final walk through the Rosetta Stone and the Taylor series, learning how to estimate derivatives using finite differences.
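
The finite-difference idea that teaser points at fits in a few lines. This is a generic textbook sketch, not the column's code: in the central difference, the even Taylor terms cancel, leaving an O(h^2) truncation error.

```python
import math

def dfdx(f, x, h=1.0e-5):
    """Central-difference derivative estimate.
    Subtracting the Taylor expansions of f(x+h) and f(x-h) cancels
    the f''(x) term, so the truncation error is O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)
```

The step size h trades truncation error against floating-point roundoff; around 1e-5 is a reasonable compromise in double precision.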

• This innocent-looking equation captures everything we need to know about the transition from continuous time to the digital world.

• GetTickCount keeps on tickin'. Before moving on, Jack responds to some of your comments.

• Jack discovers that when the only tool you have is a hammer, every problem looks like a nail.

• Either Windows CE has a serious bug or some programmers just never learn. Old-timer Jack Crenshaw saw this coming a long time ago.

• Part Rube Goldberg, part Tinker Toys, Jack's retro toys teach the basics of digital logic and computer programming in an unusual way.

• Using K-maps to simplify logic equations in hardware and software makes a lot of sense, but Quine-McCluskey is a more systematic approach.

• Jack tackles Karnaugh maps again before meandering through flip-flops and winding up at ripple counters.

• Crenshaw offers up "The New and Improved World's Best Root Finder" and he's got the code to prove it.
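
Jack's root finder is considerably more refined than this, but the baseline every such routine must beat is plain bisection: bracket a sign change and halve the interval until it's tight. A hedged sketch, mine rather than the column's:

```python
import math

def bisect(f, a, b, tol=1.0e-12):
    """Plain bisection: assumes f(a) and f(b) bracket a sign change.
    Guaranteed to converge, but only one bit of accuracy per step."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root must be bracketed")
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:            # root lies in [a, m]
            b, fb = m, fm
        else:                       # root lies in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)
```

Faster methods (secant, inverse quadratic interpolation) gain speed by giving up bisection's guarantee; the art of a "world's best" root finder is combining the two safely.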

• Simplification can actually reduce the number of logic gates you need. And Karnaugh maps are a good simplification tool.

• From Aristotle to Boole, Jack traces a history of logic and then explains how to simplify Boolean equations.

• Jack claims he's reached the end of his search for the Holy Grail of derivations. Read on and decide for yourself.

• Jack's spent over three decades working on his root finder. At long last, he finally understands why it ever worked in the first place.

• Even the most robust root-finding algorithm has an Achilles' heel. All it takes is one pathological function to expose it.

• Jack's Rosetta Stone equation leads to some useful predictive techniques. Here's one based on difference tables.

• From theory to practice in one easy lesson. Jack connects the digital and analog worlds with one simple equation.

• Being digital, embedded systems "think" in discrete time, but they interact with a continuous-time world. Jack's got the key to bridging that gap.

• Jack's last two columns provoked a flood of responses. This month he takes stock and makes plans to add to the toolbox.

• The days of the Bad Guys list have ended. From now on, only the elite Good Guys are worthy of mention. Assume everyone else is bad.

• This month, Jack shows you how he made the world's best root finder just a little bit better. So here it is. Use it with confidence.

• You will not find a better root finder in the known universe. Jack knows what he's talking about; he's tried them all.

• After apologizing for certain misdeeds and offering a glimpse of the future, Jack asks whether an RTOS is even worth the trouble.

• A move gone awry adds names to the list of Good Guys and Bad Guys. You might be surprised to see who ends up where.

• This month, Jack discusses the tragedy of September 11, a storm in Florida, and, we kid you not, the final function minimizer.

• The minimizer series is winding down. This month, Jack adds the features that make this addition to your toolbox safe for the road.

• Converging on the minimum of a function got a lot easier once Jack realized Brent had it all wrong. Here's the right way to do it.

• It turns out that Brent was wrong when he combined bisection with parabolic interpolation. There's a better way to converge on a minimum.

• It's taken a long time and there's still some tweaking to do, but the basic minimizer is done. Hopefully, the result is worth the wait.

• Despite making every effort to avoid it, Jack keeps running into new problems with minimization at the heart of the solution.

• The fat lady hasn't sung yet, but she's warming up backstage. This month, Jack unveils part of his long-awaited minimizer.

• Jack returns from holiday refreshed and ready to solve the problem of minimization once and for all.

• Jack reaches a temporary solution to the problem of minimization. Optimizations will come later.

• Locked in battle with his twin nemeses--Dell and minimization--Crenshaw pines for the good old days and gains some mathematical ground.


• A journey from the Babbage machine to Turbo Pascal helps to place Sweet in the grand scheme of things.

• In these days of 32-bit processors and 80-bit floating point numbers, the carry bit almost seems to be an anachronism.

• Much has been said about the strengths of object-oriented methods. Claims of higher productivity, easier development, greater robustness, and ease of maintenance are often made. However, experts often warn of the need for a "paradigm shift." It takes a very different mindset to be comfortable with the object-oriented concepts, and people who learned to develop software in the "good old days" often have a difficult time making the transition.

• I'm very sorry to keep all my readers waiting. It's not my usual way of doing business. I'm well aware that, because I tend to write multiple related columns, I owe it to you guys to keep 'em coming. In my defense, I can only say: Two really vicious computer malware attacks, one heart surgery, seven weeks of in-home therapy, and two sessions in civil court. Please be patient, folks. I'm pedaling as fast as I can.

• Ridgerat, your last sentence says it all: We are still only at the beginning of the story. Two points: First, it's true that the KF needs a model. I've tried to emphasize that point. In most uses of the KF for guidance & control, etc., the model is based on the laws of physics. But it needn't be that way -- a purely math-based model, like a polynomial or Fourier series, is a perfectly valid model. And yes, the recursive least squares in my examples fits your one-step predictive requirement. Look at the graphs again. At any given step, you have your best estimate of the state, which in the examples are the coefficients of a polynomial. Using those coefficients, you predict the value of f(x) you expect to get at the next measurement time. Then you compare it to the actual measurement, and update the estimate of state. See my Equation 3: e = y - y_bar. The y_bar is the prediction, y the measurement. Second, please don't be misled by the notation. I've been using x as the independent variable, and y as the dependent variable, because that's the traditional form for both the function y = f(x), and the graphical notion of the x-y graph. In the KF, the independent variable is something else; usually time, or the discrete equivalent of it. The state variables are x, and the measurables are y. I was going to get around to that, but I didn't want to throw a change of notation into the mix yet. You jumped ahead of me. In the end, the difference between a least-square solution and the KF is almost exclusively the optimal use of the covariance matrix. We'll get around to that.
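
The predict-compare-update loop described above (predict y_bar from the current state, form e = y - y_bar, update) can be sketched as a generic recursive least-squares fit. This is textbook RLS in Python, not Jack's code; the names (theta for the coefficient estimate, P for its covariance) are my own choices.

```python
import numpy as np

def rls_step(theta, P, phi, y):
    """One recursive least-squares update.
    theta : current estimate of the model coefficients (the state)
    P     : covariance of that estimate
    phi   : regressor vector, e.g. [1, x] for a straight-line model
    y     : the new measurement
    """
    y_bar = phi @ theta                      # prediction from current state
    e = y - y_bar                            # innovation: e = y - y_bar
    k = P @ phi / (1.0 + phi @ P @ phi)      # gain
    theta = theta + k * e                    # correct the state estimate
    P = P - np.outer(k, phi) @ P             # shrink the covariance
    return theta, P

# Fit y = 2 + 3x one measurement at a time, starting from a vague prior.
theta = np.zeros(2)
P = 1.0e6 * np.eye(2)
for x in range(10):
    phi = np.array([1.0, float(x)])
    theta, P = rls_step(theta, P, phi, 2.0 + 3.0 * x)
```

As the comment says, the step from this to the Kalman filter is almost entirely the optimal handling of that covariance matrix P.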

• Don't worry, we'll be covering probabilities and distribution functions in quite some depth. I just think it's pretty interesting how much good stuff you can get done without them.

• "By this definition, yes, Gauss' calculations were 'real time.'" Grin! I knew I was going to get into trouble with this one. You're absolutely right, of course. The time when the data is needed matters, and Gauss didn't need the results for a year or so. On the other hand, the measurements were taken only days or weeks apart, so Gauss had all his measurements before he started. The method he used was indeed batch.

• Lots of folks commented on my lack of concern for overflow, and I guess that theoretically you're all right. But c'mon, guys: Get real. The issue is not the best possible way to sum billions of numbers. The whole point of the exercise was to show how an inherently batch process can be warped into a sequential one. If, in the real world, you're writing software that finds the average of a billion numbers, I have to gently suggest that you're using the wrong algorithm. Even when you think you know that the thing you're measuring is very nearly constant, are you quite sure that it's NEVER going to change? What about sensor variations with ambient temperature? What about drift in the reference voltage? In the case of something as "constant" as the speed of light, we can't be 100% certain that it is. Please concentrate on the general ideas. Don't get lost in the forest of trivia.
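
The batch-to-sequential trick the comment is defending is the classic running-mean update: instead of accumulating one enormous sum and dividing at the end, fold each sample in as it arrives. A sketch (mine, not the column's code):

```python
def running_mean(samples):
    """One-pass (sequential) mean: mean_k = mean_{k-1} + (x_k - mean_{k-1}) / k.
    There is no growing sum, so the accumulator stays near the data's own
    magnitude no matter how many samples arrive."""
    mean = 0.0
    for k, x in enumerate(samples, start=1):
        mean += (x - mean) / k
    return mean
```

Replacing 1/k with a fixed small constant turns this into an exponential filter that tracks a slowly drifting "constant," which is exactly the concern about temperature and reference-voltage drift raised above.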

• About reliability: Around 1999-2000 I went through an orgy of buying old Heathkits on eBay. The thing that amazed me was how many of these 40-year-old kits still worked, and worked nicely. My pals all told me that I'd have to replace all the electrolytic caps, that I'd have to replace the tubes, and that I'd better bring the units up on a Variac to make sure I didn't smoke 'em. I did none of that. Oh, sure, some of the units were non-functional or way out of spec. But the majority of them simply came up and worked as well as when they were new. For a time, I used to check all the electrolytics, as my pals suggested. But every time I tested one, I found it not only good, but with the correct capacitance. After a while, it just got too boring. I went through the test equipment first. In fact, one of the earliest purchases was a tube tester. Then I tried a few to see which units were still working at shop quality. After that, I used my best units, like the VTVM, to test everything else.

• Mike: Good point re vacuum tube audio. Most of Heathkit's solid state amps, etc., near the end of their reign had a lot of bells and whistles. I used to call it "trying to out-knob the Japanese." Not a good plan, that. But today, there's a definite market for a simple kit, without all the frills. What red-blooded American audio enthusiast wouldn't love to build an updated version of the Dynakit or W-7M?

• More on kits, modularity, learning, etc.: When I was 12, someone came out with a single-tube radio receiver kit, using all that cheap surplus electronics. It was your prototypical breadboard kit: the 1/4" plywood board and a layout pictorial came with the kit. You just glued the pic to the board, then started mounting parts. No soldering necessary. You screwed a bunch of Fahnstock clips into the board. Also the prewired tube socket, variable condenser, and coil. A cheap headphone and a 45-volt battery were also included. I was told years later that the tube was one of a line of military low-voltage tubes. The circuit was a regen. You adjusted a feedback coil until the circuit was right on the verge of squealing. A really simple circuit, but with a long outside antenna, I got great reception, esp. at night. Later, the same guys brought out an audio amp for it. Another 1-tube, battery-powered kit, but it drove a cheap 5 x 7 speaker, complete with mounting board and grille. The combo gave me many hours of pleasant listening. Now here's the point: If those same guys had offered an FM receiver kit, a multiband option, a superhet upgrade, perhaps a battery eliminator, etc., I would have bought each one. Up to and including a push-pull audio amp, a preamp with tone controls, etc., etc. All on plywood boards with Fahnstock clips. I would have been deliriously happy to have my whole bedroom full of that stuff. I've always wished Heath had come out with microprocessor kits that worked the same way. Get some little something, like perhaps the 6800 educational kit. Something real simple and easy. Forget high clock speed; 100 kHz would be just fine. Heck, 1 kHz would be fine for learning. But also add a bunch of upgrade kits, including a floppy drive (I guess nowadays we'd use flash cards), a simple OS and assembler, etc., etc. Just keep those upgrades coming.

• Jack, are you sure about that light bulb in the audio oscillator? In the Wien Bridge circuit of the AG-9a, the bulb was in one leg of an impedance bridge. Its temperature-sensitive resistance kept the positive feedback gain just enough to give a stable output amplitude. The AG-9a had a distortion level to die for. I bought one during my eBay orgy. When I tested it for harmonic distortion (40+ years after it was new) and nulled out the fundamental, the only thing I could see on a scope was a little hum and some white noise. Awesome.

• I built one of those computer kits for the Auburn physics dept. It was the only time I really got tired of kit-building. The instructions would say things like "Prepare a shielded cable 17" long. Strip 3/4" of insulation from one end and twist the shield into a pigtail. Strip 1/2" of insulation from the other and clip the exposed shield." Then it would say, "Now do that 99 more times." ;-)

• Modern politicians and gov't workers seem much more interested in sending racy photos over Facebook.

• (cont) Around 1999-2000, I discovered the wealth of used Heathkit gear for sale on eBay. I went a little wild, buying all the audio and test gear I had once owned, plus a whole lot of other stuff -- impedance bridge, oscillator, cap tester, harmonic and IM distortion meters, and more -- that I had always wanted, but couldn't afford. Now I have tons of stuff, all (sadly) in storage. I very much believe, as you do, that the world has become a more boring place since the youthful yen to build things seems to have vanished. As a kid, I built model airplanes, model cars, a soap-box derby car, weird things to tow behind our cycles, and a homebrew tandem bike. Also my personal pinnacles, a Norton twin "dirt bike" and a homebuilt race car. Kids today don't seem to want to build things. They just want to buy the next video game. And who's to blame them? Open up a modern electronic gadget, and what do you see? A circuit board, a chip or chips, and a battery. No way of understanding how it works. If _I_ were going to start selling kits again, I'd make them educational, and open enough so you can see which part does what. Jack

• Jack, as usual you and I seem to have been down a lot of the same roads. I built my first Heathkit -- a ham transmitter -- for a college roommate. I kept thinking, "Why in the world would this guy let me have all the fun?" Not long after, I got into hi-fi, and built the still-coveted W-5M amp and WA-P2 preamp (only needed one each, in those days). Later I got the stunning SS-1B "Range Extending" speaker system. It was made to supplement the SS-1 bookshelf speaker, but I used a Jensen horn midrange. The combo of Heath amp and speakers was, at the time, state of the art, at bargain-basement prices. Then I decided I wanted more audio gear, and the decision was, homebrew or buy. I decided homebrew, so my next kits were a scope, VTVM, audio oscillator, and regulated power supply. Those were magical days. Circa 1955, Heathkits were everywhere. About every college fraternity in the nation had a Heath amp in their rumpus room. Our Physics department had shelves of scopes, oscillators, and VTVMs. Lots of profs -- and this grad student -- built their own audio and TV gear. Later, I built the AR-15 solid state receiver -- another case of stunning performance. I think I built about 33 kits before losing count. In 1978 I went to work for Heath, in the computer dept. At that time, they were struggling in their competition with the Japanese. As you say, in a time of etched circuit boards, surface-mount devices, and robotic parts-placers, the advantage of free labor didn't really outweigh the cost of "kittifying" the design. One of their seminal products, a top-end O-scope, was held up for years because they couldn't find tiny pots to match those used in the Japanese gear. (cont)

• Lou! It's so good to hear from you. Hope you remember me as the (short-term) Chief for Software. Always thought you got a very raw deal in the internal computer kit wars.

• @istell: "I noticed that you use the term 'centrifugal force'; many argue that it is not a true force." I get this a lot, but I frankly don't know why. It seems to me that anyone who has driven a car fast, or a motorcycle, or a plane, or has ridden on any one of dozens of carnival rides, should know what to call that force that's mashing them into the seat. I recently met a guy who used to fly F-104's. If _HE_ doesn't know the source of all those g's, nobody should. Of COURSE centrifugal force is real. The only thing is, it depends, as Einstein taught us, on your frame of reference. All is relative. There's a carnival ride where the people sit in chairs attached by chains to a rotating axis. As the ride begins to whirl, your chair swings outward, sometimes to an alarming angle. But you don't feel any sideways g-loads. You're still pinned to your chair. That's centrifugal force. Now look down between your knees as the scenery goes whirling by. Pick a point on the now-whirling ground. That's where you're going to end up, if the chain should break. Sure, someone standing on the ground outside sees the chairs whirling in circles. If the chain did break, he'd see your chair go off in a straight line, into the boonies. But you, as a rider, are part of the rotating system. From your perspective, you and the other riders are fixed, not rotating, and you feel the centrifugal force. In the rotating coordinate system, it's as real as gravity.

• Lveal, I think you put your finger right on a couple of very good points. First, I think having written in assembly language is a big plus. I sure wouldn't want to force everyone to have to do the same, but I do think the experience gives one a very specific view of what's going on inside the hardware, one that's hard to get otherwise. Even more importantly, I think understanding hardware design is also a plus. A hardware designer truly understands the concept of "black box," to the extent that it's an ingrained instinct. When I was about 10, I got interested in ham radio, so decided to build my own receiver from a schematic I found. I went to the store, bought all the parts, and soldered them together (with acid-core solder, no less). Guess what? It didn't work. That's where my project ended, because I had no idea what was wrong. I had found the schematic, no problem, but I had not the slightest clue what all those parts were supposed to be doing. Fortunately, I got smarter with age. Today, I'd build a certain circuit, say an oscillator. Then I'd test it with scope, etc., to make sure it was osc-ing, and at the right frequency. My pal Peter Stark designed a computer kit that worked that way. The first circuit elements consisted of the CPU, a clock crystal, and an LED indicator light (poor man's scope). He tied the data lines to ground, so as the CPU ran, it would cycle through the address space. That's my kind of circuitry, and the concept can and should carry over into software. Jack

• I'm glad you brought up the "Matlab has a function" thing, because I had another point to make. Sometimes you really can buy tools that do what I tend to write about. Case in point: A few years ago I wrote about "The Rosetta Stone," which was all about z-transforms and filters and things. I showed how to derive the coefficients for filters of various kinds. Thing is, nowadays you can buy tools specifically to generate filter coefficients. Whereas I wrote about 2nd or 3rd order filters, you can buy tools that will generate, say, 50th order filters. Some will actually burn the EEPROMs for you. Ditto for logic minimization. Just give the program a truth table, and it'll generate the file to burn the chip. For those folks who do this stuff every day, I highly recommend getting the most powerful tools you can, and letting the computer do all the work. But, as in the case of the least-squares fit (LSQ), it's always bothered me if there's something going on in the black box that I don't understand. I don't believe in re-inventing the wheel. Once I understand how LSQ works, I have no problem at all asking an app to generate me a 6th or 8th order polynomial fit, or a nonlinear fit like Micromath does so well. I guess it's like building a fire. I don't have to be the one to do it every time, but I like to know I could if I had to. Jack

• You make an excellent point, which is that I should build an environment that works the way _I_ want it to work, as opposed to how some other dude thinks it should. Many of my colleagues eschew IDEs completely, opting for a command-line interface or an IDE based on a favorite editor such as Visual Slick-Edit. I've used such an environment in the past, and I must admit it worked very nicely. How do I want it to work? I've thought about this, and the answer is ridiculously simple. Say I start a new project named Foobar. As I build it, I add source files from my library folder. When I tell VC++ to compile, it compiles the files in the library folder, and adds their OBJ files to the project folder. Then it builds the EXE file from those. That way, the library source files are safe. They can even be made read-only. The OBJ files are also safe, being visible only to that project. Why can't VC++ work like _THAT_? Jack

• That's sort of what I thought. Microsoft C++ has been around a lot of years, and I gather that the folks in MSDN have adopted their own paradigm. Use of LIB or DLL files was one suggestion I got a lot. Some even talked about registry changes. Personally, I would _NEVER_ contemplate a DLL until the project was nearing its ship date. If the compiler is incapable of linking files from different directories, it is, IMO, broken. Guess I'd better get a good source control system. Thanks for your comments.