
Codifying good software design


“Safety first” is a simple motto forever complicated by the complacency and greed of human nature. The story of U.S. fire codes has plenty to teach system designers.

Sweeping fires are so unusual that the once dreaded word conflagration sounds quaint to our modern ears. Yet in 19th century America a city-burning blaze consumed much of a downtown area nearly every year.

Fire has been mankind's friend and foe since long before Homo sapiens or even Neanderthals existed. Researchers suspect protohumans domesticated it some 790,000 years ago. No doubt in the early days small tragedies—burns and such—accompanied this new tool. As civilization dawned, and then the industrial revolution drove workers off the farm, closely-packed houses and buildings erupted into conflagration with heartrending frequency.

In 1835 a fire in lower Manhattan destroyed warehouses and banks, the losses bankrupting essentially every fire insurance company in the city. The same area burned again in 1845. Half of Charleston, SC burned in 1838.

During the 1840s fire destroyed parts of Albany, Nantucket, Pittsburgh, and St. Louis. The next decade saw Chillicothe, OH, St. Louis (again), Philadelphia, and San Francisco consumed by flames. Most of my hometown of Baltimore burned in 1904. San Francisco was hit again during the 1906 earthquake; that fire incinerated four square miles and is considered one of the world's worst fires ever.

Mrs. O'Leary's cow may or may not have started the Great Chicago Fire that took 300 lives in 1871 and left some 90,000 homeless. The largely wooden city had received only 2.5 inches of rain all summer so the fire turned into a raging inferno advancing nearly as fast as people could flee. But Chicago wasn't the only Midwestern dry spot; on the very same day an underreported fire in Peshtigo, WI killed over 1,000 people.

A year later Boston burned, destroying 8% of all the available capital in the state of Massachusetts. 1889 saw the same part of Boston again ablaze.

Theaters succumbed to the flames with great regularity. Painted scrims, ropes, costumes, and bits of wood all littered the typical stage while a tobacco-smoking audience packed the buildings. In Europe and America 500 fires left theaters in ruins between 1750 and 1877. Some burned more than once: New York's oft-smoldering Bowery Theatre was rebuilt five times.

Slowly, in come the codes
The historical record sheds little light on city-dwellers' astonishing acceptance of repeated blazes. By the 1860s fireproof buildings were well understood though rarely constructed; owners refused to pay the slight cost differential. At the time only an architect could build a fireproof building, because such a structure used somewhat intricate ironwork that required carefully measured drawings. Few developers then consulted architects, preferring instead to just toss an edifice up using a back-of-the-envelope design.

Crude sprinklers came into being in the first years of the 19th century yet it wasn't till 1885 that New York law required their use in theaters. But even those regulations were weak, reading “as the inspector shall direct.” Inspectors' wallets fattened as corruption flourished. People continued to perish in horrific blazes.

After the 1890 invention of the modern sprinkler, the cost of repairs following a fire in a sprinklered building ran just 7% of that incurred in a building without the devices. As many as 150 theaters had them by 1905.

Yet as recently as 1980, 87 people died and 679 were injured in the MGM Grand Hotel fire in Las Vegas. Though fire marshals had insisted that sprinklers be installed in the casino and hotel, local law didn't require them. The building's owners refused to fork over the $192,000 needed. They eventually paid out $223 million in legal settlements.

The Las Vegas law was changed the following year.

Fire codes evolved in a sporadic fashion. Before the Civil War only tenements in New York were required to have any level of fireproofing. But The New York Times made a ruckus over an 1860 tenement fire that eventually helped change the law to mandate fire escapes for some—though not many—buildings.

An 1876 fire at Conway's Theatre in Brooklyn, New York, killed nearly 300 people and led to a more comprehensive building code in 1885. Thirteen years after the Great Fire, Chicago finally adopted the first of many new fire codes.

This legislation by catastrophe wasn't proactive enough to ensure the public safety. Consider the 1903 Iroquois Theater fire in, of course, Chicago. Shortly before it opened, Captain Patrick Jennings of the local fire department made a routine inspection and found grave code violations. There were no sprinklers, no exit signs, no fire alarms. Standpipes weren't connected. Yet officials on the take allowed the theater to open.

A month after the first performance, 600 people were killed in a fast-moving fire. All of the doors were designed to open inwards. Hundreds died in the crush at the exits.

Actor Eddie Foy saved uncounted lives as he calmed the crowd from the stage. Coincidentally, he and Mrs. O'Leary had been neighbors; as a teenager he barely escaped the 1871 fire.

Afterwards a commission found fault with all parties, including the fire department: “They seemed to be under the impression that they were required only to fight flames and appeared surprised that their department was expected by the public to take every precaution to prevent fire from starting.”

All criminal cases against the owners, builders, and (vastly corrupt) officials failed on technicalities. The judge missed a chance for a sagacious ruling when he found that fire codes, with no basis in the State's constitution, were illegal. Thirty civil suits were settled for $750 each. All others were dismissed for arcane reasons of legal minutiae.

Hardware salesman Carl Prinzler had tickets for the Iroquois performance but the press of business kept him away. He was so upset at the needless loss of life that he worked with Henry DuPont to invent the panic bar lock now almost universally used on doors in public spaces.

Fast forward 83 years. Dateline San Juan, 1986: ninety-seven people died in a blaze at the coincidentally named DuPont Plaza hotel; 55 of them were found in a mass at an inward-opening door. In 1981, 48 people were lost in a Dublin, Ireland, disco fire because the Prinzler/DuPont panic bars were chained shut. In 1942 at Boston's Cocoanut Grove nightclub, 492 were killed in yet another fire; 100 of those were found piled up in front of inward-opening doors. Others died constrained by chained panic bars.

Many jurisdictions did learn important lessons from the Iroquois disaster but took too long to implement changes. Schools, for instance, modified buildings to speed escape and started holding fire drills. Yet five years after Iroquois a fire in Cleveland took the lives of 171 children and two teachers. The exit doors? They opened inwards.

Changes to fire codes came slowly and enforcement lagged. But the power of the press and public outrage should never be underestimated. The 1911 fire at New York's Triangle Shirtwaist Company was a seminal event in the history of codes. Flames swept through the company's facility on the 8th, 9th, and 10th floors. Fire department ladders weren't tall enough and the firemen couldn't fight it from the ground. One hundred forty-six workers were killed; bodies plummeting to the ground eerily presaged the tragedy at the World Trade Center on September 11th.

But at this point in American history reform groups had taken up the cause of worker protections. Lawmakers saw the issue as good politics. Demonstrations, editorials, and activism in this worker-friendly political environment led to many fire-code changes.

Though you'd think insurance companies would work for safer buildings, they had little interest in reducing fires or mortality. CEOs simply increased premiums to cover escalating losses. In the late 1800s mill owners struggling to contain costs established the Associated Factory Mutual (AFM) Fire Insurance Companies, an amalgamated nonprofit owned by the policyholders. It offered far lower rates for mills made to a standard, much safer design.

The insurance industry also created the National Board of Fire Underwriters to investigate fires and recommend better construction practices and designs. 1905 saw the first release of its Building Code; 6,700 copies of the first edition were distributed. Never static, it evolved as more was learned. Amendments made to the code after the Triangle fire, for instance, improved the mechanisms that help people escape a burning building.

MIT-trained electrician William Merrill convinced other insurance companies to form a lab to analyze the causes of electrical fires. Incorporated in 1901 as the Underwriters' Laboratories, UL still sets safety standards and certifies products.

The pattern
Our response to fires, collapsing buildings, and the threats from other perils of industrialized life all seem to follow a similar pattern. At first there's an uneasy truce with the hazard. Inventors then create technologies to mitigate the problem, such as fire extinguishers and sprinklers. Sporadic but ineffective regulation starts to appear. Trade groups scientifically study the threat and learn reasonable responses. The press weighs in, as pundits castigate corrupt officials or investigative reporters seek a journalistic scoop. Finally governments legislate standards. Always far from perfect, they do grow to accommodate better understanding of the problem.

Though computer programs aren't yet as dangerous as fire, flaws can destroy businesses, throw elections, and even kill. Car brakes are increasingly electronic and steering is headed that way. Software errors in radiotherapy devices continue to maim and take lives. Bad code has been implicated in a number of deadly aircraft incidents. The National Institute of Standards and Technology claims the cost of bugs runs some $60 billion a year in the U.S. alone.

Codes for safe software
Why are there no fire codes for software?

Today the Feds mandate standards for some firmware. But take a gander at the Federal Election Commission or Food and Drug Administration rules: the regulations are loose and woefully inadequate. Firmware today sits at a point metaphorically equivalent to the fire-fighting industry in 1860. We have sporadic but ineffective regulation. The press occasionally warms to a software crisis but by and large there's little furor over the state of the art.

Rest assured there will be a fire code for software. As more life- and mission-critical applications appear, as firmware dominates every aspect of our lives, when a bug causes some horrible disaster, the public will no longer tolerate errors and crashes. Our representatives will see the issue as good politics.

Just as certain software technologies lead to better code today (C code, for instance, is generally at least an order of magnitude buggier than code written in Ada), the technology of fireproofing was well understood long before ordinances required its use. The will to employ these techniques lagged then, as it does for software now.
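
To make the comparison concrete, here's a minimal sketch of my own (not drawn from any particular product) showing the sort of defect C accepts without complaint, and which Ada's bounds checking would trap:

    #include <stdio.h>

    /* Illustrative off-by-one: C compiles this silently and writes past
       the end of the array. Ada's constrained array types would raise
       Constraint_Error here instead of quietly corrupting memory. */
    int main(void)
    {
        int readings[8];

        for (int i = 0; i <= 8; i++)    /* should be i < 8 */
            readings[i] = 0;            /* i == 8 writes out of bounds */

        printf("initialized\n");
        return 0;
    }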

There's a lot of snake oil peddled for miracle software cures. Common sense isn't one of them. I visited a Capability Maturity Model level 5 company recently (the highest level of certification, one that costs megabucks and many years to achieve) and found most of the engineers had never heard of peer reviews. These are required at level 3 and above. Clearly the leaders of this group were perverting what is a fairly reasonable, though heavyweight, approach to software engineering. Such behavior stinks of criminal negligence. It's like bribing the fire marshal.

I quoted the Iroquois fire's report earlier. Here's that sentence again, with a few parallels to our business in parentheses: “They (the software community) seemed to be under the impression that they were required only to fight flames (bugs) and appeared surprised that their department was expected by the public to take every precaution (inspections, careful design, encapsulation, and so much more) to prevent fire (errors) from starting.”

Writer Douglas Adams said “Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.” After 790,000 years of firefighting we have finally learned that fire is, well, kind of dangerous and we'd better construct buildings appropriately.

I collect software disasters and have files bulging with examples that all show similar patterns. Inadequate testing, uninspected code, shortcutting the design phase, lousy exception handlers, and insane schedules are probably responsible for 80% of the crashes. We all know these things, yet seem unable to benefit from this knowledge. I hope it doesn't take us 790,000 years to institute better procedures and processes for building great firmware.

Do you want fire codes for software? The techie in me screams “never!” But perhaps that's the wrong question. Instead ask “do I want conflagrations? Software disasters, people killed or maimed by my code, systems inoperable, customers angry?” No software engineering methodology will solve all of our woes. But continuing to adhere to ad hoc, chaotic processes guarantees we'll continue to ship buggy code late.

A firefighter I spoke with while researching this article left me with this chilling thought: “I actually find bad software even more dangerous than fire, as people are already afraid of fire, but trust all software.”

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at jack@ganssle.com.

Reader Response


I agree with your article, but you left out one growing reason for not following fire (software) codes: cost, in a world where there is a growing attitude to reduce cost no matter what, even though fire (software) codes can reduce cost during integration and maintenance.

– Tom Williams


As an engineer, I find reminding myself of the costs of a mistake in this discipline both saddening and humbling. In my experience, a humble engineer is more likely to check his work (i.e., peer reviews) and less likely to make a mistake than one who thinks he knows everything.

I recently reviewed the Therac-25 cases after a post mentioned them on Slashdot. I've been out of college 6 years now, and that was the first I'd heard of those incidents.

Please regularly share your examples of engineering and software accidents. They help keep things in perspective for both the old hands and the new.

– Robert Fritts


Your article has an interesting twist to it that many people may not be aware of. The fire protection industry in recent years has been switching to microcontroller (i.e., software) based designs. As a systems developer of fire protection equipment, I am well aware of the potential ramifications of poor code. You are correct that there are few if any legal, or even industry, standards regulating software in this industry. What little exists comes from institutions like Factory Mutual, whom I have worked with in the testing and listing of fire protection equipment.

– Matt Flyer


Dr. Nancy Leveson's book “Safeware” provides more examples of why code should be “safe.” Good safety requirements up front, allocated to software as appropriate, will help with, though not solve, the problems of safety.

– Perry M Stufflebeam


Building fire codes are perhaps a poor analogy to use for software safety standards. They are restricted to a rather small problem domain, which isn't the case for software.

In other safety critical applications, such as civil engineering and the medical profession, licensed professionals must certify that certain standards of work are followed during a project or procedure. In the software business we have no such licensing or required certification.

There are, however, published standards for software that we could follow if we were trained to use them. The IEEE software engineering standards are the primary example that comes to mind. The problem isn't that proper methods aren't defined and well documented, it is that we don't or won't use them for one reason or another.

– Steve Hanka


I disagree with your analysis that the industry is at the 1860 level; it's way more immature. We don't even have language yet. My personal group to blame for this is our English professors, who think the world needs more people who can emulate the great alcoholic authors of the 19th century than people who can explain abstract concepts.

– Annon


Even with all exhaustive software development requirements in RTCA DO-178B, aircraft sometimes have software bugs, and even crash due to software. Often FAA inspector feels overworked, or pressured by company to certify software at lower safety category so there is less paperwork to go through. This also save company money if they can get inspector to certify at lower level. Then work is contracted out to cheap place like Russia, where they not realize that there is big problem with alcoholism. So there more bugs and is not well tested and is less paperwork. Then electricity is not so reliable in Russia, so computers for development go down a lot, make more problems in software. This why I not like to fly anymore.

– Victor Katchenko


In your article, you fail to mention the RTCA DO-178B standard used by the FAA to certify avionics software. The standard lists 5 levels of certification, A through E, with A being the highest level. For a software package to be certified at level A, every single line of code must trace back to a requirement, every artifact (requirements documents, design documents, and code) must be peer reviewed (with the peer review minutes documented), and every single bug, whether in documentation or in code itself, must be formally tracked and documented as well. Moreover, each requirement must be formally tested, each test procedure peer reviewed, and the results documented and peer reviewed as well.
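
To give a feel for the traceability involved, here is a hypothetical fragment; the requirement and review identifiers are invented, since DO-178B mandates the traceability itself, not any particular commenting format:

    #include <stdint.h>

    /* Implements: SRS-NAV-042 "Displayed altitude shall be clamped to
       the range 0..60000 ft"
       Reviewed:   PR-2003-117 (minutes on file)
       Tested:     TC-NAV-042-01 through -03 */
    int32_t clamp_altitude_ft(int32_t alt)
    {
        if (alt < 0)
            return 0;
        if (alt > 60000)
            return 60000;
        return alt;
    }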

The success of this approach is highlighted by the fact that there has never been an aircraft accident where a software failure has been found to be the cause.

Your article makes many valid points about the lack of standards in the software development industry as a whole, but there is no discussion of areas where standards have been developed and followed successfully. It's easy to point out flaws, hard to suggest realistic solutions, and harder still to find cases in which the flaws have been proactively addressed.

Before your readers never drive a car or board a plane again for fear of their safety, perhaps you might mention to them that we're not completely doomed. Examples of good process do exist, and are followed, with great success and safety.

– Casey Ballentine

Jack replies: Casey, actually there have been many aircraft accidents with software as a contributing factor, and more than a few deaths. The Chinook incidents in the UK come to mind. I have no data on accidents in which the DO-178B standard was followed, though; the Chinook code was found to be so error-ridden it was impossible to even inspect the stuff. According to the RISKS digest, pilots regularly experience avionics crashes and are instructed to cycle the circuit breakers. On some planes the avionics breakers have a colored collar to make it easier to find them in a hurry.

But I didn't mention DO-178B, which, at level A, is a hugely rigorous process that leads to awesome code. It's not unusual to have half a page of documentation per line of code. As one who spends a lot of time on planes I'm glad vendors spend a ton of money doing all this! But most code is developed in an ad hoc manner. Seems to me we need a standard for non-safety-critical code that's less costly than DO-178B, but that reins in the coding cowboys…and bosses who chant “just write a lot of code fast, darn it!”


I've flown in helicopters in the petroleum industry and seen at least 10 people killed in accidents. I've also developed helicopter avionics software and air transport avionics software; there are worlds of difference in the resources made available for an air transport avionics project compared to a small helicopter avionics project. In an air transport project, 120-person staffs are not uncommon. For a relatively complex helicopter system, there might on a tight budget be a 2-person staff. In air transport a DO-178 level A project is done as a DO-178 level A project. For a light helicopter, a level C unit might on a tight budget with a busy FAA staff become a level E (minimal paperwork and virtually nothing but black box testing) project. I've read NTSB accident reports for small aircraft where the GPS reported the wrong magnetic heading, which is fatal in mountainous terrain when your primary heads-up heading reference is the compass. In small plane avionics I've seen counterfeit integrated circuits that did not perform like the original. I've been on test flights where the engine controller over-revved and stripped the gears in the transmission, forcing an autorotation on a hot summer day at high altitude.

– Anonymous


During the 1970s there was a big push among academics and government, and some of us programmers, to develop and use languages that made it more difficult to write buggy code. Then along came “C”. I'm not saying that C is responsible for all our problems, but the mindset that C encourages – quick throw-away code written without design – is responsible for a good many.

– Chuck Bolz


I'm a recent grad of Kansas State University, and software disasters are something that our faculty somewhat dwells upon. Therac came up in two or three of my classes as an example of gigantic failures. But it's hardly the only one. The cover of the book for my logic course, an important lesson in first-order logic and formal proofs (it even ventured into temporal reasoning!), features several snapshots of the Ariane rocket failure.

Another relevant failure is the Mars Pathfinder, which was a very interesting one. If you've ever worked with robots or satellites, the first and nearly always only indication something's gone wrong is loss of communication. In the case of Pathfinder, it was communicating too well. Nobody had expected it to be able to send information as clearly and quickly as it did. As a result, more work was being generated to drive the signal than had been tested for problems. This ultimately led to a priority inversion, where the communications system would basically starve another process and trigger a watchdog timer reset.
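
A minimal sketch of the class of fix involved, assuming a POSIX-style RTOS API (the lander actually ran VxWorks, where the equivalent is an inversion-safe semaphore option): with priority inheritance enabled on the shared lock, the low-priority holder is temporarily boosted so medium-priority work can no longer starve it.

    #include <pthread.h>

    /* Shared-bus mutex with priority inheritance: whoever holds the lock
       temporarily inherits the priority of the highest-priority thread
       blocked on it, closing the priority-inversion window. */
    pthread_mutex_t bus_lock;

    void init_bus_lock(void)
    {
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&bus_lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }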

Interestingly, Microsoft of all places has a page on the topic: http://research.microsoft.com/~mbj/Mars_Pathfinder/Mars_Pathfinder.html

– Justin Dugger


The legal system in the U.S., with its multi-billion dollar payouts for accident claims, means that companies must consider these kinds of payouts in planning projects and design defensively.

Consider the Boeing 757 FMS done by Honeywell. It was done to DO-178. It used a paged memory scheme because the processor only had a 24-bit address bus as I recall. The paging was necessary to hold the global nav-aid database. On an American Airlines flight to Cali, Colombia, the pilot had the wrong page in memory selected when he entered the 4-character waypoint code for the waypoint outside Cali. This caused the FMS to select another waypoint in the wrong page of memory with the same 4-letter code and turn the plane 90 degrees to the east toward a mountain. The pilots did not realize what had happened in time and were unable to avoid the mountain. 189 people were killed. The families sued American Airlines, Honeywell, and Jeppesen (the navigation database maker). The plaintiffs' lawyers, among other things, argued that the paging scheme of the FMS led to the error that caused the crash, and that a non-paged design or a better user interface with a “preview” of the new route could have prevented the accident. The award totaled four billion dollars. This sent Honeywell stock from $80 a share to $20 a share in less than a week. It has never recovered. Even though the software was working “exactly as the DO-178 test plan said it should,” that did not stop the jury from making the award. Even with a robust process like DO-178, we need to question the systems engineers and systems architects and managers on everything like this. Your 401K, your pension, your IRA, your company, all are at stake.

– Broke Shareholder


The question is whether we know how to define an equivalent to a fire code. Our software processes define how we do things and not the result, while fire codes define the result (doors open out; not a process for designing doors). Even our best processes such as CMM 5 can produce software project disasters (the worst software project result I was involved in was done by a CMM 5 group operating outside their expertise), and DO-278 has also produced disasters.

– Dale Force


The MISRA (Motor Industry Software Reliability Association) standard for a subset of “C” that is less risky than unconstrained “C” is one example of an attempt at a “fire code” by a specific industry. Lint is another way of preventing problems, enforcing a higher standard through a machine check of code. Ada and its dialects offer additional checking that is lacking in “C” as well. A prudent company will couple these types of guidelines/coding standards with a robust development process, an adequate, well-trained staff, and tools. In levels A-C of DO-178 the code coverage testing sometimes uncovers hidden compiler/processor quirks, or bugs in vendor-supplied COTS libraries.
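
As an illustration (a sketch of the category, not a quotation of any actual MISRA rule), here is the flavor of code that such guidelines and a lint pass will flag even though a C compiler accepts it:

    #include <stdint.h>

    /* Both constructs below are legal C, yet are classic targets for
       lint and MISRA-style rules: an assignment where a comparison was
       intended, and a silent narrowing conversion on the return. */
    uint8_t scale(uint16_t raw)
    {
        uint8_t gain = 3U;

        if (raw = 0x100U)         /* meant ==; always "true" after assignment */
            return raw * gain;    /* int-promoted product narrowed to uint8_t */

        return (uint8_t)raw;
    }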

All of this can reduce the risks and costs of liability-related litigation, often one of the largest overlooked costs of a product plan.

– William Murray


A barrier to the kind of legislation that would be necessary to drive a software standards code is the lobby that the software industry mounts against any legislation that imposes such standards.

Some legitimate concerns are:

1) How would US software economics be influenced with respect to international competition? Will customers pay the difference in cost for safe US software vs cheap foreign software?

2) Would a safety code impose a language, structure, or process that would stymie the advance of a new technology, to the detriment of competitiveness?

3) Would it apply to all software? Internal use, too? If not, where is the line? Could we effectively identify a subset of software that required the standard? Would spreadsheet programs need to be certified under the code because some people use them to compute safety critical values? Must development of safety critical software be done using only certified development tools?

Once again, the exceeding complexity of software defies comparison to any other discipline. This is an inescapable fact: Software is not subject to the laws of physics. Unlike fire, there's not a simple way of preventing or controlling damage from software errors.

Until the software-consuming market and/or the software-supplying industry recognize the real costs of software quality, the discussion seems purely academic. Perhaps what we should be talking about first is “how do we educate consumers and industry about the cost of quality?”

– Rob Robason


* In the avionics and “life-critical” embedded development areas, the discipline of software reliability is usually (but not always) good-to-excellent. The real thrust of your article is that huge costs and liabilities are generally externalized by poor reliability in many OTHER areas. This externalization is possible in our society both because liability is not very stringent and because the public has become conditioned to the notion that “software is unreliable.”

I'm sure you all know the “Bill Gates joke” about the first pilotless passenger airplane … brought to you by microsoft …

That joke is a sardonic commentary on the general fact that the majority of commercial programmers come from (and are “trained by”) a market environment where “features” and first-to-market dominate all other concerns. You cannot have quality in ANY other aspect without breaking that mindset.

* I program in both Ada and C, and would NOT agree that simply coding in Ada (for the same problem, with the same methodology) provides any large automatic improvement in reliability. I think your “example” of tool-advantage in this regard is overwrought.

Generally, however, those using Ada tend to be more concerned about the issue(s), and more aware of the OTHER important coding and validation techniques needed to build reliable and robust code. The “Ada programmers” by selection have already migrated away from the typical commercial “slam out code, quality be damned” mindset.

So the consequence is indeed that usually the “Ada programs” are far better than the “C programs” (or “Visual Basic programs”).

But “typical” programmers with typical programmer attitudes… and supervision and schedules … won't be 'cured' by the automagical application of Ada … or any other tool… absent a tool which is smarter than they are and understands the problem better. And at this point you don't need the programmer!

– Lee Harrison


Lee Harrison understands the heart of the matter: quality is a direct result of the attitude of the developer.

I work for a major avionics manufacturer, under the rigor of the DO-178B software process. While the process provides the framework to implement quality-control measures, such as peer reviews, it does not make a developer want to build safe software.

A software safety code should reward developers who show responsibility for the safety of their software. It should empower them to assert their safety-minded views on their organizations. It should conversely make sloppy developers feel out of place enough to either shore up their practices or try a different profession.

By the way, we must realize that development rigor is proportional to initial cost; any savings due to safety will only be realized by the reduction of post-release issues. So a “progressive” standard that provides a variety of cost-versus-rigor tradeoffs is desirable.

– Dave Rahardja




When it comes to codes for doing design, look at hardware: building wiring is done to NEC. Aircraft avionics is done to DO-160D, plus TSOs. Commercial computing equipment to UL, CE, and FCC Part 15, plus various hardware standards like 802.11. It is left up to each industry's regulating body to determine what standards make sense for that particular industry. Software probably should take a similar approach; doing a game to DO-178B Level A might not be necessary. Airplanes are already done to DO-178B. The hard part is the infrastructure: things like operating systems, spreadsheets, compilers, CAD tools, and networked devices that can handle safety-critical data in a secondary way, which tend not to get done to a DO-178B Level A, B, or C standard right now, and need to achieve a higher level of trustworthiness in some cases, plus need better documentation in the event of a disaster that causes loss of part of a development team in others. The hard part is that some of these items are also often used for games or other casual activities, and so this dilutes the amount of weight given to the effort, because it “will just be used for games, or other non-serious activities” in many people's minds.

This makes the job of whoever sets which category in a multi-level “progressive” standard all that much harder, as the true nature of what can go wrong due to the software is obscured by all these non-critical applications, and there are many other obviously critical projects needing attention.

A difficult thing, if they come up with a Federal Software Commission, or similar.

– William Murray


While there is truth to Lee Harrison's comment about the existence of a software development culture around the use of Ada that affects results, it is clearly not the whole story.

Stephen Zeigler's study of the use of Ada and C within the Verdix Corporation (http://www.adaic.com/whyada/ada-vs-c/cada_art.html) controls for this factor, since the findings were generated within the same organization. In that study, the developers using C were, if anything, more expert than those using Ada. Yet the Ada code was clearly less expensive to develop and maintain.

This study is interesting reading because it is based on the real-world performance of a group of developers over several years.

Another article, by John McCormick (http://www.adaic.com/atwork/trains.html), on the use of C and Ada in an intro course on real-time systems showed a significantly higher completion rate for the class project once the switch from C to Ada had been made. Again, one would expect differences between the developers to be minimal in this situation.

– Ed Falis


I work for a major aircraft company. The key to software safety is to provide safety-critical software education to all tiers of leadership and technical management throughout a corporation, so they will have the knowledge and constancy of purpose to manage software complexity and criticality to reduce system risks. The main problem is there are those who think they know, but don't know what software safety is all about. There ought to be a national standard “code” for software safety, developed by a collaboration of true recognized experts in the field representing the FAA, NASA, DOD, DOE, FDA, and major corporations that turn out safety-critical systems. Let's do it!

– Barry Hendrix
Lockheed Martin
Software Safety Technical Fellow

