There was a time when we were multilingual. Assembly language was the only alternative to machine code, and each microprocessor and microcontroller family had its own language, reflecting its own distinct instruction set architecture. There were similarities, often artifacts of the limitations that small transistor budgets imposed on silicon architects. And there were differences, arising from those architects’ determination to create a better instruction set from their meager materials.
Master programmers fought against the complexity of assembly-language programming, and against the entropy that complexity engendered. Receiving no help from language structures, which only reflected the underlying chip hardware, they turned to styles, practices, and customs. They adapted notions such as structured programming and strong type-checking from the big-computer world—not as elements of some new language, but as manual practices. Above all, they relied on documentation to make their code comprehensible to those who would come after them.
Eventually, high-level-language compilers began to appear for MPUs: PL/M from Intel, and then formal languages—Pascal and even Ada. These latter two languages from the big-machine world brought with them, inherent in their design, important ideas: strong type-checking, structure, some degree of readability, and the underpinnings of formal proofs. The foundations were coming together upon which we could build a reliable software development methodology.
Then something went wrong. Like barbarian hordes from the North, doctrinaire disciples of Unix swept out of the universities, bearing with them the sacred language C. It became incorrect to criticize the Unix cult of individual expression, or the fundamental rightness of exhibitionistic coding, or to raise questions about the language of the new secular gospel. Pascal and Ada retreated before the onslaught, first to remote fastnesses, and eventually to near oblivion. With them went the multilingual culture of embedded programming, replaced by a stultifying sameness. With them too went the idea that a language could enforce good programming practice, driven out by a language that reveled in conciseness and actively encouraged practices we knew to be bad.
Eventually, from the ruins, a retrograde movement began to stir—a turn toward the days when, unassisted by their languages, master programmers imposed type, structure, and even provability on their code through their own practices. As our cover story author this month, Thomas Honold, argues, C may not encourage good practices, but it can't suppress them. In another article, Jean Labrosse uses C to express some universal techniques adaptable to any code production line.
And what of the future? A new generation of programmers is emerging for whom C is ancient, irrelevant history. Many have used only Java. At the same time, and not coincidentally, development platforms such as Android are appearing, enabling Java programmers to patch together embedded systems without resort to less-abstract languages. Will this new wave again engulf all that we have learned about creating reliable, maintainable systems? Will quality in embedded code sink to the level of Web code? Or will the experience of several generations in developing mission-critical systems, meeting requirements, and simply writing good code survive? Though languages change, will customs endure?
Ron Wilson is the editorial director of design publications at UBM Electronics, including EDN, ESD magazine, Embedded.com, the Embedded Systems Conferences, and EE Times' DesignLines. You may reach him at firstname.lastname@example.org.