Is anyone still using assembly language? You betcha! (Part 2)

As I mentioned in Part 1 of this two-part mini-series, odd ideas are popping in and out of my head all the time, and every now and then I share my ponderings with the readers of Programmable Logic DesignLine.
A few weeks ago, for example, I asked for your thoughts pertaining to the pros, cons, and reasons for using assembly language, and I was inundated with replies, some of which are provided below (with personal and company names removed to protect the innocent).
Max, I enjoyed reading your article. Interestingly, no one commented on microcode, which is "bare metal" programming. I learned about this in the mid-'80s while microcode programming an Adage graphics computer, then later an Aptec I/O computer. While studying VAX/VMS internals I learned that the VAX assembly language is a "higher-level language" for the VAX microcode.
Editor's [Max's] Note: Interestingly enough I was writing some microcode for a micro-programmed machine just a few weeks ago (it made my head hurt).
Max, thanks for an excellent article. As one who has participated in various stages of software development (from programming in higher-level languages, to programming in assembly, to using other people's application programs), I have had an opportunity to see a wide range of situations.
It has become my personal conviction that I don't want anyone doing programming for me who is not competent in assembly. Not that I believe coding exclusively in assembler is desirable – the one undoubted aspect of such coding is that it is extremely time-consuming.
However, I have seen all too many programs which suffered from various "bugs" (usually design flaws) that were intractable to diagnose because the programmer or team lacked a working ability to operate at the lowest code level. Unless one has a "feel" for the actual machine-level execution of the underlying code, various subtle problems (such as race conditions and timing issues) get overlooked.
Editor's [Max's] Note: This is a really good point. For myself, I think that folks who have coded at the assembly level – especially on small-and-slow microprocessors – understand the concepts of writing "efficient" code in terms of its memory footprint and its operation. This underlying understanding serves them well when it comes to creating programs in higher level languages. I'm not a programmer by any stretch of the imagination, but I "dabble" as required. I've seen some code written by professionals that is so patently inefficient that I stagger back in shock and horror :-)
Max, nice article in EE Times. I found your reference to Python amusing – because I currently work in Python, C, C++, and assembler.
Some assembler examples:
- Block XOR, for use in RAID. Keeps the load/store pipeline maximally busy, far better than a compiler could ever hope to do.
- L2 cache flush, on a machine with a rather unreasonable control interface to the L2 cache and a very tight performance spec. It flushes a half megabyte cache in under a millisecond. That's an example of the "total control" thing – it would be impossible to have that code work correctly if you tried to have the compiler do it. In fact, that's true even in the older, slower versions; the specifics on the sequence of loads and stores required to do the job meant that you had to do it by hand.
In both cases, I've used internal bus tracing hardware as part of the debug (for example #1, a cycle-accurate simulator was also needed).
Editor's [Max's] Note: I replied to this message (I respond to everyone) pointing out that there was a much bigger version of the EE Times piece on Programmable Logic DesignLine in the form of Part 1 of this article. A little while later I received the following:
OK, here are some additional comments based on reading the article/feedback. . .
I completely agree with the "don't use it unless you need to" comment. The corollary is "be prepared to recognize that you need it and bite the bullet". Also: if you need it, then you MUST give the job to someone skilled enough that he/she knows – or WILL learn – ALL the relevant machine details.
If you use assembler for performance, you MUST know in detail how the machine pipeline works and what instruction timing looks like. If you have a quad-issue machine and you don't know enough to explain, in full, how that pipeline handles instructions, you don't know enough to do assembly language coding for that machine.
Similarly, if you use assembler because of internal magic that the compiler can't handle (instruction sequencing rules such as in my L2 cache flush, for example) again you MUST know in detail what all those rules are.
Early in my career I worked on two operating systems that were entirely written in assembler – CDC's NOS (along with PLATO) and DEC's RSTS/E operating system for the PDP-11.
Re the comment about inlining [Editor's Note: This refers to a comment in Part 1] that's inline assembler he's talking about. That comment doesn't apply to GCC – it's not just the ADI version that does this right, it's every GCC. Little snippets of assembly code are 100% painless and clean in GCC. Register handling and all that just works.
By the way, in our current product there are only 3 or 4 assembly language modules, maybe a thousand lines. That's under 1 percent of the total line count; the rest is in C or C++. The assembly modules are that way by necessity: (1) intensely performance sensitive code that is way better (much more than a factor of two) than what the compiler delivers, (2) hardware-specific code that has constraints the compiler can't handle, (3) stuff like interrupt dispatch and context switch that involves operations you can't describe in a high level language, like saving and restoring register state or switching stacks.
Hi Max, I appreciate how your assembler/C discussion proceeded without a "language war". I use only assembler in my projects, but would enjoy a better "notation" for assembler. I wonder why the industry skipped the "higher-level-assembler" approach. It seems assemblers never made it past macros.
A few years back I wrote an assembler for the 65C02 and the ST62E10 using a Pascal/Modula-like notation. The listing created would allow you to see exactly how the higher level constructs turned out. By having better notation, the assembler code cleaned up really well, since all subroutines could be made into a Procedure and called like:
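The example that followed appears to have been lost from the original letter; a hypothetical sketch of what such a Pascal/Modula-flavored assembler notation might look like (this syntax is my invention, not the reader's actual tool – each statement is still intended to map one-for-one onto 65C02 opcodes):

```
Procedure SendByte(Value);
Begin
    STA  UART_DATA       ; still raw opcodes inside the procedure body
End;

{ and at the call site: }
SendByte($41);
```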
This made things easier to understand and maintain. And most importantly, assembly was the center of this universe with no compromises!
Max, I enjoyed the assembly-code discussions! As a hardware designer, my primary reason for using assembly code was hinted at but not explicitly stated: Unlike compiled code, the execution time of assembly code can be estimated from the code and the hardware design data without actually running the code. This is particularly true in hard-real-time code such as that used for digital signal processing.
From the processor's datasheet, I can make a reliable conservative estimate of the execution time of an assembly-language algorithm. This assures me that the processor will be fast enough for the application before I design any hardware. I then design the hardware knowing that the hardware speed is sufficient for an assembly-language program.
In contrast, the standard vendor-recommended approach is to choose a processor, design the hardware, buy prototypes, buy software tools, code in detail, debug the code, and so on. Only after all that investment is spent do you measure the code speed to see whether it is fast enough.