Miro Samek

President

Dr. Miro Samek is the creator of the open source QP active object frameworks and the free QM graphical modeling tool. His practical books about UML state machines, active objects (actors), and event-driven frameworks for embedded systems are among the most popular on the market. Miro has also published dozens of technical articles for Embedded Systems Design and Dr. Dobb's Journal, as well as a column for C/C++ Users Journal. He is the founder of Quantum Leaps (state-machine.com), an open source company dedicated to bringing quantum leaps of innovation to embedded systems programming by making software and tools that enable widespread adoption of event-driven active object frameworks, hierarchical state machines (UML statecharts), design by contract, rapid prototyping, modeling, and automatic code generation. Miro blogs at embeddedgurus.com/state-space/.

Miro Samek's contributions

Comments
    • It seems to me that most of the comments so far corroborate the observation made in the article that, unfortunately, "most engineers have their own personal and often idiosyncratic ways of modeling problems, that they resist the more sophisticated and standardized approaches and are stuck to hands-on C programming". I believe this is mostly because most developers do not really know what is already available out there, what kind of code can be generated, and how to combine the generated code with hand-crafted code. The truth of the matter is that graphical modeling and code generation are mature technologies, and it is really hard to beat the generated code with any homegrown implementation. For example, it is very hard to come up with a good, efficient, traceable, and *correct* implementation of hierarchical state machines (UML statecharts). Of course, graphical modeling and automatic code generation are not silver bullets, because you still need to *think* rather hard about your problem. But your problem changes from 15 levels of nested if-else "spaghetti" to thinking about events, states, transitions, and guard conditions. In most cases, you can still program in C, but you code only the actions executed upon entry to and exit from states and upon transitions. What you do *not* need to code is the "housekeeping" code of your (hierarchical) state machines, because the tool generates it for you based on the diagram(s) you created. And the portion of such "housekeeping" code is significant, typically 50-70%. Additionally, there is a lot of code you reuse from the event-driven framework surrounding your state machines. All in all, the productivity gains on offer here are so significant that few software developers can afford to miss out. And that's why, I think, Michael Barr predicts that "tools that are able to reliably generate those millions of lines of C code automatically for us, based on system specifications, will ultimately take over."
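The split between hand-coded actions and generated "housekeeping" can be sketched in plain C. This is a minimal hand-written illustration, not actual QM tool output; the Player type, signals, and function names are all hypothetical:

```c
#include <stdio.h>

typedef enum { EVT_PLAY, EVT_STOP } Signal;           /* hypothetical events */
typedef enum { STATE_IDLE, STATE_PLAYING } State;     /* hypothetical states */

typedef struct { State state; } Player;

/* --- the only code the developer writes: the actions --- */
static void Player_entryPlaying(Player *me) { (void)me; puts("start motor"); }
static void Player_exitPlaying(Player *me)  { (void)me; puts("stop motor"); }

/* --- the "housekeeping" a tool would generate from the diagram --- */
void Player_dispatch(Player *me, Signal sig) {
    switch (me->state) {
    case STATE_IDLE:
        if (sig == EVT_PLAY) {
            me->state = STATE_PLAYING;
            Player_entryPlaying(me);  /* entry action of the target state */
        }
        break;
    case STATE_PLAYING:
        if (sig == EVT_STOP) {
            Player_exitPlaying(me);   /* exit action of the source state */
            me->state = STATE_IDLE;
        }
        break;
    }
}
```

Even in this two-state toy, the dispatch logic outnumbers the action code; with state hierarchy, guards, and entry/exit ordering, the generated share only grows.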

    • No. If anything, the C implementation of virtual functions might actually increase the RAM footprint. It depends on where the virtual tables are allocated. It seems that the method of "copying and overriding" the virtual tables that Dan advocates would require allocating them in RAM. In contrast, the C++ compiler can (and typically does) allocate virtual tables in ROM, because they are known at compile time.
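The two allocation strategies can be contrasted in a short sketch. The Shape "class" and function names here are illustrative, not from Dan's article:

```c
typedef struct Shape Shape;
typedef struct {
    double (*area)(Shape const *me);      /* one virtual function */
} ShapeVtable;

struct Shape {
    ShapeVtable const *vptr;              /* may point into ROM or RAM */
    double w, h;
};

static double Rect_area(Shape const *me)   { return me->w * me->h; }
static double Square_area(Shape const *me) { return me->w * me->w; }

/* (1) C++-style: one const vtable per class, placeable in ROM (.rodata) */
static ShapeVtable const rectVtableRom = { &Rect_area };

void make_square(Shape *s) {
    /* (2) copy-and-override: the modified table must live in RAM
     * (.data/.bss), because it is written at run time */
    static ShapeVtable squareVtableRam;
    squareVtableRam = rectVtableRom;       /* copy the base table... */
    squareVtableRam.area = &Square_area;   /* ...then override an entry */
    s->vptr = &squareVtableRam;
}
```

The const table costs ROM only, while the copied table consumes RAM per overridden class, which is exactly the footprint difference in question.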

    • Disabling assertions in production code may still be the beaten-path approach, but it seems to me that one of the main purposes of Jack's article is to make us THINK about it for a minute. The often-quoted opinion on this matter comes from C.A.R. Hoare, who compared disabling assertions in the final product to using a lifebelt during practice but not bothering with it for the real thing. I personally find the comparison of assertions in software to fuses in electric circuits quite compelling. Imagine buying a brand-new car and, just before driving it on the street for the first time, replacing all the fuses with paper clips. Disabling assertions in production code is just as ridiculous. Of course, the assertion handler in production code must be different from, and even more carefully designed than, the one used during debugging. But for testing, leaving the assertions in is actually simpler, because you "test what you fly and fly what you test".
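An always-on assertion with a replaceable handler can be sketched in a few lines. The names (ALWAYS_ASSERT, onAssert) are hypothetical, not from any real library, and the production behavior is only stubbed out:

```c
#include <stdio.h>

static char const *g_assertFile;   /* location of the last failure */
static int g_assertLine;

void onAssert(char const *file, int line) {
    /* Production sketch: on a real target this would drive outputs to a
     * fail-safe state and reset the MCU -- the software "fuse" trips
     * instead of letting the fault propagate. Here it only records
     * where the fuse blew. */
    g_assertFile = file;
    g_assertLine = line;
    fprintf(stderr, "assertion failed at %s:%d\n", file, line);
}

/* Unlike assert(), this macro is never compiled out by NDEBUG: */
#define ALWAYS_ASSERT(expr) \
    ((expr) ? (void)0 : onAssert(__FILE__, __LINE__))
```

The point is that only the handler changes between debug and production builds; the checks themselves stay in, so you test exactly the binary you ship.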

    • Software tracing tools aren't new. Some of the greybeards might remember the ScopeTools products originally developed by RTI in the 1990s (StethoScope, TraceScope, ProfileScope, etc.), which were then sold to Wind River Systems in 2005. However, even though the ScopeTools were really powerful and even more sophisticated than the ones Jack describes in this article, they somehow went extinct over the years. I'm not quite sure why, but perhaps because maintaining the tracing instrumentation manually was too big a pain for programmers in the long run. I realize that some tools (e.g., uC/Probe) can monitor any variable in the target, even without instrumenting the target code. But such "automatic" monitoring is necessarily limited to polling the variable at a certain interval. The value of such tracing for debugging is limited, because Murphy's law will make sure that the bug occurs between the polling intervals. So, in the end, tracing the system *reliably* still requires manually instrumenting the application code. Granted, an RTOS can be pre-instrumented, but an RTOS "knows" only about tasks, semaphores, and other such low-level mechanisms; it does not know anything specific about the application. This hasn't changed in the 20 years since the days of the ScopeTools. I think we need a game changer in software architecture for software tracing to really succeed; otherwise we are "doing the same thing over and over again and expecting different results", which is one definition of insanity. One development that I see as promising is to go beyond the RTOS to a framework. A framework is an "application skeleton" that can be pre-instrumented, so it can report many more interesting occurrences than an RTOS can. For example, a framework based on state machines can report all activities inside the application's state machines, such as transitions, entry to and exit from states, etc.
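The idea of framework-level pre-instrumentation can be sketched as follows. The trace record format, IDs, and function names are all illustrative assumptions, not taken from ScopeTools, uC/Probe, or any shipping framework:

```c
/* A trace ring buffer the framework writes into on the target; a host
 * tool would drain and decode it. */
typedef struct { unsigned char id; unsigned short data; } TraceRec;

enum { TRC_STATE_ENTRY = 1, TRC_STATE_EXIT, TRC_TRAN };  /* record kinds */

#define TRACE_BUF_LEN 64u
static TraceRec traceBuf[TRACE_BUF_LEN];
static unsigned traceHead;

static void trace(unsigned char id, unsigned short data) {
    traceBuf[traceHead].id = id;                     /* record occurrence */
    traceBuf[traceHead].data = data;
    traceHead = (traceHead + 1u) % TRACE_BUF_LEN;    /* wrap around */
}

/* The framework's own dispatch calls this for every state machine it
 * runs, so the application code needs no manual trace calls at all: */
void framework_enterState(unsigned short stateId) {
    trace(TRC_STATE_ENTRY, stateId);
    /* ... then invoke the state's entry action ... */
}
```

Because the framework, not the application programmer, emits the records, the instrumentation never goes stale -- which is exactly the maintenance burden that arguably killed the manual approach.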

    • Frankly, I find the title of this article overpromising and outright misleading. I've been waiting for the second part to see where the author is going with this, but so far the articles have very little to do with object-oriented programming. The author has already devoted a second article to very basic module-scope encapsulation, which is limited to a single instance of some "object". Object-oriented programming is about managing an open-ended number of objects (instances of a class). For anybody really interested in object-oriented programming in C, I would highly recommend the recent series of excellent articles by Dan Saks on the subject (e.g., http://www.embedded.com/electronics-blogs/programming-pointers/4401463/Initializing-derived-polymorphic-objects). Miro Samek state-machine.com
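The distinction can be shown in a few lines of C. The Counter "class" below is a hypothetical example: the struct carries the attributes and each "method" takes a "me" pointer, so any number of instances can coexist -- unlike file-scope static data, which hard-codes exactly one:

```c
typedef struct { int count; } Counter;   /* attributes of the class */

/* "methods": ordinary functions taking the instance pointer "me" */
void Counter_ctor(Counter *me, int start) { me->count = start; }
void Counter_inc(Counter *me)             { ++me->count; }
int  Counter_get(Counter const *me)       { return me->count; }
```

Each Counter the caller allocates is an independent object, which is the minimum bar for calling a C idiom "object-oriented".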

    • I think you might be confusing the semantics of "const *" with "* const", and perhaps also with "const * const". The "me" pointer in C corresponds directly to the "this" pointer in C++. In the C++ Standard, the "this" pointer is implicitly declared "* const" (a constant pointer), because it cannot be changed. The declaration of "me" merely follows the C++ Standard in this respect. But please note that while the "me" pointer cannot be changed inside a class method, the object it points to can. If you want to prevent changes to the object as well, you can use the "const * const" declaration: void Shape_perimeter(Shape const * const me);
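The const placements can be put side by side on a hypothetical Shape struct (illustrative, not the article's actual class):

```c
typedef struct { double w, h; } Shape;

void Shape_scale(Shape *const me, double k) {
    /* "* const": the pointer is fixed (like C++ "this"), but the
     * pointed-to object is writable */
    me->w *= k;
    me->h *= k;
    /* me = 0; */        /* error: cannot assign to a const pointer */
}

double Shape_perimeter(Shape const *const me) {
    /* "const * const": neither the pointer nor the object may change */
    /* me->w = 1.0; */   /* error: object is read-only here */
    return 2.0 * (me->w + me->h);
}
```

Reading right to left makes it mechanical: the const after the * binds to the pointer itself, the const before the * binds to the pointed-to Shape.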