David Brown's contributions

Comments
    • It is clear the author is a bit mixed up about C and C++, as well as the standards versions. By "If you are using an older version of C11", he means "an older version of C++". And templates and exceptions are not exactly new features of C++11! Regarding templates, it is a common misconception that they necessarily mean larger code in C++. That depends on how they are defined and used. Inlining, constant propagation, common code merging, and various other compiler techniques can mean code is shorter with templates - especially compared to the alternative of trying to write "generic" functions. And while there are good reasons for choosing not to use exceptions, especially in embedded programming, run-time speed is not one of them. Enabling exceptions usually has little or no impact on the speed of the code, and exception-based error handling is often faster than error returns or similar alternative techniques. However, it can sometimes mean a substantial increase in code size for the stack unwind tables.

    • If you are writing floating point code for embedded systems, be /very/ careful about your floating point literals. If you write 5.0, then you have a double-precision "double". If you want a single-precision "float", then write 5.0f. On a microcontroller with software floating point, double-precision operations are typically about 3 or 4 times slower than single-precision. If the microcontroller has 32-bit hardware floating point (like a Cortex M4F), the difference is a factor of several hundred. So don't write "float y; ...; y /= 2.0;". You might find your compiler bumps this up to an operation on doubles and then converts back down to singles. Write "y /= 2.0f", or even better, just write "y /= 2;". That is the natural way to write it. And if your compiler supports it, don't forget warnings to catch mistakes like this: -Wfloat-equal -Wfloat-conversion -Wdouble-promotion
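      For instance, the difference is a single character (a minimal sketch, with a hypothetical function name):

          float scale(float y) {
              /* y /= 2.0;  would promote y to double, divide, then
                 truncate back to float - slow without a double FPU */
              y /= 2.0f;    /* stays single-precision throughout */
              return y;     /* y /= 2; is equally safe, and more natural */
          }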

    • As Matthew says, use types from stdint.h. Don't make your own typedefs for size-specific types (unless you are stuck with tools from last century). Of course, use typedefs freely for making your own types for other purposes - just don't use them to duplicate standard types. And if you want your code to be portable across a range of target architectures while being optimal, go beyond the basic types like uint32_t. For the example above, comparing 8-bit and 32-bit code, the correct type to use is "int_fast8_t". This says "give me a signed integer, at least 8 bits in size, whatever works fastest on this platform". On an 8-bit AVR, it will be 8-bit in size. On a 32-bit ARM, it will be 32-bit. On both platforms you get the best results. Don't /ever/ use "char" as an arithmetic type! And the article is incorrect in saying that signed types are more expensive than unsigned types - if anything, it is the other way around. A C compiler can ignore the possibility of overflow for signed types, which gives it more opportunities for optimisation, not fewer. (It is possible that IAR's compilers are different here and don't optimise signed types as freely as the C standards allow.) Sometimes there can be particular operations that are faster with unsigned types than signed types on particular cores, however. And unsigned types should always be used for bitwise operations.
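      A minimal sketch of the idea (hypothetical function, assuming a C99 compiler):

          #include <stdint.h>

          /* The loop counter is 8-bit on an AVR and 32-bit on an ARM -
             whichever is fastest - with no change to the source. */
          int32_t sum_table(const int32_t table[], int_fast8_t n) {
              int32_t sum = 0;
              for (int_fast8_t i = 0; i < n; i++) {
                  sum += table[i];
              }
              return sum;
          }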

    • Your first example will fail in most cases, because you have not declared "flag" to be "volatile" (or alternatively, you have not used volatile accesses on it in the main loop). I expect you will cover "volatile" in detail in later parts, but it should be in your examples here too. The idea that you should have a "default" statement in a switch to "help recover from an undefined state" is laughable. Write your code correctly so that it will not /have/ undefined states. If you have bugs in the code (or perhaps hardware problems), then a default statement is not going to help significantly - it just means you have added more code and therefore more complexity, on a code path that you almost certainly will never test. "Default" switch clauses are there for when you want a general case rather than just specific ones - never as a "just in case something goes wrong".
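      For reference, a minimal sketch of what the corrected example might look like (hypothetical names):

          #include <stdbool.h>

          volatile bool flag = false;    /* shared with the interrupt handler */

          void my_isr(void) {
              flag = true;               /* set in interrupt context */
          }

          void wait_for_flag(void) {
              while (!flag) {
                  /* "volatile" forces a fresh read of flag on each pass;
                     without it, the compiler may loop on a cached value */
              }
          }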

    • A smaller RTOS with fewer cache misses will run faster and more consistently. But contrary to what some people here have been saying, "real-time" is /not/ about having consistent or predictable times for anything. It is about having guarantees on deadlines. If a task has to be completed within 100 ms, then it doesn't matter if it is done in 1 us or 99.9 ms, or that cycles vary over this whole range - it is correct and real-time. Having predictable and consistent timings means you can meet tighter deadlines, and you can use a greater proportion of your processing power for real-time tasks. But on a typical Linux system, you have lots of non-real-time tasks too (if not, you would be better off with a small dedicated system rather than Linux). So you give the real-time tasks real-time scheduling so that they get the time they need, when they need it - and let everything else run when it gets the chance. Regarding FreeRTOS, you /are/ mistaken. It is totally unrelated to the Linux kernel, and targets a completely different size of system.

    • Most "big name" toolchain vendors can provide you with older versions of their tools without too much fuss or extra cost (though typically without much support). But that's also a good reason to pick open source toolchains - then you have the source for the tools, just as you have the source for the application itself.

    • A free market only works when there is significant competition and alternative suppliers, and when there is clear knowledge about the products and suppliers. Do you know which scope manufacturers are going to support their models in four years' time? Do you have a choice between different scopes with roughly similar characteristics and price, but differentiated in their expected support time? If not, then a "free market" does not exist for supported scopes.

    • The scaling of electronics does not work like that. With smaller feature width on the chips, you get more features and more processing power for the same die space and the same money - but that does not translate into saying you can get the old features for less money. Chips cost money to design, produce, test and distribute - there are minimum prices. I'm guessing the bottom end is something like 20-30 cents. The 4-bit market is close to dead - even the simplest rice cookers will use 8-bit devices these days. As the price (and size and power requirements) of 8-bitters has come down to within a few cents of 4-bit devices, there is no longer a good reason for a "rice cooker" company to use them - they need the 8-bit chips for their "high-end" rice cookers, and dropping the 4-bitters means less inventory and one less development team. The same thing will happen with 8-bitters pushed out by 20-30 cent 32-bit devices, though it will take a bit of time. (There are no 16-bit devices of significance left, except the msp430 - which is best grouped with 8-bit devices.)

    • I learned most of the programming languages and paradigms I mentioned (though not Forth or Python) in a 3-year university course, of which the "computing" part was only about 30% - the rest was maths. And most of the "computing" didn't involve real programming languages at all - it was theoretical. The point is, you don't need to be fluent in all these languages, or learn the more advanced features. You need to understand the principles, not the details, and that can be done in a much shorter time. Once you have learned enough about the principles of programming, you can pick up the basics of a new language in a couple of days, and be confidently using it for development well within a week.

    • Your principle is right, but you are not going far enough. All these languages are essentially the same - they are all imperative low-level single-thread programming languages, and you can translate easily enough between them. Sure, some things are easier to express in one language than another one, but there are no fundamental differences. It would be far better to teach students a variety of different programming paradigms - different ways of /thinking/ about their programming. Teach them functional programming so that they learn about functions, states, types, result-oriented coding (i.e., say what you want to know, not how to calculate it), and provably correct programming. Teach them Forth for the ultimate divide and conquer - and to learn that sometimes it is best to think backwards. Teach them occam or CSP to learn how multi-threading /really/ works. Teach them a high-level language like Python, so they can learn when high-level design is more important than low-level premature micro-optimisations. Teach them assembly, so that they learn how the cpu actually works and thinks. /Then/ teach them C, C++, Java, etc., as a practical way of getting things done.

    • Producing a correct C++ implementation is hard, no doubt about that. And it will likely have more bugs than the C parts - though most such bugs will only show up in very obscure code, rather than in common real-world code. But if you try to divide up the costs involved in making a full toolchain like CW, then the C++-specific support is only one part - and far less than the C support (most C++ boils down to the same internal representation as you already have for C; there have even been C++ compilers where the C++ part was just a pre-processor for the C compiler). Thus the cost to the user for C++ compilation is hugely out of proportion to the development cost. I believe therefore that C++ support should follow the same pricing model as C does in many commercial toolchains - code-limited for cheap licenses, with stepwise increases in code size, support and toolchain features towards the expensive licenses. Note that some toolchain vendors, such as CodeSourcery, do this already.

    • I think one of the things holding back C++ in the embedded world is the tools. The two biggest issues are commercial toolchains and debuggers (open source toolchains, with gcc at the head, are fine for C++). Commercial toolchain suppliers seem to view C++ as a hugely advanced and expensive option - despite offering only half-hearted support for it. One example I have looked at recently is Freescale's CodeWarrior. For C programming, CW is turning into a very nice tool - good compiler, good IDE, and usable libraries and "wizards". And the price is good - the free code-limited version is sized so that you can use it for a lot of real-world projects. Then you have pay-for options for more code size, support, and advanced features such as kernel-aware debugging, debuggable libraries, etc. And where does C++ fit in? It is only available in the most expensive version. It's absurd, and a pointless limitation. If the commercial toolchain developers actually put some added value here - say, C++ versions of their libraries and device header files - it might be understandable. But as it stands, I get the impression that they like to advertise C++ but would prefer people not to use it. The other big problem with C++ is in debugging - many debuggers are simply not up to the task when you try to deal with things like breakpoints in methods, disassembly of overloaded functions, and comprehending names generated from templates.

    • You are right that C is not suitable for programs with millions of lines of code. But what you are missing is that /no/ programming language - real or imagined - is suitable for programs that size. The issue is that /programmers/ are not suitable for working with millions of lines of code. There are two ways to handle very large projects. One is to divide it very clearly into independent projects. And I mean /independent/ projects - not just different groups doing different libraries all designed to work together. The other method is to use programming languages which do far more in less code. Fewer lines of code means a more manageable project. Of course, the best results usually come from combining these. Work with multiple subprojects, and use whatever language makes sense for each particular project.

    • 1. No code that is correct can be made incorrect by adding a "volatile". But it can be made bigger and slower, and less clear.
      2. Exactly correct. C has no proper support for controlled accesses. Similarly, it does not have support for things like memory barriers, cache or pipeline controls, etc. You have to use toolchain-specific enhancements, or assembly (possibly inline). Explicit volatile accesses using typecasting is just the best that can be done using the limited tools available.
      3. C++ doesn't let you do anything that C can't do (in this area), but it may let you do it a little neater and clearer.
      4. Using explicit volatile accesses rather than declaring data as volatile does make it easier to forget them. But equally, putting volatile in the declarations can make you think that you've done all you need to do, and give a false sense of security. Especially for bigger and more advanced processors, "volatile" is never enough - so you need to be in the habit of understanding your accesses. Having said that, I fully agree with putting "volatile" in the declarations of data for which accesses always need to be volatile, such as many hardware registers. I hope now you will /think/ about what each access means when you use them, but that doesn't mean it is necessary to write them all out explicitly if it does not add to the clarity and understanding of the code.

    • A lockless queue like that also illustrates why "volatile" is often not enough - simply declaring data as "volatile" quickly leads to mistaken assumptions, while explicit access control can make things more obvious. If your queue writer code puts data into the queue with non-volatile writes, then updates the head with a volatile write, many programmers will assume that the data writes are completed before the volatile write to "head". After all, volatile accesses can't be re-ordered, right? Wrong. Volatile accesses /can/ be reordered with respect to non-volatile accesses, and the compiler can do the data writes after the volatile head write. You need memory barriers to get the right effect, or you need explicit volatile writes on the data too. So why not just declare all the data volatile? Because that can often make the code much bigger and slower - and the code using volatiles is often low-level, time-critical code.
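      A minimal single-producer sketch of the trap and one fix (hypothetical names; the barrier uses gcc inline assembly syntax, and processors with write buffers or multiple cores need a hardware barrier as well):

          #include <stdint.h>

          #define QSIZE 16u
          static uint8_t buffer[QSIZE];       /* the data: not volatile */
          static volatile uint8_t head;       /* written only by the producer */

          void queue_put(uint8_t x) {
              buffer[head % QSIZE] = x;       /* non-volatile write: may legally
                                                 be moved past the head update */
              asm volatile("" ::: "memory");  /* compiler barrier prevents that */
              head++;                         /* volatile write publishes the data */
          }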

    • Let me first give you a pointer to one of Linus Torvalds' rants against "volatile": http://lwn.net/Articles/233482/ Obviously he is talking specifically about the Linux kernel, not embedded programming, but I believe it still mostly applies here. Most people - including the authors of most books about embedded programming - think of "volatile" as applying to the data. And in many cases, it is very convenient to declare the data as "volatile", meaning that all accesses to it are volatile. But I believe that thinking that way limits your understanding of "volatile" and of controlling accesses. It works well enough for simple programs and simple processors - make all your hardware registers "volatile" and assume everything works okay. But it fails in complex systems, it fails with advanced processors, it fails when mixing volatile and non-volatile data, and it easily leads to inefficient code on faster processors. So how do you force a volatile access to data that is not declared volatile? You use a typecast:

          #define volatileAccess32(var) (*((volatile uint32_t *) &(var)))

          extern uint32_t vol;

          uint32_t foo(uint32_t x) {
              volatileAccess32(vol) = x;
              return volatileAccess32(vol);
          }

      The typecast is messy, so you wrap it in a macro, static inline function, or C++ template according to taste. There are lots of situations when you want different types of access. Maybe you've got a hardware register that you want to control with volatile writes, but are happy with non-volatile reads to get better code. Perhaps you've got a lockless queue with one context (thread, interrupt code, etc.) controlling "head" and the other controlling "tail". The process controlling "head" will not need volatile reads of "head", but it will need volatile writes, and it will need volatile reads from "tail".

    • Looking up "volatile" in a dictionary does not help - what is of interest is what the C standards say, and how real-world compilers implement them. C has no concept of volatile /data/. When you make a volatile access, you are telling the compiler that it should do exactly as many reads and writes as the source code says, in exactly the same order (with respect to other volatile accesses). You are /not/ telling the compiler that this data might change suddenly, or might be read by external hardware. You are telling it to access it exactly as stated in the source code. It is very common to have data that needs volatile accesses in one part of the code, and can use non-volatile accesses elsewhere. Or perhaps you need your writes to be volatile, but not reads. You get that control by being explicit in your accesses, not in your declarations. There are also many situations where "volatile" is not enough - processors with cache, re-ordering, multiple cores, etc., make it far more difficult to make sure that the access really happens. Get in the habit of making your accesses explicitly volatile in the code that uses it, and you will write clearer code that will work better when you move to more advanced processors.

    • Don't rely on your compiler doing any sort of direct translation just because you turned off optimization - the compiler is free to optimize code regardless of any switches you use.

    • The key to getting "volatile" right is to understand that there is no such thing as "volatile data" or a "volatile variable" (and certainly no "volatile const" data). It is /accesses/ that are volatile. Declaring data to be volatile is just a short-hand for saying that all accesses to that data are volatile.

    • Some compilers, such as newer gcc versions, will factor out "hot" and "cold" functions automatically and work with the linker to place them separately.
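      With gcc you can also mark functions manually, for instance (hypothetical function names):

          void fault_handler(void) __attribute__((cold));  /* rarely executed */
          void timer_tick(void)    __attribute__((hot));   /* keep fast and close */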

    • You misunderstand. First, it is quad-SPI - there are four data lines in parallel. So when it is running at 100 MHz, you get approximately 40 MB/s throughput. For comparison, a typical 60 ns 16-bit NOR flash gives you about 30 MB/s. Secondly, the key use here is for things like bootloaders, and then loading the program into RAM. On other processors - those with instruction caches combined with pre-fetching - executing directly from SPIFI is much more appropriate.

    • The fundamental idiocy here is the legal system in the US that allows decisions like this to be made by untrained and ignorant jury members. I know that the theory is that a jury from the public is unbiased, fair, and immune to corruption - but in reality for cases like this they are highly biased and incapable of reaching a fair, informed and appropriate decision.

    • People who still use function-like macros should learn about "static inline" (or inline C++ methods if you prefer) - you can define your functions with proper syntax, type checking, etc., and the compiled code will be as small and fast as you can get with macros.
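      A minimal comparison (hypothetical names):

          /* Macro version: no type checking, and MAX(i++, j) evaluates
             i++ twice. */
          #define MAX(a, b)  ((a) > (b) ? (a) : (b))

          /* static inline version: proper syntax and type checking, and
             a decent compiler generates exactly the same code. */
          static inline int max_int(int a, int b) {
              return (a > b) ? a : b;
          }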

    • Could you give an example of this? Common experience is that assembly needs complicated macros to generate different code depending on the parameters (or features of the parameters, such as whether or not they are constant). Decent C or C++ compilers have no problem generating optimal code for constant values - you just have to make sure the function definition is known at compile time (typically an inline function), and you are not crippling your compiler by not enabling optimisation.
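      To sketch what I mean on the C side (hypothetical function, compiled with optimisation enabled):

          #include <stdint.h>

          static inline uint32_t set_bit(uint32_t value, unsigned n) {
              return value | (UINT32_C(1) << n);
          }

          /* With a constant argument, a call such as set_bit(x, 5) folds
             to a single OR (or bit-set) instruction - no macros needed. */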

    • I don't have any problem with non-standard features where they are useful - you can't do embedded programming at all without using at least some non-standard features, and often they make the code significantly more efficient. But in this case, explicit padding is better than "packed" pragmas or attributes - being standard C is mostly just a bonus as far as I am concerned.

    • Don't use non-standard "packed" attributes or pragmas unless you have a very good reason for it. A better technique to control your packing is simply to add dummy bytes (or bits, or words, or whatever) to make your layout explicit. That keeps everything clear, and avoids any mistakes with alignments, etc. I agree entirely about using static assertions to check that you've got the structure right. You don't need checks on the individual offsets - if you've included explicit dummies, then it's enough to check the size or the offset of the last member. If that's correct, then so is everything else. If your compiler supports it, use a "-Wpadded" flag to check that you haven't missed any pad bytes.
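      A minimal sketch of the technique (hypothetical layout; _Static_assert assumes a C11 compiler):

          #include <stdint.h>

          typedef struct {
              uint8_t  command;
              uint8_t  pad0;       /* explicit dummy byte - no hidden padding */
              uint16_t length;
              uint32_t address;
          } Message;

          /* If the total size is right, every offset before it is too. */
          _Static_assert(sizeof(Message) == 8, "Message layout is wrong");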

    • Sometimes it might be nice to have abstract classes for something like timers - perhaps the microcontroller has two different types of timers, and you want to treat them the same way. That can be handled by having a distinction between logical timers (which can be abstract) and physical timers (which have a fixed structure). The logical timer class would have a pointer to a physical timer register structure, as you suggest. But this sort of thing has its costs. The aim here is to be able to write clear and concise C++ code with encapsulated syntax (so that you can write timerA.enable(), etc.), while still generating efficient code. A call to such an "enable" function should be inlined and implemented by a single bit operation - not a call to an external function with several layers of indirection. It's okay to pay the time and space costs for indirection when you really need it - but not when you don't.

    • You /can/ use a hacked new operator to let you put the "new" object at a specific place, and call the constructor automatically. But what does that give you, compared to an extern reference declaration and an explicit call to Init()? It gives you uglier code where it is much harder to see the logic, extra overhead in startup, and it means that every action on the object uses an extra pointer and layer of indirection. No thanks - Init() is the sensible way to initialise hardware devices like this.

    • It's fairly clear that you are not an electronics expert - you are simply regurgitating terms like "high frequency power supply noise" without an understanding of what that might mean, or what effect it might have on the music. (To give you a hint, from someone who /does/ know, the answer is zero effect, unless you have a very badly designed system.) HiFi equipment is tested with dynamic testing, not just static. And even if there /were/ these mythical "subtle second-order effects" that can only be heard by a human ear - don't you think that HiFi manufacturers include listening tests during development? You can be sure that high-end HiFi manufacturers use panels of /real/ expert listeners, rather than "Which HiFi" addicts, to help tune systems and identify any issues. Manufacturers use test CDs as part of their development and quality control. If there were combinations of sounds that emphasised particular problems, then you can be sure these would be used in testing. It is correct that there are differences in the sound between different CD players (though very little between high-end players). And it is correct that no CD player is absolutely perfect - there /are/ distortions, and there are effects dependent on the type of electronics used, the way it is designed, and some variation due to tolerances in the electronics. No one will argue any differently. But it is total and complete nonsense to say that vinyl has fewer distortions because it is "all analogue" and CD is digital.

    • You have a few correct points here, but are missing the main point. Yes, people's ears are sensitive to certain types of distortion and noise - but you don't get them with good digital playback. You get them with vinyl. When you listen to a CD, the problem is not some extra noise or distortion - it is that the familiar (to you) noise and distortion is missing. It's like tube amplifiers - fans will tell you they give a "warmer" sound than transistor amplifiers. In fact they give a less precise rendition than most mid or even low-end transistor amplifiers - there is more noise, and more distortion. In particular, they have significant second harmonic distortion that you don't get elsewhere - the "warmth" is that second harmonic. If you like your music to have this added noise and distortion, that's fine. I can appreciate that, and understand it. Just don't make nonsense claims about CDs and digital reproduction adding noise or being less accurate in some way. Oh, and the "minimal electronics" movement is purely about pandering to people who will pay for such "features". In the recording studio, the sound from the musician passes through perhaps 30 or 40 opamps before being digitised. A few more in the playback device would not make the slightest difference (assuming, of course, that the electronics is of good quality and design).

    • I think the differences of opinion are actually quite simple to explain. CD gives a more accurate representation of the original recorded sound than vinyl. Very few people have the training and naturally good hearing to be able to notice any digitally-induced noise - SACD or 96kHz/24-bit digital takes that noise below anything that humans could theoretically differentiate. The reason some people prefer vinyl (or tube amplifiers) is because they /like/ the noise. Humans are not comfortable with overly stark contrasts - we write on paper that is off-white, we put patterned wallpaper on our walls rather than pure colours, etc. If there is not enough background noise, things seem artificial and cartoonish. There is no point in having a technical argument about the quality of the reproduction of the sound - there is no doubt that CD is more accurate than vinyl. But equally there is no doubt that some people /prefer/ the sound of vinyl. It has nothing to do with CDs sounding "harsh" or "failing to preserve the continuity". It is just that some people /like/ the pops, crackles and hisses of vinyl - it feels familiar and comfortable to them. As another poster says, it's the same reason people sometimes prefer candlelight to a light bulb.

    • Exceptions are a way of hiding control flow and adding surprises to your code. If used carefully, so that they are caught and handled appropriately, they can be a useful mechanism. The unfortunate reality is that in most cases, programmers don't handle them properly. So enabling exception checking on array bounds just means that an out-of-bounds access leads to an unhandled exception and the death of your program. In any situation when you would be able to handle an out-of-bounds exception sensibly, you would also be able to check the bounds before the access - doing the check in advance is always better. Thus the only benefit of exceptions on array accesses is as a possible debugging aid. And even then it is almost certainly better to make your own array class that does the bounds-checking explicitly. gcc will warn you if it can spot out-of-bounds accesses at compile time, which is always a good check to enable.

    • Arrays in C typically do have dimensions and sizes, when the code is well-written. But C allows you to make a mess of your code and use arrays without specified sizes, or even to use pointers to access array data. You lose a lot of static error-checking when you do that, and typically also generate poorer code, but people still seem to think that C arrays and pointers are interchangeable.

    • What happens when the assert statement is triggered? Asserts /do/ alter the program flow and add to the complexity - if they did not, then they would not do anything! And all code must be checked and tested - are you able to force the error condition to check the assert? There are several possible outcomes - some are good, some are bad. If the assert trigger can be determined at compile-time, then a compile-time warning can be given - that's effectively static analysis, and is a good thing. Unfortunately, it is in the wrong place. To be useful, the static_assert should be in an inline function defined in the header, not in the function body. Triggered asserts can cause breakpoints or stops during a debugger session - that is definitely useful as a debugging aid. But if the assert is triggered at run-time in an active system, what can it do? Abort the program with an error message? That is generally either useless, or worse than useless. At best, log files might be useful in a post-mortem. Sometimes you /can/ do something useful with the error check, such as add a log entry and return a default value. But that is a specific reaction to the error condition, not just a brain-dead assert. What you say about Eiffel's pre- and post-conditions is exactly what I described about specifications of the function. The only difference is that Eiffel allows you to specify these in the language (and therefore do static analysis with them), while C does not.

    • There is nothing wrong with the function definition:

          int divide(int value1, int value2) {
              return value1 / value2;
          }

      The only thing that is wrong is the assumption people are making about its specification - i.e., what it is supposed to do. It is in fact a function that, when given two numbers, the second of which is non-zero, will return their quotient. It is incorrect to give a warning about a divide-by-zero error here, and it is wrong to use an assert or some sort of run-time check in the function. The place to do the checking is before the function is used - that is where a potential program error lies. The other major mistake here is to think that an "assert" is always a good thing. Sometimes it is - but often it is not. There are many situations where it is fairly harmless if a divide-by-zero returns an undefined value, and many situations where a failed assert causing a program abort would be the worst possible outcome. An important rule is never to check for error conditions unless you can do something useful with that information - otherwise you have just increased your program's complexity for no gain.
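      To sketch the point, the checking belongs at the call site (hypothetical names):

          int safe_divide(int value, int divisor, int fallback) {
              if (divisor != 0) {
                  return divide(value, divisor);   /* specification is met */
              }
              /* The real program logic for bad input lives here, where the
                 caller knows what handling the error actually means. */
              return fallback;
          }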

    • I agree with most of this article, but I think it goes a bit far with the "hide everything" aim. It is a worthy idea, but the unfortunate reality is that C (and C++) will not let you hide data and details without a cost. We are talking about embedded programming here - wasted code space costs money, and wasted run time costs power. And while layers of abstraction and data hiding help code quality and testing in some ways, they detract in other ways - they can make debugging much harder, and it can be difficult to follow the flow of information. It is common to slavishly follow the ideas that "all global data is bad", and that implementation details must be hidden at all times. The reality is that global data is sometimes the clearest and most efficient way to pass data around. And implementation details end up in header files if you want your code to be efficient. The simple answer here is to use comments or sectioning of the header file to make usage clear.

    • If one header file needs another header file, it should include it. Generally you should try to keep modules independent when that is possible. But if one header needs a type defined in another header, for example, then it should include it. Never rely on the application code having the extra headers or headers in a particular order.
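      A minimal sketch (hypothetical module):

          /* timer.h */
          #ifndef TIMER_H
          #define TIMER_H

          #include <stdint.h>   /* this header uses uint32_t, so it includes
                                   stdint.h itself rather than relying on the
                                   application to do so */

          void timer_start(uint32_t ticks);

          #endif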

    • The file handle is an abstract type - it is not a struct, but a pointer to a struct. You can handle these without knowing the contents. If all you have is an empty "struct" declaration, then you can only work with pointers to it, not with the type itself.
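      A minimal sketch of such an abstract type (hypothetical names):

          /* file.h - the struct is declared but never defined here, so
             callers can hold and pass handles, but cannot look inside. */
          struct file_s;
          typedef struct file_s *FileHandle;

          FileHandle file_open(const char *name);
          void file_close(FileHandle f);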

    • I think the purpose of listing 2 is as an example - sometimes a delayed initialisation is appropriate, though this example is too simple to show it. Personally, I find it odd to have "initialise variables before use" as a tip - it's a bit like a driver's handbook recommending you start the car's engine before driving off. Assert() is often a poor choice in embedded programming - you don't have any convenient way to show a message, and killing your program is seldom smart. You are correct that passing incorrect data to a function is a programming error, and the aim should be to avoid it in the first place, or catch it quickly if it happens. But assert() is no better (or worse) a way than returning an error value.

    • The compiler should tell you about using uninitialised variables, and the original programmer should have checked all the warnings - the reviewer should not have to deal with such elementary things. And beware of writing too many comments in code that will be reviewed. If you put a lot of comments in some complex code, the reviewer will believe the comments rather than reading and understanding the code.

    • Unnecessary globals are a bad idea. But the variables here may be file statics, which is normally perfectly reasonable. You've got to have your data /somewhere/. As to whether it is better to have separate variables, or put them in a structure, that depends on the program, the target processor, and the compiler - you can't make generalisations here. Of course, you are correct about idiotic comments. I don't see a need for any comments in that code sample.

    • You should certainly distinguish error codes if it is useful - but /only/ if it is useful. It is also not uncommon to have extra differentiation as part of development and debugging (but remember to test the shipping code, not just the debug builds!). But beware of adding extra features in case a customer later wants it - that means extra work for you doing the initial development, extra work for the testing (all your error cases must be tested), and extra risk due to the extra code. Are you paying for all that, or is the customer?

    • I've seen worse program organisation. I once had to maintain a program where there was /only/ a master include file - no other headers at all. Every extern declaration of data or functions was inside this file, in no particular order or relation to the module that defined it. Other joys of this program included filenames in the DOS convention (the compiler was DOS-based, so that's fair enough), but with a program-name prefix "PRG_" on every file - leaving 4 letters to uniquely identify each file. The programmer had the same attitude to variable and function names - none longer than 8 letters. On the very rare occasions when anything was commented, the comments were also abbreviated.

    • I agree that spaces in filenames and directory names are a daft idea. But so is limiting names to 8.3 letters, all capitals. Use sensible, meaningful names for files and directories, as long or as short as makes sense - just like for variables. As to all-caps for things like define'd macros or enum constants - yes, it's a convention. And that convention stretches back to K&R's original habits. But that doesn't make it any less a poor convention - writing in all caps is ugly and distracting, and provides no benefits to coding. Some people think it's a good idea because it makes it clear that you are using a macro - but /why/ is that useful? Why should it matter if a "function" is a real function, or a function-like macro? Why should it matter if an identifier is a macro or a variable? If it makes a significant difference to the code you are writing, then you would already know the answer. And if you really want to be able to see at a glance which identifiers are macros, then join the 21st century and get an editor with syntax highlighting. Conventions using all-caps are from the dark ages, when C was created with the single aim of letting K&R write operating system code using fewer keypresses than writing it in assembly (their motivation for developing C was their hatred for the DEC keyboards they had).

    • There are some useful tips here. But I don't entirely agree with your tips about errors. The most important thing to consider about errors is how you are going to handle them. The second step is to consider how you are going to test your handlers. There is no point in making error type enums and differentiating between error causes, unless this makes a difference in how you will handle the errors. If every error cause leads to a big red light going on, then it is better to have just a single error indicator - that means less extra code to write, test and maintain. The exception here is if you can make use of the cause to aid debugging or post-mortems. Sometimes there is nothing sensible you can do with an error - the sensible thing is then to do nothing. Obsessively checking for every theoretically possible error or unusual situation can make your code larger, at higher risk of bugs, and impossible to test properly. Prefer to write functions that don't return errors, rather than handle returned errors. It is very debatable whether having a "LAST" entry in an enum is a good idea. It is strange that it's been used here, given that the inspiration of the article is Ada and strong typing. When you have an enumerated type of errors (for example), then "LAST_ERROR" is not a valid error - it should not be included in the type. There is no good way to express the concept in C (nothing like Ada's 'Last type attribute), but putting it in the enum is wrong. It is better to make it a #define'd constant (or even its own little enum if you don't want to use #define), as sketched below. This means marginally more work when writing the enum definition, but gives you better type safety. Other than that, it was a very nice article.
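      A minimal sketch of keeping the count out of the type (hypothetical error names):

          typedef enum {
              ERR_NONE,
              ERR_TIMEOUT,
              ERR_CRC
          } Error;

          /* The count is not a valid Error, so it stays outside the enum. */
          #define ERROR_COUNT (ERR_CRC + 1)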

    • There are many reasons not to use a single "master" include file. It's okay to have a common include file for basic functionality that should always be available, such as including stdint.h, perhaps microcontroller-specific includes, and system configuration - every file needs these available. But outside that, you include the header for a module if you use that module - you don't pollute your namespace with useless names. It keeps your code clearer, easier to maintain, and more modular, and makes compilation faster.

    • I agree with the principle that consistent style is important. However, this particular style seems to have been created some 15 years ago and left untouched. The world has moved on, and it makes sense to have a style convention that takes advantage of improvements since the days of DOS and K&R C. Consistency is important, especially over long-term projects, but a style guide should be updated as necessary. Here are a few things I particularly disagree with:
      - Drop the DOS-crippled file and directory conventions. Use capital letters in names if you want - but not all-caps. There is no need for abbreviations in file names unless it's obvious what they mean - you can call a file "displayTables.c" if you want. And C files use ".c", not ".C".
      - Version control software should /not/ modify comment blocks - it should leave files untouched. Most modern VCS software follows that rule.
      - Making your header files work differently depending on when they are used (based on xxx_EXT macros) is such bad style it makes me cringe. I hadn't expected a professional author and developer - who relies on his reputation - to even suggest it. You declare globals as "extern" in a header, and define them in the matching C file - the compiler will tell you if you get something wrong.
      - It's been many years since stdint.h became available - fixed-size data types are called int8_t, etc. There is no need to have your own private convention, and certainly no need to shout about it (fix the broken capslock key).
      - And if your compiler won't accept // comments, get a better compiler.

    • This sort of arrangement is very common as a way to get atomic reads of data. You have to be sure that the looping is bounded, but that's clear enough for such a timer or counter. It is particularly useful when dealing with hardware counters or timers, where you cannot use any sort of locks (not even the simple "global interrupt disable" solution).
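      A minimal sketch of the pattern (hypothetical register names):

          #include <stdint.h>

          extern volatile uint16_t TIMER_HI;   /* hardware counter, high word */
          extern volatile uint16_t TIMER_LO;   /* hardware counter, low word */

          uint32_t read_timer(void) {
              uint16_t hi, lo;
              do {
                  hi = TIMER_HI;
                  lo = TIMER_LO;
              } while (hi != TIMER_HI);   /* bounded: retries only if the high
                                             word rolled over during the read */
              return ((uint32_t)hi << 16) | lo;
          }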

    • AVR Studio 5 beta is available. Atmel have supported gcc for the AVR for several years (though most of the work is done outside Atmel). They have had a gcc port for the AVR32 since they first launched that architecture.

    • I disagree with much of this article. Intrinsic functions are often a good choice if the compiler happens to define one - typically, they are available as wrappers for single assembly instructions like CLZ. If you need more than one assembly instruction, it is unlikely that there is a matching intrinsic. I have also found cases where intrinsic functions were implemented inefficiently by the compiler - the compiler did a better job when given inline assembly than when using intrinsics. It is rare that a long function is best written in assembly - compilers will often do a better job than an assembly programmer because they can (amongst other things) track register and stack usage for optimisations that would be too time-consuming to do by hand. So in most cases, you only need small sections of assembly - perhaps between 1 and 4 instructions. You can't use intrinsics if they don't exist for the code you want. External assembly modules mean a lot of extra effort, and they mean function call overheads - a big waste of time and space. And because the code is a black box as far as the compiler is concerned, it can't use IPA or global optimisation to improve the code. Inline assembly fixes these issues. I don't know about Green Hills compilers, but gcc will happily inline and optimise inline assembly code, and will optimise the C code around it. The "correct" way to write the count_leading_zeros function with gcc is:

          static inline int count_leading_zeros(uint32_t src) {
              int ret;
              asm(" clz %[ret], %[src] " : [ret] "=r" (ret) : [src] "r" (src));
              return ret;
          }

      It's true that there is a learning curve for the syntax - but there is a learning curve for writing assembly modules too. And while it's also true that the documentation of the syntax in the gcc manuals could be clearer, there are endless examples, tutorials and resources on the web.

    • MS have tried working with different architectures before on Windows. The original NT worked on x86, MIPS, PPC and Alpha. But one by one, these architectures were dropped. There were several reasons for this: 1. MS's own software (windows, office, etc.) was written specifically for the x86 - making the code portable was a huge effort. 2. Third-party developers wrote code specifically for the x86, not portable code. 3. MS didn't want to pay the costs of making and supporting Windows on these architectures - they made the cpu manufacturers pay. This was a large cost for the manufacturers, and made competing with Intel's processors even harder. In the end, these companies found it was not worth the cost to pay MS, so MS dropped them. In the Linux world (and also for most embedded OS's), portability is standard. The kernel supports several dozen cpu architectures, and most software is written for portability. The cpu architecture is almost incidental in a Linux system. If MS are going to get Win8 running on ARM, they will have a lot of work to do. And they will have to do it themselves - ARM won't pay them to do it. Much of the work is going to be in getting third-party developers to support it. For developers using dotnet, it will not be too hard - but most important software runs native on the x86. The result will be that the Win8 ARM tablets will look like large Win7 phones, without the phone. They will work for browsing, email, MS office, and a few games. They won't work with any other windows programs the user might have. And if you can't run windows programs, why bother with windows? Do you buy a Win8 ARM tablet so that you can run MS Office, MineSweeper and Solitaire, or do you buy the cheaper one with Linux, OpenOffice, and thousands of other apps that can be installed from a simple dialog box?

    • You have certainly quoted correctly from Freescale's website. However, Freescale's website here is wrong. Someone there has got their Coldfire cores badly mixed up. Look at Wikipedia's article on the ColdFire: http://en.wikipedia.org/wiki/Coldfire It is not very detailed, but at least it's got its history correct! It may be that IPextreme has a new version of the V4 core which shares some code with the V1 core. The V1 core was the first ColdFire core that "mere mortals" could license and use in FPGAs or SoCs. Historically, however, the ColdFire cores have always been synthesisable, and have been used inside SoCs from before the term SoC was invented. It's just that you had to be the size of a major American car manufacturer even to hear about them.

    • When you are writing time or space critical code, the important thing is to know your compiler well. Write simple test cases, and look at the generated assembly code - timings are often affected by other things. Different compilers are going to produce different results, and the results will depend on the flags used and the target device. Good code for one device is not necessarily good code for another device. For example, if you are compiling this for an AVR, you want to ensure that pointer indirection is eliminated because it is costly in time and space. But if you are compiling for an ARM, pointers are good - access through a pointer plus offset is cheaper than absolute addressing. Ideally, you want to use a good compiler and let the tools pick the best code. Your job as programmer is to give the compiler as much information as possible, with as clear intentions as possible (if you know something is constant, call it "const". If it is static to a file, call it "static"). One thing that will typically make a very big difference with code like this is to ensure that the compiler can inline the functions. It has to see the function definitions through headers or link-time optimisation. Then the compiler will typically eliminate pointer indirection automatically - unless, of course, it feels pointers give better code.

    • I'm not sure you meant to write "The ColdFire V4 core is a simplified version of the ColdFire V1 core". It's the V1 core that is simplified, not the V4 core - especially since the V1 core is newer than the original V2, V3 and V4 cores.

    • There is a lot more to the question than just the poll numbers. The number of people who don't know that the Earth goes around the Sun in any given country is not a big issue - there are ignorant people everywhere, and there are plenty of people who choose to believe religious teachings rather than scientific results. That's OK by me. The real problem is when people take religious beliefs and claim them as science, or claim that they are backed up by science. It is this attitude that is very much an American problem (though it has spread a little to Europe). It is only in the USA that you get groups like "Galileo Was Wrong" that claim scientific backing for their nonsense. It is only in the USA that quacks and crackpots call themselves scientists and write stuff like this that cannot remotely be called "science". It is also almost only in the USA that you get so many "fake scientists" with qualifications (sometimes earned, sometimes bought, sometimes totally fabricated) that support this sort of thing. It is almost always about money - it's easier to make money writing and selling "intelligent design" books than doing real science. It is also sometimes about trying to enforce your religious beliefs on others. Here in Europe there are plenty of people who are happy to con others out of their money, and plenty of people who write religious books. But they don't call themselves scientists, and if they do, no one gives them any credit for it. Perhaps it is an American attitude that everyone has a right to an opinion, that you have the right to express that opinion, and that every opinion is equally valid and deserves equal consideration. The European attitude is also that everyone has a right to their opinion, but not that all opinions are equally valid. Feel free to say what you want, but if it's nonsense then you should not expect people to listen.

    • I take exception to the "Is the world going nuts?" comment - I think you meant to say "Is the USA going nuts?". Most people throughout the world simply do not care whether the sun goes around the earth or vice versa - as long as the sun comes up each morning, it makes no difference to their daily lives. Then there are those who choose to believe a strict interpretation of their religion's holy book(s), and thus believe in a geocentric universe. Such people simply believe that when science and their understanding of the Bible/Koran/whatever are in conflict, the book trumps science. But the concept of "scientific" arguments for geocentricity, such as this "Galileo Was Wrong" group, is almost exclusively an American phenomenon. It's in the same spirit as the "young earth" and "intelligent design" nonsense, and while there has been a limited spread to other countries, this sort of active disbelief in science, and religious pseudo-science, comes from the USA alone. The USA has a big problem here, and it's getting bigger. But please don't say the "world" is going nuts - the USA is going nuts, and the rest of the world is just getting dragged along.

    • I am wondering if some of the people commenting here have missed the point that this is a class to access memory-mapped devices. C++ has no concept of classes whose data members are in different parts of memory (except for "static" members). So there are no issues about "adding non-memory mapped members" - you can't add extra arbitrary members to the memory mapped device, so you can't add them to the class. If you want to mix your own members with the timer_registers, create a new "normal" C++ class containing the new data, and a reference to a timer_registers object. Similarly, it doesn't make sense to think of virtual functions for a memory-mapped device - the device is what it is, and it can't change. Again, if you need some virtual functions, make a new class that includes a reference to a timer_registers object. You also cannot create or destroy the peripheral, therefore there is no sense in having a constructor or destructor, and the pointer cast is perfectly reasonable. If you want to have some sort of initialisation procedure during startup, it is almost certainly better to write it explicitly so that you are clear about when it is called, and that it is called exactly once. But if you want to do it automatically, create a new class with timer_registers as a reference. Finally, to those that worry about the cost of the pointer indirection - if your compiler is generating object code with unnecessary indirections from this class, get a better compiler or learn to use your existing compiler properly.

    • First off, it should be noted that the code examples used by Microsoft are a totally different kind of code from the Linux kernel mentioned, and from the sort of embedded software written by most of this website's readers. Opinions about the quality of Microsoft's programming aside, there is no reason to assume that the conclusions of this paper are valid for a wider range of software tasks and software development processes. Having said that, asserts can often be useful. They are particularly useful during testing and debugging, and in the interfaces between code modules if they are not well specified and documented. But asserts in general are /not/ free, especially in embedded systems. There is a big question as to where the assert errors should go, and what the software should do when an assert is triggered. In an embedded system, asserts should only be enabled during testing and debugging - if you need run-time checks on finished software, these should be at a higher level. One thing that can be very useful, and is free, is static assertions that are evaluated entirely at compile time. Until a standard static_assert makes its way into the C and C++ standards, it is possible to get a free (though slightly developer-unfriendly) static assertion with macros:

          #define STATIC_ASSERT_NAME_(line)   STATIC_ASSERT_NAME2_(line)
          #define STATIC_ASSERT_NAME2_(line)  assertion_failed_at_line_##line
          #define static_assert(claim) \
              typedef struct { \
                  char STATIC_ASSERT_NAME_(__LINE__) [(claim) ? 1 : -1]; \
              } STATIC_ASSERT_NAME_(__LINE__)

    • For rule #8, your example is bad - integer promotion will ensure that the uint8_t a is promoted to (int) 6, and the int8_t b is promoted to (int) -9. The constant "4" is already an int (whether you like it or not), so the comparison will be done correctly, as the programmer expected. The advice is sound, however - be wary of mixing signed and unsigned numbers. Sometimes things won't work as expected, and sometimes you end up with unwanted conversions (mix a uint16_t with an int16_t on an 8-bit processor, and you may not be pleased with what the C standard's conversion rules give you).
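      Here is a minimal case where the conversions really do bite, unlike the example above:

          #include <stdio.h>

          int main(void) {
              int s = -1;
              unsigned int u = 1;
              /* s is converted to unsigned int (a huge value), so the
                 test is false - despite -1 < 1 being true mathematically */
              if (s < u) {
                  printf("never printed\n");
              }
              return 0;
          }

      gcc's -Wsign-compare (included in -Wextra) flags exactly this comparison.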

    • The embedded.com source code page is most certainly *not* a "library of open-source software". It is a collection of commercial software trial versions - at least, that's what most of the items seem to be. While such a collection is certainly a useful resource, there is nothing "open" about it. To be truly useful, a library like that should be very clear on exactly what licenses each item has - that way people can see what they are looking at without reading the small print.

    • Good programming style is about readability. "char const" might sound natural to a Frenchman, but English speakers find "const char" a more natural and readable phrase. I can see absolutely no benefit from writing your type phrases backwards - it's inconsistent and breaks the flow of the text. It is not unlike silly rules such as writing "if (1 == x)" instead of "if (x == 1)", where logical writing style is sacrificed for mythical error-checking benefits. C has a powerful "typedef" statement - use it to make your types and declarations clear. A very simple rule is that the type part of a declaration should not have more than two parts - then there is no need to worry about the differences between "const char *p", "char const *p" and "char * const p", because such declarations are never written. Instead, use:

          typedef char *pchar;
          const pchar s;

      or

          typedef const char cchar;
          cchar *p;

      (Note the use of more logical names, rather than the non-obvious "typedef char *ntcs".) Encouraging the correct use of "const" (and "volatile") is a good thing - but please do it by emphasising and encouraging good, clear, *readable* code and good type usage, rather than by inventing your own conventions.

    • It would be a lot easier to take groups like the RIAA seriously if they did not use such inaccurate and wildly disproportionate terms, which are then propagated by the media - including this article. "Theft" and "stealing" require three things - you must take something to which you do not have any rights, you must do it intentionally and knowingly, and you must deny the rightful owner their rightful use of the stolen item. If you take someone's CD, you are stealing their music. If you download an illegal copy, you are not stealing. It's a civil offence - breaking copyright laws and/or licences or contracts. But it is not theft, and it is not a crime, because the rightful owner has lost nothing. Once you get to the levels of selling illegal copies, or distributing to the level of making a significant impact on the potential sales by the rightful owner, you are committing a crime (though it is still not theft). And "piracy" is something that happens at sea, particularly off some parts of the African coast. The same thing applies to IP in engineering. Illegal use of IP is without doubt a serious problem for many people. It can involve copyright violations, breach of contract, licence breaches, and various other civil offences - but it is not theft, and it is in no way the same as stealing a car. As you said in your article, there are two reasons why people do not steal. One is fear of reprisals (which barely applies in the case of illegal IP usage), and the other is an understanding of the moral issues involved and a desire to "do the right thing". IP misuse can only be realistically tackled by appealing to people's morals - confusing, inaccurate and downright dishonest media terms, adverts, and propaganda cannot help. The first step to dealing with any problem is to properly identify the problem. Until IP rights owners, the various pressure groups, and the media learn that, they will never make any progress.

    • An alternative way to handle USB upgrades is to use an external USB interface chip and connect to the microcontroller's serial programming interface (many microcontrollers have a suitable interface). For example, the FTDI2232C USB to serial device has two UARTs, one of which can be used for fast SPI-style communication. If you have a microcontroller with a UART and a serial programming connection, you can add this device to your design and get USB communication that appears as a UART for both the microcontroller and the PC side (no need for new USB drivers on the PC, or any USB-specific knowledge and programming on either side). The second serial port gives you a "back door" for updating the firmware. A big advantage of this is that you don't need any bootloader on the microcontroller, which can save time during production. In fact, if you are using a microprocessor rather than a microcontroller, and are always connected to a PC, then you don't need any flash on the board at all - use the USB device to download the program to ram and run it there.

    • These are all good questions to consider before choosing embedded Linux as your OS. But it's worth pointing out that they are good questions for *any* OS, not just Linux. In particular, people often think that you have to consider licensing and legal issues as a special topic for Linux. It's not a Linux issue, or a GPL issue - you have the same sort of issues with any software you use in your system. With the GPL, the legal issues are out in the open, and are in terms that non-lawyers can understand - with commercial licenses, it is normally much harder to figure out what rights and responsibilities you have. I'm not sure I approve of you publicly generalizing that "the rest of us can often ignore the legal issues". It's certainly true that most of us can ignore some of the legal issues, such as the example threat of being sued for unauthorized code in the kernel (assuming it is not your fault!). But other legal issues, such as the requirements of the GPL, should most definitely *not* be ignored. You wouldn't expect developers to ignore the legal and licensing requirements of WinCE or QNX - why imply that they can ignore those of Linux and the GPL?

    • While I agree with much of your article, I think you are wrong about optimisation. It is certainly true that code is often easier to debug with less optimisation (don't turn it off entirely, as the generated assembly is often unintelligibly poor). But if your code works with compiler optimisations disabled, and fails when optimisations are enabled, it is almost certainly an error in the code - not a problem with the compiler. I have used compilers in the past that have bugs in their optimisers, but it's rare for a decent compiler - the chances are much higher that it is a user error. In listing 4, the problem is not an "optimizer error", and the solution is most certainly *not* to turn off optimisation. The correct solution is to learn to use the "volatile" keyword - if "data_port" is correctly declared as "extern volatile char *data_port", then the optimiser will generate code that works as the programmer intended. I also wonder a little about your programming style - "old style" C function declarations have not seen much use in new code for fifteen years or so. Your suggestions for better C programming are mostly good advice, but I disagree with the tired old "avoid global variables" mantra. Many people seem to think that it is "better" to hide global data, and refer to it by pointers or by set-and-get functions - leading to programs with exactly the same abuses of global data as direct access, but with harder-to-read code and bigger and slower object code. The problem is when the programmer does not properly control access to shared global resources, and it is in no way limited to global variables (think of a program that updates a screen during its main loop, and has an interrupt routine that prints an error message). I'd also suggest that people take advantage of the warnings and checking that their compilers provide. Some compilers have very good checking, and can even enforce style rules. Most modern compilers would catch most of the bugs you describe if you ask them to.