przemek

engineer


przemek's contributions
Comments
    • It occurred to me that this is a perfect illustration of why C++ is mistrusted in embedded programming. The for(...,i++) construct is a familiar idiom: discarded value, well-known and expected side effect. However, because of C++'s power, suddenly there's a possibility of hidden nasties that aren't necessarily under the programmer's control: the type of 'i' could have been provided by some third-party library (see the sketch below). The standard remedy to that problem is to use a reduced, safe subset of C++, limiting it to just the nice extensions (namespaces, objects, initializations, templates, etc.). The problem with that approach is that such a limitation is an unenforced coding standard, whereas, for better or worse, a C compiler will just compile simple C and that's that.
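
      A minimal sketch of the hazard, with a hypothetical Counter type standing in for a third-party class (an assumption for illustration, not any real library): the loop looks like plain C, but each i++ silently runs user-defined code.

          #include <cstdio>

          struct Counter {
              int value = 0;
              // Post-increment can be overloaded to do anything: logging,
              // locking, even I/O -- none of it visible at the call site.
              Counter operator++(int) {
                  Counter old = *this;
                  std::printf("hidden side effect at i = %d\n", value);
                  ++value;
                  return old;
              }
              bool operator<(int limit) const { return value < limit; }
          };

          int main() {
              for (Counter i; i < 3; i++) {  // looks like a plain C loop
                  // loop body
              }
          }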

    • You make an unwarranted assumption that you can sustain high-quality assembly language across large amounts of code. A top programmer can usually outperform a compiler on a small routine, but there's been a lot of data showing that over large amounts of code compilers tend to gain ground, because of large-scale optimizations (global register allocation, cache-friendly code motion, etc.). Add to that the fact that refactoring for algorithmic optimization is much harder when your entire code base is hand-optimized assembler, and the conclusion is that assembler is just not a viable global implementation language: it is best used to tweak very localized, well-characterized performance bottlenecks. By the way, Linus's argument against C++ is that it's harder to correlate performance measurements with source code, and his two projects (the kernel and git) are very performance-sensitive. He reports that he often peruses the generated assembly to check for performance problems, which is feasible with C because its primitives map directly onto the underlying ISA, but significantly harder with C++.

    • It's a cool idea but also a possible source of confusion; it reminds me of the old prank from the days when Apple Macs first got voice commands. Imagine a college library room full of weary students finishing their term papers and the prankster who walks in and yells COMPUTER! RESTART! YES!

    • Obviously the hardware capability at any given price point is ever increasing. Therefore the set of platforms capable of running a high-end memory-protected OS is always growing, and Linux seems to be winning among such OSes due to its flexibility and the huge amount of existing software. Over time, fewer and fewer platforms will be incapable of running Linux. Now, Jack brings up a separate issue: Linux's latency performance as an RTOS. This is a complicated subject: the EMC2/LinuxCNC people deal with it all the time, and it turns out to be dominated by system issues such as baseboard and graphics firmware. In any case, realtime is about 'is it fast enough for the job'. In the Petalogic example, if your task deadline is 10 ticks, neither OS is good enough; if it's 1000 ticks, Linux loses to FreeRTOS; but if it's 10000 ticks, both will work (see the sketch below). Will there be applications where cost or other requirements rule out a high-end OS? Of course, but there will have to be a good reason to give up all the goodies such as memory protection, software driver stacks for many protocols and peripherals, and userland libraries and applications.
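
      A back-of-the-envelope sketch of that 'fast enough for the job' test. The 10/1000/10000-tick deadlines come from the example above; the worst-case latency figures are illustrative assumptions, not measurements:

          #include <cstdio>

          // Assumed worst-case response latencies, in ticks (placeholders).
          constexpr long kFreeRtosWorst = 50;    // small-kernel RTOS
          constexpr long kLinuxWorst    = 5000;  // full Linux, dominated by system-level jitter

          // 'Realtime' just means the worst case fits inside the deadline.
          bool meetsDeadline(long worstCase, long deadline) {
              return worstCase <= deadline;
          }

          int main() {
              const long deadlines[] = {10, 1000, 10000};
              for (long deadline : deadlines) {
                  std::printf("deadline %6ld ticks: FreeRTOS %s, Linux %s\n",
                              deadline,
                              meetsDeadline(kFreeRtosWorst, deadline) ? "ok" : "misses",
                              meetsDeadline(kLinuxWorst, deadline) ? "ok" : "misses");
              }
          }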

    • Both your LPC800 links point to the same URL, which happens to be the user manual. The datasheet is at http://www.nxp.com/documents/data_sheet/LPC81XM.pdf

    • In an ideal world, hardware would be robust enough to survive on its own, or at least have enough firmware and protection circuitry around it to prevent damage. In practice such safety measures are too expensive, so the embedded field is full of 'secret handshakes'. Here are some examples, arranged chronologically:
      • Old line printers could be made to print every character on a single line repeatedly without advancing the paper. At the least this would cut the paper and damage the ribbon; at worst it could damage the printer through mechanical resonance.
      • Disk drives could be driven into resonance by moving the heads at the resonant frequency. The old drive cabinets could literally be made to walk across the floor.
      • Video monitors used the horizontal sync frequency for high-voltage generation. The original IBM PC monitor would catch fire if it was turned on without a video signal; that's why PC power supplies had a slave AC power outlet for the monitor that was off when the computer was turned off. Some CRT monitors' high-voltage sections failed if given a video signal whose frequency was far too low or too high.
      • Wireless network cards have programmable power stages that allow power levels that would put them out of spec for the local RF authority's limits. Many wireless drivers are binary-only for that reason: their manufacturers do not want to reveal details that would enable third parties to violate local FCC rules.
      There's no way to prevent all such occurrences; the cost-benefit just doesn't justify the measures required for absolute safety.

    • In a[i++]=i, the problem is that the compiler can choose to evaluate the RHS either before or after the LHS. The RTL and the result would be different in each case (see the sketch below):

          LHSaddr = &a[i++];  RHS = i;  *LHSaddr = RHS;   /* LHS first: RHS sees the incremented i */
          RHS = i;  LHSaddr = &a[i++];  *LHSaddr = RHS;   /* RHS first: RHS sees the original i */
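
      A minimal sketch making the two legal orderings explicit. Each is written out as separate, well-defined statements, since the original single expression is undefined behavior in C (and in C++ before C++17):

          #include <cstdio>

          int main() {
              // Ordering 1: LHS address (and increment) first, then read of i.
              {
                  int a[4] = {0}, i = 1;
                  int *lhs = &a[i++];  // lhs = &a[1], i becomes 2
                  int rhs = i;         // rhs = 2
                  *lhs = rhs;          // stores 2 into a[1]
                  std::printf("LHS first: a[1] = %d\n", a[1]);
              }
              // Ordering 2: read of i first, then LHS address and increment.
              {
                  int a[4] = {0}, i = 1;
                  int rhs = i;         // rhs = 1
                  int *lhs = &a[i++];  // lhs = &a[1], i becomes 2
                  *lhs = rhs;          // stores 1 into a[1]
                  std::printf("RHS first: a[1] = %d\n", a[1]);
              }
          }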

    • Most CRT monitors made in the 90s could be damaged by an out-of-spec horizontal sync, because that frequency was used to drive the HV coil, and overdriving it tended to burn out the HV stage, sometimes with smoke and/or flames. Bad video drivers tended to run into this, as did people trying to squeeze more resolution out of their monitors. Going back in time, remember how the original IBM PC power supply had an 'input' AC power receptacle as well as an 'output' receptacle? That's because the original PC CRT monitors' HV circuit burned out if NOT provided with a driving video signal, so the monitor had to be turned off as soon as the motherboard/video card stopped running.