Weatherbee

Research & Development Engineering

Biography has not been added

Weatherbee's contributions
Articles
Comments
    • One cool thing about the Zynq is that the Xilinx SDK supports both Linux-based development and bare-metal programming of the Cortex-A9, including all the startup, initialization and peripheral device drivers. For me this was a major plus coming from a microcontroller/RTOS mindset. See http://www.zynqbook.com/ for a free, really easy-to-read intro to the Zynq.
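      For a flavor of the bare-metal side, the SDK's generated "hello world" boils down to roughly this (a sketch only; platform.h, init_platform() and xil_printf() are the names the SDK template generates, so check your own generated project):

      #include "platform.h"    /* SDK-generated board bring-up (caches, UART, etc.) */
      #include "xil_printf.h"  /* lightweight printf routed out the PS UART */

      int main(void)
      {
          init_platform();                                    /* BSP startup/initialization */
          xil_printf("Hello from the bare-metal Cortex-A9\r\n");
          cleanup_platform();
          return 0;
      }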

    • I am a big believer in integrating the unit test code into the runtime code, to be run at least at boot-up if not continuously during idle runtime. It comes at a ROM cost, and it can't achieve full coverage in units that need hardware interaction at runtime (without integrating a peripheral simulation stub), but ideally the same code covers trapping QA test cases and catching random runtime hardware malfunctions, which is pretty cool if the runtime-capable unit tests are simply run continuously in the idle task.
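      A minimal sketch of the idle-task idea (every name here --- the idle hook, the test stubs --- is a placeholder I made up, not any particular RTOS API):

      typedef int (*self_test_fn)(void);                  /* 0 = pass, nonzero = fail */

      static int test_ram_pattern(void) { return 0; }     /* stub: walking-bit RAM test */
      static int test_rom_crc(void)     { return 0; }     /* stub: CRC the code image   */

      static const self_test_fn self_tests[] = { test_ram_pattern, test_rom_crc };

      void rtos_idle_hook(void)                            /* called whenever nothing else is ready */
      {
          static unsigned next;
          if (self_tests[next]() != 0) {
              /* trap/log it: the same path catches QA test cases and latent hardware faults */
          }
          next = (next + 1U) % (sizeof(self_tests) / sizeof(self_tests[0]));
      }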

    • Ada is sweet, but it is unfortunately not the answer. Ada is also almost 40 years old, and it is about 10 years late to the "debunk C/C++ in critical systems" party. The problem is that we shouldn't be *writing* code in the first place. Model Based Design tools (e.g. UML, SysML, Simulink) are a much more effective and modern mechanism for constructing software of high criticality.

    • Frankly, I'm beginning to get slightly annoyed by the Ada articles. Ada 2012 and SPARK 2014 are awesome, no doubt about it --- I've looked at them front to back. So is model based design. Embedded developers should definitely make the effort to escape from their Assembly/C/C++ paradigm. The central issue with all of the above tools is cost. If you can't sell executive management and/or clients on an "exotic" toolset/language suite, then all the hemming and hawing about their awesomeness is for naught. Also, "old-timers" and other developers who don't want to learn any new tricks will resist you tooth and nail, in my experience. From what I've seen, commercial Ada and Model Based Design tools run anywhere from $25,000 up front plus $5,000/yr in maintenance to $35,000/yr on subscription to start, for some run-of-the-mill commoditized bare-metal platform like the ARM Cortex-M3/M4. Please point me to an alternative, royalty-free, commercial (meaning I can target a closed-source platform) legal means of applying tools outside of the C/C++ domain that will cost no more than $5,000 up front and $2,000/yr, and I think the uptake of these kinds of tools will be much more rapid. Otherwise this will remain fabled magic fairy dust.

    • Hi Bernard, does anyone want to chime in on the availability of Ada compilers/run-time environments for the Cortex-M and Cortex-R microcontrollers running bare iron or with common uC RTOSes (FreeRTOS, RTX, etc.)? Is this not practical? Seeing that the ARM architecture now represents the lion's share of the 32-bit microcontroller industry and is supported by a number of well-developed C/C++ toolchains, it would seem to me that simply having native, shrink-wrapped, commercial Ada support for those architectures would definitely increase interest. Maybe I'm missing something, as I see that "in theory" GNAT could somehow be adapted, but in practice I haven't been able to locate a commercial example of such a thing. Before transitioning to assembly and then C/C++ for bare-iron embedded and Unix-like environments, I was an enthusiastic Pascal/Object Pascal PC desktop developer back in the Borland heyday. For that era they had very fast, efficient and effective IDEs and compilers, and I certainly remember enjoying the strongly typed nature of Pascal, so between that and the knowledge that VHDL is partially derived from Ada conventions, I think that if practical compilers were available it would be something to seriously consider. It's too bad, though, that Model Based Design tools generally generate straight ANSI C or C++. Model Based Design tools of the Simulink-like variety are yet another trend in embedded that you ought to consider investigating. MBD plus C/C++ (for a certain subset of common embedded problems) may prove to be a more viable solution for developing large structured projects in the long term than Ada.

    • Furthermore, the tiers of knowledge run something like: 1. Mathematics 2. Physics 3. Electrical Engineering 4. Mechanical Engineering 5. Astronomy 6. Chemistry 7. Biology 8. Medicine 9. Law 10. Geology 11. Computer Science 12. Psychology 13. Business 14. Economics 15. Music 16. History 17. Politics

    • Time out. So I'm one of those "hardware-trained" engineers who learned to code "by ear" when I was 10 years old, long before I was trained to design hardware. There is a gross inaccuracy in your statement about #define-ing register addresses instead of using a structure for the purposes of abstraction. That inaccuracy is that the ANSI C language standard does not guarantee any particular packing of structure members. Structure member alignment is implementation-dependent, and this is actually for a *HARDWARE* reason: many processors cannot handle unaligned data. An example would be reading a 32-bit integer from a memory address where the bottom two address bits are non-zero. If you think about the organization of memory at the *HARDWARE* level, which might, say, be 32 or 16 bits wide, you can see that reading/writing certain byte-addressed locations may require more than one memory bus transaction, which will not necessarily be supported by the underlying memory controller. There is *NO* guarantee that a C structure defined in one particular way will have member offsets that are identical across platforms. Never mind the entire issue of endianness. You can *usually* suggest the offsets be arranged in a certain way by ordering the members from largest to smallest, but there is simply no guarantee. And BTW, there are plenty of hardware engineers who use abstraction at many levels, including hierarchical board-level design and hierarchical entity/macro-cell based digital system design. I would guess that there are probably many more incompetent software engineers than hardware engineers who are unqualified to write software. After all, who made who here? So perhaps those hardware-trained abstraction-lacking engineers you look down on know something after all...
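      To put the point in code (the peripheral, its base address and its layout are invented purely for illustration): the #define form pins every offset by hand, while the struct overlay leaves the offset of 'data' to the compiler's alignment rules, which the standard does not nail down across platforms:

      /* Hypothetical peripheral: an 8-bit control register followed by a 32-bit data register */
      #define PERIPH_BASE  0x40001000UL
      #define PERIPH_CTRL  (*(volatile unsigned char *)(PERIPH_BASE + 0x0))   /* offset fixed by hand */
      #define PERIPH_DATA  (*(volatile unsigned long *)(PERIPH_BASE + 0x4))   /* offset fixed by hand */

      /* Struct-overlay alternative: 'data' will usually land at offset 4 because of
       * alignment padding, but that is up to the implementation, so it has to be
       * verified (e.g. with a compile-time assert) for every toolchain and target. */
      typedef struct {
          volatile unsigned char ctrl;
          volatile unsigned long data;
      } periph_regs_t;

      #define PERIPH ((periph_regs_t *)PERIPH_BASE)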

    • Regarding the right-hand-side evaluation first: don't forget that the compiler could evaluate the left-hand side a[i++] as a pointer first: ptr = (char *)a + (i * sizeof(a[0])); then increment i; then evaluate i (which has been incremented) and move it to the memory location pointed to by ptr. There might be some good machine-level optimization reason for the compiler to do it this way. The *bigger* question is: who the hell writes code where the index of the array is assigned as the content of the array elements without the use of an outer loop anyway (where the increment could be performed)? A lot of the potential for error actually arises from poor architecture rather than from language traps. Of course, expressing "C code" in reverse Polish notation would fix all of this, since the order of operations is then explicit: a[i++] = i becomes 'a' i i PUT 'i' INCR DROP.
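      For anyone who wants to see the trap in isolation, a quick sketch (nothing here is from the article, just an illustration):

      #include <stdio.h>

      int main(void)
      {
          int a[4] = {0};
          int i = 1;

          /* a[i++] = i;   <-- undefined behavior: i is read on the right and modified on
           * the left with no sequence point between them, so one compiler may store the
           * old i, another the new one, into either a[1] or a[2]. */

          a[i] = i;         /* the unambiguous version: say what you mean ...            */
          i++;              /* ... then do the increment as its own statement            */

          printf("a[1]=%d i=%d\n", a[1], i);
          return 0;
      }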

    • It is far *more* important that a developer can understand the tool that they use in its entirety than that their tool does entirely everything.

    • I totally disagree with the last rule, "DON'T expose the internal format of any module-specific data structure passed to or returned from one or more of the module's interface functions." This is just not a reasonable thing to do in C while maintaining the ability to statically allocate structures outside of the module, and I agree with others that it amounts to a solution looking for a problem. If you REALLY WANT TO HIDE the internal data structure, don't even pass the struct pointers; instead, use integer handles that have to be resolved to pointers internal to the functional module. That will prevent code outside the module from even being able to copy the data without using "accessor" functions in the module.
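      A rough sketch of the handle approach (the timer module, its sizes and its names are all made up for illustration):

      /* timer.h -- the public face exposes only an integer handle, never the struct */
      typedef int timer_handle_t;
      timer_handle_t timer_create(unsigned period_ms);
      int            timer_expired(timer_handle_t h);

      /* timer.c -- the struct lives here, statically allocated; outside code can't even copy it */
      #define MAX_TIMERS 8
      struct timer { unsigned period_ms; unsigned remaining_ms; int in_use; };
      static struct timer pool[MAX_TIMERS];

      timer_handle_t timer_create(unsigned period_ms)
      {
          int h;
          for (h = 0; h < MAX_TIMERS; h++) {
              if (!pool[h].in_use) {
                  pool[h].in_use       = 1;
                  pool[h].period_ms    = period_ms;
                  pool[h].remaining_ms = period_ms;   /* decremented by a tick ISR in real code */
                  return h;
              }
          }
          return -1;                                  /* no free slot */
      }

      int timer_expired(timer_handle_t h)
      {
          if (h < 0 || h >= MAX_TIMERS || !pool[h].in_use)
              return -1;                              /* bad handle */
          return pool[h].remaining_ms == 0;
      }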

    • Abraxalito, the reason it makes no sense is that it is a poorly written article that doesn't follow a logical line of reasoning developed in a natural order. I am sure that the author is a really bright guy, but what he needs is a technical editor. Anyway, what he is saying is that it might be desirable to measure the power consumed by a system on a cycle-by-cycle basis; however, the passive properties of the printed circuit board the processor is mounted on (e.g. trace resistance, capacitance and inductance), along with the bypass capacitors that are used, make it impossible to do that, because they act as low-pass networks smoothing out the power consumption with respect to time. This is desirable behavior in terms of power distribution and regulation, but undesirable in terms of measuring the power being consumed on a cycle-by-cycle basis. I hope that makes better sense.
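      A back-of-the-envelope illustration of why (the component values below are typical numbers I've assumed, not anything from the article):

      #include <math.h>
      #include <stdio.h>

      int main(void)
      {
          const double PI = 3.141592653589793;
          const double L  = 10e-9;    /* ~10 nH of trace/loop inductance (assumed) */
          const double C  = 100e-9;   /* 100 nF bypass capacitor (assumed)         */

          /* The decoupling network behaves roughly like a filter with a corner near
           * 1/(2*pi*sqrt(L*C)), about 5 MHz here, so per-cycle detail at hundreds of
           * MHz is averaged away before it reaches anywhere you could probe. */
          printf("corner ~ %.1f MHz\n", 1.0 / (2.0 * PI * sqrt(L * C)) / 1e6);
          return 0;
      }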

    • I have been a big Tek fan for some time now (as I have thought that many HP/Agilent test products, while unarguably high quality, had artificially inflated pricing attached), but carefully comparing the specs of the Agilent 3000-X Series to the Tektronix MSO/DPO 3000 series, it seems that, price-wise, Agilent has just fired a Nuclear Surface Torpedo at the Tek 2000/3000 series battleship. I see that Tek is having a big sale right now, but to match the price/value ratio that Agilent is uncharacteristically offering here they are going to really have to slash some of their 2000 and 3000 series pricing. Given that the series numbering is essentially identical to Tek's, which wasn't always the case, one has to figure that this is an entirely deliberate move by Agilent to disrupt Tek's current pricing model. This can only be good for the technical consumer though, as I have always felt that oscilloscopes especially (which are practically commodity items) are way overpriced, and that the performance-to-cost ratio doesn't seem to be advancing regardless of major advancements in DSP technology over the last 15 years.

    • You mean like sleeping when you are hungry, or storing fat when your body thinks that it isn't going to receive additional calories in the near term, or changes in metabolism as a result of age or physical strain? Some of the extreme power management that battery-powered devices are capable of these days is downright spooky in its longevity. We will all be in trouble when mobile processors start repairing themselves. Roll "Terminator" clip.

    • Furthermore, I've done a direct comparison of the newest PIC32 and TI Stellaris parts over the course of the last 9 months. I am a Microchip guy, as I have extensively and exclusively spec'd both their 8- and 16-bit architectures over the last 10 years. They have missed the mark with the PIC32, though. It isn't nearly as elegant as what I came to expect based upon the dsPIC33. Any advantage that they gained by licensing MIPS4K was quickly lost when whatever technical manager decided to shoe-horn 16-bit peripherals onto the core and release that as a product. The TI Stellaris 9000 series is what we ultimately picked. Even though TI was a bit difficult to work with, the end product is much more respectable and easy to deal with. Branch delay slots and register shadowing may theoretically give MIPS4K a performance advantage, but those theoretical advantages don't translate into real advantages when you actually have to work with the thing. The annoying and unnecessary (for a deeply embedded app) MMU doesn't help either. The PIC32, even with that ridiculous MMU (versus the Stellaris MPU, which is optionally enabled), doesn't buy you the ability to do anything microprocessory (like boot Linux), because the PIC32 lacks the fantastic EPI the Stellaris has (which can be used to control external SRAM, DRAM or a slaved-up FPGA). I think MIPS has lost the 32-bit MCU race and should move on. And I think Microchip should have developed a proprietary 32-bit architecture for their 32-bit offering.

    • I don't think MIPS on FPGA is going to save MIPS in these deeply embedded applications, as both Actel and Xilinx are placing their bets in those areas with the SmartFusion and upcoming Xilinx architectures. ARM has won the 32-bit race in the embedded space (as far as I am concerned, x86, MIPS and PPC are not viable), and my recommendation to MIPS is to just continue to tune and strengthen themselves in the 64-bit space for the day when we all decide that it is time to transition to 64 bits for deeply embedded apps. ARM has supposedly been making some plays in the server space with the A15 lately, but in my opinion that is a ridiculous play for a 32-bit processor; it is about 15 years too late. Without a true 64-bit architecture they've got no game. MIPS should continue to focus on the high-end space.

    • The days of the RTOS are numbered. Think about this same discussion on RTOSes if no one here was using an actual hard microprocessor, but rather all "microprocessors" were realized in very large FPGA-like devices. Instead of worrying about which RTOS we are going to choose this week, we would either be simply instantiating separate tasks that are literally implemented with separate gates, or we would be considering something along the lines of virtualization --- how we can make a single processor look like multiple processors, with interprocess communication and memory management mechanisms through built-in hardware. Of course we don't have this type of hardware available at the embedded level for a reasonable cost, but I think it is just a matter of time until some forward-thinking silicon vendor develops a microcontroller in which the threading, messaging, task management and context switching mechanisms are built directly into the silicon (and I don't mean in ROM, TI). In fact, I ought to quit my day job and get started on that today, because that will be the realtime uC architecture to end all uC architectures. In the end this is a viable solution in the uC space because so often in deeply embedded hard realtime apps we need threading, but not dynamically, and then only among a limited number of threads. I can't think of very many deeply embedded applications where more than a dozen separate threads are actually necessary, active or not...

    • Jack, you rule! Spot on, buddy. I loved Turbo Pascal (along with the object-oriented extensions) back in the day too (which was around '92 for me), and although it has been a long time now, I do recall that the Borland IDE and compiler running on my meager 33MHz 486 with an amber monochrome display was absolutely rocking. In fact, I originally learned C in about a week from a little book called "C for Pascal Programmers," as I recall. Anyway, you are right on the software approach, as most of these structured approaches to development actually don't work if they are followed precisely. Those of us who have been coding since before it was a fashionable profession ('85 for me) know that these systems set up to manage large teams of engineers almost never work correctly, and that with the exception of what I would term "applications programming" (which I arrogantly classify as a 2nd-tier profession), 10% of the players contribute 90% of the actual output. But don't tell anyone: while management is busy selling each other on the newest development acronym, those of us who actually know what we are doing can do 5x the work in 1/5th the time while we are waiting for the next phase gate or whatever. For the other 4 days a week there are pointless meetings, confused coworkers who somehow graduated from college without understanding how to write a complete sentence, the water cooler, 2-hour lunches and solitaire.

    • Pretty much sums it up: http://bigiron.bighardhat.com/response.c

    • Well, I tried to post my response in the form of a C program. But it seems this forum strips important symbols like < and >, making it impossible to post C code. I guess C *is* dead...

    • 
      /* Bugs are the result of lazy engineers that don't bother to *
       * test their code and instead expect the toolset and QA department *
       * to save them.  Would you want to fly on an airplane with a captain *
       * that can't land without autopilot? How about putting blame where *
       * blame is due for a change... */
      #include 
      #include 
      int main (void)
      {	
      	int (*q)();	
      	int m[]={0,8,5,6,8,3,5,3,9,3};
      	int s[]={498013,1200,6,5360,33,3456007,518400,0,0,190144,960,1680,
      		138400,0,3,605002,0,0,5356816,480,4,360,1814400,11520,0,0,0};
      	static int c;	
      	int (**z)()=&q;
      	int r,v,x,l;	
      	while((q=putchar)) { do
      	  #define q sizeof
      	   { for(r=0,v=~0u>>1;r=~0u>>1?0:v)) { (*z)(l?l+64:32); s[l]-=v; }} while(v);
      	  if (++c

    • Hermanator, do you mean ARMv7 when comparing to the PIC24, as in the Cortex-M3, or did you mean the ARM7TDMI (ARMv4)? I didn't realize that Luminary ever made any 7TDMI-based parts...