One of the great things about my multiple roles as engineer, writer, and fashion consultant (LOL) is that I get to hear from lots of people about lots of different things.
Oftentimes, companies want to tell me about their latest-and-greatest devices, tools, or products. Other times, engineers want to tell me what they're up to. As an example of the latter, consider the following email that recently landed in my “inbox” (I have the sender's permission to share this with you, but he prefers to remain anonymous).
Hi Max, Remember a few months ago I wanted to get 1 mega-sample per second from an ADC off of a device into a PC-style computer? Well, here's the update I promised.
Finally did it.
Long and torturous journey.
First, we found an Atmel SoC (system on chip), and since Linux was available for it, we tried that. It would have been a great solution. It has gigabit Ethernet, which would have been ideal for transferring 2 megabytes per second off of that unit to a PC-style quad-core workstation.
The straw that broke the camel's back boiled down to a side effect of the fact that IIO (Industrial Input/Output, IIRC) had been added to Linux.
(That is a low speed system expected to be used for milling machines and the like, where a hundred samples per second is more than enough.)
The side effect was that all development in the A/D area stopped after IIO was put into the kernel.
We worked on our own driver. Once we got to the point where we could get a million samples available, we turned to the DMA.
The DMA engine is extremely convoluted. It has to be, because it supports any architecture (Arm, x86, etc.).
Unfortunately, there were some glitches in it we could not overcome without massive code changes, and we aren't in that business.
That DMA works fine for the subsystems they got working (e.g., Ethernet, disk I/O). Getting ADC samples out to Linux user-level code never worked properly, so we finally gave up.
I assume that, after a few weeks of intensive work by someone who understands that convoluted mess, it could be patched to work. We don't have that talent, or those resources, or that time.
We looked at doing a tight loop and started down that path. Unfortunately, the development environment for the Atmel device from Atmel itself was not yet ready. But for the low, low price of between $6,000 and $10,000, I could sink money into IAR and start developing for that $99 device.
That was a no-go; our budget is currently constrained by the economics of the oil field. Atmel did respond and say something should be available at the end of this year at a much lower cost.
Didn't fit in our time window.
We kept looking.
We turned to another device, the PSoC 5LP from Cypress Semiconductor. This turned out to be a different story.
First, the free PSoC development software is a dream compared to other embedded systems. Their APIs work. They are rather well documented. Their examples work out of the box. And they have a $10 board (the CY8CKIT-059) you can play with to create your own projects.
None of the examples I tried from other embedded manufacturers actually worked without some tweaks and digging. Cypress PSoC examples do what they say and work without tweaking. Quite a surprise. Refreshing.
Unfortunately, we still couldn't get 1 mega-sample per second off of the PSoC either, using the built-in devices. The SAR ADC generated 2 megabytes per second (12-bit samples, stored as 16-bit words).
Their full-speed USB, when run in isochronous mode, can only send 1 megabyte per second. (USB 2.0 full speed is 12 megabits per second, with some overhead.)
Even so, we were close enough that we decided to keep going. We could probably make 500,000 samples per second work, with some loss of capabilities; that rate would still have allowed a successful product.
The PSoC has user-configurable Altera/Xilinx/Actel-style logic built in. We tried an SPI-based logic component. Close, but there were DMA problems sending to a 12-bit (or 16-bit) SPI that broke the ability to DMA data out. Period. We reported that to Cypress, and a case has been filed. (8-bit works fine.)
So, we kept after it. We found a parallel-interface, high-speed USB 2.0 device (480 megabits per second) with a 1 KB buffer. We put data out using a memory interface created from the PSoC's logic elements.
We tried DMA, but a DMA priority conflict either killed the ADC or killed the transfer off chip. At 1 mega-sample per second, you can use only one DMA engine if you want to *keep* getting 1 mega-sample per second transferred into memory.
Realize this is a 62 MHz RISC CPU system. Even so, running 1 million DMA events per second manhandles the bus and really causes resource issues. I liken it to wagging the tail so hard the dog can't stand.
We decided to write out the data in software, since we could theoretically run 62 instructions per microsecond. The result: 700,000 bytes per second (350,000 samples per second), using CPU writes in a loop with nothing else going on. (Measured on an oscilloscope.)
Closer! So, we used two of these devices and widened the bus to 16 bits (two USB streams into the PC): 1,400,000 bytes per second. Closer. That's 200,000 samples per second more than full-speed USB could deliver, so our product would work better. And we could taste blood, so we kept going.
We started looking at the structure carefully. What if we did not run a software loop at all, but simply fell through the code, writing one word at a time: 2048 repeated lines of code. Since the PSoC has a quarter megabyte of flash, no problem. Suddenly, about three megabytes per second!
Wow! Hand optimization of C code saved the day.
OK. Good. Move over to the PC. Oops! Dropping packets. We were overrunning the 1 KB buffer on the firmware end. (It took a day or two to discover that.) We modified the software and firmware, found a good balance, and are now getting the data off with some milliseconds to spare to run other code.
The thing I am left with is the realization that everyone who plays in this bathtub touts a 1 mega-sample-per-second A/D, but no one has put features in their devices for getting that data rate off chip. I don't think the developers of these devices realize that if you capture a mega-sample per second, you might want to use it. That realization would have led them to provide either high-speed USB 2.0 or Ethernet, along with the development environment support. Lack of development environment support nearly killed this beast.
I now recognize that we could probably have done it with the Atmel, but their development situation stopped that cold.
Thus far, only Cypress — with its generic “Logic Toolbox” and an easy-to-use development system that actually works — has allowed us to reach our goal. I assume other problems will arise as we move down the road. Cypress is not perfect. However, they are dramatically better in the areas that count.
Finally, neither Atmel nor Cypress will be materially affected by our situation. We are low volume. However, Cypress did indicate they had seen a lot of inquiries about this exact problem in one of their forums.
Thanks for listening. Use what you wish.
Well, I don't know about you, but I found this to be absolutely fascinating. This is the sort of real-world “got a problem, let's engineer a solution” scenario that is happening every day somewhere or other. What say you?