David has a 25-year track record of innovative product development and commercial success with technology startups in the electronics, software, communications, and IoT sectors, and holds more than 20 patents. Joining telecom startup ADTRAN in 1992, David helped launch the company to IPO in 1994. Following that, David led engineering teams at DiscoveryCom, Nokia, and Teracruz. Most recently, as part of the founding team of IoT startup Synapse Wireless, David was CTO and the principal architect of its core technology, SNAP – creating software and wireless hardware modules as well as development tools for microcontrollers. David currently serves as President of Firia, Inc. (firia.com), an IoT-focused design firm helping companies achieve their product development goals through software and hardware consulting services.


David's contributions
    • This is really more than just a "rebranding" of ZCL. Some very interesting work will need to be done at the edge-gateway level to map the various underlying protocols to a common application layer. Even simply mapping the addresses of Things to a common scheme is a challenge, so there's much work to be done in dotdot land to truly leverage those enormous clusters. Regarding the stack features/quality question, anything's possible and there are so many to choose from! Zigbee doesn't promote or offer a reference implementation, and many of the stacks out there are tied to particular chip families - largely since they are sponsored by the silicon vendors themselves. Perhaps the Zigbee Alliance should sponsor an open source stack implementation...
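The address-mapping challenge mentioned above can be sketched in a few lines. This is a minimal, hypothetical illustration (the URN-style scheme and protocol names are assumptions, not part of any dotdot specification): each underlying protocol identifies Things differently, and a gateway must normalize those native addresses before a common application layer can refer to them uniformly.

```python
# Hypothetical sketch: normalizing protocol-native device addresses
# into one common scheme at an edge gateway. The "urn:dev:..." scheme
# here is illustrative only, not from any actual specification.

def unify_address(protocol, native_addr):
    """Map a protocol-native device address to a common identifier.

    Zigbee devices carry a 64-bit EUI-64, BLE devices a 48-bit MAC,
    and Thread devices an IPv6 address - three different shapes for
    "the address of a Thing".
    """
    if protocol == "zigbee":
        # EUI-64 given as 16 hex digits, e.g. "00124B0001CC2D55"
        return "urn:dev:eui64:" + native_addr.lower()
    elif protocol == "ble":
        # 48-bit MAC, e.g. "AA:BB:CC:DD:EE:FF"
        return "urn:dev:mac48:" + native_addr.replace(":", "").lower()
    elif protocol == "thread":
        # Already an IPv6 address
        return "urn:dev:ipv6:" + native_addr.lower()
    raise ValueError("unknown protocol: %s" % protocol)

print(unify_address("zigbee", "00124B0001CC2D55"))
# urn:dev:eui64:00124b0001cc2d55
```

Even this toy version hints at the real work: collision handling, address changes over time, and round-trip mapping back to the native protocol are where the effort goes.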

    • Yes, there will always be applications whose market economics justify the additional development expense of lower-level coding, even when the technical constraints don't necessarily mandate it. You can write even faster/tighter code in assembly language, but is it worth the extra time and maintenance burden? Really it depends on the application. I've developed embedded python applications that run for years on batteries - but I've also often mixed C and python to get the best of both worlds. It's nice to have choices!
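As a rough sketch of the "mix C and python" approach: keep the timing-critical routine in compiled C and drive it from python. Here libm's `sqrt()` stands in for your own C code so the example is self-contained; in a real embedded project you would load your own compiled library instead.

```python
# Sketch of mixing C and python via ctypes. libm's sqrt() stands in
# for a timing-critical C routine; in practice you'd load your own
# compiled shared library here.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

# Python handles the high-level logic; C does the number crunching.
readings = [1.0, 4.0, 9.0]
roots = [libm.sqrt(r) for r in readings]
print(roots)  # [1.0, 2.0, 3.0]
```

On a microcontroller-class VM the binding mechanism differs, but the division of labor is the same: C where the cycles matter, python where the logic lives.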

    • Speaking from my own experience as a seasoned embedded C programmer, learning python has certainly made me more capable. The comment you reference was really about using the most appropriate tool for a particular job - similar to the tradeoff of C vs Assembly. One is not necessarily "better" than the other, but you can be more productive if you know both! And as time goes by the abstraction layers usually get more robust so that you find fewer occasions requiring the low-level tools. As far as "Python after C" goes, I think the next generation of embedded programmers will come at it the other way 'round. They're learning python in elementary school. The hard part is getting them to really dig into embedded C coding, short of locking them in an office with K&R and no Internet connection :)

    • It's true, python's generalization of the concept of a "number" is quite different from what we're accustomed to as embedded C programmers. Python does indeed have types - in the example above, type(byte) reveals that it's <type 'int'>. True, there are no built-in types for constraining to unsigned ints of a particular fixed size. Libraries such as NumPy provide those if you like - handy for simulation and "scientific computing". For embedded purposes I've found python to be perfectly acceptable for bit-twiddling and such - the shift/mask operators, etc., are all based on C so they're very familiar. You just have to know the size of integers in your particular VM, and understand that it's all signed. Granted, at this low level you don't really see the power of python - it looks pretty much like C code. It's at the next layer up and higher in the software where you can build abstractions that often reduce the complexity as compared to equivalent C code.
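A small sketch of the bit-twiddling point above, assuming standard Python (not a constrained embedded VM): the C-style shift/mask operators work as expected, but since python ints never wrap, fixed-size unsigned behavior has to be imposed explicitly by masking.

```python
# Sketch: C-style bit-twiddling in Python. The u8 helper is a
# hypothetical name, emulating a C uint8_t by masking.

def u8(x):
    """Truncate to an unsigned 8-bit value, like a C uint8_t."""
    return x & 0xFF

# Shift/mask operators look just like C...
status = 0b10110010
low_nibble = status & 0x0F           # 0b0010 -> 2
high_nibble = (status >> 4) & 0x0F   # 0b1011 -> 11

# ...but overflow must be handled explicitly: python ints
# never wrap, so 0xFF + 1 is 256, not 0, until you mask it.
counter = u8(0xFF + 1)               # wraps to 0

print(low_nibble, high_nibble, counter)  # 2 11 0
```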