There are many complexities to a ‘disconnected’ embedded system, but at least the software is operating within a defined domain of memory and processors, together with the I/O registers that connect to real-world sensors, timers, displays and actuators. Development engineers create architecture and design documents to specify every piece of the system, and define the response to every external stimulus. In this type of environment, an embedded system software developer can access all the design documentation. The entire universe for the embedded software in this system is well defined.
The Internet of Things has made the environment for embedded software a lot more complex. Architects and designers are finding ways of making products more functional, more competitive and more convenient by creating ‘systems-of-systems’ to implement and deliver new capabilities.
There are examples in every industry, from aerospace and industrial machinery to healthcare and consumer electronics. If you are building controllers for agricultural machinery today, you have to think about GPS capabilities to enable the connected controller to determine the optimum amount of fertilizer to apply to each square yard of the field.
The old objective of better control at lower cost is still there, but there is a new expectation. Developers of smart products know that the combination of multiple systems will enable new and disruptive change. The challenge is to understand the full scope of what is possible, then beat the competition with new approaches and concepts.
Some of the skills needed are natural extensions of the old closed view of embedded software. For example, imagine development of the networked version of a previously standalone piece of hand-held equipment. One of the design studies may be to decide whether signal processing and data archiving functions could be implemented via a dedicated wireless link to an external server.
This might reduce the size, weight and cost of the hand-held part of the equipment. Investigating and optimizing workload shares between the hand-held and the external server – taking the various cost issues into account – is not simple, but it is moderately routine.
After all, Edsger Dijkstra was lecturing about Co-operating Sequential Processes in the 1960s, and it is more than ten years since Jim Gray wrote about the factors that must be considered to find an optimum distribution of workload between local and remote systems in Distributed Computing Economics. (The context of that article is the economics of distributing computation across Internet-linked nodes, but the principles are more general.) Gray characterized the break-even point at that time as “a minute of computation per megabyte of network traffic,” and pointed out that this result depends critically on the relative cost of processing versus communication between nodes – a parameter that has changed dramatically in the intervening years.
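Gray's rule of thumb can be turned into a simple cost comparison. The sketch below is a toy break-even calculator in the spirit of Distributed Computing Economics; the dollar figures are illustrative assumptions, not Gray's published numbers, and should be replaced with current prices for CPU time and network transfer.

```python
# Toy break-even calculator: is it cheaper to ship the data to a
# remote server, or to burn local CPU time on it? The default prices
# below are invented for illustration.

def offload_is_economic(cpu_minutes: float,
                        megabytes_moved: float,
                        dollars_per_cpu_minute: float = 0.0002,
                        dollars_per_megabyte: float = 0.0001) -> bool:
    """Return True if moving the data to a remote node costs less
    than the local CPU time it would save."""
    local_cost = cpu_minutes * dollars_per_cpu_minute
    network_cost = megabytes_moved * dollars_per_megabyte
    return network_cost < local_cost

# Lots of computation over little data: offloading pays.
print(offload_is_economic(cpu_minutes=5.0, megabytes_moved=2.0))    # True
# Little computation over bulky data: keep it local.
print(offload_is_economic(cpu_minutes=0.1, megabytes_moved=100.0))  # False
```

The interesting point, as Gray noted, is how sensitive the answer is to the two price parameters – which is exactly why the hand-held-versus-server trade-off has to be revisited for every product generation.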
But distributing a previously defined task across multiple nodes is still the inward-looking approach – even, or perhaps especially, when it is the updated version of multiple cores accessing shared memory. This is not enough. What conditions will trigger the ‘Aha!’ moment in which the potential for a completely new capability crystallizes out from a designer’s thought processes, or perhaps occurs to the marketing person writing out a road map for the product?
Perhaps an example helps. Imagine a new version of a controller for a production machine. The controller handles complex, real-time response to temperature and pressure sensors. Over many years, each new version of the software has improved the operator display information, reduced the machine’s energy consumption, things like that. This year, the controller has a network connection. What will it be used for?
Of course there’s a non-answer based on making this someone-else’s-problem – “The role of the network connection will be spelled out in the requirements.”
The problem with this non-answer is that it simply moves the problem to the requirements engineer, who in turn may have to extract answers from marketing people. Organizations that have implemented an agile development methodology can say “Quite so – that’s why we don’t try to define all these things up front; we work through iterations designed to expose this kind of issue/opportunity to the entire community of stakeholders, who make decisions for the next iteration.” And that’s quite a good answer.
While it’s quite good, I think that is only a partial answer. I have the well-known Henry Ford story in mind here, in which the punchline can be restated as “If we had asked the stakeholders, they’d have prioritized a faster horse.” The core issue is finding the right scope – of the problem, of the technology, of the development and operational systems that will deliver and sustain the new product. If the right person looks at the right scope, there’s a chance that the ‘Aha!’ will happen.
For the production machine example I’m thinking of the installation steps. At some point, often during installation, someone defines how the machine will be treated as an asset. This includes, for example: registering the machine on the company’s asset register; creating capital-value depreciation schedules; making decisions about maintenance planning; and classifying the qualifications, training and experience that operators and maintenance people must have. It’s a long list, and, at first sight, almost nothing to do with the capabilities of the machine controller.
But let’s challenge that initial reaction. Now that the machine controller has a network connection, surely there are plenty of steps in this asset-management-setup sequence that could be automated.
When the machine is connected to a new network, it could go through a discovery phase to find its new owner’s accounting and maintenance systems. It could self-register as an asset, and negotiate with the maintenance system about regular and as-needed servicing. Of course, there is an existing machine controller development process, which for years has specified and delivered better control algorithms for sensor handling, energy management, and operator information. This is a good process, it has delivered great results so far, but will it ever consider the installation process?
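To make the self-registration idea concrete, here is a minimal sketch of the message such a controller might assemble once it has discovered the owner’s asset-management endpoint. The payload fields, field names, and the maintenance-interval “opening offer” are all hypothetical; a real deployment would follow whatever schema the owner’s systems actually expose.

```python
# Hypothetical sketch of a controller's self-registration document.
# All field names and values are invented for illustration.
import json

def build_registration(serial: str, model: str, firmware: str,
                       suggested_service_hours: int = 2000) -> str:
    """Assemble the JSON document the controller would submit to a
    discovered asset-register endpoint."""
    return json.dumps({
        "asset_type": "production-machine-controller",
        "serial_number": serial,
        "model": model,
        "firmware": firmware,
        "maintenance": {
            # Opening offer in the service-interval negotiation with
            # the owner's maintenance system.
            "proposed_interval_hours": suggested_service_hours,
        },
    })

payload = build_registration("PM-10472", "Mk4", "7.2.1")
print(payload)
```

The discovery step itself could use any of the established service-discovery mechanisms (mDNS/DNS-SD, for instance); the point is that none of this is exotic technology – the novelty is deciding that the controller should do it at all.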
My point is that the Internet of Things world is going to be full of these kinds of situations, and the vast majority of opportunities will be small, incremental steps. These won’t justify a ‘cars-rather-than-horses’ initiative, and will be invisible unless someone is looking at the project with the right scope, and with the right knowledge to start asking questions.
The agile development approach is a start, but I believe the technical opportunities, risks, and gotchas are going to be a big factor. Therefore it is also necessary to enable and encourage an outward looking mind-set across all the engineers – hardware, software, requirements, test, installation, and service. When the stakeholder reviews come round, the engineers must be willing and able to point out things that are both relevant and have a chance of being achievable.
I’d like to point to tools and techniques that make this possible, but I’m not sure this is the most important factor. Certainly, a systems engineering approach can force consideration across the domains of product, development system, and operational environment.
Lifecycle analysis can also trigger the right thinking across aspects of installation, operation, service, and recycling. But I think it’s going to be the engineer-in-the-loop who will be the unpredictable source of important ‘Aha!’ moments. The particular pool of talent that I believe needs to be mobilized is the group of embedded software developers who have been developing software for standalone products.
In the production machine example, it would be one of this group who would realize that the new network connection could provide visibility of the parameters of the next batch, and this would allow a more efficient cool-down-warm-up changeover sequence.
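That kind of insight is small but concrete. As a hypothetical sketch: with the next batch’s setpoint visible over the network, the controller can choose a direct temperature ramp instead of a full cool-down-warm-up cycle when the two batches run close together. The threshold and temperatures below are invented for illustration.

```python
# Hypothetical changeover planner: if the next batch's setpoint is
# within a configurable window of the current temperature, ramp
# directly instead of fully cooling down. Values are illustrative.

def changeover_plan(current_temp_c: float, next_setpoint_c: float,
                    direct_ramp_window_c: float = 40.0) -> str:
    """Pick a changeover strategy from the gap between the current
    and next operating temperatures."""
    if abs(next_setpoint_c - current_temp_c) <= direct_ramp_window_c:
        return "direct-ramp"      # hold heat, ramp straight to target
    return "full-cool-down"       # safe default: cool fully, then reheat

print(changeover_plan(180.0, 165.0))  # direct-ramp
print(changeover_plan(180.0, 60.0))   # full-cool-down
```

Nothing here is algorithmically difficult; the value comes from the outward-looking realization that the batch schedule is now within the controller’s reach.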
The management team must push these people not only for inward-looking, control-system-algorithm type improvements, but also for outward-looking, change-the-game type insights. And the engineers must remember that being years ahead of your time is relatively easy; the ideas your company needs are the ones that can be implemented within its planning horizons.
Peter Thorne is Managing Director for Cambashi. He is responsible for consulting projects related to the new product introduction process, e-business, and other industrial applications of information and communication technologies. He has applied information technology to engineering and manufacturing enterprises for more than 20 years. Prior to joining Cambashi in 1996, he headed the UK arm of a major IT vendor's Engineering Systems Business Unit, which grew from a small R&D group to a multi-million dollar profit center under his leadership. Peter holds a Master of Arts degree in Natural Sciences and Computer Science from Cambridge University, is a Chartered Engineer, and a member of the British Computer Society.