There are many differences between some of today’s embedded systems and those of days gone by, and some of the underlying changes have significant unexpected consequences.
Perhaps the greatest change is that, increasingly, modern embedded systems are based not on closed proprietary platforms but on commodity OS platforms. As a consequence, such embedded systems share all the security vulnerabilities of general-purpose systems built on the same OS.
In this first installment of a three-part series, we’ll look at the move from proprietary to commodity OS platforms, the resulting negative effects on embedded system product deployment and maintenance, and the root cause of those effects. In the remaining parts of the series we will examine a concrete approach to eliminating the root cause, and look at some practical examples of its use.
First, let’s look at why some embedded system products are built on open systems platforms like Windows and Linux. One reason is simply to eliminate the need for an embedded systems product vendor to maintain and enhance a proprietary OS or even a customized OS. A related driver is changes in the deployment environment and customer expectations.
For example, many deployment environments are standard TCP/IP networks, with customer expectations of Web-based administrator consoles, network security with SSL, and administrator authentication. Many of these functions didn’t exist in older proprietary OSs, but are readily provided by commodity OS vendors in their platform products. Further, the OS vendors are responsible for improvements in platform level features to match continuing change in the networked deployment environment.
That’s why it can be very advantageous to base an embedded system product on an open platform: the vendor is relieved of the responsibility for providing platform-level functions and enjoys a significantly lower cost of development. Furthermore, open systems platforms also offer features and tools that aid the embedded software developer and lower the development cost further.
The consequences of uniformity
Unfortunately, and perhaps not surprisingly, these advantages do come with some undesirable side effects. With deployment on standard open networks and commodity OSs, an embedded system shares the same security and reliability vulnerabilities as the commodity OS, as well as the susceptibility to common threats from an open network environment.
Also, a fielded embedded system faces the same operational challenges as open systems, such as frequent patches from the OS vendor. Depending on how a particular embedded system product is managed in the field, the result can be either significantly higher support costs, or significant customer headache, or both.
Let’s look at one of these consequences, the patching issue, and some of the ways it can play out in terms of field support. For many who work for an embedded systems manufacturer, or for a company that employs embedded systems as a critical part of its business, the following scenario may be all too familiar.
A manufacturer builds an embedded system on an open platform such as a general purpose OS, and sells its systems with a service contract to customers. The customers use the embedded systems and everything is fine until … the OS vendor announces a patch. At this point, one of a number of things can happen, depending on who (if anyone at all!) applies the patch to the fielded systems:
1. The customer applies the patch. This is a relatively recent practice and may happen when the customer either intentionally patches the embedded system or accidentally patches it along with other systems. Unfortunately, platform patching often breaks the embedded system and, depending on the service contract, the manufacturer or a service partner may end up with responsibility for the repairs.
2. The embedded system manufacturer applies the patch. For this, the manufacturer first tests the platform vendor’s patch in-house and, if the platform patch is found to break or destabilize the embedded system, will need to write a customized patch for the embedded software. The manufacturer then rolls out the platform patch, and possibly a patch to the embedded software, to all fielded systems. The key word here is “all” – easy to say but hard to do.
3. Neither customer nor embedded system maker applies the patch. As a result of inaction, a customer’s embedded system may start suffering from security exposure or otherwise become unstable. Again, depending on the service contract, the manufacturer or a partner may end up with the responsibility of providing a fix.
Sound familiar? Unfortunately, this is a common scenario for manufacturers of open platform embedded systems and their customers. To complicate matters, rolling out a patch can itself be a non-trivial undertaking, as it is for medical device manufacturers, who essentially must arrange for technicians and their equipment to travel to customer sites in order to apply critical patches.
Let’s gain some insight into the root cause of these issues by quickly looking at the current methodology of developing embedded systems on open platforms.
Today’s Development Framework
The fundamental benefits of developing embedded systems on open standards based general purpose platforms are ease of development, cost efficiency, and speed to market. Open platforms allow the modification and extension of the platform and the execution of new code on top of the platform. In other words, there is no need for embedded system manufacturers to reinvent the platform for their software and build the system from scratch (see Figure 1 below).
However, as convenient as accessibility and flexibility are in development, they are overly permissive when it comes time to deploy the systems. When an open standards based embedded system is placed in the field, the ability to modify and extend the platform is a liability, and the capability to execute new code is precisely what makes the system vulnerable to unexpected or undesirable changes and new code. In short, open systems are inherently vulnerable and therefore must undergo a never-ending stream of patches.
For the embedded systems vendor, the problem manifests when the vendor sells a product under one set of business assumptions about field modifications (e.g., whether to repair defects in the field and whether to upgrade in the field), while the openness of the platform and the ongoing release of platform patches break those assumptions with increasing severity as time passes. The aggregate result is thousands of times more units landing on the vendor’s loading dock, or hundreds of times more field engineer visits to apply platform patches.
Cures and Band-Aids
Current approaches for reducing the patching needs of embedded systems follow the same reactive “band-aid” approach applied to general purpose systems: use of anti-virus software, personal computer (PC) firewalls, etc. While these have some benefits, they are incomplete (e.g. zero-day attacks are not addressed), they have side effects (e.g. performance penalties, recertification requirements), and most importantly, their side effects are entirely unsuitable for embedded systems which are intended to do one thing well without breaking in the field.
Let’s look at a couple of examples of why these “band-aids” are sometimes cures worse than the disease. First, many of these reactive solutions depend on timely updates of patterns or signatures in order to be effective against recently identified attacks.
However, if a system is difficult to patch – perhaps because it is a mobile device and/or one with sporadic network connectivity – then signature updates are just as problematic. Both require the device to be online and accessible to download new code (patches) or data (updates). As a result, these solutions are actually no more effective than patching, and they provide both a false sense of security and an additional management burden: tracking which devices may have out-of-date signatures or definitions.
Second, many of these security tools were designed for an enterprise environment with end-users or administrative users being in the loop. Often, this simply isn’t a good assumption. To take one real-world example, I’ve seen airport parking self-service payment kiosks locked up with a security alert wanting a confirmation from an end-user that simply isn’t possible because there aren’t any input devices other than one button! More serious are instances of anti-virus software running on medical diagnostic systems, popping up at inopportune times with security alert messages containing medically sensitive terms like “virus,” “infection,” and “abort.”
Isn’t there a better alternative? Certainly. But to overcome the difficulties of both patching and band-aid security tools, there are several key requirements.
First, a better approach would be designed specifically with embedded systems in mind, without assuming IT staff for ongoing management, unallocated CPU cycles to use, human input for defining rules or policies, network input for signatures or other updates, and so on. Instead, a better approach would work in a completely unattended manner from factory ship, and would also be available for “retrofit,” that is, deployment via field update to already-fielded units.
Second, a better approach would place no constraints on the embedded system vendor’s existing process for software development and system engineering. All the benefits of an open system platform should still apply, but a device, once fielded, would no longer be an open system that is capable by default of running any code that can creep in from the deployment environment.
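To make the “no longer capable by default of running any code” idea concrete, here is a hypothetical sketch (an illustration of the general concept, not a description of any particular product): a fielded device could carry a factory-installed allow-list of cryptographic hashes and refuse to execute any binary whose hash is not on it.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 digest of a binary image as a hex string."""
    return hashlib.sha256(data).hexdigest()

def may_execute(binary: bytes, allowed_hashes: set[str]) -> bool:
    """Allow execution only if the binary's digest is on the
    factory-installed allow-list; anything else is rejected by default."""
    return sha256_of(binary) in allowed_hashes

# At build time, the manufacturer registers the digests of the binaries
# that ship with the device (example image contents are made up here).
shipped_app = b"embedded-app-v1"
allow_list = {sha256_of(shipped_app)}

# In the field: shipped code runs, anything that creeps in does not.
print(may_execute(shipped_app, allow_list))        # shipped binary: allowed
print(may_execute(b"dropped-code", allow_list))    # unknown binary: blocked
```

In a real system, the allow-list would be generated as part of the build, protected from tampering, and enforced in the operating system’s execution path rather than in application code; the sketch only shows the default-deny decision itself.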
That’s the key idea we will explore in Parts 2 and 3 by describing “diversity” as the basic approach, describing techniques for applying it, and gauging the effort and applicability to various types of embedded systems.
E. John Sebes is the CTO of Solidcore Systems.