Building reliable and secure embedded systems
Reliable and secure embedded systems
It is important to note at this point that reliable systems are inherently more secure and that, vice versa, secure systems are inherently more reliable. So although design for reliability and design for security will often individually yield different results, there is also overlap between them.
An investment in reliability, for example, generally pays off in security. Why? Because a more reliable system is more robust in its handling of all errors, whether they are accidental or intentional. An anti-lock braking system with a fallback to mechanical braking for increased reliability is also more secure against an attack on that critical hardware input sensor. Similarly, those printers wouldn't be at risk of fuser-induced fire in the event of a security breach if they were never at risk of fire under any misbehavior of the software.
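As a rough illustration of that kind of robustness, here is a minimal sketch in C of validating a critical sensor reading and degrading to a safe fallback when the value cannot be trusted. The plausibility limit, fault threshold, and stubbed hardware functions are all hypothetical and stand in for whatever a real braking controller would actually use.

#include <stdint.h>

#define WHEEL_SPEED_MAX_RAW       2500u   /* hypothetical plausibility limit */
#define MAX_CONSECUTIVE_FAULTS       3u

/* Stubs standing in for real sensor and actuator access. */
static uint16_t wheel_speed_read_raw(void)      { return 0u; }
static void     abs_update(uint16_t raw)        { (void)raw; }
static void     fall_back_to_base_braking(void) { }

/* Called periodically by the brake controller. */
void brake_control_step(void)
{
    static uint8_t fault_count = 0u;
    uint16_t raw = wheel_speed_read_raw();

    /* Reject readings outside the physically plausible range, whether the
       cause is a failed sensor or a deliberately injected signal. */
    if (raw > WHEEL_SPEED_MAX_RAW)
    {
        if (++fault_count >= MAX_CONSECUTIVE_FAULTS)
        {
            fall_back_to_base_braking();   /* degrade to plain mechanical braking */
        }
        return;                            /* never act on an implausible reading */
    }

    fault_count = 0u;
    abs_update(raw);                       /* normal anti-lock control */
}

The same range check that rejects a failed sensor's garbage also rejects a deliberately injected, implausible signal, which is exactly the overlap between reliability and security described above.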
Consider, importantly, that one of the first things a hacker intent on breaching the security of your embedded device might do is perform a fault-tree analysis of your system, at least mentally. This attacker would then target his time, talents, and other resources at the one or more single points of failure that he considers most likely to fail in a useful way.
Because a fault-tree analysis starts from the general goal and works inward deductively toward the identification of one or more choke points that might produce the desired erroneous outcome, attention paid to increasing reliability, such as via FMEA, usually reduces the number of choke points and makes the attacker's job considerably more difficult. Where security can break down, even in a reliable system, is where the possibility of an attacker's intentionally induced failure is ignored in the FMEA weighting and possible layers of protection are thus omitted.
Similarly, an investment in security may pay off in greater reliability, even without a directed focus on reliability. For example, if you secure your firmware upgrade process to accept only encrypted and digitally signed binary images, you'll be adding a layer of protection against an inadvertently corrupted binary causing an accidental error and product failure. Anything you do to improve the security of communications (through checksums, prevention of buffer overflows, and so forth) can have a similar effect on reliability.
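To make the checksum half of that idea concrete, here is a minimal sketch in C that rejects a received firmware image whose appended CRC-32 does not match its payload. The image layout (CRC stored little-endian in the final four bytes) and the function names are assumptions for illustration only; a real upgrade path would verify a digital signature over the image in addition to, not instead of, a simple integrity check.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Standard CRC-32 (reflected form, polynomial 0xEDB88320), computed bitwise. */
uint32_t crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++)
    {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
        {
            crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Accept an image only if the CRC stored in its final four bytes
   (little-endian, a hypothetical layout) matches the CRC of the payload. */
bool firmware_image_is_intact(const uint8_t *image, size_t len)
{
    if (image == NULL || len <= 4u)
    {
        return false;   /* too short to hold a payload plus its CRC */
    }

    size_t payload_len = len - 4u;
    uint32_t stored = (uint32_t)image[payload_len]
                    | ((uint32_t)image[payload_len + 1] << 8)
                    | ((uint32_t)image[payload_len + 2] << 16)
                    | ((uint32_t)image[payload_len + 3] << 24);

    return crc32(image, payload_len) == stored;
}

A bootloader or upgrade handler would call firmware_image_is_intact() on the received buffer and refuse to erase or program flash unless it returns true.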
The only way forward
Each year it becomes increasingly important for all of us in the embedded systems design community to learn to design reliable and secure products. If you don't, it might be your product making the wrong kind of headlines and your source code and design documents being pored over by lawyers. It is no longer acceptable to stick your head in the sand on these issues.
Michael Barr is CTO of Barr Group and a leading expert in the architecture of embedded software for secure and reliable real-time computing. Barr is also a former lecturer at the University of Maryland and Johns Hopkins University, the author of three books and more than sixty-five articles and papers on embedded systems design, and a former editor-in-chief of Embedded Systems Programming magazine. Contact him at mbarr@barrgroup.com.
This content is provided courtesy of Embedded.com and Embedded Systems Design magazine.
This material was first printed in April 2012 Embedded Systems Design magazine.
Copyright © 2012 UBM. All rights reserved.

