
IoT security, regulators and the open source movement

Art Swift

April 29, 2016


It should come as no surprise that government agencies seek to slow the pace of disruptive technologies. The further the technological leap, the more legislators and regulators seem to want to study it, box it, and make it safer for the rest of us.

However well-intentioned their efforts may be, legislated control often impedes technological progress, creating roadblocks that may prevent the creation of even greater advances.

The emerging Internet of Things (IoT) phenomenon promises to be one such target.

The IoT has burst onto the technology scene precisely in the midst of a global conversation about Internet privacy and security. Governments around the world have a vested interest in both issues, so they are highly motivated to step into nascent IoT developments.

But unless they do so with great care, they are likely to create far more problems than solutions. In fact, there’s a real danger they could threaten the development of the IoT and embedded computing altogether.

More specifically, by taking broad-brush approaches to regulating the functionality of embedded IoT systems, regulators could undermine consumer rights, stifle innovation and threaten the viability of open source software.

This emerging threat was a prime reason for the creation of the prpl Foundation’s new document, Security Guidance for Critical Areas of Embedded Computing, which outlines how technologists and developers can proactively create more secure devices while supporting more granular, thoughtful compliance requirements.

And this guidance cannot come soon enough.

As the Internet of Things works its way into the real world, regulators are already finding reasons to step in and create new rules for how these systems can be operated and used. Some of these rules effectively lock down the firmware on IoT devices so it can’t be altered, a move that puts regulators on a collision course with the consumers and developers who rely on unlocked devices, such as network routers, to customize and improve their applications.

For example, the FCC is looking into mandating that the firmware in domestic Wi-Fi routers can’t be replaced, ostensibly to better manage the Wi-Fi spectrum, which is a finite resource. Since only 11 channels are available to domestic Wi-Fi networks in the US, and only three of those don’t overlap, the potential for bandwidth conflicts is significant. With unlocked firmware, anybody could raise transmit power levels, gaining range and speed while overpowering neighboring devices.

This problem may seem to warrant a legislated solution, but the FCC’s plan could create other issues. By effectively preventing users from tweaking their routers for legitimate purposes, such as installing security patches or new open source operating systems, the regulations could render some Wi-Fi devices useless at best and dangerous at worst.

In another example, sensors have long been measuring (and, to a lesser degree, controlling) a range of functions inside automobiles, from emissions to in-vehicle entertainment systems and even brakes. Smart sensors in newer, more automated cars are taking these applications even further, enabling diagnostics, power-saving capabilities, self-driving features and a variety of self-correcting and communicating mechanisms.

Yet almost as soon as smart cars started to emerge, researchers such as Charlie Miller and Chris Valasek demonstrated that some embedded automotive systems could be exploited by remote attackers, who could then potentially take control of a vehicle.

While regulators may see an opportunity to step in and legislate security controls, doing this too early and without additional insight into the potential of IoT technology may greatly impede future advances.

Tech savvy car owners, enthusiasts and even car companies themselves may want to tweak the software inside their smart cars to play around with entertainment systems and improve performance and safety. But if regulations lock down the firmware, improvements may never be realized.

The question is clear: How do regulators impose fair controls on software to make it safe, while still protecting the consumer’s rights to improve the technology?

The problem is that current IoT systems simply aren’t architected in a way that allows for this kind of granularity. The answer is a new approach, also outlined in Security Guidance for Critical Areas of Embedded Computing, which describes how open source development, secure boot based on a root of trust anchored in the silicon, and hardware virtualization can keep both regulators and consumers happy.

Achieving this goal will require the industry to take a targeted approach to IoT security.

The first step is to stop pretending that just because an embedded sensor is obscure by consumer standards, it is therefore safe. It is not. As the Stuxnet worm demonstrated only a few years ago, severe damage can be done by manipulating the firmware of common industrial processors. The focus should therefore be on creating a top-quality, highly usable, secure and robust end product.

Second, we need to rethink existing methods of updating firmware in connected embedded systems, which are fundamentally flawed because updates are not cryptographically signed. Code and firmware in insecure devices can easily be reverse engineered, modified, re-flashed and rebooted to execute arbitrary or malicious code. We must ensure IoT systems only boot if the first piece of software to execute is cryptographically signed by a trusted entity, with the signature verified against a public key or certificate that is hard-coded into the device. Anchoring the “root of trust” in the silicon in this way makes it tamper-proof.
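To make that verification step concrete, here is a minimal sketch in Python using the cryptography package. It illustrates only the signature check itself; on a real device this logic would live in immutable boot ROM code and the public key would be fused into the silicon rather than read from a file. The file names (firmware.bin, firmware.sig, boot_public_key.pem) and the choice of RSA-PSS with SHA-256 are illustrative assumptions, not part of the prpl guidance.

```python
# Minimal sketch of a signature-checked boot decision. Assumes the image was
# signed offline with the RSA private key matching boot_public_key.pem, using
# RSA-PSS over SHA-256. All file names here are illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_public_key


def verify_firmware(image_path: str, sig_path: str, pubkey_path: str) -> bool:
    """Return True only if the firmware image was signed by the trusted key."""
    with open(pubkey_path, "rb") as f:
        public_key = load_pem_public_key(f.read())  # the device's root of trust
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(
            signature,
            image,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    if verify_firmware("firmware.bin", "firmware.sig", "boot_public_key.pem"):
        print("Signature valid: handing control to the firmware image.")
    else:
        print("Signature invalid: refusing to boot.")
```

The important property is the refusal path: if the check fails, the boot code simply never hands control to the image, which is what stops re-flashed or tampered firmware from ever running.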

Third, security can be greatly enhanced through a security-by-separation approach using hardware-supported virtualization and a secure hypervisor. Security-by-separation is a classic, time-tested approach to protecting computer systems and the data they contain. Separation means one function cannot see or access another without authorization, so software implementing one function does not have to trust software implementing another: each is isolated from the other. Our Security Guidance document hinges on security-by-separation achieved by a variety of methods. The most secure is hardware-based virtualization, but there is a spectrum of other separation methods, such as paravirtualization, hybrid virtualization and Linux containers, as the sketch below illustrates.
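As a rough illustration of the software-only end of that spectrum, the sketch below uses Linux namespaces, the kernel mechanism behind Linux containers, to run a function inside its own hostname and network namespaces so that its changes are invisible to the rest of the system. It assumes a Linux host with root privileges (or CAP_SYS_ADMIN), and the hostname string is purely illustrative; an embedded design following the guidance would lean on hardware-supported virtualization and a secure hypervisor for stronger isolation.

```python
# Sketch of separation via Linux namespaces (the "Linux containers" end of the
# spectrum). Requires Linux and root or CAP_SYS_ADMIN; names are illustrative.
import ctypes
import os
import socket

CLONE_NEWUTS = 0x04000000  # private hostname/domainname namespace
CLONE_NEWNET = 0x40000000  # private, initially empty network namespace

libc = ctypes.CDLL("libc.so.6", use_errno=True)


def isolated_function():
    """Run with private UTS and network namespaces: changes stay inside."""
    if libc.unshare(CLONE_NEWUTS | CLONE_NEWNET) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed (needs root on Linux)")
    socket.sethostname("sandboxed-function")  # visible only in this namespace
    print("child sees hostname:", socket.gethostname(), flush=True)


if __name__ == "__main__":
    pid = os.fork()
    if pid == 0:
        # Child process plays the role of the isolated function.
        isolated_function()
        os._exit(0)
    else:
        # Parent process: its view of the system is untouched by the child.
        os.waitpid(pid, 0)
        print("parent still sees hostname:", socket.gethostname())
```

The particular namespaces chosen matter less than the property they demonstrate: whatever the isolated function changes in its own environment never leaks into, and cannot be observed from, the environment of any other function.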

Building security into the hardware of embedded systems in this way will help regulators lock down specific harmful functions while still allowing consumers free rein to tweak other parts of their product. It’s a win-win for innovation and regulation.

prpl Security Guidance for Critical Areas of Embedded Computing is available for download at http://prpl.works/security-guidance/.

 
