Editor's note: This article is based on a paper presented by the authors as part of a class (ESC-436) at the Spring 2012 ESC DESIGN West.
Modern mission and safety critical embedded systems present a heterogeneous footprint with different domains, operating systems, devices, and network protocols. Such systems often rely on information originating from remote embedded devices, which run on commercially available platforms that cannot be assumed to be trusted.
Hence, there is the challenge of controlling and trusting the software and information originating from such remote embedded devices. Given the nature of critical systems such as defense, healthcare, and industrial process control, there is a clear need to ensure that the remote embedded devices have not been tampered with through internal networks, the Internet, or other wired or wireless connection points.
In this paper, we describe the investigation of a real-time remote attestation approach that ensures software and information from remote embedded devices can be trusted. Our solution addresses two key challenges: how to ensure the integrity of code on a remote device that uses Commercial-off-the-shelf (COTS) OS and software, and how to securely communicate the state of the software both at boot-up and run time.
We investigated measurement tools to determine the integrity of critical software running on remote devices. Using virtualization, we sought to separate the target device’s OS and software from the measurement tools such that the integrity of the measurements could not be compromised. Our framework included Trusted Platform Module (TPM) hardware as the root of trust for the software measurements. Finally, we wanted attestation information to be securely and efficiently communicated using the security-enhanced Object Management Group (OMG) Data Distribution Service (DDS) standard.
RTI and ObjectSecurity investigated an architecture that can assess the trust posture of a remote embedded device by measuring its software integrity. The architecture includes a mechanism for communicating the result of the measurements to a local verifier using DDS as the attestation transport. By using the intrinsic characteristics of DDS, we also wanted the ability to detect misconfigured local and remote devices before communication is established.
Further, we identified the mechanisms required to securely communicate attestation information from remote embedded devices to monitoring nodes. In particular we identified the parts of a solid security architecture for the described architecture: Model-driven security for access policy automation, together with attribute-based access control, authorization-based access control, and traditional mechanisms such as SSL/TLS and authentication.
We used a DDS-based real-time messaging technology as a secure communication gateway that can be integrated with hypervisor technologies. We used the misconfiguration detection mechanisms provided by the publish-subscribe middleware.
ObjectSecurity has expertise in security policy management, and has developed a model-driven security policy automation technology called OpenPMF (Open Policy Management Framework). RTI has extensive experience with defense networks and supports Technology Readiness Level (TRL) 9 technology in mission-critical applications. RTI is the leading vendor of DDS middleware. RTI co-wrote the DDS standard, chairs the OMG committee, and sits on the OMG board.
Critical Systems Use of Exposed Heterogeneous Frameworks
Safety-critical and mission-critical systems such as defense, health care, and control systems are rapidly evolving. Modern systems integrate complex distributed functions, building upon a heterogeneous framework comprising different domains, operating systems, devices, and network protocols. These include remote embedded sensors, payloads, communication nodes, command and control, and mission planning. Currently, few formal security processes or technologies are used to secure the perimeters of such critical systems.
While technologies such as Supervisory Control and Data Acquisition (SCADA) systems have been designed for reliability and personnel safety, implicit trust of their components and communication has been the norm. For the most part, SCADA systems historically did not consider threats from malicious intruders. In fact, most SCADA users remain unaware of their exposure, with only robustness against basic errors considered.
Untrustworthy Remote Embedded Devices and Software
The Stuxnet worm, which targets embedded systems, signals a change in this state of affairs. Stuxnet is considered the most complex threat to date targeting industrial control systems. Its final goal is to alter the code on programmable logic controllers (PLCs) to change the intended system’s behavior in a manner that is not readily observable by the operators. The exploits used include zero-day malware, a Windows rootkit, the first-ever PLC rootkit in the wild, antivirus evasion, and complex process injection. The original infection may have been introduced by removable drives, but in the future, we expect to see similar exploits originating from external networks.
In a General Accounting Office (GAO) report, the US Government predicted this weakness and identified five trends that have escalated the risk to SCADA networks. The most prevalent threat involves connecting to external networks through modern technologies such as Ethernet and the Internet Protocol (IP). Although using these technologies makes systems functional and efficient, it unfortunately also opens our key national infrastructure to cyber attacks, especially through the use of embedded devices.
A similar vulnerability is inherited by a number of modern safety-critical embedded systems—from sensors to medical devices to remote vehicles to automotive systems—which are also becoming network-accessible. For example, the embedded software in implanted medical devices is now accessible via radio frequency identification (RFID) interfaces and has already been proven vulnerable to attack. Moreover, automotive embedded-device software connects to cloud service technology, and DoD tele-medical applications enable software-controlled surgical robots in U.S. military facilities in Iraq to be operated via satellite uplinks by doctors at the U.S. Navy Hospital in Bethesda, Maryland.
Hoglund and McGraw raise the problem of a misconception regarding embedded devices: there is an assumption that they are not vulnerable to remote software-based attacks because they do not include an interactive shell out of the box. Hence, it is assumed that the worst an attacker can do to most embedded systems is merely crash the device. However, this is not the case: complex programs can be inserted via a remote attack on an embedded system, and shell code is only one example. The highest risk lies in the fact that embedded devices use commercially available platforms that can be reverse engineered and ultimately attacked; thus, they cannot be trusted.
Securing operations with Remote Attestation and Communication
To provide reliable evidence about the state of the software executing on an embedded device, a trusted computing approach is needed. Specifically, remote attestation can offer assurance of software invocation, delivery of content to trusted clients, and mitigation of mutual suspicion between clients. This approach relies upon measurement mechanisms that collect software integrity information on the state of the target embedded device. However, remote attestation and software-integrity measurement systems need flexibility. They must provide not only completeness of measurement and trust in the collection, but also dissemination of the attestation information. Moreover, they increasingly need to cope with mobile sensors and mobile handheld devices, which exhibit resource limitations. In this case, data compression and feature extraction are needed to balance dissemination of collected information against often high-cost wireless communications.
Remote attestation architecture. RTI and ObjectSecurity investigated a remote attestation infrastructure that ensures end-to-end control of remote embedded devices and applications using commercially available platforms across different administrative domains in real time, as shown in Figure 1.
Figure 1: Remote embedded devices attest that they are not compromised prior to initiating communication with the enterprise system. Periodic attestation is used to ensure integrity over time.
As shown in Figure 2, our architecture used a secure hypervisor approach that isolates the integrity measurements of the target COTS guest OS and its embedded applications from that guest OS itself. Measurement tools are used to determine the integrity of critical software running on remote embedded devices. The architecture uses commodity TPM hardware to provide the root of trust for the software measurements. Finally, a real-time distributed framework reports attestation evidence.
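To make the TPM's role concrete, the following minimal sketch (Python, with invented component names) shows the extend operation TPMs use to accumulate software measurements into a Platform Configuration Register (PCR): each new PCR value is the hash of the old value concatenated with the new measurement, so a verifier that knows the expected hashes can recompute and check the whole chain.

```python
import hashlib

def measure(data: bytes) -> bytes:
    """Hash a piece of software (e.g., a kernel image or application binary)."""
    return hashlib.sha256(data).digest()

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || measurement).
    The one-way chain lets a verifier detect any change to, or
    reordering of, the measured components."""
    return hashlib.sha256(pcr + measurement).digest()

# Simulated boot chain: each stage measures the next before handing off control.
# The component names are illustrative placeholders.
pcr = bytes(32)  # PCRs start at all zeros
for component in [b"bootloader", b"hypervisor", b"guest-kernel", b"app"]:
    pcr = pcr_extend(pcr, measure(component))

# The verifier recomputes the same chain from known-good hashes and compares
# it against the PCR value the device reports (signed by the TPM in practice).
expected = bytes(32)
for component in [b"bootloader", b"hypervisor", b"guest-kernel", b"app"]:
    expected = pcr_extend(expected, measure(component))
assert pcr == expected
```

A real TPM signs the reported PCR values with a key that never leaves the hardware, which is what makes the chain a root of trust rather than just a checksum.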
Communication and Misconfiguration Detection with OMG DDS
The Object Management Group (OMG) Data Distribution Service (DDS) standard provides the technology base for developing a distributed trust architecture.
DDS publish-subscribe middleware is well suited to communicate between applications running on different hardware, between various operating systems, and over many transports. Active efforts are underway to develop new secure and safety-certified versions.
DDS is a critical technology used by embedded military networks on land, sea, air, and space. For example, the US Navy Aegis uses DDS to provide data distribution across its distributed combat management system. DDS is also being adopted by SCADA systems. For instance, Schneider Electric uses DDS to provide global data access in its line of programmable logic controllers.
Secure real-time DDS data dissemination
The remote attestation process requires that the targeted embedded device control the propagation of information about its state, and that the verifier be able to trust the attestation mechanism (Figure 3). However, the state of the art does not consider the real-time dissemination of the attestation information. Given that we are addressing mission-critical systems, it is imperative to incorporate the real-time requirement, particularly in environments with distributed embedded devices that must, in effect, never fail.
DDS has been proven as a robust and scalable technology for distributed data dissemination in real time, with deployments in both safety- and mission-critical environments. Given such DDS deployment scenarios, we took advantage of middleware extensions addressing the security needs of critical systems such as the SCADA systems in the power industry. We identified and addressed the key security mechanisms that need to be integrated with DDS in order to secure the data flow for critical systems (see below). Typically, safety-critical devices such as SCADA devices use legacy protocols that do not exhibit any capabilities for data integrity, confidentiality, and non-repudiation. These protocols generally do not allow for end-to-end security.
QoS misconfiguration detection with DDS. Implementations of DDS such as the RTI Connext product have mechanisms for detecting “Quality of Service” (QoS) misconfigurations that can be used to find configuration problems. DDS is fundamentally a “publish subscribe” design: publishers send real-time updates to subscribers. DDS also has a sophisticated QoS management system. RTI Data Distribution Service provides a comprehensive set of QoS parameters, including ones that control reliability, resource usage, update rates, filtering, and status notification. Publishers offer QoS and subscribers request QoS; if the request is compatible with the offer, then the middleware creates—and enforces—a “QoS contract” that governs their communication. Thus, DDS already contains a natural system for detecting misconfigurations. We extended our data-distribution framework to include an encompassing mechanism for detecting QoS misconfigurations that applies to both local and remote embedded system devices.
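The request/offer matching rule can be illustrated with a simplified sketch. This is not the RTI Connext API, just the logic behind the QoS contract for two representative policies (reliability and deadline):

```python
# Simplified sketch of the DDS "requested vs. offered" QoS matching rule.
# The function and constant names are invented for illustration; a real
# DDS implementation evaluates many more policies than these two.
BEST_EFFORT, RELIABLE = 0, 1

def qos_compatible(offered_reliability, requested_reliability,
                   offered_deadline_s, requested_deadline_s):
    # Reliability: the offer must be at least as strong as the request.
    if offered_reliability < requested_reliability:
        return False
    # Deadline: the writer must update at least as often as the reader expects.
    if offered_deadline_s > requested_deadline_s:
        return False
    return True

# A best-effort publisher cannot satisfy a reliable subscriber. Instead of
# silently dropping data, the middleware raises an incompatible-QoS status
# notification -- this is the misconfiguration signal the framework uses.
assert not qos_compatible(BEST_EFFORT, RELIABLE, 1.0, 1.0)
assert qos_compatible(RELIABLE, BEST_EFFORT, 0.5, 1.0)
```

Because the match is checked at discovery time, before any data flows, a misconfigured device is flagged the moment it appears on the network.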
Evaluating remote device TPM hardware platforms
We surveyed Trusted Platform Module (TPM) hardware platforms. The survey evaluated embedded systems, workstation platforms, and other types of COTS TPM implementations. After target hardware was identified, the survey reviewed TPM-enabled secure hypervisors from a variety of sources and on a variety of architectures. This included real-time operating systems from vendors such as Wind River, Green Hills, and LynuxWorks, and solutions that leverage similar hardware security modules such as cryptographic APIs.
The next step towards designing this architecture was to determine the relevant integrity measurements. From our survey, we chose a target COTS operating system typical of embedded devices used in critical systems. Next, we considered different integrity measurement capabilities that apply both at load time and at run time.
In particular, we identified relevant kernel integrity measurement (i.e., load-time measurement) capabilities along the directions presented by Loscocco et al. They present an approach that extends the conventional methods for software integrity measurement (i.e., cryptographically hashing static code and data in the kernel) by adding the examination of dynamic data structures. Another approach is property-based attestation, which assumes the use of an entity during the bootstrap process that translates between binary measurements and properties.
We also analyzed capabilities for measuring run-time integrity. It is important to determine whether remote embedded devices have been tampered with while they are in operation. One approach to measuring run-time integrity is to analyze the coherency of the target’s state. This method is based on two properties: atomicity, which ensures that a measurement corresponds to the state of the target at a particular point in time, and quiescence, which ensures that the target data is in a consistent state. We thereby identified the relevant run-time measurements that can be taken for embedded devices.
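To make the two properties concrete, here is a hypothetical sketch of a hypervisor-side run-time measurement. The VM interface (pause, snapshot) is invented for illustration, with a stub class standing in for the real hypervisor hooks:

```python
import hashlib

class StubVM:
    """Stand-in for a hypervisor interface; the real hooks (pause, resume,
    memory snapshot) are hypothetical and depend on the hypervisor used."""
    def __init__(self, kernel_text: bytes):
        self.kernel_text = kernel_text  # pretend this is guest kernel memory
        self.paused = False

    def pause(self):
        self.paused = True

    def resume(self):
        self.paused = False

    def snapshot_critical_memory(self):
        # Only measure a frozen target: the pause gives atomicity (one point
        # in time) and lets in-flight updates settle (quiescence).
        assert self.paused
        return [self.kernel_text]

def measure_runtime(vm) -> str:
    vm.pause()  # freeze the guest so the measurement reflects one instant
    digest = hashlib.sha256(b"".join(vm.snapshot_critical_memory())).hexdigest()
    vm.resume()
    return digest

baseline = measure_runtime(StubVM(b"known-good kernel text"))
tampered = measure_runtime(StubVM(b"rootkit-patched kernel text"))
assert baseline != tampered  # any in-memory modification changes the digest
```

In a real deployment the regions hashed would be the guest's kernel text, system-call table, and similar structures that should never change while the system runs.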
The idea is to leverage the approaches that HIMA and SIMA take towards integrity measurement to design an overall architecture. HIMA presents an integrity measurement agent that runs in a hypervisor and performs two different tasks on the guest virtual machine (VM): active monitoring of critical guest events, and guest memory protection.
SIMA also proposes a hypervisor-based approach for active monitoring, using sensor agents at different layers of the system: in the kernel space of every VM, and inside the hypervisor itself for executing special monitoring tasks and securing the hypervisor. SIMA uses a TPM to assure a trusted boot process. However, HIMA and SIMA are local solutions and do not communicate attestation information outside the local platform.
As an overall objective, we defined the role and properties of all the components of our architecture, and the interfaces that are required for the integration. Specifically, we had to define the TPM’s role in supporting the integrity of measurements, the hypervisor’s role in providing isolation for the measurement tools, and the communication gateway’s role in disseminating attestation information to a verifier. Finally, we had to analyze the impact on trust assurance when embedded devices cannot support a hypervisor platform.
Using DDS-based Attestation Communications
Our approach aimed to address both load-time and run-time measurements and timely notification, hence it required a real-time mechanism for disseminating attestation information to a verifier application. The dissemination mechanism must fulfill two properties:
- To communicate data in real time, such that the verifier can also take action in real time;
- To use a cryptographic protocol to produce a controlled agreement among the entities in the distributed system.
DDS, again, is an ideal candidate for this functionality, as it addresses both requirements. DDS is real-time software middleware for distributed data dissemination. Moreover, DDS provides a pluggable architecture for security, which allows building customizable plug-in implementations for authentication, access control, key management, and cryptographic operations. We can leverage the key-management and cryptographic plug-ins to build a framework that enables agreement on a shared session key between remote embedded devices and verifiers. The framework also requires integration with the authentication plug-in to produce parameters such as the identities of the parties involved in the communication.
This architecture enables the communication of attestation information extracted from the measurement tools, which led to the design of the communication gateway. For this component, we leverage RTI Connext Routing Service. Using its Adapter SDK, RTI Connext Routing Service can interface with non-DDS systems using off-the-shelf or custom-developed adapters, including third-party legacy code written to the network socket API. We use this capability to integrate the measurement tools with DDS. The communication between the embedded devices and the verifier can then be achieved using the secure protocol.
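To illustrate what the secure protocol must provide, the sketch below authenticates each attestation report with a shared session key and a verifier-supplied nonce. The message layout and function names are invented for illustration; this is not the DDS security plug-in API, which would supply the session key and identities.

```python
import hashlib
import hmac
import json
import secrets

# Assumed to have been agreed via the key-management plug-in beforehand.
session_key = secrets.token_bytes(32)

def make_report(pcr_value: bytes, nonce: bytes) -> dict:
    """Device side: bind the measured state to the verifier's challenge."""
    body = {"pcr": pcr_value.hex(), "nonce": nonce.hex()}
    payload = json.dumps(body, sort_keys=True).encode()
    return {"body": body,
            "mac": hmac.new(session_key, payload, hashlib.sha256).hexdigest()}

def verify_report(report: dict, expected_pcr: bytes, nonce: bytes) -> bool:
    """Verifier side: check authenticity, freshness, and the measured state."""
    payload = json.dumps(report["body"], sort_keys=True).encode()
    mac_ok = hmac.compare_digest(
        report["mac"],
        hmac.new(session_key, payload, hashlib.sha256).hexdigest())
    fresh = report["body"]["nonce"] == nonce.hex()       # defeats replay
    untampered = report["body"]["pcr"] == expected_pcr.hex()
    return mac_ok and fresh and untampered

nonce = secrets.token_bytes(16)  # verifier's per-request challenge
pcr = hashlib.sha256(b"good software state").digest()
assert verify_report(make_report(pcr, nonce), pcr, nonce)
```

A production protocol would use asymmetric TPM quote signatures rather than a shared-key MAC, but the three checks (authenticity, freshness, expected state) are the same.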
Misconfiguration detection mechanisms
Given that DDS already targets embedded systems environments, we address the case in which remote attestation is achieved for embedded platforms running DDS applications. In this case, the applications running in the guest OS of the remote attestation architecture are DDS applications. For this scenario, we use the intrinsic characteristics of DDS to detect QoS misconfigurations on both local client machines and remote embedded devices running DDS. Moreover, we studied what other mechanisms can be employed to determine specific misconfigurations on embedded systems communicating with local clients.
DDS Middleware Security Mechanisms
Security is paramount for the described architecture because there is a clear incentive for attackers to target the attestation process, for example to gain unauthorized access or to carry out denial-of-service attacks. We used ObjectSecurity’s model-driven security policy automation technology to control access at the DDS and SSL layers. The following components are noteworthy because they go beyond traditional security mechanisms. Traditional mechanisms such as SSL/TLS and authentication also have to be deployed, but are not a main focus of this paper. Instead, we focus on the following innovative security aspects: Attribute-Based Access Control (ABAC), AuthoriZation-Based Access Control (ZBAC), Model-Driven Security (MDS) policy automation, and Model-Driven Security Accreditation (MDSA) automation.
Attribute-Based Access Control (ABAC) is used so that technical access control becomes more fine-grained and contextual (e.g., based on the context of the access, the business process the requester of information is in, the way information is aggregated across interconnected IT systems, etc.). Identity- and role-based access control (IBAC/RBAC) are still used, but only to provide individual attributes (i.e., identities, roles) specified within ABAC policies. ABAC PDPs/PEPs are deployed on each DDS node via interceptor interfaces, so that policy decisions are made automatically for each message that traverses DDS to and from the application.
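A minimal sketch of an ABAC policy decision point (PDP) clarifies the idea; the attribute names and rule format here are invented for illustration:

```python
# Illustrative ABAC policy decision point: each rule matches on arbitrary
# request attributes (identity, role, topic, context) rather than identity
# alone. Attribute names and the rule format are invented for this sketch.
def pdp_decide(policy, request):
    for rule in policy:
        if all(request.get(attr) == value
               for attr, value in rule["when"].items()):
            return rule["effect"]
    return "deny"  # default deny ("whitelisting")

policy = [
    # Contextual rule: verifiers may subscribe to attestation reports,
    # but only while a mission is active.
    {"when": {"role": "verifier", "action": "subscribe",
              "topic": "AttestationReport", "mission_phase": "active"},
     "effect": "permit"},
]

assert pdp_decide(policy, {"role": "verifier", "action": "subscribe",
                           "topic": "AttestationReport",
                           "mission_phase": "active"}) == "permit"
assert pdp_decide(policy, {"role": "operator", "action": "subscribe",
                           "topic": "AttestationReport",
                           "mission_phase": "active"}) == "deny"
```

Deployed as a DDS interceptor, a check like this runs on every message, which is what makes the enforcement fine-grained without any change to the application code.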
AuthoriZation-Based Access Control (ZBAC), while not implemented in this case study at the time of writing, is intended to make access policies manageable for multi-hop interactions across trust domains and with delegation. In ZBAC, authorizations (i.e., granted policies and privileges assigned to accessors) are put into signed, unforgeable/unguessable (e.g., cryptographic) tokens, which are issued by trusted authorization token servers and passed along with the request message to the policy decision point (PDP) associated with the protected resource. The PDP can then verify and enforce the policy from inside the tokens. The numerous benefits of this approach over identity-based access control have been discussed in depth elsewhere.
Model-Driven Security (MDS) policy automation is used to solve the ABAC policy authoring and maintenance show-stopper: MDS automatically generates technical security rules from generic security policy requirements (models), for example captured in models close to the understanding of the recommended guidance controls. MDS allows the expression of highly contextual policies in very generic terms (e.g., at a specific step in a business process, at a specific time, or if something else has happened before), and automatically generates the matching fine-grained ABAC rules for enforcement.
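The generation step can be sketched as a simple model-to-rules transformation. The model format below is invented for illustration and is far simpler than what a real MDS tool such as OpenPMF consumes:

```python
# Hypothetical sketch of the MDS idea: technical ABAC rules are generated
# from a generic requirement plus a model of the application's interactions,
# instead of being authored by hand. The model format is invented.
interaction_model = [  # who talks to whom, taken from the orchestration model
    {"from": "Sensor",   "to": "Verifier", "topic": "AttestationReport"},
    {"from": "Verifier", "to": "Console",  "topic": "TrustStatus"},
]

def generate_rules(model):
    """Generic requirement: only the modeled information flows are allowed.
    Emit one fine-grained permit rule per modeled interaction; everything
    else falls through to the PDP's default deny."""
    return [{"when": {"source": m["from"], "dest": m["to"],
                      "topic": m["topic"]},
             "effect": "permit"}
            for m in model]

rules = generate_rules(interaction_model)
assert len(rules) == 2
# When the application model changes, the rules are simply re-generated,
# which is the maintenance advantage MDS claims over manual authoring.
```

The point of the sketch is the direction of the dependency: the security rules are derived from the application model, so they cannot drift out of sync with it.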
Since 2002, ObjectSecurity has implemented MDS, the use of model-driven approaches to automate the generation of technical security policy implementations from generic security requirements models. Numerous publications are available, and the authors’ definition of model-driven security (MDS) is as follows:
“MDS is the tool-supported process of modelling generic, human-understandable security requirements at a high level of abstraction, and using other information sources available about the system produced by other stakeholders (e.g., mashup/orchestration models, application models, network topology models). These inputs, which are expressed in Domain Specific Languages (DSL), are then transformed into enforceable security rules with as little human intervention as possible. It also includes the run-time security management (e.g., entitlements/authorizations), i.e., run-time enforcement of the policy on the protected IT systems, dynamic policy updates, and the monitoring of policy violations. MDS helps develop, operate and maintain secure applications by making security proactive, manageable, intuitive, cheaper, and less risky.”
Through its integration with system/application specification tools (e.g., modeling, orchestration, and development tools), MDS also enables a secure application development lifecycle right from the beginning, dealing with policy abstraction, externalization, authoring, automation, enforcement, audit monitoring and reporting, and verification.
As an example implementation, full model-driven security has been implemented by ObjectSecurity in their OpenPMF product since 2002, which automates application security policies for access authorization and incident monitoring. Unlike other application security policy management products on the market, OpenPMF automates the process of translating human-understandable security and compliance requirements into the corresponding numerous and ever-changing technical authorization policy rules and configurations. In addition, it proactively enforces (“whitelisting”) decentralized access decisions, and continuously monitors for security incidents (including at the application layer).
OpenPMF involves five steps: (1) configure intuitive business security requirements in models; (2) generate matching technical security policies automatically; (3) enforce technical security policies transparently; (4) audit technical security policies transparently; (5) update technical security policies automatically. OpenPMF stands for “Open Policy Management Framework” because it is based on open standards where possible (e.g., Eclipse EMF, web app server security APIs, XACML, syslog). Because it is designed as a customizable, future-proof toolkit, it can be easily extended to both legacy devices and new kinds of devices from different vendors.
Figure 4 illustrates model-driven security (MDS) policy automation at a high abstraction level. A model-driven development process is depicted in the right half of the figure. Application interactions are modeled using a model-driven service orchestration (or similar) tool, which allows application modules to be “plugged together” in a drag-and-drop, plug-and-play fashion. The actual application is the integrated orchestration of those modules.
The model-driven orchestration tool automatically deploys and integrates the modules as modeled. This process provides valuable, reliable information about the application and its interactions to the model-driven security process, which works as follows: the first step of model-driven security policy automation involves meta-modeling the features of the security policy using a Domain-Specific Language (DSL) (depicted left, top).
You then model the security policy using the features specified in the meta-modeled DSL (depicted left, top). If necessary, you can customize the policy-generation workflow (depicted left, middle). After that, the model-driven security enforcement points are installed into the runtime platform (depicted right, bottom).
You can then run the model-driven security workflow to automatically generate fine-grained, contextual technical security policy rules (depicted left, bottom), taking into account various system and context information, including the application integration model (depicted right, top) used to build the application (depicted right, bottom). The technical security rules are then automatically pushed into the policy enforcement points for enforcement. Whenever applications change (especially the integration), the technical security rules can be automatically re-generated. A video tutorial of this process is available online.
MDS has a number of benefits when used correctly. It reduces manual administration overhead and saves cost and time through automation (policy generation, enforcement, monitoring, update), especially for agile software applications. It also reduces security risks and increases assurance by minimizing the potential for human error, and by ensuring that the security implementation is always in line with business requirements and with the functional behavior of the system, thus improving both the security and the safety of the system. Furthermore, it unifies policy consistently across security silos (e.g., different application runtime platforms). Finally, it forms part of a more automated, model-driven approach to agile accreditation.
Modeling systems at the right granularity does not actually add to the total cost of policy management, and can greatly reduce the effort of protecting the system and improve security and safety compared to traditional, manual policy definition and management. This is because if security administrators have to manually specify detailed technical security rules because their tools do not support MDS, they are effectively specifying the security-related aspects of the application specification within their policy administration tool.
In practice, this is impossible for non-trivial systems, especially over the whole system lifecycle. Model-driven security simply re-uses this information (which often makes up the greater part of security policy rules) from models specified by specialists (and/or tools) who understand applications and workflows better anyway (i.e., application developers/integrators and process modelers).
Model-driven security adoption sometimes still gets associated with additional cost and effort because of its dependence on application specifications. However, modeling aspects of the interconnected system (especially interactions) is an important part of state-of-the-art application orchestration, and is also part of robust, certifiable systems design.
Model-Driven Security Accreditation (MDSA) automation, while not implemented in this case study at the time of writing, is intended to be used to automatically generate supporting evidence for compliance/accreditation related to the effectiveness of the least-privilege access control policy implementation.
As authorization becomes increasingly fine-grained and contextual, access control policy enforcement, incident monitoring, and analysis also need to be policy-driven, because the authorization policy determines to a large extent what behavior is deemed an incident. MDSA was originally invented to automate large parts of the compliance and assurance accreditation management processes (e.g., Common Criteria) in order to reduce cost and effort and increase reliability and traceability.
MDSA automatically analyses and documents two main compliance aspects: (1) does the actual security of the “system of systems” at any given point match the stated requirements? (2) do any changes to the system of systems impact the current accreditation? A video tutorial of this process is available online.
Conclusions and future work
The overall scope of this effort was to develop a distributed remote attestation architecture that enables monitoring and control capabilities for remote embedded devices. With such an infrastructure, the software and information originating from remote embedded devices can be trusted.
Further, we identified the mechanisms required to securely communicate attestation information from remote embedded devices to monitoring nodes. In particular, we identified the parts of a solid security architecture for the described architecture: model-driven security for access policy automation, together with attribute-based access control, authorization-based access control, and traditional mechanisms such as SSL/TLS and authentication.
We used a DDS-based real-time messaging technology as a secure communication gateway that can be integrated with hypervisor technologies. We used the misconfiguration detection mechanisms provided by the publish-subscribe middleware.
This is a new approach for distributed remote attestation. By succinctly communicating and verifying the software integrity of remote embedded devices, we enable a framework for reliable and secure embedded systems. Implementing it in a commercial product will make security more accessible and practical for embedded applications. Misconfiguration detection on local and remote devices is a major benefit. This capability, along with the remote attestation framework, will provide a powerful weapon in countering the security threats of today and of the future.
Gerardo Pardo-Castellote is Chief Technology Officer at RTI and an expert in secure real-time software architectures and networking. His professional experience includes real-time distributed middleware, distributed systems and software, control-system software, distributed-systems software security, and software-system design. He was the main author of the OMG Data Distribution Service and the OMG DDS-RTPS Wire Protocol Specifications, and the architect behind the original RTI implementation of the standard. He is currently on the board of directors at the Object Management Group (OMG) and chairs the Data Distribution Group. Dr. Pardo-Castellote received his Ph.D. in Electrical Engineering from Stanford University. Gerardo also holds an M.S. in Computer Science and an M.S.E.E. from Stanford University, and a B.S. in Physics from the University of Granada, Spain.
Ulrich Lang is the CEO and co-founder of ObjectSecurity and responsible for the business and technical strategy, architecture, and direction of ObjectSecurity and the OpenPMF product. In addition, Ulrich leads the consultancy business within ObjectSecurity (esp. for SOA & Cloud security and model-driven security). He received his Ph.D. from the University of Cambridge Computer Laboratory (Security Group) on conceptual aspects of middleware security in 2003 (sponsored by the UK Defence Evaluation and Research Agency (DERA)), after having completed a Master's Degree (M.Sc.) in Information Security with distinction from Royal Holloway College (University of London) in 1997. He is on the Board of Directors of the Cloud Security Alliance (Silicon Valley Chapter).
- Trusted Computing Group – Trusted Platform Module (TPM)
- Etalle, Stuxnet Explained
- U.S. General Accounting Office, Critical Infrastructure Protection: Challenges and Efforts to Secure Control Systems, March 2004
- Karen Goertzel, Embedded Systems Security Analysis
- Improving Medical Devices: Georgia Tech Research Center Expands Testing Capabilities to Help Reduce Potential Interference
- Halperin et al., “Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses”, in Proceedings of the IEEE Symposium on Security & Privacy, Oakland, 2008
- Next Generation Platform Innovation In M2M
- Robert Ackerman, Telemedicine Reaches Far and Wide
- Greg Hoglund and Gary McGraw, Exploiting Software: How to Break Code (Boston, MA: Addison-Wesley, 2004),
- Coker et al., Principles of Remote Attestation
- Gilbert et al., “Toward trustworthy mobile sensing”, in Proceedings of the Eleventh Workshop on Mobile Computing Systems & Applications (HotMobile), 2010
- The Data Distribution Service specification, v1.2
- The Real-Time Publish Subscribe DDS Wire Protocol, v2.1
- Sailer et al., “Design and Implementation of a TCG-Based Integrity Measurement Architecture”, in Proceedings of the Usenix Security Symposium, 2004
- Paul England, Butler Lampson, John Manferdelli, Marcus Peinado, and Bryan Willman, “A Trusted Open Platform”, Computer, Volume 36, Issue 7 (July 2003), 55-62
- Jonathan M. McCune, Yanlin Li, Ning Qu, Zongwei Zhou, Anupam Datta, Virgil D. Gligor, Adrian Perrig. “TrustVisor: Efficient TCB Reduction and Attestation”, in the Proceedings of the IEEE Symposium on Security and Privacy, Oakland, May 2010
- Mark Thober, J. Aaron Pendergrass, and C. Durward McDonell, “Improving coherency of runtime integrity measurement”, in Proceedings of the 3rd ACM workshop on Scalable trusted computing (STC '08). ACM, New York, NY, USA, 51-60
- Peter A. Loscocco, Perry W. Wilson, J. Aaron Pendergrass, and C. Durward McDonell, “Linux kernel integrity measurement using contextual inspection”, in Proceedings of the 2007 ACM workshop on Scalable trusted computing (STC '07). ACM, New York, NY, USA
- Ulrich Kühn, Marcel Selhorst, and Christian Stüble, “Realizing property-based attestation and sealing with commonly available hard- and software”, in Proceedings of the 2007 ACM workshop on Scalable trusted computing (STC '07). ACM, New York, NY, USA
- Ahmed M. Azab, Peng Ning, Emre C. Sezer, and Xiaolan Zhang, “HIMA: A Hypervisor-Based Integrity Measurement Agent,” in Proceedings of the 25th Annual Computer Security Applications Conference (ACSAC '09), December 2009, Honolulu, Hawaii, USA
- Stelte et al., “Towards integrity measurement in virtualized environments — A hypervisor based sensory integrity measurement architecture (SIMA)”, in Proceedings of the 2007 IEEE Conference on Technologies for Homeland Security, Woburn, MA
- Karp, A. H., H. Haury, and M. H. Davis. From ABAC to ZBAC: The Evolution of Access Control Models, Journal of Information Warfare, vol. 9, no. 2, pp. 37-45, September 2010
- Lang, U. “Cloud & SOA Application Security as a Service”, Proceedings of ISSE 2010, Berlin, Germany, 5-7 October 2010
- Ritter, T, R. Schreiner, U. Lang. “Integrating Security Policies via Container Portable Interceptors”, IEEE distributed systems online, (vol. 7, no. 7), art. no. 0607-o7001, 1541-4922, July 2006
- Lang, U. Blog. Security policy automation using model driven security
- ObjectSecurity. OpenPMF website, 2000-2011
- ObjectSecurity. ObjectSecurity and Promia implement XML security features for next-generation US military security technology, Press Release, April 2010
- ObjectSecurity/Promia. SOA IA Demonstrator: Information Assurance (IA) for Service Oriented Architecture (SOA), demo video tutorial, 2011
- Lang, U. Blog. Study estimates 59% accreditation cost saving using automated Correct by Construction (CxC) tools (& more for agile SOA/Cloud), 2012
- Lang, U. and R. Schreiner. “Model Driven Security Accreditation (MDSA) for Agile, Interconnected IT Landscapes”, Proceedings of WISG 2009 Conference, November 2009
- Lang, U. and R. Schreiner. “Security Policy Automation for Smart Grids: Manageable Security & Compliance at Large Scale”, ISSE Conference Proceedings, 2011