
Using Monte Carlo methods to design domain-driven device clouds

Mark Benson, Logic PD

August 31, 2011


Here's how you can use a class of computational algorithms from the Manhattan Project to calculate the result of complex processes using random variables in the device cloud.

Cloud computing is an emerging trend. Consequently, more and more embedded devices are becoming connected to "The Cloud." Features that were previously included on the device are now being moved to The Cloud and provided as a service. This trend not only requires a new way of thinking about system design, but also enables a new level of algorithmic analysis that is moving us closer to unlocking the true promise of device clouds: data visualization.

To support this conclusion, I'll explore two problem-solving techniques that can help navigate the challenges and opportunities of embedded and mobile device-cloud designs: domain-driven design and Monte Carlo methods.

Domain-driven design (DDD) helps us dissect complex problem domains and make intelligent design decisions regarding which features should reside in the device and which should reside in The Cloud.

Monte Carlo methods (MCMs) give us tools to characterize data and predict events across a family of devices, a kind of analysis we could not perform without a device-cloud infrastructure.

Device clouds
To understand how device clouds fit within the historical context of computing, here is an oversimplified chronology of events since 1950:

Centralized computing (1950-1975). During this time, advancements in electronics enabled bigger and faster computers. Because of high component costs and the highly specialized skills required for operation, there was a natural trend toward centralized computing. Computing during this time was done primarily in academic, corporate, or government contexts. Multitasking and multiuser systems, when they became available, made use of dumb terminals that acted as conduits to a central computer.

Decentralized computing (1975-2000). Thanks to the decreasing cost of electronic components and increases in processing power and memory density, it became possible to put the power of computing into the hands of the average home user. Although academia, corporations, and government programs were of course still major growth areas for computing, the personal computer revolution caused a natural decentralization of both processing and storage.

Pervasive computing (2000- ). Thanks to the dot-com gold rush, a proliferation of connected devices, maturing communications infrastructures, and an increasing rate of adoption of cellular technologies, we're now living in a world where computers are all around us. Our need, now more than ever, is to find ways to connect these devices together and share data and user experiences between them in a meaningful way. Cloud computing is a class of solutions that is helping us design around and leverage the exploding number of connected devices. In a sense, cloud computing symbolizes a hybrid of the previous two generations of computing, taken to their logical conclusion: from centralized computing to cloud computing, and from decentralized computing to mobile computing.

Although the proliferation of connected devices may seem like a phenomenon confined to the consumer market, it's important to realize that the users of nonconsumer embedded systems are themselves also casual users of consumer smartphones and tablets. Because of this, nurses, soldiers, refinery plant engineers, utility services personnel, factory floor workers, and users of ambulatory medical equipment expect the user interfaces, features, performance, size, displays, touch-screen technology, and cloud connectivity of their work devices to be just as good as the smartphones in their pockets.

Although device clouds provide many benefits (shifting processing and storage costs away from the device, reducing the need for in-field repair labor, enabling better diagnostic mechanisms), important questions remain to be answered:

  • What are the key design challenges for device clouds and how do we overcome them?
  • What types of features are possible with device clouds that were not previously conceivable?
  • What do designers, developers, and users ultimately want from a device cloud system?

To help answer these questions, let's explore two problem-solving formalisms: domain-driven design and Monte Carlo methods.

Domain-driven design
What are the key design challenges for device clouds and how do we overcome them?

When thinking about design challenges for device clouds, a bright engineer might start with the mechanics:

  • How should we get data off the device and into The Cloud?
  • How do we do interesting things with the data in The Cloud?
  • How do we provide rich views of the data across multiple devices?

These are great questions to ask. However, we should be asking ourselves many other classes of questions that probe deeper into the heart of the problem space, putting ourselves into the mind of the user:


  • Who is using the device?
  • What tasks does the user need to perform?
  • When is the user performing the tasks?
  • Where is the user when they are using the device?
  • Why is the user using the device?
  • What does the device enable the user to do that they could not do otherwise?

These questions only scratch the surface of designing for high usability, but they get us started in thinking empathetically about how system design decisions affect the experience of the user.

Domain-driven design (DDD) was first introduced by Eric Evans in his 2003 book Domain-Driven Design: Tackling Complexity in the Heart of Software. DDD is a set of techniques and terms that provide a framework for making design decisions about software systems in complex domains. The core DDD concepts are:

  • Domain. A description of the environment in which the user will use the system. The domain includes the people, the processes, and the tools that users use to accomplish a set of tasks.
  • Model. A set of related abstractions that represent the architectural solution to the problem domain. The model will ideally contain objects, behaviors, attributes, relationships, use cases, and both static and dynamic views of each.
  • Ubiquitous language. A common language, developed collaboratively, that helps all stakeholders, including developers and domain experts, connect the software to the domain in which it operates. This is a key part of DDD, since some domains assign slightly different meanings to common words or use entirely new ones. Defining and using a ubiquitous language helps the team design the system more empathetically and increases the rate of user adoption, since the system's features and interfaces will closely match the language common in the users' domain.
  • Context. The environment in which a term or phrase in the ubiquitous language is used. The context is important for communicating to system designers and software engineers how certain terms are used within a task flow.

By thinking in these terms, we can more easily sort through device-cloud design problems, and we reap the added benefit of keeping designers, developers, and domain experts focused on the problems that exist within the target domain.
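To make the ubiquitous-language idea concrete, here is a minimal sketch of a DDD-style domain model. The domain (a clinical infusion pump) and every name in it, such as Infusion, Bolus, and titrate, are hypothetical examples chosen to show how the model's vocabulary can mirror the words domain experts actually use:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical domain: a clinical infusion pump. The class and method
# names deliberately echo the clinicians' own vocabulary (the
# "ubiquitous language") rather than generic programmer terms.

@dataclass
class Bolus:
    """A single rapid dose; "bolus" is the word nurses actually use."""
    volume_ml: float
    delivered_at: datetime

@dataclass
class Infusion:
    """One infusion session on one pump, modeled in the domain's own terms."""
    patient_id: str
    rate_ml_per_hr: float
    boluses: list = field(default_factory=list)

    def titrate(self, new_rate_ml_per_hr: float) -> None:
        """Clinicians say "titrate," not "set rate," so the model does too."""
        self.rate_ml_per_hr = new_rate_ml_per_hr

# Usage: the code reads like a sentence a domain expert could verify.
infusion = Infusion(patient_id="pt-001", rate_ml_per_hr=25.0)
infusion.titrate(30.0)
infusion.boluses.append(Bolus(volume_ml=2.0, delivered_at=datetime(2011, 8, 31)))
```

Because the model speaks the domain's language, a domain expert reading a design review can spot modeling mistakes without needing to understand the implementation.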

Monte Carlo methods
What types of features are possible with device clouds that were not previously conceivable?

With device clouds, in addition to centralized data storage, we open the solution space to include the ability to remotely control equipment, monitor data and trends, enable rich diagnostics and support, provide in-field configuration and software update capabilities, track usability statistics, and give resource-constrained embedded devices access to powerful cloud processing power.

With all these added benefits, we now have the ability to collect very large quantities of data about a fleet of devices. Having detailed data about many devices enables two types of analysis:

  • Detailed analysis and diagnostics on individual devices. This is great for enabling customer support, software upgrades, and diagnosing problems in the field without sending repair technicians.
  • Statistical analysis on many devices. This type of analysis is especially interesting to the author since it uncovers new ways to optimize user experiences, reduce cost, or mitigate failures.

Since we're particularly interested in statistical analysis of data from a number of devices, we turn our attention to one such class of techniques: Monte Carlo methods.

Monte Carlo methods (MCM) were conceived in the 1940s by physicists working on nuclear weapons projects at the Los Alamos National Laboratory. Specifically, MCM were used in radiation-shielding investigations to predict the distance that neutrons would travel through various types of materials.

MCM are a class of computational algorithms based on stochastic techniques that are used to calculate the result of complex processes using random variables. You can find MCM used in everything from economic policy, to nuclear physics, to regulating the flow of traffic in metropolitan areas.

Strictly speaking, to call something a Monte Carlo experiment, all you need to do is use random numbers to examine a problem. Here are the key steps to solving a problem using MCM:

  • Define a domain of possible inputs.
  • Generate inputs randomly from the domain using a probability distribution (such as Gaussian).
  • Perform a deterministic computation using the inputs.
  • Aggregate the results of the individual computations into the final result.

To illustrate this, here is a quick example of how we can estimate the value of π using MCM:


1. Draw a square on the ground, then inscribe a circle within it. From plane geometry, we know that the ratio of the area of an inscribed circle to that of the surrounding square is π/4.
2. Uniformly scatter objects of uniform size throughout the square. For example, you might use grains of rice or sand.
3. Since the two areas are in the ratio π/4, the objects should fall in the areas in approximately the same ratio (uniform distribution).
4. Count the number of objects in the circle and divide by the total number of objects in the square. This yields an approximation for π/4.
5. Multiply the result by 4 to get an approximation for π itself.
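The steps above translate directly into a few lines of code. This sketch samples random points in the unit square and counts how many fall inside the quarter circle of radius 1 (the same ratio argument applies, since the quarter circle occupies π/4 of the unit square):

```python
import random

def estimate_pi(n_samples: int, seed: int = 42) -> float:
    """Estimate pi by scattering random points in the unit square and
    counting how many land inside the inscribed quarter circle."""
    rng = random.Random(seed)  # seeded for repeatability
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # point falls inside the quarter circle
            inside += 1
    # inside / n_samples approximates pi / 4, so multiply by 4
    return 4.0 * inside / n_samples

print(estimate_pi(1_000_000))  # typically prints a value close to 3.14159
```

Note how the accuracy improves only with the square root of the sample count: to get one more decimal digit of π, you need roughly 100 times as many samples.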

MCM are useful for deriving meaning from device-cloud data in a number of ways:

  • User experience refinement. By characterizing how users use a system across a large number of devices, we can optimize or consolidate task steps to enrich the user experience.
  • Fault prediction. If we design our device-cloud systems to report fault indications or performance data, we can not only use that to locate a faulty device, but can use that in conjunction with other faulty devices to identify trends. For instance, we may learn that all devices that are located within a certain proximity of each other experience a higher rate of fault occurrences. By further analysis of the device location data, we may find that each faulty device is located in an area near salty seawater that is corroding contacts.
  • Risk mitigation. If multiple sensing devices within an oil refinery have fault conditions within a short period of time, there may be a more severe or systemic issue at play that will cause future failures. MCM can help us analyze and predict these types of problems.
  • Cost savings. By analyzing usage trends, communications infrastructure providers can shape and prioritize data traffic over a network to make tradeoffs between cost, performance, and power consumption.

MCM are a powerful set of techniques that can be used to quantitatively assess risk, predict failures, or understand how users transact tasks. Both DDD and MCM can put us in a better position to achieve the true value of device clouds: data visualization.

Data visualization
How do we provide rich views of the data across multiple devices?

So far, we've discussed two techniques that can help us get the most out of device clouds. Both of these tools, however useful, are means to an end, and I would like to contend that the most important value of device clouds is data visualization.

Visualizing data is important to all stakeholders of a system:

  • Users want to use devices to quickly accomplish tasks, monitor status indicators, and get timely feedback on the results of their actions.
  • Maintenance personnel want to reproduce, diagnose, and fix problems easily in addition to managing software versions reliably.
  • Designers want to see how users interact with the system. By planting markers at key places in the software, designers can use the device cloud to visualize these interactions.
  • Sponsors want to visualize how their system is being used. Seeing this data may redirect a business model, or prompt a more cost-effective way to manage the fleet.


By making informed system design choices, focusing intently on the problem domain to cut through the distractions of implementation, being empathetic toward key stakeholders, and using advanced algorithmic analysis techniques, we can begin to unlock the true value of cloud computing for the next generation of connected devices.

Mark Benson is director of software strategy at Logic PD, where he is responsible for leading product software development, championing the software aspects of the technology roadmap, and setting the overall software strategy for the company. Mark holds a bachelor's degree in computer science from Bethel University in Arden Hills, Minnesota, and a master's degree in software engineering from the University of Minnesota. Mark can be contacted at mark.benson@logicpd.com.


This article provided courtesy of Embedded.com and Embedded Systems Design magazine.
This material was first printed in Embedded Systems Design magazine.
Copyright © 2011 UBM. All rights reserved.
