Designing intelligent sensors for use on the "Internet of Things" – Part 2

Once the sensor signal has been digitized (Part 1), there are two primary options for handling those numeric values and the algorithms that manipulate them. We can either implement custom digital hardware that essentially “hard-wires” the processing algorithm, or we can use a microprocessor to provide the necessary computational power.

In general, custom hardware can run faster than microprocessor-driven systems, but usually at the price of increased production costs and limited flexibility. Microprocessors, while not necessarily as fast as a custom hardware solution, offer the great advantage of design flexibility and tend to be lower-priced since they can be applied to a variety of situations rather than a single application.

Once we have on-board intelligence, we’re able to solve several of the problems that we noted earlier. Calibration can be automated, component drift can be virtually eliminated through the use of purely mathematical processing algorithms, and we can compensate for environmental changes by monitoring conditions on a periodic basis and making the appropriate adjustments automatically. Adding a brain makes the designer’s life much easier.
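As a sketch of how such calibration might be automated in software, consider a simple two-point calibration: the sensor takes readings at two known reference points, computes a gain and offset, and applies them to every subsequent raw reading. The reference values and raw ADC counts below are invented for illustration.

```python
def two_point_calibration(raw_lo, raw_hi, ref_lo, ref_hi):
    """Compute gain and offset from two reference measurements.

    raw_lo, raw_hi: raw ADC readings taken at the low and high reference points.
    ref_lo, ref_hi: the known true values at those same points.
    """
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    offset = ref_lo - gain * raw_lo
    return gain, offset

def correct(raw, gain, offset):
    """Apply the stored calibration to a raw reading."""
    return gain * raw + offset

# Hypothetical example: ADC reads 52 counts at a 0.0 reference
# and 1034 counts at a 100.0 reference.
gain, offset = two_point_calibration(52, 1034, 0.0, 100.0)
print(correct(543, gain, offset))   # a mid-scale raw reading, now in real units
```

Because the correction is purely mathematical, it can be rerun periodically against on-board references to track out component drift automatically.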

A relatively new class of microprocessor, known as a digital signal controller or DSC, is rapidly finding favor in products that require low cost, a high degree of integration, and the ability to run both branch-intensive and computationally intensive software efficiently.

Although usually not as fast as custom digital hardware, in many cases DSCs are fast enough to implement the necessary algorithms. At the end of the day, that’s all that really matters.

Finish Up with Quick and Reliable Communications
That leaves just one unresolved issue: sharing the sensor’s values so that systems built on those outputs can scale easily. Once again, the fact that the sensor data is numeric allows us to meet this requirement reliably.

Just as sharing information adds to its value in the human world, so too the sharing of measurements with other components within the system or with other systems adds to the value of these measurements. To do this, we need to equip our intelligent sensor with a standardized means to communicate its information to other elements.

By using standardized methods of communication, we ensure that the sensor’s information can be shared as broadly, as easily, and as reliably as possible, thus maximizing the usefulness of the sensor and the information it produces.

Put It All Together, and You’ve Got an Intelligent Sensor
At this point, we’ve outlined the three characteristics that most engineers consider to be mandatory for an intelligent sensor (sometimes called a smart sensor):

1. a sensing element that measures one or more physical parameters (essentially the traditional sensor we’ve been discussing),

2. a computational element that analyzes the measurements made by the sensing element, and

3. a communication interface to the outside world that allows the device to exchange information with other components in a larger system.

It’s the last two elements that really distinguish intelligent sensors from their more common standard sensor relatives (Figure 1.3, below), because they provide the abilities to turn data directly into information, to use that information locally, and to communicate it to other elements in the system.

Figure 1.3. Block diagram of a standard sensor (above) and an intelligent sensor (below)

Essentially, intelligent sensors “flatten” the sensor world, allowing sensors to connect to other sensors nearby or around the globe and to accomplish tasks that simply weren’t possible prior to their development.

Just as importantly, because so much of their functionality comes from the software that controls them, companies can differentiate their products merely by changing the configuration of the software that runs in them.

This has two very important consequences for suppliers of intelligent sensors. First, it essentially moves the supplier from a hardware-based product to a software-based product.

While it’s certainly true that there has to be a basic hardware platform for the sensor (this is, after all, a physical device), the hardware is no longer the primary vehicle for adding (or capturing) value; the software that controls the intelligent sensor is.

Because the manufacturer can add or delete features by flipping a configuration bit in software, it can alter its product mix almost instantaneously, and the specific product configuration doesn’t have to be finalized until just before final test and shipment.
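The configuration-bit idea can be sketched simply: each optional feature is assigned a bit in a configuration word, and the firmware checks the word at run time. The feature names and bit positions below are invented for illustration; a real product would define its own map.

```python
# Hypothetical feature bits for a single hardware platform.
FEATURE_HIGH_RATE   = 1 << 0   # faster sampling
FEATURE_LINEARIZE   = 1 << 1   # on-board linearization
FEATURE_DIAGNOSTICS = 1 << 2   # extended self-test reporting

def is_enabled(config_word, feature_bit):
    """Return True if the given feature bit is set in the config word."""
    return bool(config_word & feature_bit)

# Two "different" products built from the same firmware image:
low_end  = FEATURE_LINEARIZE
high_end = FEATURE_LINEARIZE | FEATURE_HIGH_RATE | FEATURE_DIAGNOSTICS

print(is_enabled(low_end, FEATURE_DIAGNOSTICS))   # False
print(is_enabled(high_end, FEATURE_DIAGNOSTICS))  # True
```

Flipping one bit at final test is all it takes to turn the low-end configuration into the high-end one.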

One hardware platform can be used on multiple products targeted for different market segments at different price points; and, once new features have been developed, no additional production costs are required in order to include them in the product, so marginal profit soars.

The second consequence is that, because the intelligent sensor is connected to the outside world, the supplier now has the ability to gather information on the operation of its sensors in the field under real-world conditions and to update the software running the sensors after they leave the factory.

Not only does the information from the field offer the sensor manufacturer unparalleled insight into the needs and concerns of its customers, but it also provides the hard data required to determine the issues that are most important to those customers (and hence are the ones that the customers are most likely to value).

Armed with this information, sensor manufacturers can quickly add new features, offer certain configurations on an as-needed basis, or perform maintenance, all without having to touch the sensor itself. Services can now be delivered cost-effectively from central locations, providing yet another opportunity for the supplier to capture additional value and profits. An example of this is reported in the Harvard Business Review:

Most manufacturers cannot charge more than $90 to $110 per hour for their technical support because of price and benefit pressures from local competitors. But GE Energy, because of its efficient network-enabled remote servicing, can charge $500 to $600 per hour for the same technician.

Even more important, the information generated by its continual monitoring allows GE to take on additional tasks, such as managing a customer’s spare parts inventory or providing the customer’s and GE’s service and support personnel with complete access to unified data and knowledge about the status of the equipment.

Why Don’t We Make Everything Intelligent?
With all of the benefits that come from turning a standard, stand-alone sensor into a connected, intelligent sensor, are there any reasons why we wouldn’t want to make all sensors intelligent?

The answer is “yes,” and it’s important to understand the situations for which it’s not appropriate to add the type of intelligence and connectivity that we’ve been discussing. In general, adding intelligence may not make sense under one or more of the following conditions:

1) the additional product development and manufacturing costs cannot be recouped from the customers within a reasonable time frame,

2) the end user is either unable or unwilling to supply the infrastructure required to power and/or communicate with the intelligent device, or

3) the physical constraints of a particular application preclude adding the additional circuitry required to implement the intelligence and connectivity.

Development and Production Costs
In order for any product to remain viable over the long term, customers must be willing to pay enough for the device to cover the cost to develop and manufacture it.

That principle holds just as true for leading-edge technical products such as intelligent sensors as it does for more prosaic products like paper towels; no company can long afford to make a product for which it receives less money than it costs to make.

Before investing the time and resources to add intelligence to its devices, a sensor manufacturer needs to determine whether its customers will be willing to pay enough of a differential in price or services to cover at least the cost of development and any increased production expenses (less any savings the manufacturer may enjoy based on the new design).

Unless customers sufficiently value the benefits that an intelligent device offers, the manufacturer is better off sticking to nonintelligent products.

At first, it might seem that customers would clearly see the benefits of adding intelligence, but some applications are so cost-driven and have such razor-thin margins that customers are completely unwilling to invest in new technology.

An example of this would be low-end disposable plastic cutlery, a commodity for which manufacturers receive a fraction of a cent of profit per finished item. With such minuscule profit margins, producers of this type of product simply will not spend much money on equipment; their buying decisions are focused on the bottom-line purchase price, and anything that even appears to be optional holds no value at all.

Lack of Necessary Infrastructure
A second major condition under which intelligent sensors should not (or cannot) be used occurs when a customer lacks the minimum level of infrastructure required to support both the sensors’ power requirements and their communication channels.

Closely related to the previous condition, in which the sensor manufacturer couldn’t cost-justify building the product, this is the case in which the customer can’t economically justify adding the infrastructure the sensors need in order to work.

Both power and communication channels are mandatory for intelligent connected sensors; without power, the sensors can’t even turn on, and without communication channels they are unable to report their information.

This aspect of implementing intelligent sensor systems should not be underestimated. Although more and more manufacturing plants are being wired for digital data networks, that wiring still represents a significant cost to the customer, one that many find to be a deal killer.

Some of the new networking protocols provide power along with the wires used for communications (for example, Power-over-Ethernet (PoE)), but older plants in particular can be very expensive to wire.

Newer low-power wireless sensors are coming into the market to help address these twin issues, but such solutions tend to be more expensive to purchase (although their long-term total cost of ownership may be lower).

Conditions Precluding Additional Electronic Circuitry
The final, and least common, barrier to the use of intelligent sensors occurs when the environmental conditions of a particular application preclude the use of any additional electronic circuitry.

Such conditions might include size constraints, extreme operating temperatures, severe vibration, or exposure to caustic chemicals. In these cases, a hardened standard sensor may be the only option, although the sensor’s performance can often be significantly improved by converting the measured parameter to a digital value as soon as possible.

Real-world Examples of Intelligent Sensors
Before proceeding to the next part in this series, let’s look at three examples of intelligent sensors in the real world, two that come from the industrial process-control market and one from the vehicular-control market.

Multichannel Digital Temperature Sensor. Temperature is a widely used parameter in the control of various industrial processes, and one of the most common temperature sensors is the thermocouple.

In some ways, a thermocouple is an extremely simple sensing element: it consists of two dissimilar metals joined together at a single point. Due to what’s known as the Seebeck effect, the junction of these two metals produces a voltage that varies with the temperature of the junction (Figure 1.4, below).

Figure 1.4. Diagram of a Basic Thermocouple

The important concept to understand is that the voltage it produces is very small, on the order of millivolts, and is frequently measured in the presence of significant levels of electronic noise, which may be on the order of hundreds of volts.

Complicating matters is the fact that the temperature response of thermocouples is nonlinear, so a linearization operation usually must be performed before the temperature reading can be used.
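One common linearization technique is a lookup table with piecewise-linear interpolation between entries. The sketch below uses a handful of illustrative voltage/temperature pairs that are only roughly type-K-like; a real design would use the published thermocouple reference tables (and cold-junction compensation), neither of which is reproduced here.

```python
# Illustrative (voltage in mV, temperature in degrees C) pairs.
# A real design would use the published reference tables.
TABLE = [(0.0, 0.0), (4.1, 100.0), (8.1, 200.0), (12.2, 300.0)]

def linearize(mv):
    """Convert a thermocouple voltage to temperature by piecewise-linear
    interpolation over the lookup table, clamping outside its range."""
    if mv <= TABLE[0][0]:
        return TABLE[0][1]
    for (v0, t0), (v1, t1) in zip(TABLE, TABLE[1:]):
        if mv <= v1:
            return t0 + (t1 - t0) * (mv - v0) / (v1 - v0)
    return TABLE[-1][1]   # clamp above the table range

print(linearize(2.05))   # halfway between 0.0 and 4.1 mV -> 50.0
```

On a DSC, the same table walk runs in a handful of cycles per reading, which is why the nonlinearity problem all but disappears once the signal is digitized.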

There are other serious challenges in using thermocouples, but these two are sufficient to illustrate how intelligent sensors can overcome these issues to provide accurate readings in an extreme environment: an injection-molding machine.

For those readers unfamiliar with injection molding, it is a manufacturing process in which solid plastic pellets are heated to between 300°F and 900°F to melt them. The melted plastic is then injected into a mold under high pressure (on the order of 10,000–30,000 psi), and the plastic is then allowed to cool back to a solid in the shape of the mold.

This process is repeated rapidly so that the manufacturer can make parts as quickly as possible. The key to running a successful injection-molding operation is to keep cycle times (the time it takes to open and close the mold once) short and scrap rates low.

So long as a molder can produce good quality parts at a profit per part, he essentially has the ability to “print money” based on the speed at which he can run his cycle.

An important aspect is the proper regulation of the temperature of the plastic at various points throughout the molding machine, which requires the distribution of temperature sensors (thermocouples) at key points in the process.

Unfortunately, one of the drawbacks to using thermocouples is that the wire used to create them is expensive. Molding machines in general are not particularly small, and the machines employ multiple zones of temperature monitoring and control (250 zones or more in the larger systems).

The thermocouple wires thus must be run long distances to their associated temperature controllers, resulting in the worst of all possible worlds: multiple strands of expensive wire that have to be run long distances.

One pioneering company in the temperature-control field realized that they could save their customers a tremendous amount of money by digitizing the temperature readings at the mold itself and then shipping the digitized readings to the controller via standard (and inexpensive) copper cables.

Furthermore, they could do this for many channels of thermocouple readings and, since the thermocouple readings changed relatively slowly, the digitized readings could be time-multiplexed when sent to the controller.

In the end, up to 96 channels of thermocouple data could be reported for each device, thus turning a costly, noise-prone system of long thermocouple wires into an easily managed single pair of copper wires.
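The time-multiplexing idea can be sketched simply: each outgoing frame carries one channel’s reading tagged with its channel number, and the transmitter cycles round-robin through all of the channels. The frame layout below is invented purely for illustration, not taken from any actual product.

```python
import struct

NUM_CHANNELS = 96

def build_frame(channel, reading_c):
    """Pack one channel's temperature into a small frame:
    a 1-byte channel ID plus a 16-bit signed reading in tenths of a degree."""
    return struct.pack(">Bh", channel, round(reading_c * 10))

def parse_frame(frame):
    """Recover (channel, temperature) from a received frame."""
    channel, tenths = struct.unpack(">Bh", frame)
    return channel, tenths / 10

# One round-robin scan: one frame per channel, sent over a single pair.
readings = [200.0 + ch * 0.5 for ch in range(NUM_CHANNELS)]
frames = [build_frame(ch, readings[ch]) for ch in range(NUM_CHANNELS)]

print(parse_frame(frames[10]))
```

Because temperatures change slowly relative to the link speed, even 96 channels scanned in sequence arrive faster than the process can meaningfully change.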

Flow Sensors. While the plastic must be kept in a molten state until it gets to the mold, once there the goal is to solidify the molten plastic in the desired shape. To do this, cooling channels built into the molds circulate cool water or other fluids to remove the heat from the plastic quickly.

If the flow of the coolant fluid is impeded, the coolant will warm up because it is staying in contact with the hot mold longer. This in turn reduces the cooling efficiency of the coolant and lengthens the time it takes for the part to solidify, thus lengthening the injection cycle time and killing profitability.

Since this is obviously not something any rational molder wants to happen, smart molders include flow sensors in the coolant systems to ensure that the coolant is flowing within a desired range.

If one knows some of the characteristics of the cooling fluid, the rate of flow, and the temperature of the fluid at two different points, it is also possible to calculate the number of BTUs transferred between those two points in the cooling system.
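That calculation is straightforward: heat transfer rate equals mass flow times specific heat times the temperature difference. For water in U.S. units this reduces to the familiar rule of thumb BTU/hr ≈ gpm × 500 × ΔT(°F), where 500 ≈ 8.33 lb/gal × 60 min/hr × 1 BTU/(lb·°F). A minimal sketch, assuming water as the coolant:

```python
WATER_LB_PER_GAL = 8.33    # approximate density of water
WATER_BTU_PER_LB_F = 1.0   # specific heat of water

def btu_per_hour(flow_gpm, temp_in_f, temp_out_f):
    """Heat picked up by the coolant between two measurement points:
    mass flow (lb/hr) x specific heat x temperature rise (deg F)."""
    mass_flow_lb_per_hr = flow_gpm * WATER_LB_PER_GAL * 60
    return mass_flow_lb_per_hr * WATER_BTU_PER_LB_F * (temp_out_f - temp_in_f)

# 10 gpm of water warming 8 deg F across the mold:
print(round(btu_per_hour(10, 60.0, 68.0)))   # roughly 40,000 BTU/hr
```

An intelligent flow sensor with two temperature inputs can report this figure directly, turning a raw flow reading into process information.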

Flow sensors come in a variety of configurations, but one of the most popular types is what’s known as the in-line flow sensor. In this type of sensor, a propeller-like device is inserted in-line with the coolant flow, with the speed of the propeller indicating the flow of the fluid through the sensor.

Over time, the bearing on which the propeller is situated will wear, resulting in an eccentric motion of the propeller that significantly degrades the quality of the flow reading.

One leading company developed a handheld portable unit that used special filtering algorithms to compensate for propeller bearing wear, which allowed accurate flow readings to be maintained longer and significantly extended the life of the sensor.
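The company’s actual filtering algorithms aren’t described here, but the flavor of the approach can be suggested with a simple sliding-window median filter, which rejects the short spikes an eccentric propeller produces while still tracking the underlying flow rate. The sample data is invented for illustration.

```python
from statistics import median

def median_filter(samples, window=5):
    """Sliding-window median: robust to short outlier spikes."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        out.append(median(samples[lo:hi]))
    return out

# A steady 12.0 gpm flow with two wear-induced spikes in the raw readings:
raw = [12.0, 12.1, 17.5, 11.9, 12.0, 6.2, 12.1, 12.0]
print(median_filter(raw))
```

The spikes at 17.5 and 6.2 vanish from the filtered output, which is the essential effect: the sensor keeps reporting a usable flow value even as the bearing degrades.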

Steer-by-Wire Steering-position Sensor. The final example of a real-world intelligent sensor comes from the vehicular-control market, specifically steer-by-wire systems for marine vehicles (boats). In a normal mechanical steering system, there is a physical link between the steering wheel and the steering-control surface (the wheels of a car, for instance, or the rudder of a boat).

When the driver turns the wheel one way, the motion is translated through a series of mechanical linkages into the corresponding change in the steering control surface. Depending upon how the system is configured, the driver gets feedback from the steering system (either from the road or the water), which helps the driver adjust his actions accordingly.

Although reliable, mechanical steering systems suffer from the inability to have more than one steering wheel without extremely complex (and expensive) mechanical fixtures.

That’s not such a big deal in a car, where only one steering wheel is normally needed, but it can be a problem for large boats, in which it would be very helpful to be able to have one steering wheel at the front (bow) of the boat when docking and one at the rear (stern) of the boat during normal cruising.

In a steer-by-wire system, by contrast, most of the mechanical linkages are replaced by electronic controls; although the driver may turn a steering wheel, that wheel is linked electronically, not mechanically, to the control surfaces.

This offers several immediate advantages, not the least of which is a significant reduction in the size and weight required for the steering system. In addition, one can more easily accommodate two or more steering wheels since they can be linked by an electronic cable without requiring additional mechanical linkages.

One drawback to steer-by-wire, however, is that until recently the driver had no feedback from the control surfaces, which could cause a disconcerting feeling of disconnection between the driver’s actions and the resulting response of the vehicle.

With the advent of a special material that changes its properties based on the strength of a magnetic field passed through it, that lack of feedback has changed.

Using a steering sensor that measures the position of the wheel many times a second, a unique steer-by-wire system developed by a global innovator adjusts the feedback to the driver by varying the viscosity of the special material based on a number of factors, including how quickly the driver is turning the wheel.

In addition, the feedback to the driver can be changed based on conditions on the control surface, giving the driver not only a more enjoyable driving experience, but also a safer one.

To read Part 1, go to “The basics of sensor design.”
Next in Part 3: “The role of DSP in smart sensor design.”

Creed Huddleston is President of Real-Time by Design, LLC, specializing in the design of intelligent sensors, located in Raleigh-Durham, North Carolina.

This series of articles is based on material from “Intelligent Sensor Design” by Creed Huddleston, with permission from Newnes, a division of Elsevier. Copyright 2007. For more information about this title and other similar books, please visit

1. Management Challenges for the 21st Century, by Peter F. Drucker. Collins, 2001.

2. According to the CIA World Factbook, the estimated total world population as of July 2005 was 6,446,131,400.

3. Based on a study (GB-200N Industrial Sensor Technologies and Markets) by B. L. Gupta for Business Communications Company, Inc. in which the 2004 industrial sensor market size in the United States was $6.1 B, with an anticipated annual growth rate of 4.6%.

4. Ohm’s Law is V = I * R, where V is the voltage measured across a resistance (in volts), I is the current through the resistance (in amps), and R is the value of the resistance itself (in ohms). Ohm’s Law holds true for a purely resistive element, which is all we’re worried about in this example.

5. Branch intensive software is software that makes frequent changes, known as branches, in the processing of its program instructions. Computationally intensive software is software in which a significant portion of the processing time is devoted to performing mathematical computations.

6. Four Strategies for the Age of Smart Devices, by Glen Allmendinger and Ralph Lombreglia. Harvard Business Review, October, 2005. Reprint R0510J.
