Designing intelligent sensors for use in an “Internet of Things” – Part 1

In today’s instant-access Internet-centric world, people want and expect to be able to get information when they want it, in the form they need, and at a price they can afford (preferably free).

As Peter Drucker, the greatest management mind of the past 100 years, points out, unlike physical products, information doesn’t operate under the scarcity theory of economics (in which an item becomes more valuable the less there is of it).

On the contrary, information becomes more useful (and valuable) the more there is of it and the more broadly it is disseminated. Individuals and organizations that understand this concept have begun to unlock the tremendous value that has lain fallow in commercial, academic, and nonprofit enterprises throughout the globe by digitizing their mountains of raw data, analyzing it to create meaningful information, and then sending that information via standardized communication links to others within and outside their organizations to accomplish meaningful work.

It would be difficult to overstate the effects that this new paradigm of ubiquitous connectivity has wrought in society economically, intellectually, and in everyday life. People now speak about working in “Internet time,” a frame of reference in which both space and time are greatly compressed.

Information from anywhere on the globe can be distributed to virtually anywhere else quickly and reliably, and Bangalore is now as close to New York City as Boston. In this new world, people work differently than before; individuals or groups can easily team with others from around the corner or around the globe to produce new ideas, new products, or new services, creating fabulous new wealth for some and destroying ways of life for others.

Truly, the new paradigm, which author Thomas Friedman refers to as the “flattening” of the globe, represents a tectonic shift in the way people view their world and interact within it.

What are Intelligent Sensors?
Interestingly, a nearly identical though largely unnoticed sea change is occurring in the rather mundane world of sensors. For the uninitiated, sensors (or sensing elements as they’re sometimes called) are devices that allow a user to measure the value of some physical condition of interest using the inherent physical properties of the sensor.

That’s quite a mouthful for a pretty simple concept, namely monitoring the behavior of one (relatively) easy-to-observe parameter to deduce the value of another difficult-to-observe parameter.

An example of a very familiar nonelectronic temperature sensor (Figure 1.1, below) is the mercury bulb thermometer, in which a column of mercury contracts or expands in response to the temperature of the material to which it’s exposed.

In this case, the physical condition that we’re measuring is the temperature of the material in which the thermometer is inserted, and the inherent physical property of the sensor that we use for measurement is the height of the mercury in the thermometer.

Figure 1.1. Two mercury bulb thermometers showing the temperature of a material under two different conditions (ice and boiling water).

So what kinds of parameters can we measure with sensors? The answer is quite a lot, actually, with the limiting factor generally being our imaginations. Probably the most widely measured parameter is temperature, but other applications include pressure, acceleration, humidity, position, pH, and literally thousands more.

What makes sensors so useful, though, is not just that they can accurately measure a wide range of parameters but that they can perform those measurements under environmental conditions in which human involvement is simply impossible.

Whether it’s measuring the temperature of molten steel at the center of a blast furnace or monitoring ocean currents thousands of feet below the surface, sensors provide the accurate information that allows us to monitor and control all sorts of important processes.

At first glance, it might seem that sensors fall in the same category as a comfortable sweatshirt, nice to have but not particularly exciting. In this case, such a first impression would be dead wrong.

To put things in perspective, in 2005 there were an estimated 6.4 billion people living on the planet. Coincidentally, the market for industrial sensors in the United States alone in 2005 was estimated to be $6.4 billion, and $40 billion worldwide.

There are far more sensors in the world than humans, they’re called upon to do tasks that range from the mundane to the cutting edge of science, and people are willing to pay for the value that sensors bring to the table. That’s a powerful and profitable confluence of need, technical challenge, and economic opportunity, and into the fray has stepped a new class of devices that is bringing disruptive change to the sensing world: intelligent sensors.

Just what are these intelligent sensors? Conceptually, they’re a new class of electronic sensing device that’s literally revolutionizing the way we gather data from the world around us, how we extract useful information from that data and, finally, how we use our newfound information to perform all sorts of operations faster, more accurately, safer, and less expensively than ever before.

Even better, we can leverage the power of individual intelligent sensors by communicating their information to other intelligent sensors or to other systems, allowing us to accomplish tasks that weren’t possible before and creating incredible advancements in a wide variety of applications. Sound familiar?

Conventional Sensors Aren’t Perfect
Before we delve into a discussion of intelligent sensors, we first need to examine regular sensors a bit more closely so that we have a solid foundation upon which to develop our understanding of intelligent sensors.

For all that they do well, most sensors have a few shortcomings, both technically and economically. To be effective, a sensor usually must be calibrated—that is, its output must be made to match some predetermined standard so that its reported values correctly reflect the parameter being measured.

In the case of a bulb thermometer, the gradations next to the mercury column must be positioned so that they accurately correspond to the level of the mercury for a given temperature.

If the sensor’s not calibrated, the information that it reports won’t be accurate, which can be a big problem for the systems that use the reported information.

Now, not all situations require the same level of accuracy. For instance, if the thermostat in your house is off by a degree or two, it doesn’t really make much difference; you’ll simply adjust the temperature up or down to suit your comfort.

In a chemical reaction, however, that same difference of a degree or two might literally mean the difference between a valuable compound, a useless batch of goop, or an explosion! We’ll discuss the issue of calibration in greater depth later, but for now the key concept to understand is that the ability to calibrate a sensor accurately is a good, often necessary, feature.

It’s also important to understand that, as important as it is to calibrate a sensor, often it’s extremely difficult if not impossible to get to a sensor in order to calibrate it manually once it’s been deployed in the field.
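To make the idea concrete, here is a minimal sketch in C of a classic two-point calibration, in which the sensor’s raw readings at two known reference conditions are used to derive a gain and an offset. The reference values and raw readings below are hypothetical numbers chosen purely for illustration, not anything prescribed in the text.

#include <stdio.h>

/* Two-point linear calibration: given the sensor's raw readings at two
 * known reference conditions, derive a gain and offset so that corrected
 * readings match the standard. All numbers here are hypothetical. */
typedef struct {
    double gain;
    double offset;
} calibration_t;

static calibration_t calibrate_two_point(double raw_lo, double ref_lo,
                                         double raw_hi, double ref_hi)
{
    calibration_t cal;
    cal.gain   = (ref_hi - ref_lo) / (raw_hi - raw_lo);
    cal.offset = ref_lo - cal.gain * raw_lo;
    return cal;
}

static double apply_calibration(const calibration_t *cal, double raw)
{
    return cal->gain * raw + cal->offset;
}

int main(void)
{
    /* Suppose the uncalibrated sensor reads 2.1 in ice water (0 degrees C)
     * and 97.5 in boiling water (100 degrees C). */
    calibration_t cal = calibrate_two_point(2.1, 0.0, 97.5, 100.0);

    printf("Corrected reading: %.2f degrees C\n",
           apply_calibration(&cal, 51.3));
    return 0;
}

Once the gain and offset are stored, every subsequent raw reading can be corrected in software, which is precisely the kind of adjustment that is painful to perform mechanically on a sensor already deployed in the field.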

The second concern one has when dealing with sensors is that their properties usually change over time, a phenomenon known as drift. For instance, suppose we’re measuring a DC current in a particular part of a circuit by monitoring the voltage across a resistor in that circuit (Figure 1.2, below).

Figure 1.2. Example of a resistive sensing element used to monitor current.

In this case, the sensor is the resistor and the physical property that we’re measuring is the voltage across it, which, as we know from Ohm’s Law, will vary directly with the amount of current flowing through the resistor. As the resistor ages, its chemical properties will change, thus altering its resistance.

If, for example, we measured a voltage of 2.7V across the resistor for a current of 100 mA when the system was new, we might measure a voltage of 2.76V across it for the same current five years later. While 0.06V may not seem like much, depending upon the application it may be significant.

As with the issue of calibration, some situations require much stricter drift tolerances than others; the point is that sensor properties will change with time unless we compensate for the drift in some fashion, and these changes are usually undesirable.
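Using the numbers from the example above, a short worked sketch in C shows how the drifted resistance skews the reported current when the system continues to assume the original value (2.7 V at 100 mA implies 27 ohms by Ohm’s Law):

#include <stdio.h>

/* Illustrates how resistor drift corrupts a current measurement.
 * The numbers come from the example in the text: 2.7 V at 100 mA when
 * new (so R = 27 ohms), and 2.76 V for the same current five years later. */
int main(void)
{
    const double r_nominal = 27.0;  /* ohms, the value the system assumes */
    const double v_new     = 2.70;  /* volts measured when new            */
    const double v_aged    = 2.76;  /* volts at the same 100 mA, aged     */

    double i_new  = v_new  / r_nominal;  /* Ohm's Law: I = V / R */
    double i_aged = v_aged / r_nominal;  /* still using the old R */

    printf("Reported current, new:  %.1f mA\n", i_new  * 1000.0);
    printf("Reported current, aged: %.1f mA\n", i_aged * 1000.0);
    printf("Error due to drift:     %.1f%%\n",
           (i_aged - i_new) / i_new * 100.0);
    return 0;
}

The aged reading works out to about 102.2 mA, an error of roughly 2.2 percent, even though the actual current never changed.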

By the way, are you wondering why in the previous example I referred to the resistor as the sensing element and not the voltmeter used to measure the voltage? The distinction is a bit pedantic but important. In the example, I said that we were monitoring the current in the circuit by measuring the voltage across the resistor.

That made the resistor the primary sensor and the voltage across it the property that changes in response to a change in the parameter of interest. The voltmeter is a secondary sensing device that we use to measure the primary parameter. As one might guess, the voltmeter itself has its own issues with calibration and drift as well.

The reason that the distinction between the primary and secondary sensors is important is that it’s critical to know precisely what you’re measuring. Without a clear understanding of the parameter(s) of interest, it’s possible to create a system that doesn’t really measure what you want or that introduces excessive problems with accuracy. We’ll devote more attention to that particular aspect later.

A third problem is that not only do sensors themselves change with time, but so, too, does the environment in which they operate. An excellent example of that would be the electronic ignition for an internal combustion engine.

Immediately after a tune-up, all the belts are tight, the spark plugs are new, the fuel injectors are clean, and the air filter is pristine. From that moment on, things go downhill; the belts loosen, deposits build up on the spark plugs and fuel injectors, and the air filter becomes clogged with ever-increasing amounts of dirt and dust.

Unless the electronic ignition can measure how things are changing and make adjustments, the settings and timing sequence that it uses to fire the spark plugs will become progressively mismatched for the engine conditions, resulting in poorer performance and reduced fuel efficiency.

That might not strike you as particularly important if you’re zipping around town and have a gas station on most corners, but you probably wouldn’t be quite so sanguine if you were flying across the ocean and had to make it without refueling! The ability to compensate for often extreme changes in the operating environment makes a huge difference in a sensor’s value to a particular application.

Yet a fourth problem is that most sensors require some sort of specialized hardware called signal-conditioning circuitry in order to be of use in monitoring or control applications.

The signal-conditioning circuitry is what transforms the physical sensor property that we’re monitoring (often an analog electrical voltage that varies in some systematic way with the parameter being measured) into a measurement that can be used by the rest of the system.

Depending upon the application, the signal conditioning may be as simple as a basic amplifier that boosts the sensor signal to a usable level or it may entail complex circuitry that cleans up the sensor signal and compensates for environmental conditions, too. Frequently, the conditioning circuitry itself has to be tuned for the specific sensor being used, and for analog signals that often means physically adjusting a potentiometer or other such trimming device.

In addition, the configuration of the signal-conditioning circuitry tends to be unique to both the specific type of sensor and to the application itself, which means that different types of sensors or different applications frequently need customized circuitry.

Finally, standard sensors usually need to be physically close to the control and monitoring systems that receive their measurements. In general, the farther a sensor is from the system using its measurements, the less useful the measurements are.

This is due primarily to the fact that sensor signals that are run long distances are susceptible to electronic noise, thus degrading the quality of the readings at the receiving end. In many cases, sensors are connected to the monitoring and control systems using specialized (and expensive) cabling; the longer this cabling is, the more costly the installation, which is never popular with end users.

A related problem is that sharing sensor outputs among multiple systems becomes very difficult, particularly if those systems are physically separated. This inability to share outputs may not seem important, but it severely limits the ability to scale systems to large installations, resulting in much higher costs to install and support multiple redundant sensors.

What we really need to do is to develop some technique by which we can solve or at least greatly alleviate these problems of calibration, drift, and signal conditioning. If we could find some way to share the sensor outputs easily, we’d be able to solve the issue of scaling, too. Let’s turn now to how that’s being accomplished, and examine the effects the new approach has on the sensor world.

First Things First – Digitizing the Sensor Signal
When engineers design a system that employs sensors, they mathematically model the response of the sensor to the physical parameter being sensed, mathematically model the desired response of the signal-conditioning circuitry to the sensor output, and then implement those mathematical models in electronic circuitry.

All that modeling is good, but it’s important to remember that the models are approximations (albeit usually fairly accurate approximations) to the real-world response of the implementation. It would be far better to keep as much of the system as possible actually in the mathematical realm; numbers, after all, don’t drift with time and can be manipulated precisely and easily.

In fact, the discipline of digital signal processing or DSP, in which signals are manipulated mathematically rather than with electronic circuitry, is well established and widely practiced.

Standard transformations, such as filtering to remove unwanted noise or frequency mappings to identify particular signal components, are easily handled using DSP. Furthermore, using DSP principles we can perform operations that would be impossible using even the most advanced electronic circuitry.
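To give a flavor of what manipulating signals mathematically looks like in practice, here is a minimal sketch in C of one of the simplest DSP building blocks: a single-pole low-pass filter (an exponential moving average) that smooths noisy sensor samples in software. The filter choice and the smoothing factor are illustrative assumptions, not anything the text prescribes.

/* A single-pole low-pass filter (exponential moving average).
 * Smaller alpha means heavier smoothing; alpha = 1 passes samples through. */
typedef struct {
    double alpha;   /* smoothing factor, 0 < alpha <= 1 */
    double state;   /* last filtered output             */
} lowpass_t;

static void lowpass_init(lowpass_t *f, double alpha, double initial)
{
    f->alpha = alpha;
    f->state = initial;
}

static double lowpass_update(lowpass_t *f, double sample)
{
    /* y[n] = y[n-1] + alpha * (x[n] - y[n-1]) */
    f->state += f->alpha * (sample - f->state);
    return f->state;
}

Because the filter is just arithmetic on numbers, its behavior never drifts with temperature or age, and changing its characteristics is a matter of changing a constant rather than swapping components.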

For that very reason, today’s designers also include a stage in the signal-conditioning circuitry in which the analog electrical signal is converted into a digitized numeric value.

This analog-to-digital (A/D) conversion step is vitally important, because as soon as we can transform the sensor signal into a numeric value, we can manipulate it using software running on a microprocessor.

Analog-to-digital converters, or ADCs as they’re commonly known, are usually single-chip semiconductor devices that can be made to be highly accurate and highly stable under varying environmental conditions.
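As a simple illustration of that numeric handoff, the sketch below converts a raw ADC reading into volts. The 12-bit resolution and 3.3 V reference are assumptions chosen for the example, not properties of any particular part; for an N-bit converter, full scale is 2^N - 1 counts.

#include <stdint.h>

/* Convert a raw ADC reading to volts. Hypothetical 12-bit converter
 * with a 3.3 V reference; full scale is (2^12 - 1) = 4095 counts. */
#define ADC_BITS      12
#define ADC_FULLSCALE ((1u << ADC_BITS) - 1u)
#define ADC_VREF      3.3

static double adc_counts_to_volts(uint16_t counts)
{
    return (double)counts * ADC_VREF / (double)ADC_FULLSCALE;
}

From this point on, everything downstream of the converter (scaling, calibration, filtering) can live in software rather than in components that age and drift.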

The required signal-conditioning circuitry can often be significantly reduced, since much of the environmental compensation circuitry can be made a part of the ADC and filtering can be performed in software.

As we’ll see shortly, this combination of reduced electronic hardware and the ability to operate almost exclusively in the mathematical world provides tremendous benefits from both a system-performance standpoint and from a business perspective.

Next, in Part 2: “Next Step – Add some intelligence.”

Creed Huddleston is President of Real-Time by Design, LLC, a Raleigh-Durham, North Carolina firm specializing in the design of intelligent sensors.

This series of articles is based on material from “Intelligent Sensor Design” by Creed Huddleston, with permission from Newnes, a division of Elsevier. Copyright 2007. For more information about this title and other similar books, please visit www.elsevierdirect.com.

References
1. Management Challenges for the 21st Century, by Peter F. Drucker. Collins, 2001.

2. According to the CIA World Factbook, the estimated total world population as of July 2005 was 6,446,131,400. http://www.cia.gov/cia/publications/factbook/rankorder/2119rank.html

3. Based on a study (GB-200N Industrial Sensor Technologies and Markets) by B. L. Gupta for Business Communications Company, Inc. in which the 2004 industrial sensor market size in the United States was $6.1 B, with an anticipated annual growth rate of 4.6%.

4. Ohm’s Law is V = I * R, where V is the voltage measured across a resistance (in volts), I is the current through the resistance (in amps), and R is the value of the resistance itself (in ohms). Ohm’s Law holds true for a purely resistive element, which is all we’re worried about in this example.

5. Branch-intensive software is software that makes frequent changes, known as branches, in the processing of its program instructions. Computationally intensive software is software in which a significant portion of the processing time is devoted to performing mathematical computations.

6. Four Strategies for the Age of Smart Devices, by Glen Allmendinger and Ralph Lombreglia. Harvard Business Review, October 2005. Reprint R0510J.
