How to Choose A Sensible Sampling Rate

David B. Stewart

July 01, 2002


Trial-and-error testing is neither the fastest nor the best way to determine the sampling rate for a given application, although it's probably the most common. Systematic engineering analysis, plus a few guided experiments, will help you find a good rate quickly.

I recently asked an engineer what sampling rate he used for an application he was working on.

"Five milliseconds," he replied. I asked him why.

"Because it works," he said. "We spent days testing a variety of sampling rates, and that is one that works."

His code was for a push-button switch that required some debouncing. He and his team ultimately selected a 5ms polling interval because, during testing, it did not appear to erroneously register a single push as two, and was fast enough to avoid misinterpreting an intentional double-push as a bounce.

Five milliseconds might be an acceptable answer, but without considering other factors in the system (its real-time response, in particular), we don't really know whether or not it is the best answer.

What if the processor is overloaded, and the 5ms sampling accounts for 40% of that overload? Increasing the sampling time to 10ms would reduce the CPU utilization of that code to 20%. An alternative might be to keep the sampling rate the same but execute the control code at half the speed. Which is better from a system perspective? Is there a good compromise between the resources used for sampling, and the effect that sampling has on processor utilization and other real-time factors, like schedulability and priority inversion? Let's figure it out.
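
As a back-of-the-envelope check, the utilization arithmetic looks like this. The 2ms per-sample execution cost below is a hypothetical figure, chosen only so that a 5ms period works out to the 40% utilization mentioned above; the point is simply that the utilization of a periodic sampling task is its execution time divided by its period.

    /* Minimal sketch of the utilization arithmetic. C = 2ms is a
     * hypothetical per-sample execution cost, chosen so that a 5ms
     * period yields 40% utilization as in the example above. */
    #include <stdio.h>

    int main(void)
    {
        const double C = 0.002;                      /* execution time per sample, seconds */
        const double periods[] = { 0.005, 0.010 };   /* candidate sampling periods, seconds */

        for (int i = 0; i < 2; i++) {
            double U = C / periods[i];               /* utilization = cost / period */
            printf("Ts = %.0f ms -> U = %.0f%%\n", periods[i] * 1e3, U * 100.0);
        }
        return 0;
    }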

Selection criteria

Several factors usually compete when you're selecting a sampling rate. You want to:

  • Sample as fast as possible, to maximize accuracy
  • Sample as slow as possible, to conserve processor time
  • Sample fast enough to provide adequate response time
  • Sample slow enough that noise doesn't dominate the input signal
  • Sample at a rate that is a multiple of the control algorithm frequency to minimize jitter

While no single answer satisfies every requirement completely, one sampling rate often stands out as better than most others when you consider the particularities of a specific application. The technique I put forward in this article will help identify that rate.

The technique consists of the following steps:

  1. Measure sensor characteristics.
  2. If there is noise in the input, select the algorithm that will be used to filter the data.
  3. Compute the lower and upper bound for sampling rates based on function alone.
  4. Identify the trade-offs between using the lower and upper bound rates.
  5. Prioritize the trade-offs to determine a suitable sampling rate that is between the computed lower and upper bounds.

While this approach can be used for almost every type of sensor, we're going to focus on digital switches for our practical examples.

Digital switches

The simplest form of a digital input is a switch. When closed or on, the switch produces a value of 1. When open or off, the switch yields a value of 0. (The reverse can be true if negative logic is used.) Many embedded systems have one or more switches. When only a few switches are used (for example, fewer than the width of a single digital input/output (DIO) port), they're usually connected directly to the DIO port. A large number of switches can be managed by a switch matrix, but that's a discussion for another day.

An ideal switch provides a 1 when the switch is closed and a 0 when the switch is open, and the transition from one state to the other is instantaneous. In reality, you have to deal with rise and fall times. Since these times are proportional to the capacitance in a circuit, their value is generally on the order of nanoseconds. For our analysis, we can neglect the rise and fall time as long as sensor data is read slower than 100,000-or-so samples per second. When read this slowly, some switches, such as optical and tightly constructed momentary switches, do exhibit ideal behavior.

Figure 1: Electrical response of a mechanical switch with bouncing

Most mechanical switches, however, are subject to something called bouncing. When the switch is closed, the transition from 0 to 1 is not instantaneous or uniformly rising. Look at Figure 1a, which shows the oscilloscope output of one such mechanical switch, to see how the transition progresses. Figure 1b shows a digital representation of the switch's output. The extra pulses preceding and following the main pulse (these are the bounces) occur when contact is made between mechanical plates located inside the switch. When bouncing is an issue, you usually have to filter the input. This process is commonly called debouncing.

Measuring closure times

To determine the sampling rate, we need to know the minimum switch closure time, which we'll call σmin. This value is defined as the minimum amount of time the switch must be closed (or open) in order for the software to guarantee its detection as a switch closure (or opening). In some cases, the application will specify the value. In others, you have to experiment, based on application needs and the particularities of the hardware.

The minimum switch time serves as the threshold for considering a data item noise, rather than a real switch closure or opening. If a pulse is detected on the input with a duration that is shorter than the minimum closure time, the software might miss the switch closure. This would not be considered a failure. On the other hand, if a pulse is at least σmin, the software will guarantee detection of the switch closure.

Figure 2: Switches that need a) fast, b) medium, and c) slow polling rates

Throughout this section, I use my experiences working with an engineering team on a pinball machine as a case study. The pinball machine is a good experimental testbed, because it has several kinds of digital input switches, each with a different set of characteristics. Some of the switches are shown in Figure 2.

The switches in Figure 2a must be polled frequently because the pinball moves very quickly. For these types of switches, we measured σmin to be about 10ms.[1] This value depends on the environment; changing the characteristics of the environment might yield a different value for the fastest switch closure time. It might also be possible to experimentally or analytically determine the fastest rate at which the ball can travel across one of the switches. If so, σmin could be derived mathematically, as a function of the maximum ball speed and switch size.

Figure 2b shows medium-speed switches. Because the ball must change direction, the bound on its maximum velocity as it travels across the switch is much lower. In our experiments, we measured the shortest switch closure time for these switches to be about 50ms.

A slow switch is one that is guaranteed to remain closed until the control software detects it and issues a command to re-open the switch. Figure 2c shows examples of such switches. In the first case, a ball sits in a saucer on the switch. When the software detects this, it fires a solenoid that kicks the ball out. In the second case, the targets are spring-loaded to fall when a ball hits them. A solenoid must later fire to re-raise the target.

For the slow switches, the shortest switch closure time is a function of the control software used to fire the solenoid. In our testbed application, the solenoid firing process executed at a rate of 10Hz (a 100ms period).

In general, we assume that a switch closing is not latched. Using latches is often impractical, and sometimes impossible, as is the case with switch matrices. If a latch were used, the response would be similar to the switches shown in Figure 2c where the rate is a function of the task that generates the signal to clear the latch.

If the switch is not ideal, you need to measure its settling time, which we call τ. This is the amount of time the switch may bounce before settling to a value that correctly represents the switch as either closed or open.

For the switches shown in Figure 2a, we found that the rollover switch (left side of diagram) is not ideal. The opto-switch is an ideal switch, however, and did not show any bouncing. For the purposes of our analysis, we are especially interested in identifying the worst-case settling time, τmax.

Figure 3: Circuit used to measure σmin and τmax.

We obtain the values σmin and τmax experimentally. To measure them, connect the switch in question between Vcc and GND (through a pull-down resistor, to limit the current) as shown in Figure 3, and connect a logic analyzer at Vout. Set up the logic analyzer to trigger on the rising edge.

Figure 4: Measurement of σmin and τmax; τmax = max(τ01, τ10)

Close and re-open the switch as fast as possible. If the switch is ideal, or near ideal, you should see a smooth transition from 0 to 1 and back to 0. If it is a bouncy switch, the output will be similar to that shown in Figure 4. Repeat this experiment several dozen times at least, recording the closure time and settling time for each trial; σmin and τmax are the shortest closure time and longest settling time you observe.

When performing these experiments, you have to consider how the switch will function in the final application. In our pinball machine, the ball can pass over a switch faster than a human can press and release it, and much faster than a human can roll the ball over the switch manually. For this reason, we used the solenoid-activated flippers to propel the ball over the switches during testing, rather than just touching the switch with our fingers or rolling the ball manually.

Since switches designed to be pressed by a human undergo a variety of presses, you should repeat the experiment accordingly. For example, a light tap may yield a fast settling time, but a short closure time. On the other hand, a heavy press will produce a long closure time, but might, due to bouncing, have a longer settling time. Record the minimum, average, and maximum σmin and τmax for your experiments. To truly get a good collection of data samples, set up the experiment in a lunch room. Ask each person that comes in to press the switch a few times quickly, a few times lazily, and a few times twice in a row. The key is to capture data as close to the intended uses as possible. If users will be kids, get kids to press the buttons. If users will range from 18 to 80 years old, collect data from people in the entire age group. A few hours of additional data collection can prevent a catastrophe such as delivering a product that doesn't operate properly for some users.

Ideal switches

The settling time of an ideal switch is always 0. The sampling period needed to guarantee that all switch closures are detected must, therefore, be less than the minimum closure time. While this seems simple, there is a trade-off. What if σmin is 10μs? Must we poll the switch every 10μs? This would surely use all the available CPU resources.

The best way to overcome such an impasse is to consider the application specifications and start making trade-offs. Let's say that a switch closure of just 10μs is possible, but only happens about once per 1,000 closures (0.1%). Since the actual closure time is longer than 10μs 99.9% of the time, a 5ms minimum closure time is much more practical and uses a lot less CPU time than a 10μs minimum closure time. But is it acceptable for the application to miss a switch closure that lasts only 10μs?

The answer depends on the application specifics. If the switch closure is human input, we can assume that the switch was closed too lightly, and the user simply needs to press harder. If the closure is one of the switches in our pinball machine, we may conclude that the switch was not really closed and thus the player is not awarded the points. On the other hand, if the switch closure is associated with the release of a poisonous gas, then we want to capture it. In this latter case, we could latch the switch, or dedicate a small processor to reading it at 10μs intervals.

Let's suppose that it is acceptable to guarantee detecting only those switch closures with σmin greater than 5ms, thus being 99.9% accurate. What if the CPU is overloaded? Can we halve the CPU utilization of this task and poll at 10ms instead of 5ms? Based on experimentation, this might reduce accuracy to 99.0%. If that is still okay for the application, then the trade-off is acceptable. But if slowing the sampling rate to 10ms reduces our accuracy to 85.0%, the trade-off is most likely too steep. Logging all results of the experimentation for determining σmin enables you to evaluate the trade-off between accuracy and CPU utilization analytically.
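
If the experiment log is available in machine-readable form, a few lines of code will evaluate the trade-off for you. The sketch below assumes a hypothetical list of measured closure times and an assumed detection criterion for an ideal switch (a closure is guaranteed to be caught only if it lasts at least one full sampling period); substitute your own data and criterion.

    /* Hedged sketch: replay logged closure times (hypothetical data) against
     * candidate sampling periods for an ideal switch. A closure counts as
     * guaranteed-detected only if it lasts at least one full sampling period. */
    #include <stdio.h>

    #define NUM_SAMPLES 8

    int main(void)
    {
        /* hypothetical closure-time log from the experiments, in ms */
        const double closure_ms[NUM_SAMPLES] = { 4.8, 6.1, 7.3, 9.0, 11.2, 12.5, 15.0, 22.4 };
        const double candidate_ts_ms[] = { 5.0, 10.0 };    /* candidate sampling periods, ms */

        for (int c = 0; c < 2; c++) {
            int caught = 0;
            for (int i = 0; i < NUM_SAMPLES; i++) {
                if (closure_ms[i] >= candidate_ts_ms[c])
                    caught++;
            }
            printf("Ts = %4.1f ms: %d of %d logged closures guaranteed detected (%.0f%%)\n",
                   candidate_ts_ms[c], caught, NUM_SAMPLES,
                   100.0 * caught / NUM_SAMPLES);
        }
        return 0;
    }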

The switches I've described so far have all been ideal. Switches that bounce impose additional constraints on selecting an appropriate sampling rate.

Less than ideal

Let's reconsider the rollover switch in Figure 2a. A sample of the output for this switch is shown in Figure 4a; a filtered version of the sample output appears in Figure 4b. The output is filtered through a debouncing algorithm to provide a clean signal to the application code, which acts in response to the switch closure. A variety of debouncing algorithms exist (both in hardware and software) as discussed in A. Smith's "The Merest Flick of a Switch."[2]

Figure 5: State diagram and boolean functions for debouncing algorithm

The algorithm I use in the forthcoming analysis is shown as a synchronous state machine in Figure 5. It requires two consecutive samples of the same value to register a change in the switch's state. (With another algorithm, the analysis, and the resulting sampling rate, would be different.)

Implementing this algorithm on an embedded processor is straightforward using boolean algebra. This approach also has the added advantage that multiple switches can be debounced in parallel. For example, the code shown in Listing 1 implements the debouncing algorithm shown in Figure 5 for eight independent inputs at a time, assuming each input is represented by a separate bit in the input variable x.

Listing 1: Code to implement debouncing algorithm

// x is the input for 8 independent binary switch inputs -- 1 input per bit.
unsigned char debounce(unsigned char x)
{
    static unsigned char Y1, Y0;   // next state
    unsigned char y1, y0;          // current state
    unsigned char z;               // filter output

    y0 = Y0;                       // current state is last cycle's next state
    y1 = Y1;

    Y0 = (y1 & y0) | (x & y0) | (x & y1);   // compute state
    Y1 = x;

    z = Y0;                        // compute output
    return (z);
}
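
To show where this function fits, here is a hypothetical usage sketch. The read_dio_port() routine and the periodic-task mechanism are assumptions standing in for whatever your platform provides; debounce() is called once per sampling period, and the 0-to-1 and 1-to-0 edges of the filtered output are extracted for the decision-making code.

    /* Hypothetical usage sketch: call debounce() once per sampling period Ts
     * from a periodic task or timer tick. read_dio_port() and the scheduling
     * mechanism are assumptions; substitute your platform's equivalents. */
    extern unsigned char read_dio_port(void);   /* hypothetical: returns 8 raw switch bits */
    unsigned char debounce(unsigned char x);    /* Listing 1 */

    void switch_poll_task(void)                 /* invoked every Ts by the scheduler */
    {
        static unsigned char previous;          /* last debounced state */
        unsigned char clean = debounce(read_dio_port());
        unsigned char pressed  = clean & (unsigned char)~previous;   /* 0 -> 1 edges */
        unsigned char released = previous & (unsigned char)~clean;   /* 1 -> 0 edges */

        previous = clean;
        /* hand the 'pressed'/'released' bitmasks to the decision-making code here */
        (void)pressed;
        (void)released;
    }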

If the hardware design is flexible, the state machine logic can alternatively be implemented in hardware using an FPGA. In such a case, the software no longer needs to debounce the switch, and the switch can be treated as ideal. However, the timing analysis for the hardware state machine is no different from the one that follows.

If the switch closure is not sampled at least twice within the minimum closure time, the closure will be filtered out. This places an upper bound of σmin/2 on the sampling period.

To determine a lower bound, we consider the minimum case needed for the debouncing algorithm to mistake bounces for two consecutive switch hits. Say we obtain two samples showing 1, followed by two showing 0, then two more showing 1, and assume that only the last two 1's are the steady state. Such a filtered output would require at least four samples during the settling time. To prevent such an occurrence, we must sample at most three times during the transient bouncing of the switch closure. Therefore, the sampling period must be greater than τmax/3.

Combining the lower and upper bounds, we have the following condition on the sampling period Ts (sampling frequency fs = 1/Ts) for the input driver that debounces a digital input according to the state machine in Figure 5:

τmax/3 < Ts < σmin/2     (Equation 1)
The range of possible values shows the range of acceptable trade-offs for the sampling rate. Suppose τmax is 3ms and σmin is 10ms. Then Equation 1 yields 1ms < Ts < 5ms. To minimize the amount of CPU time devoted to sampling, we would select a sampling time near 5ms.
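
A few lines of code keep this computation, and the empty-range check discussed later, alongside the measured values. The numbers below are simply the worked example above; this is a sketch of Equation 1, not part of the input driver itself.

    /* Sketch of Equation 1: compute the valid sampling-period range for the
     * two-consecutive-samples debouncer from measured tau_max and sigma_min.
     * The 3ms/10ms values are the worked example from the text. */
    #include <stdio.h>

    int main(void)
    {
        double tau_max_ms   = 3.0;    /* worst-case settling time */
        double sigma_min_ms = 10.0;   /* minimum closure time */

        double lower = tau_max_ms / 3.0;     /* must sample no faster than this period */
        double upper = sigma_min_ms / 2.0;   /* must sample no slower than this period */

        if (lower < upper)
            printf("valid sampling period: %.2f ms < Ts < %.2f ms\n", lower, upper);
        else
            printf("no valid sampling period: revisit the algorithm or the bounds\n");
        return 0;
    }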

However, there are several other application criteria that might come into play, such that it is desirable to sample faster despite using more processing power. Considerations include the rate of the control or decision-making software that uses the input, response time to the input, potential for errors, and real-time schedulability.

If a decision-making algorithm that uses the binary data as input were executing at 250Hz, it would be desirable to sample the input at 250Hz as well, or every 4ms. If response time is an issue, the debouncing algorithm causes a delay of up to four cycles from the moment the first bounce is detected to the moment that the filter confirms that it is definitely a pulse. If the input sampling executes every 4ms, then there could be a delay of 16ms before a decision is made. On the other hand, if the input is sampled every millisecond, the response time goes down to 4ms (assuming proper phasing of the tasks).

Another reason for sampling faster, say at 1ms, is that the experimentally obtained closure time might not have yielded the lowest possible value. Or, if σmin were selected to catch 99.0% of switch closures, using the faster sampling rate might raise this value even higher. To keep your application from mistaking a bounce for a switch closure, however, you should never choose a sampling period shorter than 1ms.

Of course, it could be that no range for the sampling rate is acceptable. Consider, for example, a particularly bouncy switch, in which the settling time is 6ms and the minimum closure time is 4ms. For this case, Equation 1 yields the empty set; no sampling rate can guarantee the capture of the switch closure and ensure that bounces are not mistaken for multiple closures.

To address this issue, the designer must consider other options. One option is to use a different debouncing algorithm, such as looking for two out of three 1's instead of two consecutive 1's. Another option is to consider the minimum inter-arrival time of switch closures. A third possibility is to either accept an occasional miss of a switch closure, in which case σmin can be raised, or accept an occasional switch closure to be mistaken for two separate events, thus reducing τmax. Regardless of your decision, these choices can be documented easily, and should the decision prove poor, modifying the design is simply a matter of changing the sampling rate or altering the algorithm defined by the finite state machine.
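
For illustration, one possible form of the "two out of three 1's" filter mentioned above is sketched below, written bitwise for eight switches in the style of Listing 1. It is offered only as a sketch; if you adopt it, the bounds in Equation 1 must be re-derived for the new algorithm.

    /* Sketch of a "two out of three" alternative: the output bit is 1
     * whenever at least two of the last three raw samples were 1. */
    unsigned char debounce_2of3(unsigned char x)
    {
        static unsigned char x1, x2;    /* previous two raw samples */
        unsigned char z;

        /* majority vote over x (newest), x1, and x2 (oldest) */
        z = (x & x1) | (x & x2) | (x1 & x2);

        x2 = x1;                        /* shift the sample history */
        x1 = x;
        return z;
    }

Because this filter can confirm a closure one sample sooner than the two-consecutive-samples version, it trades a little noise immunity for response time, which is exactly the kind of application-level trade-off discussed above.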


In summary, use experimentation and analysis to obtain a valid range of sampling rates for an input. Once the range is determined, consider other factors of the application to select the best sampling rate from within the range of acceptable values. This approach yields good sampling rates relatively quickly, and adjustments to those rates, based on other application and system issues, can be made analytically in minutes, rather than relying on days or weeks of trial-and-error testing and fine-tuning.

While the digital switches considered in this article are representative of devices in many embedded systems, the coverage is far from complete. My purpose was to demonstrate the effectiveness of a combined analytical and experimental approach, rather than to provide a solution that will work for every possible sensor. The approach will need to be modified depending on the particular sensors being used, the needs of an application, and your ability to obtain reasonable measurements through simple experiments.

Practicing sound fundamental engineering when developing software for embedded systems will save you a lot of time in the development process, and produce answers that are usually better than, and at least as good as, any ad hoc approach.

Dave Stewart is executive vice president and chief technology officer of Embedded Research Solutions LLC. Prior to that, he was director of the Software Engineering for Real-Time Systems Laboratory at the University of Maryland. Dave has a PhD in computer engineering from Carnegie Mellon University.


The work described in this paper was funded in part by the National Science Foundation and in part by the Department of Electrical and Computer Engineering at the University of Maryland. The pinball machine that was used as an experimental testbed was built as part of a capstone design course and sponsored by Lockheed Martin Corporation; details of this project are available online. Special thanks to Tom Carley, Melissa Moy, Julian Requejo, and the crew from the pinball machine project for their contributions in building the experimental testbeds and programming sample software.


1. More accurate measurements were available and were used in the actual analysis. For the sake of the present discussion, however, it is easier to round to the nearest whole number.

2. Smith, A. "The Merest Flick of a Switch," Practical Electronics, April 1991, p. 24.

