Remember the days when, if your car’s alternator died, you could hit the regulator with a hammer and get it working again? Those were the days. Back then the regulator was a steel box mounted on the fender well. Inside was a normally closed contact that opened up when the battery voltage exceeded some predetermined level. It was a simple circuit containing a few electronic components and a relay. As the relay wore out, the contacts would occasionally fail to connect. You could whack the thing and jar the contacts just enough to get it working again. This was an easy fix and an early warning at the same time. You knew the alternator voltage regulator would stop working again, so you would replace it at your earliest convenience (for me, when I could afford it). This was a simple system that worked for many years. Why, then, do we need a microcontroller?
Why take an alternator voltage regulator from the simplicity of a relay and a few parts to something with more computing power than NASA had on the first Apollo moon shot? The answer lies with the potential for extended battery life, improved gas mileage, lower emissions, stability at lower engine idles, and most importantly, flexibility.
Many low-volume charging-system applications beg for features but are stuck with what everyone else is using. For the OEMs that make these devices, microcontrolled regulators enable them to first prototype a potential customer’s configuration quickly and then provide low-volume products without enormous tooling charges.
A more universal principle is in play as well. It seems that nothing conceived by mankind remains simple for long. If man can make it, man can make it more complicated, all under the guise of “we can make it better.” In this case, alternator voltage regulation is no exception.
A microprocessor-based alternator voltage regulator seems, at first glance, to be a simple project to complete. After all, anything done with a relay can’t be all that taxing on a microcontroller. Well, sometimes ignorance is bliss. The smarter the controller, the more difficult it becomes to keep the alternator voltage regulator stable.
An alternator, such as the one shown in Figure 1, is a current-mode machine. You put a current in and, with some help from a mechanical input (a pulley), you get a gained current out. But, you say, isn’t this an alternator voltage regulator? Yes, and this is where the problems begin. To control the output of a current-mode machine by looking at its voltage, knowing that the loads can vary in type (or phase) and amplitude at any time, is quite a challenge.
Figure 1: Visteon alternator in a Ford Taurus
The advantage of the old relay-driven system was that it was slow. Its poor response time and built-in hysteresis allowed for some voltage variation with load but kept the system fairly stable regardless of load.
The input to a charging system is both mechanical and electrical. The mechanical part is the pulley that rotates the field (or the rotor in Figure 2) inside the stator, and the electrical part is the field current. The rotation generates the current “gain” in the system. Without rotation, there is no output current.
Figure 2: Cutaway of Delphi alternator
The output of the charging system is a product of rectifying the stator current, as shown in Figure 3. The rotating magnetic field inside the stator winding generates a sinusoidal waveform. This waveform is rectified to provide a direct current, which is a function of the input, or field current, and the field rotation. Since we want to regulate an output voltage, not a current, the load is involved in the regulation algorithm. As a result, stability is based on the response time of the alternator as well as the reaction time of the load it drives. In the end, the charging-system regulation loop now includes all of the vehicle electronics, not just the alternator and its mechanical inputs.
Figure 3: Electrical diagram of an alternator
With electronic controls, automotive engineers can add neat features beyond simple output-voltage regulation. Remember man’s quest for complexity? Once the engineers decide to use a microcontroller-based system, these added features can help justify the added cost. The bonus features include field duty-cycle slew-rate limiting, soft start, auto start, and serial communications with the engine controller. With serial communications, engineers can also include higher-level diagnostics such as no rotation, faulted rectifier diodes, and shorted or open field windings.
Ensuring system stability
Before the design engineer can start adding on all the glitter that makes a microcontrolled system cost effective, the charging system must be stable at regulating the output voltage. So, the first algorithm of any importance is the regulation algorithm. Regulation consists of sensing the battery voltage with adequate filtering to remove unwanted information and then responding with an appropriate current level for the field.
Sensing and filtering
The output-voltage waveform of an alternator is not so flat. Figure 4 shows that the rectified sinusoidal waveform has a ripple voltage or ripple current associated with it. This ripple can be quite large, even when we include the 1F capacitance of the battery. The peak-to-peak ripple voltage out of an alternator can be as much as 4 to 6V at high output currents.
Figure 4: Alternator voltages at 100A output and 6,000RPM
This peak-to-peak ripple voltage can be overcome without large filters by synchronously sampling the output voltage with a phase signal. The ripple is periodic with the phase, so this technique is quite effective. We can effectively filter out most of the periodic noise generated by the stator phasing. That noise is over 90% of what needs to be filtered out, with most of it in the lower frequency range.
Filtering of higher frequencies can be done by placing some simple RC networks in front of the analog-to-digital (A/D) converter. You have to do some level shifting anyway to bring the battery setpoint voltage of about 14.5V to below the 5V level of most A/D converters. We can do further filtering in software by a rolling average of the sensed voltage. A four- to eight-sample averaging seems to work well.
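As a sketch, a rolling average of this sort takes only a few lines of C. The eight-sample depth (from the four-to-eight range above) and the function name are my own choices, not from any particular production regulator:

```c
#include <stdint.h>

#define AVG_SAMPLES 8u  /* four to eight samples work well; eight chosen here */

/* Circular buffer holding the most recent A/D readings. */
static uint16_t samples[AVG_SAMPLES];
static uint8_t  next_slot;

/* Store a new sample and return the current rolling average. */
uint16_t filter_sample(uint16_t adc_reading)
{
    uint32_t sum = 0;

    samples[next_slot] = adc_reading;
    next_slot = (uint8_t)((next_slot + 1u) % AVG_SAMPLES);

    for (uint8_t i = 0; i < AVG_SAMPLES; i++)
        sum += samples[i];

    return (uint16_t)(sum / AVG_SAMPLES);
}
```

On a small micro you might replace the summing loop with a running sum that adds the new sample and subtracts the evicted one, but the idea is the same.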
Those who are familiar with system dynamics will also understand that the system poles and zeros can shift depending on load, load type, engine RPM (gain), and regulation setpoint. With that little bit of information, we realize that our regulation must have a very low-frequency dominant pole (less than 50Hz) and a fairly low unity-gain bandwidth (less than 3kHz) to keep things from getting out of hand. The more time we take to decide on what to respond to, the lower that pole becomes. Also, the speed at which we react to a step change in output voltage determines our unity-gain bandwidth. This brings me to the next topic.
Response comes in the form of providing an appropriate field current to maintain regulation. The field current is typically regulated by driving a transistor switch at a duty cycle to attain a desired current, as shown in Figure 3. This technique is called voltage control of the field-current input. (Seems appropriate, don’t you think? After all, you’re regulating a current-mode machine by looking at its voltage.) The two generally accepted ways of regulating the alternator are through a fixed-frequency duty cycle or a variable-frequency duty cycle.
The variable-frequency systems are dependent on the load and the response time of the system (alternator and loads). They tend to be high-gain systems. As a result, they tend to be more accurate in maintaining a fixed setpoint voltage at the output, but they also have inherent stability problems.
Variable-frequency systems work by the old standard-relay method: turn on the field current when the output voltage is below a setpoint and turn it off when the setpoint is exceeded. The hope is that the system is so dynamic that it never dwells at any one operating point for long; if it finds an unstable point, it operates there only for a moment, and the battery helps mask those brief, infrequent excursions.
The fixed-frequency systems are more stable in both input waveform and output control. They tend to use lower gain, allowing the setpoint voltage to vary by as much as 200 to 300mV over the load range.
Fixed-frequency systems are easier to manipulate if you want to do other things like soft-start or load-response control. They are less dependent on the load dynamics and quite a bit more stable than the variable-frequency method.
A few hundred millivolts compared with a 6V ripple is not very much. So the difference between the variable-frequency and fixed-frequency system is not critical, at least to my way of thinking. However, some engineers feel that a few hundred millivolts can make a significant difference in battery longevity. I’m not so convinced.
To summarize, with the variable-frequency method, the decision to turn on or off the field driver is made with each sample. With the fixed-frequency method, the duty cycle is determined by where in the setpoint window the sample is, as shown in Figure 5.
Figure 5: Setpoint band for determining field duty cycle. (The setpoint is typically around 14.5V at 25C.)
I have worked with both systems and find that fixed-frequency systems are much more stable and predictable—especially when considering the extended feature set made possible with a microcontroller.
Let’s go into a design of a simple microcontrolled system in detail. Figure 6 shows a basic block diagram for a microcontroller-based alternator voltage regulator. Since the fixed-frequency system is much more stable and easier to work with, we’ll focus on this approach.
Figure 6: Basic block diagram of an alternator voltage regulator
The fixed-frequency method has a regulation band as shown in Figure 5. Within the regulation band, a specific duty cycle is associated with a specific voltage. At the low end of the band, the duty cycle is the highest; this point corresponds to the highest loading on the system. At the high end of the band, the duty cycle is the lowest, which corresponds to the lightest loading on the system.
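The band-to-duty-cycle mapping amounts to a simple linear interpolation. Here is a sketch in C; the band edges in A/D counts and the function name are hypothetical placeholders, not values from a real regulator:

```c
#include <stdint.h>

/* Regulation band edges in A/D counts (hypothetical values for a
 * 200mV band after the gain/offset stage). */
#define BAND_LOW   0u    /* lowest in-band reading: full field     */
#define BAND_HIGH  81u   /* highest in-band reading: minimum field */

/* Linear map: low end of the band commands 100% duty, high end 0%.
 * Readings outside the band saturate to full field or field off. */
uint8_t duty_from_sample(uint16_t avg_counts)
{
    if (avg_counts <= BAND_LOW)  return 100;  /* below band: full field */
    if (avg_counts >= BAND_HIGH) return 0;    /* above band: field off  */
    return (uint8_t)((100u * (BAND_HIGH - avg_counts)) / (BAND_HIGH - BAND_LOW));
}
```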
We want to make this band as narrow as possible without making the system unstable. As you can imagine, when the window is narrowed, the gain increases. With higher gain, we have trouble maintaining the low-frequency dominant pole and the low unity-gain bandwidth. Typical systems on the market today that use this method have around a 200 to 300mV range from full field to no field current.
We can’t possibly look directly at 14.5V with a micro A/D converter, and simply dividing this voltage down loses a lot of information as well. A 200mV window divided by four ends up with a 50mV spread to convert with the A/D converter. A 5V, 10-bit A/D converter gives 4.88mV per bit. This means I have a resolution of only 10 to 11 steps for regulation if I divide by four.
This leaves us with building an offset amplifier with an accompanying gain stage as shown in Figure 7. The offset amplifier centers the setpoint to the middle of the A/D converter’s sensing voltage range. The gain amplifier then gains that up to get as much out of the A/D conversion as possible and still stay within the boundaries of the range of voltages we need to regulate.
Figure 7: Possible gain and offset amplifier circuit
With the proper biasing, the input is 14.5V +/-100mV and the output is something reasonable for the A/D converter to read. Biasing is a hardware issue, so we’ll let the hardware guys worry about that.
The second issue with the regulation setpoint is temperature compensation. The battery’s ability to accept a charge is dependent on ambient temperature. At -40C a typical car battery can be charged to as much as 16V whereas at 125C, the charging voltage has to be much lower to keep the battery from boiling over. The temperature-compensation curve is totally dependent on what is best for battery life. Figure 8 shows a typical temperature compensation curve for some cars on the road today.
Figure 8: Typical temperature compensation curve
Oddly enough, temperature-compensation curves vary from car manufacturer to car manufacturer even though the battery technology does not change from car to car. Flexibility is one of the advantages of making a complex alternator voltage regulator, right?
With the proper input offset and a total gain of two, the input to the A/D converter is 0.5V to 4.5V. With a gain of two, the 200mV band is now 400mV to the A/D converter. Using the aforementioned 4.88mV per bit at the A/D converter translates to an actual 2.4mV per step measured, or 81 steps over the 200mV window. With some averaging, this can easily be bumped up to 128 steps or 7 bits of resolution. This level of resolution is plenty for what we need.
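Working backward, recovering battery millivolts from an A/D count looks something like the sketch below. The 13.25V offset (chosen so the 14.5V setpoint lands near mid-scale after the gain of two) is my own assumption; the actual biasing is set by the hardware in Figure 7:

```c
#include <stdint.h>

/* Assumed scaling, not taken from Figure 7: the offset stage subtracts
 * 13.25V and the gain stage multiplies by two, so 14.5V at the battery
 * reads about 2.5V at a 5V, 10-bit converter.  One count is 4883uV at
 * the converter, which is 2441uV referred back to the battery. */
#define OFFSET_MV               13250u
#define UV_PER_COUNT_AT_BATT    2441u

/* Convert a raw 10-bit A/D count back to battery millivolts. */
uint16_t battery_mv(uint16_t counts)
{
    return (uint16_t)(OFFSET_MV +
                      ((uint32_t)counts * UV_PER_COUNT_AT_BATT) / 1000u);
}
```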
Now we’re ready to convert data. We sample synchronously with one of the phases to minimize noise issues. We may be sampling every time the phase is switched, but we only use the last sample taken just prior to the field driver turning off. From these samples, a continuously running average of four to six samples is maintained. The frequency of the alternator phase can be in the kilohertz range, while the field duty-cycle frequency is typically between 100 and 400Hz. Again, slow is good. The lower the field duty-cycle frequency, the lower the electromagnetic interference and the lower the power loss due to switching. Some regulators today work at less than 100Hz.
The running average value is then compared with the values in a table that corresponds to the temperature compensation your customer wants.
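A table lookup of this sort, with linear interpolation between entries, can be sketched as follows. The -40C and 25C points follow values mentioned in this article; the 125C entry is purely illustrative and not taken from Figure 8:

```c
#include <stdint.h>

typedef struct {
    int16_t  temp_c;       /* ambient temperature, degrees C   */
    uint16_t setpoint_mv;  /* regulation setpoint, millivolts  */
} comp_point;

/* Illustrative compensation curve; a customer-specific table
 * would replace these entries. */
static const comp_point curve[] = {
    { -40, 16000 },
    {  25, 14500 },
    { 125, 13000 },
};
#define CURVE_LEN (sizeof(curve) / sizeof(curve[0]))

/* Linear interpolation between table points, clamped at the ends. */
uint16_t setpoint_for_temp(int16_t t)
{
    if (t <= curve[0].temp_c)
        return curve[0].setpoint_mv;
    if (t >= curve[CURVE_LEN - 1].temp_c)
        return curve[CURVE_LEN - 1].setpoint_mv;

    for (unsigned i = 1; i < CURVE_LEN; i++) {
        if (t <= curve[i].temp_c) {
            int32_t dt = curve[i].temp_c - curve[i - 1].temp_c;
            int32_t dv = (int32_t)curve[i].setpoint_mv
                       - (int32_t)curve[i - 1].setpoint_mv;
            return (uint16_t)(curve[i - 1].setpoint_mv +
                              dv * (t - curve[i - 1].temp_c) / dt);
        }
    }
    return curve[CURVE_LEN - 1].setpoint_mv;  /* not reached */
}
```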
Field duty-cycle slew rate
Quite often, a load change drives the output voltage outside of the regulation window. When this happens, the field duty cycle is commanded to go to 100% from wherever it was. Instant 100% duty-cycle moments can cause stability issues in the engine at idle.
One feature that makes the microcontroller desirable is the ability to “feather in” electric loads into the mechanical system. Essentially, this technique limits the slew rate on the field-current duty cycle. This limiting allows for lower engine idling, which affects gas mileage and emissions. The flowcharts in Figure 9 show this as RATE(up/dn).
Figure 9: Phase interrupt and regulation loop routines
The duty cycle is increased at a fixed slow rate following step increases in load. This rate is so slow that some systems today can take as much as 10 seconds to go from 0% duty cycle to full field. Others take around 2s to reach full field. The longer it takes, the longer the battery must hold things up until the alternator catches up. One way you can see this delay is the momentary dimming of headlights every time the air conditioning kicks in.
A duty-cycle slew-rate generator is based on a simple timed counter that looks at the measured setpoint results and counts up or down. Instead of the setpoint band directly setting the duty cycle, a duty-cycle register that’s influenced by the setpoint results is used. The final value of the register should be the measured setpoint band value. Whatever is in this register is what is used to generate the duty cycle seen at the field driver.
For example, say the loading was such that the duty cycle was 50%. This would be right in the center of the setpoint band as shown in Figure 5. A step increase in load occurs requiring an increase of the duty cycle to 75%. Without slew-rate control, the duty cycle would change to 75% on the next period, causing a step increase in mechanical load as well. With slew-rate control, the duty cycle would increase at a fixed rate until the system was satisfied, that is, until the measured value matched the duty-cycle register value. At a 2s overall slew rate, a 25% increase in duty cycle would take half a second to realize.
Over-voltage raises some issues, such that we cannot count down as slowly as we would want to count up. The battery is more willing to go higher in voltage than it is to dip under load, and over-voltage conditions are frowned upon. As a result, the algorithm in Figure 9 bypasses the field duty-cycle register altogether when the measured voltage is above the setpoint band (~100mV above setpoint). I do not set the duty-cycle register to zero; instead, I turn off the field and let the duty-cycle register count down until the over-voltage condition has passed.
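Putting the slew-rate limiter and the over-voltage bypass together, one pass of the regulation tick might look like the sketch below. The ramp rates and function name are illustrative assumptions, not values from a production part:

```c
#include <stdint.h>
#include <stdbool.h>

/* Duty-cycle register, 0..100 percent.  The per-tick ramp rates are
 * illustrative; count-down is faster than count-up because the system
 * must not linger in over-voltage. */
#define RATE_UP    1u
#define RATE_DOWN  4u

static uint8_t duty_reg;

/* One pass of the slew-rate limiter.  'target' is the duty cycle the
 * setpoint band asks for; 'over_voltage' is true when the measured
 * voltage is above the band.  Returns the duty cycle actually applied
 * to the field driver. */
uint8_t slew_step(uint8_t target, bool over_voltage)
{
    if (over_voltage) {
        /* Bypass the register: field off now, register ramps down. */
        duty_reg = (duty_reg > RATE_DOWN) ? (uint8_t)(duty_reg - RATE_DOWN) : 0;
        return 0;
    }
    if (duty_reg < target)
        duty_reg += (uint8_t)((target - duty_reg > RATE_UP)
                              ? RATE_UP : (target - duty_reg));
    else if (duty_reg > target)
        duty_reg -= (uint8_t)((duty_reg - target > RATE_DOWN)
                              ? RATE_DOWN : (duty_reg - target));
    return duty_reg;
}
```

Note that during over-voltage the function returns 0 (field off) while the register keeps ramping down, so regulation resumes from a sensible value once the condition passes.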
With a duty-cycle slew-rate limiter, we are now slowing down the response of the alternator to step increases in load, thereby stabilizing the system. Effectively, we are over-damping the system to keep it more stable, while not allowing over-voltage conditions to occur.
The main loop
For me, the main loop as shown in Figure 10 is basically a housekeeper and a watchdog of sorts. Everything of interest to me is run by interrupts, so the main loop is what the micro does when it has nothing else to do.
Figure 10: Main routine
The main loop checks for loose ends, such as what the ambient temperature is, if the phase input is really working, and if the alternator has lost all ability to charge the battery. Since we use the phase input to look at the output voltage, if we lose phase, we have to generate an artificial interrupt. That sort of thing.
Dilbert cartoonist Scott Adams labeled the first week of a project the Wally Period. Wally explains that “most tasks become unnecessary within seven days.” In the case of our alternator, most faults go away within one second. So, I put a delay of 1s in reporting a fault. Anything less than a second is not of concern, so we can ignore faults for the alternator’s Wally Period.
Once we add serial communications, we can replace the lamp driver with high-level diagnostics. We can also add the ability to change the setpoint through commands from the engine controller, a capability found in many vehicles today. Instead of reading a positive-temperature-coefficient resistor for temperature information, the engine controller sends a signal requesting a specific setpoint band. Some engine controllers include temperature compensation, while others do not.
Most systems today require more communication to the regulator than from it. There are opportunities for PWM (pulse width modulation) communications (where the PWM duty cycle commands the setpoint value), LIN (local interconnect network) bus communications, “bit serial interface” communications as well as CAN (controller area network) communications protocols.
Waking up the alternator
When the ignition is turned on, the conditions of the regulator are fairly specific. At start-up, the ignition is on, there is no phase voltage, and the setpoint has not been reached. The battery-rest voltage is always below the setpoint. With these conditions present, the regulator knows that the system is waiting to start up.
At this point, the regulator provides a lower duty cycle, between 10% and 25%, to determine if the rotor is spinning without drawing too much current. Small amounts of current in a spinning rotor can generate enough voltage on the stator to be easily read by a microcontroller. The rotor is only spinning if the engine is running. Once a proper phase voltage/frequency is sensed, the regulator can begin normal regulation.
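This wake-up sequence can be sketched as a small state machine. The state names, the 15% probe duty cycle (within the 10% to 25% range above), and the phase-frequency threshold are all my own assumptions:

```c
#include <stdint.h>
#include <stdbool.h>

typedef enum { WAITING, PROBING, REGULATING } reg_state;

#define MIN_PHASE_HZ  50u    /* assumed frequency confirming rotation  */
#define PROBE_DUTY    15u    /* low probe duty cycle, 10-25% range     */
#define DUTY_NORMAL   0xFFu  /* sentinel: normal regulation takes over */

/* Decide the field duty cycle during wake-up.  'ignition' is the key
 * input; 'phase_hz' is the measured phase frequency. */
uint8_t startup_duty(reg_state *state, bool ignition, uint16_t phase_hz)
{
    switch (*state) {
    case WAITING:
        if (ignition)
            *state = PROBING;
        return 0;                     /* field off until ignition seen */
    case PROBING:
        if (phase_hz >= MIN_PHASE_HZ)
            *state = REGULATING;      /* rotation confirmed            */
        return PROBE_DUTY;            /* probe without heavy loading   */
    case REGULATING:
    default:
        return DUTY_NORMAL;           /* hand off to regulation loop   */
    }
}
```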
Soft start at wakeup essentially consists of slew-rate limiting the field duty cycle once the regulator has recognized that the system is starting up. We do this to prevent the engine from stalling due to an alternator loading before the engine-idle control system has had a chance to stabilize. Some engineers just delay initiating the regulator for several seconds as the engine stabilizes. Either way it works.
Auto start is another way of waking up the regulator assembly without using the ignition input. If the ignition-input connection is broken or disconnected, you can use a phase input to look for activity. Typically there is some residual magnetism in the rotor, so if the engine is running, the stator will exhibit some low-level voltage, typically a few hundred millivolts at a few thousand RPM in the alternator. Because the sensed phase voltage is so low when the field is not excited, detecting it requires additional circuitry (such as op-amps) between the stator and the microcontroller. Without the auto-start feature, the phase input can be a simple resistor divider coupled with a zener clamp and capacitor.
Let me say a little about a hardware watchdog. In an embedded microcontroller system, the ambient temperatures can sometimes get out of hand. With that said, we may not know how the microcontroller will operate at such temperatures. A software watchdog or internal watchdog cannot be counted on under these adverse conditions. It’s kind of like putting the fox in charge of the hen house. Instead, I use a voltage regulator that has a built-in watchdog as shown in Figure 10. In the main routine, I toggle a bit that goes to the watchdog. The watchdog looks for transitions. If it stops seeing transitions, it resets the micro and we start all over.
The beauty of electronics
Today, electronics eliminates the option to make a regulator work just by hitting it. Of course, electronics also eliminates the need to do so. Even the aftermarket versions of those old regulators are fully electronic today. One of the not-so-hidden benefits of electronics is that, even with all their complexity, they’re immensely more reliable than the mechanical systems they replaced.
David Swanson is a principal engineer in STMicroelectronics’ Automotive Business Unit. David has worked with ST since 1987 in various roles. Prior to ST, David worked for Delco Products Division of GM. He has a BSEE from North Carolina State University and holds many patents in several areas of automotive electronics. You can reach him at .