Your mission, if you choose to accept it: design and build a pair of satellites to measure the effects of atmospheric lightning, on a shoestring budget of $120,000. Think you could? Someone else already has.
This installment of “Inside Look” examines the design of a pair of small satellites referred to jointly as Emerald. (The two satellites are named Beryllium and Chromium, after two of the chemical elements found in an emerald gemstone.) The twin spacecraft are currently under development and are planned for launch in 2002.
The Emerald system is intended to serve several functions: to measure the radio waves emitted by high-altitude lightning strikes, to demonstrate the strengths of a modular and distributed design approach in small spaceborne platforms, and to prove…well, I'll come back to the third goal after we've covered the other two.
As you will soon see, there is more to Emerald than I can cover in one article; most of the components that make up Emerald's implementation are worthy of stand-alone articles themselves. I have provided links to additional information at the end of this article.
The VLF lightning experiment
Emerald's primary job, once it reaches orbit, is to collect information on the very low frequency (VLF) radio waves emitted by lightning strikes. Lightning strikes an average of 100 times per second worldwide, and Emerald will continuously time-tag samples taken from two simple VLF radio receiver circuits for later analysis on the ground.
Scientists have known for many years that lightning strikes produce emissions in the VLF radio frequency ranges. The VLF burst occurs simultaneously with the strike, and in some cases, the radio energy is captured by the earth's magnetic field and conducted towards the polar regions. When this happens, anyone listening to audio signals on VLF carriers can hear a low whistle.
Existing theories about the nature and composition of lightning do not adequately explain these VLF bursts or whistles. In fact, some of the theories don't go beyond “when you dump 20GW (yes, gigawatts!) of power in a few milliseconds, strange things happen,” so any additional insight will help both to explain the emissions and to understand how things like communications satellites and weather patterns are affected by them.
The nature of lightning is part of the reason why Emerald is a two-satellite system: lightning strikes take place over large geographic areas, so having two coordinated satellites stationed a known and controllable distance apart provides better information on what is actually happening before, during, and after a strike.
With a disturbingly casual tone, Emerald's designers occasionally mention that they are hoping the satellites will get struck by lightning at some point during their year-long mission, because the data produced by such an event would be extremely valuable, assuming the spacecraft survives the encounter. (And people wonder what makes aerospace different.)
Distributed, modular design
Although the data from the VLF lightning experiment pays the bills, the main thing to be learned from the Emerald experiment is the utility of a distributed control architecture in a small satellite system. An inexpensive, physically modular spaceborne platform is a kind of holy grail in the aerospace industry, because such configurations can provide natural system redundancy in case of component failure. Modularity can also facilitate quick reconfiguration of a design when the project's mission objectives change course. Physical modularity also encourages logical modularity, which leads to systems that can be more easily specified, developed, and tested in pieces before integration.
The benefits of physical and logical modularity are well known in terrestrial applications, but have proven difficult to adopt in systems with loftier altitudes in mind. Part of the difficulty may be the aerospace industry's occasional aversion to change, but aerospace systems also approach diverse and complicated problems that are challenging to decompose. Even subsystems like cameras provide multiple spacecraft services related to navigation, science data gathering, and spacecraft health; the orchestration of all of these sometimes conflicting activities often drives designers towards integrated, monolithic solutions rather than more modular alternatives.
Emerald approaches modularity by deliberately carving the design up into several independent physical and electrical subsystems, connected by communications networks that hide interconnection details from other subsystems. The “pluggable” and redundant architecture achieved with this strategy has some enticing benefits, as you will see after we talk about the architecture itself.
Emerald's modular hardware architecture is based on several small, networked single-board computers. These computers (Emerald's designers often call them “PIC subsystems,” because almost all of them are based on Microchip's PIC microcontrollers) draw code from a common library of routines for things like network communications and interrupt management, but are individually specialized for the responsibilities of the computer's role in the satellite. In other words, each subsystem serves as a specialized sensor or actuator; all of them speak a common protocol and share a common heritage.
Each satellite also includes two radio transceivers (one for satellite-to-ground communications, the other for satellite-to-satellite communications), which operate in the amateur radio frequency bands, and a high-precision GPS receiver. These subsystems are based on off-the-shelf hardware and are, therefore, not PIC subsystems.
The PIC subsystems are largely autonomous devices. Once the satellites reach orbit, the subsystems will move servo-driven steering panels, measure the strength and orientation of Earth's magnetic field, find the Sun, and charge batteries as the twin Emerald satellites take VLF data (also via PIC subsystem) and fly in formation with each other and with a third satellite called Orion. (Orion's very different architecture and mission objectives will be the subject of a future “Inside Look.”)
PIC subsystem network
Inside each satellite, the PIC subsystems communicate with each other over an I2C network using a simple packet-oriented protocol. Communication begins with a device identifier and command code, and ends with optional parameters and a checksum. Subsystem device identifiers are unique across both satellites.
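The article specifies only the field order of these packets, not the field widths or the checksum algorithm. As a rough sketch of the idea, a subsystem might frame and validate commands something like this (the sizes, `EMERALD_MAX_PARAMS`, and the simple additive checksum are illustrative assumptions, not the project's actual wire format):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical layout for an Emerald I2C command packet.  Only the
 * field order (device ID, command code, optional parameters, checksum)
 * comes from the article; everything else is assumed for the sketch. */
#define EMERALD_MAX_PARAMS 8

struct emerald_packet {
    uint8_t dev_id;                     /* unique across both satellites */
    uint8_t cmd;                        /* command code */
    uint8_t nparams;                    /* number of parameter bytes */
    uint8_t params[EMERALD_MAX_PARAMS];
    uint8_t checksum;                   /* sum of all preceding bytes */
};

/* Compute an 8-bit additive checksum over the header and parameters. */
static uint8_t emerald_checksum(const struct emerald_packet *p)
{
    uint8_t sum = (uint8_t)(p->dev_id + p->cmd + p->nparams);
    for (size_t i = 0; i < p->nparams; i++)
        sum = (uint8_t)(sum + p->params[i]);
    return sum;
}

/* Validate a received packet before dispatching it to a subsystem. */
static int emerald_packet_ok(const struct emerald_packet *p)
{
    return p->nparams <= EMERALD_MAX_PARAMS &&
           p->checksum == emerald_checksum(p);
}
```

A single validation routine like this can live in the shared code library the subsystems draw from, so every PIC speaks the protocol identically.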
Each satellite also includes a handful of components like digital thermometers and analog-to-digital converters that use Dallas Semiconductor's One-Wire (1W) network protocol. Rather than be their own subsystems, these devices are bridged to the I2C bus through two subsystems called the Bus Monitor and System CPU, which perform protocol translation between the generic subsystem packet format and the 1W data and framing format. This setup makes these components look like PIC subsystems to the rest of the I2C network. The same translation is also provided for the GPS receiver.
Figure 1: Block diagram of Emerald
A block diagram of each satellite's subsystems and interconnections is shown in Figure 1. Each PIC subsystem supports a set of common commands like “reset” and “how are you?” in addition to subsystem-specific commands like “move servo to 20%” and “charge the batteries.”
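The split between common and subsystem-specific commands maps naturally onto a shared dispatcher plus a per-subsystem handler. The command codes and the servo handler below are invented for illustration; the article names the commands only informally (“reset,” “how are you?”):

```c
#include <stdint.h>

/* Illustrative command codes; the numeric values are assumptions. */
enum {
    CMD_RESET      = 0x01,  /* common to every PIC subsystem */
    CMD_STATUS     = 0x02,  /* "how are you?" health query   */
    CMD_LOCAL_BASE = 0x40   /* subsystem-specific codes start here */
};

typedef int (*local_handler_t)(uint8_t cmd, const uint8_t *params, uint8_t n);

/* Common commands are handled by shared library code; anything else is
 * handed to the subsystem's specialized handler. */
static int dispatch(uint8_t cmd, const uint8_t *params, uint8_t n,
                    local_handler_t local)
{
    switch (cmd) {
    case CMD_RESET:
        /* shared reset path would reinitialize the subsystem here */
        return 0;
    case CMD_STATUS:
        /* shared path: queue a health/status reply packet */
        return 0;
    default:
        return cmd >= CMD_LOCAL_BASE ? local(cmd, params, n) : -1;
    }
}

/* Hypothetical local handler for a servo subsystem:
 * cmd 0x40 = "move servo to the commanded position". */
static int servo_handler(uint8_t cmd, const uint8_t *params, uint8_t n)
{
    (void)params; (void)n;
    return cmd == 0x40 ? 0 : -1;
}
```

This structure is what lets every subsystem share one library while still behaving as a specialized sensor or actuator.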
All PIC subsystems, except the Bus Monitor and System CPU, use the PIC 16F877 microcontroller, which provides 368 bytes of RAM, 8KB of ROM, a handful of timers and other peripherals, and an I2C communications controller. Each subsystem also includes an external 16KB EEPROM, an optional 1.5MB SRAM (for operations requiring more memory), and overcurrent protection to save the chips from damage should stray radiation cause a component to “latch up” during operation. There are seven PIC subsystems in each satellite.
The control applications in the PIC subsystems are all implemented as flat, state-driven assembly and C code with interrupt-driven I/O routines. The only PIC that uses a formal scheduler is the Bus Monitor, which runs a PIC 17C56 (32KB ROM, 904 bytes RAM) and Pumpkin's Salvo RTOS.
System CPU and bus monitor subsystems
The System CPU and Bus Monitor subsystems, which route packets between subsystems, between the two satellites, and between each satellite and the ground, are both little more than store-and-forward packet routers and protocol translators. There is considerable overlap in the functionality provided by these two subsystems, so that one can provide these services should the other fail, a condition detected by the loss of a keep-alive square-wave signal passed between them.
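The keep-alive detection amounts to watching the peer's square-wave line for activity. A minimal sketch, assuming each side samples the line from a periodic timer tick (the sample period and timeout value are invented for illustration):

```c
#include <stdint.h>

/* Declare the peer dead after this many ticks without a level change.
 * The value is an assumption for the sketch. */
#define KEEPALIVE_TIMEOUT_TICKS 100

struct keepalive {
    uint8_t  last_level;   /* last sampled logic level */
    uint32_t quiet_ticks;  /* ticks since the line last toggled */
};

/* Called once per timer tick with the current line level.
 * Returns 1 while the peer is considered alive, 0 once it times out. */
static int keepalive_sample(struct keepalive *ka, uint8_t level)
{
    if (level != ka->last_level) {
        ka->last_level = level;
        ka->quiet_ticks = 0;
    } else if (ka->quiet_ticks < KEEPALIVE_TIMEOUT_TICKS) {
        ka->quiet_ticks++;
    }
    return ka->quiet_ticks < KEEPALIVE_TIMEOUT_TICKS;
}
```

A stuck-high or stuck-low line, the signature of a crashed peer, trips the timeout; a healthy, toggling line resets the counter on every edge.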
The System CPU subsystem is an off-the-shelf, space-qualified single-board computer from Spacequest. Its NEC V53 microcontroller runs a proprietary RTOS called SCOS, and the board's 16KB of flash and 1MB of error-correcting RAM are enough to hold the operating system, a task to manage command packet queuing and delivery, and a task that periodically gathers health and status packets from the other subsystems for later transfer to the ground. The System CPU also generates Emerald's “beacon”: a small but constant stream of data through the radio transmitters that tells ground handlers when the satellite is passing overhead, and what its condition is.
The System CPU's command packet queuing and delivery task is a simple list manager that receives packets from the ground, compares their timestamps with calendar time (taken from the GPS receiver and an onboard clock), and holds packets with future timestamps until their scheduled delivery time. When a packet's delivery time arrives, the packet is pushed to the I2C network as though it had been sent directly from the ground at that moment. Any responses from the I2C network are captured by the System CPU and held in another list, and transferred to the ground during the next communications session.
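The queuing scheme described above can be sketched as a fixed-size slot table scanned against the current time. Real timestamps would come from the GPS receiver; here they are plain integers, and “pushing to the I2C network” is modeled by a callback (all names and sizes are assumptions for illustration):

```c
#include <stdint.h>
#include <stddef.h>

#define QUEUE_MAX 16  /* assumed queue depth */

struct queued_cmd {
    uint32_t deliver_at;  /* scheduled delivery time */
    uint8_t  dev_id;      /* destination subsystem */
    uint8_t  cmd;
    uint8_t  pending;     /* 1 = still waiting in the queue */
};

struct cmd_queue {
    struct queued_cmd slot[QUEUE_MAX];
};

/* Accept a packet from the ground and hold it until its time arrives. */
static int queue_add(struct cmd_queue *q, uint32_t when,
                     uint8_t dev_id, uint8_t cmd)
{
    for (size_t i = 0; i < QUEUE_MAX; i++) {
        if (!q->slot[i].pending) {
            q->slot[i] = (struct queued_cmd){ when, dev_id, cmd, 1 };
            return 0;
        }
    }
    return -1;  /* queue full */
}

/* Called periodically with the current time; sends every command whose
 * delivery time has arrived.  Returns the number of commands sent. */
static int queue_run(struct cmd_queue *q, uint32_t now,
                     void (*send)(uint8_t dev_id, uint8_t cmd))
{
    int sent = 0;
    for (size_t i = 0; i < QUEUE_MAX; i++) {
        if (q->slot[i].pending && q->slot[i].deliver_at <= now) {
            send(q->slot[i].dev_id, q->slot[i].cmd);
            q->slot[i].pending = 0;
            sent++;
        }
    }
    return sent;
}

/* Test hook: count deliveries instead of driving real I2C hardware. */
static int deliveries;
static void count_send(uint8_t dev_id, uint8_t cmd)
{
    (void)dev_id; (void)cmd;
    deliveries++;
}
```

A delivered packet then looks to the subsystems exactly like one sent directly from the ground at that moment, which is what makes the failover scheme described next so clean.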
The Bus Monitor subsystem is a functional clone of the System CPU subsystem, except that it cannot queue commands or data. The Bus Monitor's main function is to provide a redundant path between the PIC subsystem network and the radios and GPS receiver, so that the satellite's other subsystems can continue to use those components should the System CPU fail. But without the System CPU's command queuing capability, how would the satellite continue to pace itself and store data?
If the System CPU fails, the Bus Monitor uses the satellite's radios to contact the other Emerald satellite's System CPU, which subsequently begins performing command queuing and forwarding capabilities on behalf of both satellites. Except for a minor delay as packets traverse the half-duplex intersatellite radio link, the two-satellite system continues to operate as it did before the failure.
To prevent the loss of queued commands and data should a System CPU fail, the queues on the two satellites are kept identical: the System CPUs in both satellites contain copies of all the commands to be delivered to any subsystem in either satellite, and each command is pushed to the I2C network in each satellite even when the addressed subsystem does not reside on that satellite's network. The failover procedure to recover from a damaged System CPU is, therefore, trivial: the Bus Monitors simply start shuttling packets between their I2C network and the intersatellite radio link, and subsystems in the injured satellite suddenly start hearing commands from the other satellite's System CPU instead of their own (although they cannot tell the difference).
Emerald's redundancy strategy illustrates why the System CPU and Bus Monitor subsystems do little more than store and forward packets: if either subsystem held a more central role in the satellites' day-to-day operations, then its failure would lobotomize the spacecraft instead of distracting it momentarily.
Ground stations
Pre-positioned stations on the ground will use amateur radio frequencies to upload commands to change flying formations, download data, and patch code as the two satellites pass overhead. These communications sessions will occur over roughly five-minute intervals spaced between a few hours and a few days apart, and will use the popular and public AX.25 packet radio communications protocol. Operating frequencies and callsigns have not yet been selected, but will be published along with packet data formats on Emerald's website prior to launch.
The flexibility of an AX.25-conforming setup means that ground stations can and will take on a variety of forms. Off-the-shelf radios that understand AX.25 and feature LCD displays for composing and translating messages, analog radios with add-on AX.25 terminal node controllers, and freely available but state-of-the-art setups that use software and a PC's sound card to translate digital data to tones are but some of the possibilities. In any case, it is likely that ground stations will be geographically diverse, and will use the Internet to coordinate activities and to consolidate data at Emerald's home page for analysis and display.
Each communications session will begin when Emerald's beacon signals are detected, indicating that the satellites are within range of the ground station. A command instructing the satellites to transmit any stored data will be sent, followed by the uploading of any commands the satellites will need to run before the next communications opportunity. Finally, a block of data containing code patches will be uploaded, validated, and installed before the spacecraft goes out of range.
Since communications sessions will be short and busy, the entire process will be automated by a computer program connected to the ground station's packet radio transceiver.
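The session sequence above lends itself to a simple state machine in the ground station's automation software. The states and the completion events here are inferred from the article's description, not taken from the project's actual software:

```c
/* Sketch of the automated ground-session sequence: wait for the beacon,
 * request stored data, upload queued commands, then upload code patches.
 * State names and the event model are assumptions for illustration. */
enum session_state {
    WAIT_BEACON,      /* listening for Emerald's beacon */
    REQUEST_DATA,     /* ask the satellites to dump stored data */
    UPLOAD_COMMANDS,  /* queue commands for the coming interval */
    UPLOAD_PATCHES,   /* send, validate, and install code patches */
    SESSION_DONE
};

/* Advance one step when the current step completes (event_ok != 0);
 * otherwise hold the current state until it does. */
static enum session_state session_step(enum session_state s, int event_ok)
{
    if (!event_ok)
        return s;
    switch (s) {
    case WAIT_BEACON:     return REQUEST_DATA;
    case REQUEST_DATA:    return UPLOAD_COMMANDS;
    case UPLOAD_COMMANDS: return UPLOAD_PATCHES;
    case UPLOAD_PATCHES:  return SESSION_DONE;
    default:              return SESSION_DONE;
    }
}
```

Driving the sequence from events rather than a fixed schedule matters because pass durations vary; the automation simply does as much of the list as the roughly five-minute window allows.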
Among the list of code patches anticipated after launch is the addition of a small task for the System CPUs that will monitor data coming from the PIC subsystems and intelligently decide which packets contain data that indicate a decline in spacecraft health. Such packets will get top priority at the next communication session, providing the ground station's communications management software the opportunity to take corrective action during the session if necessary. The corrective action may place the spacecraft into a safe mode for further analysis, or postpone certain operations if, for example, there is not enough battery power to complete them.
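The anticipated triage task boils down to comparing telemetry readings against per-channel limits and flagging out-of-range packets for priority downlink. A minimal sketch (the channel layout and limit structure are invented; the actual patch has not been written yet):

```c
#include <stdint.h>

/* Assumed per-channel acceptable range for raw telemetry readings. */
struct channel_limit {
    uint8_t lo, hi;
};

/* Returns 1 if any reading falls outside its limits, i.e. the packet
 * suggests a decline in spacecraft health and should be downlinked
 * first at the next communications session. */
static int needs_priority(const uint8_t *readings,
                          const struct channel_limit *limits, int n)
{
    for (int i = 0; i < n; i++)
        if (readings[i] < limits[i].lo || readings[i] > limits[i].hi)
            return 1;
    return 0;
}
```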
About the only thing that Emerald's designers would do differently if they could start over is their choice of single-board computer for the System CPU. Their original motivation was to provide a solid, dependable platform on which to build the rest of the Emerald system (a role the chosen hardware performs well), but as the architecture's decentralization has become more refined, the importance of the System CPU's reliability has been deemphasized. As a result, the headache of working in a multi-architecture environment (PIC and V53) is no longer offset by the capabilities of the System CPU's hardware.
To make matters worse, the System CPU uses a proprietary RTOS and cross compiler that have proven, at times, to be a challenge to set up and use. It isn't that the tools themselves are bad; they're just overkill for what Emerald is trying to accomplish.
The third goal
As nifty as the Emerald system design is, the most enticing thing about the project is that it is run entirely by (brace yourself) undergraduate and graduate engineering students at Stanford University and Santa Clara University, both located outside of San Jose. One might consider this to be Emerald's most ambitious goal: to prove that college students can deliver a system as complicated as Emerald on a shoestring budget ($120,000) and an aggressive timeline. The students themselves don't seem worried, however; they are on track and expect to have the system ready on schedule.
But wait, there's more. There is room for you, gentle reader, to be a part of the Emerald project. People with amateur radios are needed to serve as ground stations and listening posts after Emerald launches, to provide more opportunities to monitor the system's performance and collect data during the early phases of Emerald's mission. If you have a packet radio setup, you're invited to participate. Contact information is on Emerald's website.
Exit Emerald, enter CubeSat
Even though Emerald is still a work in progress, its developers are already looking forward to an emerging design and deployment philosophy that may replace it, an approach known as CubeSat.
CubeSat may represent the first opportunity for anyone to put a small satellite-or system of several satellites-into orbit. For a one-time charge of about $45,000 per satellite, you can fly a 1kg, 100mm cube that contains almost anything you like (due to international treaty and safety concerns, no explosives please). Launch opportunities started last month and will occur about once a year.
Students from several universities are already pooling their resources and starting CubeSat projects of their own. I myself seek people interested in using a CubeSat project as a focused, team-oriented training opportunity unlike any other. Tie-ins with scientific projects (potential funding sources) are available, and even projects with educational but limited practical value (“Hey, it beeps!”) are encouraged. See the CubeSat link at the end of this article for more information.
Bill Gatliff is an independent consultant specializing in GNU-focused embedded system design, development, and training. He can be reached for comments via e-mail at .
The author wishes to acknowledge the efforts of the Emerald and Orion project team members, including team manager Bryan Palmintier, for their assistance in preparing this article. Special thanks to Bob Twiggs of Stanford University for his contribution to this article and for his efforts in creating the CubeSat program.