Along with the rapid development of computer hardware and software, demand for advanced computing DRAM modules has increased rapidly in recent years. As a result, DRAM modules now feature much faster data rates and much higher memory densities than earlier versions.
Meanwhile, the size and power consumption of DRAM modules have been continuously reduced to meet the requirements of modern computing applications.
Used in PCs, workstations, laptops and server systems, these computing DRAM modules store instruction code and computation results for the CPU. All stored information is lost when power is removed.
This volatile characteristic is due to the very simple architecture of the basic DRAM storage cell. A basic DRAM cell is composed of one transistor working as a switch and one capacitor working as the information carrier (Figure 1, below).
Figure 1. A basic DRAM cell consists of one transistor and one capacitor.
However, in order to make the DRAM module fast, reliable and high-density, the DRAM cell array interface and its operating procedure have to feature a very complex design. New technologies and architectures have now emerged for DRAM chips and modules.
Different module outlines have been designed for various applications. For most desktop and server applications, the regular DIMM outline is adopted. This outline was developed as the memory data bus evolved from 32 bits to 64 bits.
For notebooks, the small-outline dual in-line memory module (SO-DIMM) and even the Micro-DIMM outlines were created to save space. They are smaller than the regular DIMM and contain fewer DRAM chips. On the other hand, customized DIMM outlines also exist in some special server systems; these are usually larger and contain more DRAM chips.
To further differentiate DRAM module types, we need to discuss the module interface. The simplest interface is the one used on the unbuffered DIMM, where all the DRAM signals go directly to the memory controller.
With the registered DIMM, the address and command signals from the memory controller are intercepted and buffered, normally by two register chips. The clock signal is also buffered, by a PLL chip on the DIMM. The register chips and the PLL chip then drive all control signals into each DRAM component. Data signals (DQs), however, are always connected directly, as on the unbuffered DIMM.
The simple unbuffered interface is good enough for PCs and workstations, where normally only a few modules are plugged into the motherboard.
However, in server systems, the requirement for system memory is much higher, and without registers the memory controller would have to drive a large quantity of DRAM chips. With register and PLL chips, the load on the motherboard chipset's address and control pins can be greatly decreased.
The disadvantages of registered DIMMs are that they are slightly slower than unbuffered modules and more expensive.
Dealing with errors
Data stored in the DRAM cell might be subject to deterioration, for example by cosmic radiation. Errors might also be caused by noise or interference during reads or writes. Although the error probability is extremely low, in some applications such as server systems, where data integrity is paramount, an error detection and correction scheme is still preferred.
Error correction code (ECC) uses a special algorithm to encode the protected data in a block of bits that contains sufficient information to permit recovery of corrupted data. It requires additional DRAM chips on the module and special support from the memory controller.
Normal ECC mode is a 64/8 SEC/DED (single error correction/double error detection) Hamming code. It can detect a single-bit error and correct it transparently. Two-bit errors can be detected but not corrected. Although errors of more than two bits rarely happen, they may also be detected, depending on the positions of the corrupted bits.
Thus, to perform ECC for a 64-bit data bus, eight extra DQs have to be provided and one or two extra DRAM chips have to be added on an ECC module. As a result, the cost will be approximately 12.5 percent more than that of a non-ECC module.
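The SEC/DED behavior described above can be sketched in miniature. The Python code below implements a SEC/DED Hamming code over 8 data bits rather than the 64-bit word used on real modules; it is a simplified, hypothetical illustration (module-level ECC also distributes check bits across chips), but the correction and detection logic is the same in principle.

```python
def hamming_secded_encode(data: int) -> int:
    """Encode 8 data bits into a 13-bit SEC/DED Hamming codeword.

    Bits 1..12 form a Hamming(12,8) code with parity bits at the
    power-of-two positions 1, 2, 4 and 8; bit 0 is an overall parity
    bit that enables double-error detection.
    """
    assert 0 <= data < 256
    data_positions = [3, 5, 6, 7, 9, 10, 11, 12]
    word = 0
    for i, pos in enumerate(data_positions):
        if (data >> i) & 1:
            word |= 1 << pos
    # Each parity bit p covers the positions whose index has bit p set.
    for p in (1, 2, 4, 8):
        parity = 0
        for pos in range(1, 13):
            if pos & p:
                parity ^= (word >> pos) & 1
        word |= parity << p
    # Overall parity over the 12 code bits, stored in bit 0.
    overall = 0
    for pos in range(1, 13):
        overall ^= (word >> pos) & 1
    return word | overall


def hamming_secded_decode(word: int):
    """Return (data, status); status is 'ok', 'corrected' or 'double'."""
    syndrome = 0
    for p in (1, 2, 4, 8):
        parity = 0
        for pos in range(1, 13):
            if pos & p:
                parity ^= (word >> pos) & 1
        if parity:
            syndrome |= p
    overall = 0
    for pos in range(0, 13):
        overall ^= (word >> pos) & 1
    if syndrome and overall:       # single-bit error: syndrome gives its position
        word ^= 1 << syndrome
        status = "corrected"
    elif syndrome:                 # syndrome set but overall parity consistent
        status = "double"          # two-bit error: detectable, not correctable
    elif overall:                  # only the overall parity bit itself flipped
        word ^= 1
        status = "corrected"
    else:
        status = "ok"
    data = 0
    for i, pos in enumerate([3, 5, 6, 7, 9, 10, 11, 12]):
        data |= ((word >> pos) & 1) << i
    return data, status
```

Flipping any single bit of a codeword yields a transparent correction, while flipping two bits is reported as an uncorrectable double error, mirroring the module-level behavior described above.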
Computing DRAM modules are also distinguished by different DRAM architectures. Single-data-rate SDRAM (SDR SDRAM) is one DRAM architecture that is still widely used.
It reads or writes data only at the rising edge of the clock signal. Meanwhile, DDR SDRAM increases its data bandwidth by transferring data on both the rising and falling edges of the clock signal. This effectively doubles the data transfer rate without increasing the clock frequency.
The DDR2 architecture further increases its data rate by raising the DRAM I/O frequency to twice the cell array core frequency. DDR3 even quadruples this ratio and achieves data rates over 1 Gbps.
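The relationship between core frequency, I/O frequency and per-pin data rate across these architectures can be shown with a small calculation. The 200 MHz core frequency below is an assumed example value, not a figure from any particular part.

```python
# Per-pin data rate as a function of the DRAM core (cell array) frequency.
CORE_MHZ = 200  # assumed example core frequency

architectures = {
    # name: (I/O-to-core frequency ratio, transfers per I/O clock)
    "SDR":  (1, 1),  # one transfer per clock, on the rising edge
    "DDR":  (1, 2),  # transfers on both clock edges
    "DDR2": (2, 2),  # I/O clock doubled relative to the core
    "DDR3": (4, 2),  # I/O clock quadrupled relative to the core
}

for name, (io_ratio, transfers) in architectures.items():
    rate = CORE_MHZ * io_ratio * transfers
    print(f"{name:5s}: I/O clock {CORE_MHZ * io_ratio} MHz -> {rate} MT/s per pin")
```

With a 200 MHz core, DDR3 reaches 1,600 MT/s per pin, consistent with the over-1 Gbps figure above, while the core itself never runs faster than 200 MHz.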
This continuous increase in frequency is only possible through the latest semiconductor technology, sophisticated bus termination schemes and per-chip data strobe signals.
The reason behind this evolution is that ramping up the speed of the DRAM I/O buffers is much easier than increasing the speed of the DRAM core, which makes up about 80 percent of the total chip die.
The inherent slowness of the DRAM core is due to customers' desire for a low-cost solution in which as many DRAM cells as possible are connected to a single data line and decoded address line.
Along with the development of silicon processing technology, the supply voltage for successive SDRAM architectures keeps going down, from 3.3V for SDR to 1.5V for DDR3. As an added benefit, power consumption is significantly reduced.
The Fully Buffered DRAM approach
As discussed, data lines from the memory controller go directly into each DRAM chip on both unbuffered and registered DIMMs. However, as memory density and data rate increase further, data signals degrade at both ends of the bus. To solve this problem, the fully buffered DIMM (FB-DIMM) was invented (Figure 2, below).
Figure 2. The FB-DIMM technology splits the direct data signaling interface into two independent interfaces by a buffer on the module.
The FB-DIMM technology splits the direct data signaling interface into two independent interfaces by a buffer on the module, called the Advanced Memory Buffer (AMB). The interface between the buffer and the DRAM chips is the same as on regular DIMMs.
It supports DDR2 in current FB-DIMM platforms and DDR3 in the future. However, the interface between the memory controller and the AMB is changed from shared parallel channels to a serial point-to-point interconnection. This makes it possible to have up to eight DIMMs per memory channel, and up to six channels per memory controller.
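A quick calculation shows what these topology limits mean for module count per controller, using only the figures just described:

```python
# Topology limits described above: up to 8 FB-DIMMs per channel,
# up to 6 channels per memory controller.
dimms_per_channel = 8
channels_per_controller = 6

max_dimms = dimms_per_channel * channels_per_controller
print(f"Up to {max_dimms} FB-DIMMs per memory controller")  # up to 48
```

A shared parallel bus, by contrast, is typically limited to a handful of modules per channel before signal loading becomes prohibitive.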
On the other hand, the FB-DIMM uses different paths for data transmission and reception, also called northbound and southbound, respectively.
In point-to-point serial communication, the AMB uses 10 pairs of differential lines to receive information from the memory controller, or the Northbridge, and another 12 or 14 pairs to transmit data back.
The AMB chip is the central part of an FB-DIMM; it handles the serial point-to-point communication with the memory controller and with subsequent FB-DIMMs.
Since the memory controller writes to the DRAM via the AMB, the AMB can compensate for signal deterioration by buffering and resending the signal. In addition, the AMB can offer better reliability, availability and serviceability by extending the data ECC to all command and address signals.
It can also use the bit-lane fail-over correction feature to identify bad signal paths and remove them from operation, which reduces command/address errors. Moreover, it automatically retries when an error is detected, allowing uninterrupted operation in the case of transient errors.
Xiaojun Lu and Xiiao Yu are engineers at Qimonda Technologies Co. Ltd.