Wednesday, July 29, 2009

DRAM

DRAM is dynamic random access memory. (This is what most people are talking about when they mention RAM.)

When you expand the memory in a computer, you are adding DRAM chips.

You use DRAM to expand the memory in the computer because it’s cheaper than any other type of memory.

Dynamic RAM chips are cheaper to manufacture than other types because they are less
complex.

Dynamic refers to the memory chips’ need for a constant update signal (also called a
refresh signal) in order to keep the information that is written there.

If this signal is not received every so often, the information will cease to exist. Currently, there are four popular implementations of DRAM: SDRAM, DDR, DDR2, and RDRAM (from Rambus).

SDRAM

The original form of DRAM had an asynchronous interface, meaning that it derived its clocking
from the actual inbound signal, paying attention to the electrical aspects of the waveform, such
as pulse width, to set its own clock to synchronize on the fly with the transmitter.

Synchronous DRAM (SDRAM) shares a common clock signal with the transmitter of the data.

The computer’s system bus clock provides the common signal that all SDRAM components use for each step to be performed.

This characteristic ties SDRAM to the speed of the FSB and the processor, eliminating the
need to configure the CPU to wait for the memory to catch up.

Every time the system clock ticks, one bit of data can be transmitted per data pin, limiting the bit rate per pin of SDRAM to the corresponding numerical value of the clock’s frequency.

With today’s processors interfacing with memory using a parallel data-bus width of 8 bytes (hence the term 64-bit processor), a 100MHz clock signal produces 800MBps.

That’s megabytes per second, not megabits. Such memory is referred to as PC100, because throughput is easily computed as eight times the rating.
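The SDR arithmetic above can be sketched in a few lines of Python. This is only an illustration of the calculation described in the text; the function name and the 8-byte default bus width are assumptions for the example.

```python
# Peak SDR SDRAM throughput: one transfer per clock tick per data pin,
# across an assumed 64-bit (8-byte) data bus.
def sdr_throughput_mbps(clock_mhz, bus_bytes=8):
    """Return peak transfer rate in megabytes per second (MBps)."""
    return clock_mhz * bus_bytes

# PC100: a 100MHz clock on an 8-byte bus yields 800MBps,
# i.e., throughput is eight times the module's rating.
print(sdr_throughput_mbps(100))  # 800
```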

DDR

Double Data Rate (DDR) SDRAM earns its name by doubling the transfer rate of ordinary SDRAM by double-pumping the data, which means transferring it on both the rising and
falling edges of the clock signal.

This obtains twice the transfer rate at the same FSB clock frequency. Raising the clock frequency is what generates heating issues in newer components, so keeping the clock the same is an advantage.

The same 100MHz clock gives a DDR SDRAM system the impression of a 200MHz clock in comparison to a single data rate (SDR) SDRAM system.

You can use this doubled frequency in your computations, or simply double the result of the corresponding SDR calculation to produce the DDR result.

For example, with a 100MHz clock, two operations per cycle, and 8 bytes transferred per operation, the data rate is 1600MBps.

Now that throughput is becoming a bit trickier to compute, the industry uses this final figure to name the memory modules instead of the frequency, which was used with SDR.

This makes the result seem many times better, while it’s really only twice as good. In this example, the module is referred to as PC1600.

The chips that go into making PC1600 modules are named after the perceived double-clock frequency: DDR-200.
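The DDR calculation and the two naming conventions just described can be sketched as follows. The function names are made up for the example; the logic follows the text: modules are named after throughput, chips after the perceived double-pumped frequency.

```python
# DDR SDRAM: two transfers per clock cycle (rising and falling edges),
# assuming the same 64-bit (8-byte) data bus as SDR.
def ddr_throughput_mbps(clock_mhz, bus_bytes=8):
    return clock_mhz * 2 * bus_bytes

def ddr_names(clock_mhz):
    """Module name (from throughput) and chip name (from perceived frequency)."""
    module = f"PC{ddr_throughput_mbps(clock_mhz)}"
    chip = f"DDR-{clock_mhz * 2}"
    return module, chip

# A 100MHz clock: 100 x 2 x 8 = 1600MBps, so PC1600 modules built
# from DDR-200 chips.
print(ddr_names(100))  # ('PC1600', 'DDR-200')
```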

DDR2

Think of the 2 in DDR2 as yet another multiplier of 2 in the SDRAM technology. DDR2 also uses a lower peak voltage to keep power consumption down (1.8V vs. the 2.5V of DDR and others).

Still double-pumping, DDR2, like DDR, uses both sweeps of the clock signal for data transfer.

Internally, DDR2 further splits each clock pulse in two, doubling the number of operations it
can perform per FSB clock cycle.

Through enhancements in the electrical interface and buffers, as well as through adding off-chip drivers, DDR2 nominally produces four times what SDR is capable of producing.

However, DDR2 suffers from enough additional latency over DDR that, at identical throughput ratings, DDR2 is at a disadvantage.

Once frequencies develop for DDR2 that do not exist for DDR, however, DDR2 could become the clear SDRAM leader, although DDR3 is nearing release.

Continuing the preceding example and initially ignoring the latency issue, DDR2 using a 100MHz clock transfers data in four operations per cycle and still 8 bytes per operation, for a total of 3200MBps.

Just like DDR, DDR2 names its chips based on the perceived frequency. In this case, you would be using DDR2-400 chips.

DDR2 carries on the final-result method for naming modules but cannot simply call them PC3200 modules because those already exist in the DDR world. DDR2 calls these modules PC2-3200.
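The DDR2 version of the calculation simply swaps in a multiplier of 4, as described above. Again, the function names and the 8-byte bus default are assumptions for this sketch.

```python
# DDR2: double-pumped externally and split again internally, for four
# operations per FSB clock cycle on an assumed 64-bit (8-byte) bus.
def ddr2_throughput_mbps(fsb_clock_mhz, bus_bytes=8):
    return fsb_clock_mhz * 4 * bus_bytes

def ddr2_names(fsb_clock_mhz):
    """Module name (PC2- prefix avoids clashing with DDR) and chip name."""
    module = f"PC2-{ddr2_throughput_mbps(fsb_clock_mhz)}"
    chip = f"DDR2-{fsb_clock_mhz * 4}"
    return module, chip

# A 100MHz FSB clock: 100 x 4 x 8 = 3200MBps, giving PC2-3200 modules
# built from DDR2-400 chips.
print(ddr2_names(100))  # ('PC2-3200', 'DDR2-400')
```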

The latency consideration, however, means that DDR’s PC3200 offering is preferable to DDR2’s PC2-3200.

After reading the “RDRAM” section, consult Table 1.2 below, which summarizes how each technology in the “DRAM” section would achieve a transfer rate of 3200MBps, even if only theoretically.

For example, SDR PC400 doesn’t exist.

Table 1.2: Achieving 3200MBps with each DRAM technology

Technology   Module      Clock    Calculation
SDR          PC400*      400MHz   400MHz x 1 op x 8 bytes = 3200MBps
DDR          PC3200      200MHz   200MHz x 2 ops x 8 bytes = 3200MBps
DDR2         PC2-3200    100MHz   100MHz x 4 ops x 8 bytes = 3200MBps
RDRAM        PC800 (x2)  400MHz   400MHz x 2 ops x 2 bytes x 2 channels = 3200MBps

*Theoretical only.
RDRAM

Rambus DRAM, or Rambus Direct RAM (RDRAM), named for the company that designed
it, is a proprietary synchronous DRAM technology.

RDRAM can be found in fewer new systems today than just a few years ago.

This is because Intel once had a contractual agreement with Rambus: in exchange for special licensing considerations and royalties from Rambus, Intel created chipsets for its own motherboards, and those of other manufacturers, that would primarily use RDRAM.

The contract ran from 1996 until 2002.

In 1999, Intel launched the first motherboards with RDRAM support.

Until then, Rambus could be found mainly in gaming consoles and home theater components.

RDRAM did not impact the market as Intel had hoped, and so motherboard manufacturers sidestepped Intel’s RDRAM-bound chipsets by using chipsets from VIA Technologies instead, leading to the rise of that company.

Although other specifications preceded it, the first motherboard RDRAM model was known as PC800.

As with non-RDRAM specifications that use this naming convention, PC800 specifies that a 400MHz clock signal, double-pumped like DDR/DDR2, creates an effective frequency of 800MHz and a transfer rate of 800Mbps per data pin.

PC800 uses only a 16-bit (2-byte) bus called a channel, exchanging a 2-byte packet during each read/write cycle, still bringing the overall transfer rate to 1600MBps per channel because of the much higher clock rate. Modern chipsets allow two 16-bit channels to communicate simultaneously for the same read/write request, creating a 32-bit dual channel.

Two PC800 modules in a dual-channel configuration produce transfer rates of 3200MBps.

Today, RDRAM modules are also manufactured for 533MHz and 600MHz bus clock frequencies and 32-bit dual-channel architectures.

Termed PC1066 and PC1200, these models produce transfer rates of 2133 and 2400MBps per channel, respectively, making 4266 and 4800MBps per dual-channel.
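The RDRAM channel arithmetic above can be sketched the same way. The function name and defaults are assumptions for the example; note that the PC1066 figure of 2133MBps in the text comes from a clock of 533.33MHz, so an integer 533 would round slightly low.

```python
# RDRAM: a 16-bit (2-byte) channel, double-pumped, optionally doubled
# again by a dual-channel configuration.
def rdram_mbps(clock_mhz, channels=1, channel_bytes=2):
    """Peak transfer rate in MBps across one or more channels."""
    return clock_mhz * 2 * channel_bytes * channels

# PC800: 400MHz clock -> 1600MBps per channel.
print(rdram_mbps(400))              # 1600
# Two PC800 modules in dual-channel -> 3200MBps.
print(rdram_mbps(400, channels=2))  # 3200
```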

Rambus has road maps to 1333 and 1600MHz models.

The section “RIMM” in this chapter covers the physical construction of the modules. Despite RDRAM’s performance advantages, it has some drawbacks that keep it from taking over the market.

Increased latency, heat output, complexity in the manufacturing process, and cost are the primary shortcomings.

PC800 RDRAM had a 45ns latency, compared to only 7.5ns for PC133 SDR SDRAM.

The additional heat that individual RDRAM chips put out led to the requirement for heat sinks on all modules.

High manufacturing costs and high licensing fees led to triple the cost to consumers over SDR, although today there is more parity between the prices.

In 2003, free from its contractual obligations to Rambus, Intel released the i875P chipset. This new chipset provides support for a dual-channel platform using standard PC3200 DDR
modules.

Now, with 16 bytes (128 bits) transferred per read/write request, making a total transfer rate of 6400MBps, RDRAM no longer holds the performance advantage it once did.

SRAM

The S in SRAM stands for static.

Static random access memory doesn’t require a refresh signal
like DRAM does.

The chips are more complex and are thus more expensive.

However, they are faster. DRAM access times come in at 60 nanoseconds (ns) or more; SRAM has access times as fast as 10ns.

SRAM is often used for cache memory.

ROM

ROM stands for read-only memory.

It is called read-only because the original form of this memory could not be written to.

Once information had been written to the ROM, it couldn’t be changed. ROM is normally used to store the computer’s BIOS, because this information normally does not change very often.

The system ROM in the original IBM PC contained the power-on self-test (POST), Basic Input/Output System (BIOS), and cassette BASIC.

Later IBM computers and compatibles included everything but the cassette BASIC.

The system ROM enables the computer to “pull itself up by its bootstraps,” or boot (start the operating system).

Through the years, different forms of ROM were developed that could be altered.

The first generation was the programmable ROM (PROM), which could be written to for the first time in the field, but then no more.

Following the PROM came erasable PROM (EPROM), which was able to be erased using ultraviolet light and subsequently reprogrammed.

These days, our flash memory is a form of electrically erasable PROM (EEPROM), which does not require UV light, but rather a slightly higher than normal electrical pulse, to erase its contents.

CMOS

CMOS is a special kind of memory that holds the BIOS configuration settings.

CMOS memory is powered by a small battery, so the settings are retained when the computer is shut off.

The BIOS starts with its own default information and then reads information from the CMOS, such as which hard drive types are configured for this computer to use, which drive(s) it should search for boot sectors, and so on.

Any conflicting information read from the CMOS overrides the default information from the BIOS.

CMOS memory is usually not upgradable in terms of its capacity and is very often integrated into the modern BIOS chip.
