Troubleshooting Guide for the Cisco UCS B230 M2 Blade Server
Trends in Server Memory Systems
UCS Enhanced Memory Error Management
Memory systems are a key area of ongoing innovation in servers. One trend is larger memory system capacity as
this improves application performance by reducing the time spent waiting for slower disk accesses. In addition,
increasing bandwidth is another trend, because high memory system bandwidth improves application
performance through faster access to instructions and data needed by high core count processors. Also, as
memory systems grow, operating voltages have been reduced to support denser, faster designs while improving
power efficiency.
Increasing Capacity
A primary driver of increased error rates is the fact that memory systems are rapidly getting larger. As more and
more bits of memory are added to the system, the likelihood of any one of them encountering an error increases.
Such increases in system memory capacities are due to shrinking DRAM geometries (i.e. the ability to pack more
bits on a single die). Since 2008, DRAM capacities have increased 16x from 512Mbit to 8Gbit. As chip capacity has
increased, individual bit cells have been getting smaller. As the bit cell gets smaller, the number of stored charges
per bit decreases, making it more difficult to distinguish between a stored “1” and “0”. The basic storage element,
or bit cell, in a DRAM chip is a tiny capacitor. DRAM bit cells are inherently leaky; thus, smaller bit cells storing
fewer charges are less tolerant of this leakage. Additionally, smaller bit cells are more easily upset by external
sources like alpha particles or cosmic rays.
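The capacity argument above can be made concrete with a short calculation: assuming independent per-bit errors, the probability that at least one bit in the system encounters an error is 1 − (1 − p)^N, which grows roughly linearly with capacity N when p is tiny. The per-bit probability used below is a hypothetical illustrative value, not a measured DRAM failure rate.

```python
import math

def p_any_error(total_bits: int, p_bit: float) -> float:
    """Probability that at least one of total_bits encounters an error,
    assuming independent per-bit errors: 1 - (1 - p)^N.
    Computed via expm1/log1p to stay accurate for tiny p."""
    return -math.expm1(total_bits * math.log1p(-p_bit))

# Hypothetical per-bit error probability, for illustration only.
p_bit = 1e-15
one_gb = 8_000_000_000          # bits in ~1 GB
two_gb = 16_000_000_000

# Doubling capacity roughly doubles the chance of seeing an error.
print(p_any_error(one_gb, p_bit))   # ~8e-6
print(p_any_error(two_gb, p_bit))   # ~1.6e-5
```

The near-linear growth holds only while N·p is small; the point is simply that more bits mean more opportunities for an error.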
Today’s advanced DRAM technologies deliver up to 8 Gbits of memory on a single die, and up to 64 GBytes of
memory on a single memory module (DIMM). In addition, today’s CPUs incorporate multiple memory channels on
each processor socket, and multiple DIMMs on each channel.
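A quick arithmetic sketch shows how these per-DIMM figures multiply out at the system level. The socket, channel, and DIMM counts below are illustrative assumptions, not the configuration of any specific Cisco server.

```python
# From the text: up to 64 GB on a single DIMM.
GBYTES_PER_DIMM = 64

# Assumed (hypothetical) configuration for illustration:
sockets = 2               # two-socket server
channels_per_socket = 4   # memory channels per processor socket
dimms_per_channel = 3     # DIMMs populated on each channel

total_gb = sockets * channels_per_socket * dimms_per_channel * GBYTES_PER_DIMM
print(total_gb)  # 1536 GB in this assumed configuration
```

Even this modest assumed configuration yields well over a terabyte of DRAM, which is why system-level error rates can no longer be ignored.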
Increasing Bandwidth
Memory system bandwidth has also been increasing steadily. In addition to the multiple memory channels on
each processor socket, the speed of those channels has increased. Just a few years ago the top speed for DDR2
memory interfaces was 800 MT/s. Using advanced DDR4 memory, Cisco’s B200-M4 supports memory channels
operating at 2133 MT/s. Ever-increasing operating frequencies, while providing higher bandwidth, also result in
smaller bit times. As individual bit times decrease, timing margin also decreases, making it more difficult for
receiving circuitry to separate each bit from those that precede and follow it.
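The shrinking bit time can be computed directly from the transfer rates quoted above; a transfer rate of R million transfers per second gives a bit time of 10^6/R microseconds, i.e. 10^12/(R·10^6) picoseconds.

```python
def bit_time_ps(mt_per_s: float) -> float:
    """Duration of one transfer in picoseconds, given a rate in MT/s."""
    return 1e12 / (mt_per_s * 1e6)

ddr2_bit = bit_time_ps(800)    # DDR2 at 800 MT/s -> 1250 ps per bit
ddr4_bit = bit_time_ps(2133)   # DDR4 at 2133 MT/s -> ~469 ps per bit
print(ddr2_bit, ddr4_bit)
```

Going from DDR2-800 to DDR4-2133 shrinks each bit time to well under half its former duration, which is exactly the loss of timing margin the paragraph describes.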
Lower Operating Voltages
Another underlying technology trend is an ongoing reduction in operating voltages to lower power and cooling
requirements, and to accommodate the smaller transistors associated with advances in process technology.
DRAM voltages have decreased over the years, going from 2.5V to 1.8V to 1.5V to 1.35V to 1.2V as the industry has
shifted from DDR to DDR2 to DDR3 (including low-voltage DDR3L) to DDR4. As the operating voltages decrease, the available noise margin
also decreases, making it more difficult for receivers and sense amps to distinguish between a “1” and a “0”.
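The erosion of noise margin can be illustrated with the supply voltages listed above. Treating the available margin as simply proportional to supply voltage is a simplification for illustration; real margins also depend on signaling scheme and receiver design.

```python
# Supply voltages by DRAM generation, from the text (DDR3L is the
# low-voltage 1.35V variant of DDR3).
voltages = {"DDR": 2.5, "DDR2": 1.8, "DDR3": 1.5, "DDR3L": 1.35, "DDR4": 1.2}

# Signal swing of each generation relative to original DDR, as a rough
# proxy for available noise margin.
margin_vs_ddr = {gen: round(v / voltages["DDR"], 2) for gen, v in voltages.items()}
print(margin_vs_ddr["DDR4"])  # 0.48: under half the original swing
```

By this rough measure, a DDR4 receiver works with less than half the signal swing its DDR predecessor had, leaving far less room to distinguish a “1” from a “0” in the presence of noise.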