Cisco 4XIB Cable DDR Ready - 5m CAB-04XD-05= User's Manual
Product Code
CAB-04XD-05=
All contents are Copyright © 1992–2006 Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information.
However, the actual performance gain may be limited by the efficiency of the server bus architecture (PCI-X, PCI-Express, and so on), which must be taken into account to ensure the most effective use of the available bandwidth. For PCI and PCI-X 1.0 servers, InfiniBand 4X SDR provides balanced transmission, because those buses support 4 and 8 Gbps of bandwidth capacity (Table 2).
Table 2. Server PCI and InfiniBand Link Matrix

| Bus Technology | Backplane Capacity | InfiniBand 4X-SDR (8 Gbps) | 4X-DDR (16 Gbps) | 12X-SDR (24 Gbps) | 12X-DDR (48 Gbps) |
|---|---|---|---|---|---|
| PCI-X 1.0 (64-bit at 100 MHz) | 6.4 Gbps | X | | | |
| PCI-X 1.0 (64-bit at 133 MHz) | 8.5 Gbps | X | | | |
| PCI-X 2.0 (64-bit at 266 MHz) | 17 Gbps | X | – | | |
| PCI-X 2.0 (64-bit at 533 MHz) | 34 Gbps | X | – | – | – |
| PCI-Express (x8) | 16 Gbps | X | X | X | |
| PCI-Express (x12) | 24 Gbps | X | X | X | |
Note: The InfiniBand data rates quoted are 20 percent lower than the signaling rate, reflecting the overhead of the link's 8B/10B encoding.
Note: The table denotes ideal configurations for PCI-X based servers. Additional cards attached to the PCI-X bus will reduce the available bandwidth.
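The 20 percent figure in the first note can be sketched as a quick calculation. This is an illustrative helper, not part of the product documentation; it assumes the overhead is InfiniBand's 8B/10B line coding and the standard lane signaling rates of 2.5 Gbps for SDR and 5 Gbps for DDR:

```python
# Lane signaling rates in Gbps (assumption: standard InfiniBand lane rates).
LANE_SIGNAL_GBPS = {"SDR": 2.5, "DDR": 5.0}

def data_rate_gbps(width: int, mode: str) -> float:
    """Usable data rate of an InfiniBand link of `width` lanes.

    8B/10B encoding carries 8 data bits in every 10 signal bits,
    so the data rate is 80 percent of the signaling rate.
    """
    return width * LANE_SIGNAL_GBPS[mode] * 0.8

print(data_rate_gbps(4, "SDR"))   # 8.0  -> the 4X-SDR column in Table 2
print(data_rate_gbps(4, "DDR"))   # 16.0
print(data_rate_gbps(12, "SDR"))  # 24.0
print(data_rate_gbps(12, "DDR"))  # 48.0
```

The printed values reproduce the four InfiniBand column headings in Table 2.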
Although the names DDR and QDR imply that bandwidth is doubled or quadrupled, several factors, such as CPU and memory speed, PCI architecture (PCI-X, PCI-Express x8 or x12), application characteristics, drivers, and the physical cable plant, may prevent InfiniBand DDR transmission from delivering its full potential, so it is important to consider all components when looking to increase overall system performance. For example, although a PCI-Express x8 (16-Gbps) server can theoretically saturate a 4X DDR link, actual performance is lower. The application itself can also reduce performance: if the volume of data transferred exceeds the available physical RAM, the excess must be paged to hard disk.
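The point above, that the slowest component bounds delivered throughput, can be sketched as a minimal illustration (not taken from the datasheet; the numbers below are hypothetical):

```python
def effective_gbps(link_gbps: float, bus_gbps: float, app_gbps: float) -> float:
    """Upper bound on delivered throughput: the minimum across all stages
    in the path (InfiniBand link, server bus, application data source)."""
    return min(link_gbps, bus_gbps, app_gbps)

# A 4X DDR link (16 Gbps) behind a PCI-X 2.0 266 MHz bus (17 Gbps) whose
# application can only source 10 Gbps delivers at most 10 Gbps, not 16.
print(effective_gbps(16, 17, 10))  # 10
```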
Note: PCI-Express currently uses a maximum packet size of 88 bytes, consisting of a 24-byte header and a 64-byte data payload. This limits the efficiency and effective throughput of an x8 PCI-Express server to approximately 72 percent of 16 Gbps, or 11.6 Gbps. Comparing current 64-byte PCI-Express x8 InfiniBand SDR and DDR implementations, the performance difference between SDR and DDR is approximately 30 percent. However, new PCI-Express chipsets that increase the packet payload to 128 bytes will raise efficiency to about 85 percent of 16 Gbps, or approximately 13.5 Gbps. With 128-byte packets, the performance difference between PCI-Express x8 InfiniBand SDR and DDR implementations is expected to be approximately 60 percent. Note also that for payloads smaller than 64 bytes, throughput drops further because of the ratio of payload to the fixed 24-byte header.
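The note's arithmetic follows directly from the payload-to-packet ratio and can be reproduced with a short calculation (illustrative only; the 24-byte header and payload sizes are the figures quoted above):

```python
HEADER_BYTES = 24  # per-packet header quoted in the note

def pcie_efficiency(payload_bytes: int) -> float:
    """Fraction of the raw link rate carrying useful payload."""
    return payload_bytes / (payload_bytes + HEADER_BYTES)

def effective_gbps(raw_gbps: float, payload_bytes: int) -> float:
    """Effective throughput after header overhead."""
    return raw_gbps * pcie_efficiency(payload_bytes)

print(round(pcie_efficiency(64) * 100))        # 73 (~72 percent)
print(round(effective_gbps(16.0, 64), 1))      # 11.6 Gbps on PCI-Express x8
print(round(pcie_efficiency(128) * 100))       # 84 (~85 percent)
print(round(effective_gbps(16.0, 128), 1))     # 13.5 Gbps with 128-byte payloads
```

Shrinking the payload below 64 bytes drives the same ratio down further, matching the note's final observation.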
Like InfiniBand SDR, DDR and QDR transmission also use cut-through switching, although mixing transmission modes within the same network can be problematic because of differences in how the packets are physically transmitted. If different transmission rates are used, either the InfiniBand subnet manager must be topology-aware, switching SDR packets only to SDR links and DDR packets only to DDR links, or the switch fabric must be able to store and forward the packets to provide rate matching.
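The rate-matching rule described above can be sketched as a small decision function. This is a hypothetical illustration of the logic, not a Cisco API or subnet-manager interface:

```python
def forwarding_mode(ingress_rate: str, egress_rate: str) -> str:
    """Cut-through forwarding is only safe when the ingress and egress
    links run at the same rate; otherwise the switch must buffer the
    whole packet (store-and-forward) to provide rate matching."""
    return "cut-through" if ingress_rate == egress_rate else "store-and-forward"

print(forwarding_mode("DDR", "DDR"))  # cut-through
print(forwarding_mode("SDR", "DDR"))  # store-and-forward
```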