QuickSpecs
HP InfiniBand Options for HP BladeSystems c-Class
Overview
The switch module provides 16 downlink ports to connect to server blades in c-Class enclosures, and 16 QSFP uplink ports for inter-switch links or to connect to external servers. All ports are capable of supporting 20Gbps (DDR) bandwidth. A subnet manager must be provided; see the paragraphs on subnet managers below for more details.
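For reference, these link rates follow from the per-lane signaling rates. The short derivation below is not taken from this document; it assumes the standard 8b/10b line coding used by SDR, DDR, and QDR InfiniBand, under which the effective data rate is 80% of the signaling rate:

\[
\begin{aligned}
\text{4X DDR:}\quad & 4 \text{ lanes} \times 5\ \mathrm{Gb/s} = 20\ \mathrm{Gb/s\ signaling}, & 20 \times \tfrac{8}{10} = 16\ \mathrm{Gb/s\ data}\\
\text{4X QDR:}\quad & 4 \text{ lanes} \times 10\ \mathrm{Gb/s} = 40\ \mathrm{Gb/s\ signaling}, & 40 \times \tfrac{8}{10} = 32\ \mathrm{Gb/s\ data}
\end{aligned}
\]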
An InfiniBand fabric consists of one or more InfiniBand switches connected via inter-switch links. The most commonly deployed fabric topology is a fat tree or one of its variations. A subnet manager is required to manage and control an InfiniBand fabric. The subnet manager functionality can be provided either by a rack-mount InfiniBand switch with an embedded fabric manager (also known as an internally managed switch) or by host-based subnet manager software on a server connected to the fabric. OpenSM is a host-based subnet manager that runs on a server connected to the InfiniBand fabric; the Mellanox OFED software stack includes OpenSM for Linux, and Mellanox WinOF includes OpenSM for Windows. For comprehensive management and monitoring capabilities, Mellanox FabricIT™ is recommended for managing InfiniBand fabrics based on Mellanox InfiniBand products, and Voltaire Unified Fabric Manager™ (UFM) is recommended for managing InfiniBand fabrics based on Voltaire InfiniBand switch products and Mellanox ConnectX-based mezzanine HCAs running the Voltaire OFED stack.
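To make the host-based option concrete, here is a minimal sketch (not part of this document) that uses the libibverbs API shipped with the OFED stacks named above to check whether a subnet manager has brought the local HCA ports to the ACTIVE state. The file name sm_check.c is hypothetical; build it on a Linux server with OFED installed using: cc sm_check.c -libverbs

/* sm_check.c - a minimal sketch, not from this document.  It queries each
 * local HCA port via libibverbs and reports whether a subnet manager has
 * configured it.  Build: cc sm_check.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;
    struct ibv_device **list = ibv_get_device_list(&num_devices);
    if (!list || num_devices == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }
    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        struct ibv_device_attr dev_attr;
        if (!ctx)
            continue;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            for (int port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr pa;
                if (ibv_query_port(ctx, port, &pa))
                    continue;
                /* ACTIVE means a subnet manager has assigned the port a
                 * LID and programmed forwarding; INIT means the link is
                 * physically up but no subnet manager has swept it yet. */
                printf("%s port %d: state=%s lid=%u\n",
                       ibv_get_device_name(list[i]), port,
                       pa.state == IBV_PORT_ACTIVE ? "ACTIVE" :
                       pa.state == IBV_PORT_INIT   ? "INIT (no SM?)" :
                                                     "DOWN/other",
                       (unsigned)pa.lid);
            }
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}

A port that stays in INIT is the classic symptom of a fabric brought up without any subnet manager running.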
An embedded fabric manager is available on the Voltaire internally managed 36-port 4X QDR switch and on the 24-port, 96-port, and 288-port DDR switches. Please refer to the HP InfiniBand rack switch QuickSpecs for information about InfiniBand rack switches.
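As a rough sizing sketch (standard fat-tree arithmetic, not a statement from this document): in a non-blocking two-tier fat tree built from k-port switch elements, each edge switch dedicates k/2 ports to hosts and k/2 ports to uplinks, so the fabric scales to at most

\[
N_{\max} = \frac{k^{2}}{2}
\]

hosts, giving 288 hosts for 24-port elements and 648 hosts for 36-port elements. This is consistent with the port counts of the director-class DDR switches listed above.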
The following InfiniBand products based on QLogic technology are available from HP for the HP BladeSystem c-Class:
QLogic 4X QDR IB Mezzanine HCA for HP BladeSystem c-Class
QLogic BLc 4X QDR IB Switch for HP BladeSystem c-Class
QLogic BLc 4X QDR IB Management Module
The QLogic 4X QDR IB Mezzanine HCA is based on the QLogic TrueScale ASIC, a hardware architecture that delivers unprecedented levels of performance, reliability, and scalability, making it an ideal solution for highly scaled High Performance Computing (HPC) applications and for high-throughput, low-latency enterprise applications.
InfiniBand host stack software (drivers) must be installed on servers connected to the InfiniBand fabric. For HCAs based on QLogic technology, HP supports the QLogic OFED driver stack on 64-bit Linux operating systems.
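As a quick sanity check that the host stack is loaded, the sketch below (not from this document; stack_check.c is a hypothetical name, and the sysfs layout shown is the standard one used by Linux InfiniBand drivers) lists each HCA the driver has registered under /sys/class/infiniband together with its reported firmware version:

/* stack_check.c - a minimal sketch, not from this document.  It verifies
 * that an InfiniBand host stack is loaded by listing the devices it has
 * registered in sysfs.  Build: cc stack_check.c -o stack_check */
#include <stdio.h>
#include <string.h>
#include <dirent.h>

int main(void)
{
    /* Each HCA registered by the host driver appears as a directory here. */
    const char *root = "/sys/class/infiniband";
    DIR *dir = opendir(root);
    if (!dir) {
        fprintf(stderr, "%s missing: host stack not loaded?\n", root);
        return 1;
    }
    struct dirent *e;
    while ((e = readdir(dir)) != NULL) {
        if (e->d_name[0] == '.')
            continue;
        char path[512];
        char fw[64] = "unknown";
        snprintf(path, sizeof path, "%s/%s/fw_ver", root, e->d_name);
        FILE *f = fopen(path, "r");   /* firmware version, when exposed */
        if (f) {
            if (fgets(fw, sizeof fw, f))
                fw[strcspn(fw, "\n")] = '\0';
            fclose(f);
        }
        printf("device %s, firmware %s\n", e->d_name, fw);
    }
    closedir(dir);
    return 0;
}

On a healthy host the output names each HCA (for example mlx4_0 for a ConnectX HCA or qib0 for a TrueScale HCA) with its firmware string.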
The QLogic BLc 4X QDR IB Switch for HP BladeSystem c-Class uses the QLogic TrueScale ASIC architecture, designed to cost-effectively link workgroup resources into a cluster or to provide an edge switch option for a larger fabric. Customers can manage the modular IB switch either internally or externally. The QLogic BLc 4X QDR IB Switch also has an optional management module that includes an embedded subnet manager; this management module is an option for both the 32-port QDR BladeSystem switch and the 36-port QDR edge switch. When combined with the optional InfiniBand Fabric Suite software, users can manage up to a 288-node fabric using only the management capability of the unit, without requiring additional host processing. In a bladed environment this eliminates the need to dedicate a server node to fabric management.
As with the Mellanox and Voltaire products described above, a subnet manager is required to manage an InfiniBand fabric built from QLogic InfiniBand products. The QLogic OFED software stack includes the host-based OpenSM subnet manager for Linux. For comprehensive management and monitoring capability, the QLogic InfiniBand Fabric Suite (IFS) is recommended for managing InfiniBand fabrics based on QLogic InfiniBand products.
HP supports InfiniBand copper and fiber optic cables with CX4 to CX4, CX4 to QSFP, and QSFP to QSFP connectors:
CX4 to CX4 copper cables range from 0.5M to 8M for HCA-to-switch or inter-switch links at DDR speed, and up to 12M for certain inter-switch links at DDR speed.
CX4 to CX4 fiber optic cables range from 1M to 100M for HCA-to-switch or inter-switch links at DDR speed. Please note that only 12 ports on the Voltaire 24-port DDR switches support fiber optic cables.
CX4 to QSFP copper cables range from 1M to 5M for HCA-to-switch or inter-switch links at either DDR or QDR speed.
QSFP to QSFP copper cables range from 1M to 7M for HCA-to-switch or inter-switch links at either DDR or QDR speed (please note that QLogic QDR switches only support up to 5M at QDR speed), and up to 10M at DDR speed.
Please refer to the HP InfiniBand cable QuickSpecs for more details on supported InfiniBand cables.
DA - 12586   Worldwide QuickSpecs — Version 13 — 1.22.2010