HP 649283-B21 User Manual

HP supports 56 Gbps Fourteen Data Rate (FDR) and 40 Gbps 4x Quad Data Rate (QDR) InfiniBand products, including Host Channel
Adapters (HCAs), HP FlexLOM adaptors, switches, and cables for HP ProLiant and HP Integrity servers.
For details on InfiniBand support for HP BladeSystem c-Class and server blades, please refer to the HP InfiniBand for HP
BladeSystem c-Class QuickSpecs.
HP supports InfiniBand products from InfiniBand technology partners Mellanox and QLogic.
The following InfiniBand adaptor products based on Mellanox technologies are available from HP:
HP IB FDR/EN 10/40Gb 2P 544QSFP Adaptor (Dual IP/IB)
HP IB FDR/EN 10/40Gb 2P 544FLR-QSFP Adaptor (Dual IP/IB)
HP IB QDR/EN 10Gb 2P 544FLR-QSFP Adaptor (Dual IP/IB)
HP IB 4X QDR CX-2 PCI-e G2 Dual-port HCA
The HP IB FDR/EN 10/40Gb 2P 544QSFP Adaptor, HP IB FDR/EN 10/40Gb 2P 544FLR-QSFP Adaptor, and HP IB QDR/EN 10Gb
2P 544FLR-QSFP Adaptor are based on Mellanox ConnectX-3 IB technology. The HP IB 4X QDR CX-2 PCI-e G2 Dual-port HCA is
based on Mellanox ConnectX-2 IB technology. The FDR IB HCAs deliver low latency and up to 56 Gbps (FDR) bandwidth for
performance-driven server and storage clustering applications in High-Performance Computing (HPC) and enterprise data centers.
The HP IB FDR/EN 10/40Gb 2P Adaptors can also operate as dual 40 or 10 Gb Ethernet ports. The HP IB FDR/EN 10/40Gb 2P 544
HCA cards are designed for PCI Express 3.0 x8 connectors on HP ProLiant Gen8 servers.
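As a rough illustration of how the quoted 40 Gbps and 56 Gbps figures relate to usable bandwidth, the sketch below applies the standard InfiniBand lane counts and line encodings (8b/10b for QDR, 64b/66b for FDR). The arithmetic and function names are illustrative assumptions, not figures from this QuickSpecs.

```python
# Hedged sketch: back-of-envelope effective data rate of a 4x InfiniBand link.
# Assumes standard IBTA signaling rates and encodings; not quoted in this document.

def effective_gbps(lane_gbaud, lanes, data_bits, total_bits):
    """Raw signaling rate across all lanes, scaled by encoding efficiency."""
    return lane_gbaud * lanes * data_bits / total_bits

# QDR: 10 Gbaud per lane, 4 lanes, 8b/10b encoding -> 40 Gbps signaled, 32 Gbps data.
qdr_data = effective_gbps(10.0, 4, 8, 10)

# FDR: 14.0625 Gbaud per lane, 4 lanes, 64b/66b encoding -> ~56 Gbps signaled, ~54.5 Gbps data.
fdr_data = effective_gbps(14.0625, 4, 64, 66)

print(f"QDR 4x effective data rate: {qdr_data:.1f} Gbps")
print(f"FDR 4x effective data rate: {fdr_data:.1f} Gbps")
```

The move from 8b/10b to 64b/66b encoding is why FDR delivers proportionally more usable bandwidth per signaled gigabit than QDR.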
InfiniBand host stack software (drivers) is required on servers connected to the InfiniBand fabric. For HCAs based on Mellanox
technologies, HP supports the Mellanox OFED driver stack on 64-bit Linux operating systems and the Mellanox WinOF driver
stack on Microsoft Windows HPC Server 2008.
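Once the OFED stack is installed, port state on Linux is commonly inspected with tools such as `ibstat`. The sketch below parses ibstat-style text to confirm a port is active at the FDR rate; the sample output and helper function are illustrative assumptions, not output captured in this document.

```python
# Hedged sketch: check an HCA port from ibstat-style output.
# SAMPLE is a hand-written illustration of typical output, not a real capture.

SAMPLE = """\
CA 'mlx4_0'
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 56
"""

def port_status(text):
    """Return (state, rate_gbps) parsed from ibstat-like text."""
    state, rate = None, None
    for line in text.splitlines():
        key, _, value = line.strip().partition(": ")
        if key == "State":
            state = value
        elif key == "Rate":
            rate = int(value)
    return state, rate

state, rate = port_status(SAMPLE)
print(state, rate)  # Active 56
```

In practice one would feed this the output of `subprocess.run(["ibstat"], ...)` on a host with the OFED tools installed.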
An InfiniBand fabric is constructed with one or more InfiniBand switches connected via inter-switch links. The most commonly
deployed fabric topology is a fat tree or its variations. A subnet manager is required to manage an InfiniBand fabric. OpenSM is a
host-based subnet manager that runs on a server connected to the InfiniBand fabric. Mellanox OFED software stack includes
OpenSM for Linux, and Mellanox WinOF includes OpenSM for Windows. For comprehensive management and monitoring
capabilities, Mellanox FabricIT™ is recommended for managing the InfiniBand fabric based on Mellanox InfiniBand products.
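To make the fat-tree topology concrete: a non-blocking two-level fat tree built from radix-36 switches (such as the 36-port models listed below) scales as a simple function of switch radix. This is standard fat-tree arithmetic under assumed non-blocking wiring, not a capacity figure stated in this QuickSpecs.

```python
# Hedged sketch: host capacity of a non-blocking two-level fat tree
# built from fixed-radix switches (standard fat-tree arithmetic).

def fat_tree_hosts(radix):
    """Max hosts in a non-blocking 2-level fat tree of radix-r switches."""
    down = radix // 2   # each leaf splits ports: half to hosts, half to spines
    leaves = radix      # each of the r/2 spines links one port to every leaf
    return down * leaves  # r * r/2 hosts total

print(fat_tree_hosts(36))  # 648 hosts from 36-port switches
```

Larger clusters add a third switch tier, which is effectively what the director-class switches listed below package into a single chassis.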
The following InfiniBand switch products based on Mellanox technologies are available from HP:
Mellanox IB FDR 36-port Managed switch (front-to-rear cooling)
Mellanox IB FDR 36-port Managed switch with reversed airflow fan unit (rear-to-front cooling)
Mellanox IB FDR 36-port switch (front-to-rear cooling)
Mellanox IB FDR 36-port switch with reversed airflow fan unit (rear-to-front cooling)
Voltaire IB 4X QDR 36-port switch (front-to-rear cooling)
Voltaire IB 4X QDR 36-port switch with reversed airflow fan unit (rear-to-front cooling)
Voltaire IB 4X QDR 162-port (144 ports fully non-blocking) director switch
Voltaire IB 4X QDR 324-port director switch
Mellanox IB 4X QDR Modular Switches
The front-to-rear cooling switches draw air from the front (power-supply side) to the rear (port side); the rear-to-front cooling
switches draw air from the rear (port side) to the front.
For HCAs based on Mellanox technologies, HP also supports the Mellanox OFED driver stack on 64-bit Linux operating systems. A
subnet manager is required to manage an InfiniBand fabric. For comprehensive management and monitoring capabilities, Mellanox
Unified Fabric Manager™ (UFM) is recommended for managing InfiniBand fabrics based on Voltaire InfiniBand switch products.
QuickSpecs
HP InfiniBand Options for HP ProLiant and Integrity Servers
Overview
DA - 13078   North America — Version 18 — March 6, 2012