QLogic IB6054601-00 D User Guide

■ Glossary of technical terms
In addition, the InfiniPath Install Guide contains information on InfiniPath hardware 
and software installation.
1.3 Overview
The material in this documentation pertains to an InfiniPath cluster. This is defined 
as a collection of nodes, each attached to an InfiniBand™-based fabric through the 
InfiniPath Interconnect. The nodes are Linux-based computers, each having up to 
eight processors.
The InfiniPath interconnect is InfiniBand 4X, with a raw signaling rate of 10 Gb/s and a 
data rate of 8 Gb/s; the difference is due to InfiniBand's 8b/10b link encoding.
InfiniPath utilizes standard, off-the-shelf InfiniBand 4X switches and cabling. 
InfiniPath OpenFabrics software is interoperable with other vendors’ InfiniBand 
HCAs running compatible OpenFabrics releases. There are two options for Subnet 
Management in your cluster:
■ Use the Subnet Manager on one or more managed switches supplied by your 
InfiniBand switch vendor.
■ Use the OpenSM component of OpenFabrics (see the example after this list).
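If you choose the OpenSM option, the subnet manager typically runs on one node in the 
fabric. The following is a minimal sketch assuming a standard OpenFabrics/OFED 
installation in which OpenSM is packaged with the opensmd init script; the script and 
package names are not InfiniPath-specific and may differ on your distribution:

# Start the OpenSM subnet manager as a service (run as root):
/etc/init.d/opensmd start

# Alternatively, run opensm directly in the foreground for a quick test:
opensm

Only one subnet manager needs to be the active master per InfiniBand subnet; additional 
instances, if started, act as standbys.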
1.4 Switches
The InfiniPath interconnect is designed to work with all InfiniBand-compliant 
switches. Use of OpenSM as a subnet manager is now supported. OpenSM is part 
of the OpenFabrics component of this release.
1.5 Interoperability
InfiniPath participates in the standard InfiniBand Subnet Management protocols for 
configuration and monitoring. InfiniPath OpenFabrics (including IPoIB) is 
interoperable with other vendors’ InfiniBand HCAs running compatible OpenFabrics 
releases. The InfiniPath MPI and Ethernet emulation stacks (ipath_ether) are not 
interoperable with other InfiniBand Host Channel Adapters (HCA) and Target 
Channel Adapters (TCA). Instead, InfiniPath uses an InfiniBand-compliant 
vendor-specific protocol that is highly optimized for MPI and TCP between 
InfiniPath-equipped hosts.
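As an illustration of interoperability at the IPoIB level, basic connectivity between an 
InfiniPath node and a node using another vendor's HCA can be checked with ordinary IP 
tools. This is a sketch only; the interface name ib0 and the address 192.168.1.20 are 
placeholder assumptions, not values from this guide:

# On the InfiniPath node, confirm that the IPoIB interface is up
# (ib0 is commonly the first IPoIB interface):
ifconfig ib0

# Ping the IPoIB address of the node with the other vendor's HCA
# (placeholder address):
ping -c 3 192.168.1.20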