Mellanox Technologies ConnectX-3 VPI
Product Code: MCX354A-QCBT
ConnectX®-3 VPI Single/Dual-Port Adapters with Virtual Protocol Interconnect®
350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com
3546PB Rev 1.2
© Copyright 2012. Mellanox Technologies. All rights reserved.
Mellanox, Mellanox logo, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd.
FabricIT, MLNX-OS, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
I/O Virtualization — ConnectX-3 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-3 gives data center managers better server utilization while reducing cost, power, and cable complexity.
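As a hedged illustration of the SR-IOV capability described above: on a Linux host running the mlx4 driver stack, virtual functions are typically enabled through a module parameter or the generic PCI sysfs interface. The VF count and PCI address below are placeholders, not values from this brief.

```shell
# Load the mlx4_core driver requesting 4 virtual functions
# (num_vfs/probe_vf are mlx4_core module parameters; 4 is illustrative)
modprobe mlx4_core num_vfs=4 probe_vf=4

# Alternatively, on newer kernels, via the generic PCI sysfs interface
# (0000:04:00.0 is a placeholder for the adapter's PCI address)
echo 4 > /sys/bus/pci/devices/0000:04:00.0/sriov_numvfs

# Verify that the virtual functions appeared on the PCI bus
lspci | grep -i mellanox
```

Each VF then appears to the hypervisor as a distinct PCI device that can be passed through to a VM, which is what provides the per-VM dedicated resources and isolation the paragraph above refers to.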
Storage Accelerated — A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage InfiniBand RDMA for high-performance storage access.
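As one hedged example of a file access protocol leveraging RDMA, NFS can be mounted over an RDMA transport on a Linux client; the server path and mount point below are placeholders.

```shell
# Load the NFS client RDMA transport module
modprobe xprtrdma

# Mount an NFS export over RDMA; 20049 is the IANA-assigned NFS/RDMA port.
# "server:/export" and "/mnt/nfs" are placeholder names.
mount -o rdma,port=20049 server:/export /mnt/nfs
```

With the `rdma` mount option, bulk NFS data movement bypasses the TCP stack and uses RDMA transfers, which is the storage-access acceleration the paragraph above describes.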
Software Support
All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, and Citrix XenServer. ConnectX-3 VPI adapters support OpenFabrics-based RDMA protocols and software and are compatible with configuration and management tools from OEMs and operating system vendors.
INFINIBAND
– IBTA Specification 1.2.1 compliant
– Hardware-based congestion control
– 16 million I/O channels
– 256 to 4Kbyte MTU, 1Gbyte messages
ENHANCED INFINIBAND
– Hardware-based reliable transport
– Collective operations offloads
– GPU communication acceleration
– Hardware-based reliable multicast
– Extended Reliable Connected transport
– Enhanced Atomic operations
ETHERNET
– IEEE Std 802.3ae 10 Gigabit Ethernet
– IEEE Std 802.3ba 40 Gigabit Ethernet
– IEEE Std 802.3ad Link Aggregation and Failover
– IEEE Std 802.3az Energy Efficient Ethernet
– IEEE Std 802.1Q, .1p VLAN tags and priority
– IEEE Std 802.1Qau Congestion Notification
– IEEE P802.1Qaz D0.2 ETS
– IEEE P802.1Qbb D1.0 Priority-based Flow Control
– Jumbo frame support (9.6KB)
HARDWARE-BASED I/O VIRTUALIZATION
– Single Root IOV
– Address translation and protection
– Dedicated adapter resources
– Multiple queues per virtual machine
– Enhanced QoS for vNICs
– VMware NetQueue support
ADDITIONAL CPU OFFLOADS
– RDMA over Converged Ethernet
– TCP/UDP/IP stateless offload
– Intelligent interrupt coalescence
FLEXBOOT™ TECHNOLOGY
– Remote boot over InfiniBand
– Remote boot over Ethernet
– Remote boot over iSCSI
PROTOCOL SUPPORT
– Open MPI, OSU MVAPICH, Intel MPI, MS MPI, Platform MPI
– TCP/UDP, EoIB, IPoIB, SDP, RDS
– SRP, iSER, NFS RDMA
– uDAPL
PCI EXPRESS INTERFACE
– PCIe Base 3.0 compliant, 1.1 and 2.0 compatible
– 2.5, 5.0, or 8.0GT/s link rate x8
– Auto-negotiates to x8, x4, x2, or x1
– Support for MSI/MSI-X mechanisms
CONNECTIVITY
– Interoperable with InfiniBand or 10/40GbE switches
– Passive copper cable with ESD protection
– Powered connectors for optical and active cable support
– QSFP to SFP+ connectivity through QSA module
OPERATING SYSTEMS/DISTRIBUTIONS
– Novell SLES, Red Hat Enterprise Linux (RHEL), and other Linux distributions
– Microsoft Windows Server 2008/CCS 2003, HPC Server 2008
– OpenFabrics Enterprise Distribution (OFED)
– OpenFabrics Windows Distribution (WinOF)
– VMware ESX Server 3.5, vSphere 4.0/4.1
FEATURE SUMMARY*
COMPATIBILITY
Ordering Part Number   VPI Ports                    Dimensions w/o Brackets
MCX353A-QCBT           Single QDR 40Gb/s or 10GbE   14.2cm x 5.2cm
MCX354A-QCBT           Dual QDR 40Gb/s or 10GbE     14.2cm x 6.9cm
MCX353A-FCBT           Single FDR 56Gb/s or 40GbE   14.2cm x 5.2cm
MCX354A-FCBT           Dual FDR 56Gb/s or 40GbE     14.2cm x 6.9cm
*This brief describes hardware features and capabilities. Please refer to the driver release notes on mellanox.com for feature availability.