Mellanox Technologies ConnectX-3 VPI MCX354A-QCBT Product Brief
Product Code
MCX354A-QCBT
©2012 Mellanox Technologies. All rights reserved.
PRODUCT BRIEF
INFINIBAND/VPI
ADAPTER CARDS
ConnectX-3 adapter cards with Virtual Protocol Interconnect (VPI)
supporting InfiniBand and Ethernet connectivity provide the highest
performing and most flexible interconnect solution for PCI Express Gen3
servers used in Enterprise Data Centers, High-Performance Computing,
and Embedded environments.
Clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. ConnectX-3 with VPI also simplifies system development by serving multiple fabrics with one hardware design.
Virtual Protocol Interconnect
VPI-enabled adapters allow any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network, leveraging a consolidated software stack. With auto-sense capability,
each ConnectX-3 port can identify and operate
on InfiniBand, Ethernet, or Data Center Bridging
(DCB) fabrics. FlexBoot™ provides additional
flexibility by enabling servers to boot from remote
InfiniBand or LAN storage targets. ConnectX-3
with VPI and FlexBoot simplifies I/O system
design and makes it easier for IT managers to
deploy infrastructure that meets the challenges of
a dynamic data center.
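The per-port protocol selection described above can be inspected and configured from a Linux host running the Mellanox OFED stack. A minimal configuration sketch follows; the `/dev/mst` device path is an illustrative assumption, so check your own `mst status` output before running it:

```shell
# Show each port's current link layer (InfiniBand or Ethernet)
ibv_devinfo | grep -Ei 'hca_id|link_layer'

# Start the Mellanox Software Tools service to expose the config device
mst start

# Set both ports to VPI auto-sense (1 = IB, 2 = ETH, 3 = VPI/auto-sense);
# the /dev/mst path below is an example -- see `mst status` for yours
mlxconfig -d /dev/mst/mt4099_pciconf0 set LINK_TYPE_P1=3 LINK_TYPE_P2=3

# A reboot (or driver restart) is required for the new port type to take effect
```

With `LINK_TYPE` left at auto-sense, the port negotiates its fabric type at link-up, which is what allows one MCX354A-QCBT card to serve either fabric without a firmware swap.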
World-Class Performance
InfiniBand — ConnectX-3 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. Efficient computing is achieved by offloading protocol processing and data movement overhead from the CPU, such as RDMA and
Send/Receive semantics, allowing more processor power for the application. CORE-Direct™ brings the next level of performance improvement by offloading application overhead such as data broadcasting and gathering, as well as global synchronization communication routines. GPU communication acceleration provides additional efficiency by eliminating unnecessary internal data copies, significantly reducing application run time. ConnectX-3 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
RDMA over Converged Ethernet — ConnectX-3, utilizing IBTA RoCE technology, delivers similarly low latency and high performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient, low-latency RDMA services over Layer 2 Ethernet. With link-level interoperability in existing Ethernet infrastructure, network administrators can leverage existing data center fabric management solutions.
Sockets Acceleration — Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over InfiniBand or 10/40GbE. The hardware-based stateless offload engines in ConnectX-3 reduce the CPU overhead of IP packet transport. Sockets acceleration software further increases performance for latency-sensitive applications.
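The stateless offloads mentioned here (checksum, TCP segmentation, receive offload, and similar) are exposed through standard Linux tooling. A quick way to inspect and toggle them on a ConnectX-3 Ethernet interface; the interface name `eth2` is an assumption, so substitute your own:

```shell
# List the offloads currently enabled on the interface
ethtool -k eth2

# Enable TCP segmentation offload and generic receive offload explicitly
ethtool -K eth2 tso on gro on

# Show ring buffer sizes, which also affect sustained throughput
ethtool -g eth2
```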
ConnectX®-3 VPI
Single/Dual-Port Adapters with Virtual Protocol Interconnect®
BENEFITS
– One adapter for InfiniBand, 10/40 Gig
Ethernet or Data Center Bridging fabrics
– World-class cluster, network, and storage
performance
– Guaranteed bandwidth and low-latency
services
– I/O consolidation
– Virtualization acceleration
– Power efficient
– Scales to tens-of-thousands of nodes
KEY FEATURES
– Virtual Protocol Interconnect
– 1us MPI ping latency
– Up to 56Gb/s InfiniBand or 40 Gigabit
Ethernet per port
– Single- and Dual-Port options available
– PCI Express 3.0 (up to 8GT/s)
– CPU offload of transport operations
– Application offload
– GPU communication acceleration
– Precision Clock Synchronization
– End-to-end QoS and congestion control
– Hardware-based I/O virtualization
– Ethernet encapsulation (EoIB)
– RoHS-R6