Data Sheet
Fujitsu InfiniBand HCA 40 Gb 1/2 port QDR enhanced
Product code: S26361-F4475-L103
www.fujitsu.com/fts
InfiniBand Host Channel Adapter 40 Gb 1/2 port QDR enhanced, based on Mellanox MCX35xA-QCBT
Main Features and Benefits
- Virtual Protocol Interconnect: one adapter for InfiniBand, 10 Gigabit Ethernet or Data Center Bridging fabrics (see the port-query sketch after this list)
- Up to 40 Gb/s InfiniBand or 10 Gigabit Ethernet per port: guaranteed bandwidth and low-latency services
- Hardware-based I/O virtualization: virtualization acceleration
- Single and Dual port options available: world-class cluster, network and storage performance
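As a hedged illustration of Virtual Protocol Interconnect, the following minimal sketch in C, assuming a Linux host with libibverbs installed (file name and error handling are illustrative, not part of the product documentation), opens the first RDMA device and reports whether each port is currently running InfiniBand or Ethernet as its link layer:

    /* vpi_ports.c: minimal sketch, assuming libibverbs is installed.
     * A VPI adapter may run InfiniBand on one port and Ethernet on
     * the other; this prints the active link layer of each port. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int num;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(list[0]);
        if (!ctx) {
            fprintf(stderr, "cannot open device\n");
            return 1;
        }

        struct ibv_device_attr dev_attr;
        ibv_query_device(ctx, &dev_attr);

        /* Ports are numbered from 1. */
        for (uint8_t p = 1; p <= dev_attr.phys_port_cnt; p++) {
            struct ibv_port_attr port_attr;
            ibv_query_port(ctx, p, &port_attr);
            printf("port %u: %s\n", p,
                   port_attr.link_layer == IBV_LINK_LAYER_ETHERNET
                       ? "Ethernet" : "InfiniBand");
        }

        ibv_close_device(ctx);
        ibv_free_device_list(list);
        return 0;
    }

Compile with, for example, cc vpi_ports.c -libverbs; on a VPI adapter the two ports can report different link layers depending on how they are configured.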
InfiniBand Host Channel Adapters (HCAs) enable data exchange between all the devices connected in an InfiniBand network.
A well-functioning networked IT infrastructure is essential for managing and controlling a company's critical business processes. The wide range of complex information transported across the network depends on fast and reliable data processing by the InfiniBand HCA.
InfiniBand technology provides a low-latency, 
high-bandwidth interconnect that can help 
increase scalability and performance in 
high-performance computing (HPC) cluster 
environments.
InfiniBand HCA 40 Gb 1/2 port QDR enhanced
The enhanced 40 Gb/s InfiniBand Host Channel Adapters (HCAs) are based on ConnectX-3 silicon and have a PCIe 3.0 interface. They provide the highest-performing interconnect solution for Enterprise Data Centers, Web 2.0, Cloud Computing, High-Performance Computing, and embedded environments. Clustered databases, parallelized applications, transactional services and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation.
InfiniBand adapters deliver industry-leading bandwidth with ultra-low latency and efficient computing for performance-driven server and storage clustering applications. Network protocol processing and data movement, including RDMA and Send/Receive semantics, are completed in the adapter without CPU intervention. Application acceleration and GPU communication acceleration bring further levels of performance improvement. The adapters' advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
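For illustration, the following minimal sketch shows how such an offloaded operation is posted through the libibverbs API: a one-sided RDMA write is handed to the adapter, which then moves the data into the remote node's memory with no CPU copy on either side. The function name and parameters are illustrative, and it assumes a reliable-connected queue pair that is already established plus a remote address and rkey exchanged out of band; that setup is omitted here.

    /* Sketch: posting a one-sided RDMA write via libibverbs.
     * Assumes qp is an RC queue pair already in the RTS state and
     * that remote_addr/rkey were exchanged out of band; connection
     * setup is omitted for brevity. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                        void *local_buf, uint32_t len,
                        uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge;
        struct ibv_send_wr wr, *bad_wr = NULL;

        memset(&sge, 0, sizeof(sge));
        sge.addr   = (uintptr_t)local_buf;  /* local registered buffer */
        sge.length = len;
        sge.lkey   = mr->lkey;              /* key from ibv_reg_mr() */

        memset(&wr, 0, sizeof(wr));
        wr.opcode     = IBV_WR_RDMA_WRITE;  /* one-sided write */
        wr.send_flags = IBV_SEND_SIGNALED;  /* request a completion entry */
        wr.sg_list    = &sge;
        wr.num_sge    = 1;
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;

        /* Hand the work request to the HCA; the data movement itself
         * is then performed by the adapter hardware. */
        return ibv_post_send(qp, &wr, &bad_wr);
    }

The IBV_WR_RDMA_WRITE opcode is what makes the transfer one-sided: the remote CPU is never involved in the data movement, which is exactly the offload behavior described above.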