Data Sheet
FUJITSU InfiniBand HCA 56 Gb 1/2 port FDR
Product code: S26361-F4533-L202

InfiniBand Host Channel Adapter 56 Gb 1/2 port FDR based on Mellanox MCX35xA-FCAT
Main Features and Benefits
- Virtual Protocol Interconnect
- One adapter for InfiniBand, 10/40 Gigabit Ethernet or Data Center Bridging fabrics
- Up to 56 Gb/s InfiniBand or 40 Gigabit Ethernet per port
- Guaranteed bandwidth and low-latency services
- Hardware-based I/O virtualization
- Virtualization acceleration
- Single and dual port options available
- World-class cluster, network and storage performance
InfiniBand Host Channel Adapters (HCAs) enable data exchange between all the devices connected in an InfiniBand network.
A well-functioning networked IT infrastructure is essential for managing and controlling critical business processes in a company. The wide range of complex information transported across the network relies on fast and reliable data processing by the InfiniBand HCA. InfiniBand technology provides a low-latency, high-bandwidth interconnect that can help increase scalability and performance in high-performance computing (HPC) cluster environments.
InfiniBand HCA 56 Gb 1/2 port FDR
56 Gb/s InfiniBand Host Channel Adapters (HCAs) provide the highest-performing interconnect solution for Enterprise Data Centers, Web 2.0, Cloud Computing, High-Performance Computing, and embedded environments. Clustered databases, parallelized applications, transactional services and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation.
InfiniBand adapters deliver industry-leading bandwidth with ultra-low latency and efficient computing for performance-driven server and storage clustering applications. Network protocol processing and data movement, such as RDMA and Send/Receive semantics, are handled in the adapter without CPU intervention. Application acceleration and GPU communication acceleration bring further levels of performance improvement. The adapters' advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
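
To make the offload model above concrete, the following minimal sketch uses libibverbs, the standard OpenFabrics verbs API supported by Mellanox-based HCAs such as this one. It only prepares the resources an application needs before posting Send/Receive or RDMA work requests; the buffer size and the abbreviated error handling are illustrative assumptions, not adapter-specific values.

/* Minimal kernel-bypass setup with libibverbs (rdma-core).
 * Build, assuming rdma-core is installed: gcc demo.c -libverbs
 * Error handling is abbreviated for brevity. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA device found\n");
        return 1;
    }

    /* Open the first HCA found on this host. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    ibv_free_device_list(devs);
    if (!ctx) return 1;

    /* Protection domain: scopes which queue pairs may touch which memory. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer so the adapter can DMA it directly (zero copy).
     * 4096 bytes is an illustrative size, not an adapter requirement. */
    size_t len = 4096;
    void *buf = malloc(len);
    memset(buf, 0, len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* Completion queue: the HCA reports finished work requests here,
     * so the CPU only polls for completions instead of moving data. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

    printf("device: %s, MR lkey=0x%x rkey=0x%x\n",
           ibv_get_device_name(ctx->device),
           (unsigned)mr->lkey, (unsigned)mr->rkey);

    /* A real application would now create a queue pair, connect it to a
     * peer, and post ibv_post_send()/ibv_post_recv() work requests. */
    ibv_destroy_cq(cq);
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    return 0;
}

Once a queue pair is connected to a peer, ibv_post_send() hands a work request to the adapter, which moves the registered memory by DMA and reports a completion on the CQ; the host CPU never copies the payload, which is the "without CPU intervention" behavior described above.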