Fujitsu PY Eth Mezz Card 10Gb 2 Port V2 S26361-F3997-L1 Data Sheet

Product codes
S26361-F3997-L1
www.fujitsu.com/fts
Intel® Dual port 10 Gbit/s Ethernet Mezzanine Card  
High performance Intel® 82599 based 10 GbE dual port mezzanine card for PRIMERGY server blades
To connect your PRIMERGY Blade Server to outside networks and storage, Fujitsu offers a variety of options supporting familiar standards such as Ethernet, Fibre Channel, SAS and InfiniBand. These cards connect directly to the high-performance midplane and guarantee lossless, highly efficient data transfer to and from the Connection Blades. To meet your needs optimally, Fujitsu provides a wide range of Mezzanine Cards, from 1 and 10 Gbit/s Ethernet, 8 and 16 Gbit/s Fibre Channel and 40 and 56 Gbit/s InfiniBand to 6 Gbit/s SAS with RAID functionality. Converged Network Adapters (CNAs) that provide multiple network connections, Fibre Channel over Ethernet (FCoE) and iSCSI are also part of the portfolio.
Dual port 10 Gbit/s Ethernet Mezzanine Card
The most flexible and scalable Ethernet Mezzanine Card for today's demanding datacenter environments, accelerating LAN traffic for PRIMERGY BX900 server blades and the mission-critical applications running on them. Especially in virtualized environments with a growing number of VMs per physical server, this controller combines unmatched features with reliable performance.
With up to four physical 10 Gb ports (via two Mezzanine Cards) in one dual-socket server blade, the overall performance boost is dramatic.
Operating in emulation mode in conjunction with 
the Virtual Switch of a Virtual Machine Manager, 
the integrated VMDq technology offloads data 
sorting and copying from the Virtual Switch to 
the LAN controller; this is the optimal solution 
for a large number of VMs running standard 
applications with limited bandwidth and latency 
requirements. Larger, mission-critical applications 
on the other hand require dedicated I/O for 
maximum network performance; they optimally 
operate using the VMDc feature, allowing data 
to bypass the Virtual Switch for near native 
performance.
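On a Linux host, the VMDc-style dedicated I/O described above is typically exposed through SR-IOV Virtual Functions of the 82599 controller. The following is a minimal sketch, assuming a Linux kernel with SR-IOV support and the ixgbe driver; the interface name `enp4s0f0` is a placeholder for the actual 10 GbE port on your system.

```shell
# Placeholder interface name; replace with the 82599 port on your blade.
IFACE=enp4s0f0

# Check how many Virtual Functions the adapter supports.
cat /sys/class/net/$IFACE/device/sriov_totalvfs

# Create four Virtual Functions for direct assignment to VMs,
# letting their traffic bypass the hypervisor's Virtual Switch.
echo 4 > /sys/class/net/$IFACE/device/sriov_numvfs

# The Virtual Functions now appear as additional PCI Ethernet devices
# that can be passed through to individual virtual machines.
lspci | grep -i ethernet
```

VMs using standard applications with modest bandwidth needs can instead stay on the emulated path, where VMDq offloads queue sorting to the controller without any per-VM device assignment.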