Mellanox Technologies MSX6036F-1SFR Product Brief
©2013 Mellanox Technologies. All rights reserved.
PRODUCT BRIEF
SWITCH SYSTEM
BENEFITS
– Software Defined Network (SDN) support
– Industry-leading switch platform in
performance, power, and density
– Designed for energy and cost savings
– Quick and easy setup and management
– Maximizes performance by removing fabric
congestion
– Fabric Management for cluster and
converged I/O applications
– IPv6 Ready
– IPv6 IPSEC
KEY FEATURES
– 36 FDR (56Gb/s) ports in a 1U switch
– Up to 4Tb/s aggregate switching capacity
– Compliant with IBTA 1.21 and 1.3
– FDR/FDR10 support for Forward Error
Correction (FEC)
– Port mirroring**
– InfiniBand to InfiniBand Routing**
– Optional redundant power supplies and fan
drawers
SX6036
36-port Non-blocking Managed 56Gb/s InfiniBand/VPI SDN Switch System
Scaling-Out Data Centers with Fourteen
Data Rate (FDR) InfiniBand
Faster servers based on PCIe 3.0, combined
with high-performance storage and applications
that use increasingly complex computations,
are causing data bandwidth requirements to
spiral upward. As servers are deployed with
next generation processors, High-Performance
Computing (HPC) environments and Enterprise
Data Centers (EDC) will need every last bit
of bandwidth delivered with Mellanox’s next
generation of FDR InfiniBand high-speed smart
switches.
FDR
FDR InfiniBand technology moves from 8b/10b
encoding to the more efficient 64b/66b encoding
while increasing the per-lane signaling rate
to 14Gb/s. Mellanox end-to-end systems can
also take advantage of the efficiency of 64b/66b
encoding using Mellanox FDR10, delivering 20%
more bandwidth than QDR over the same cables and
connectors designed for 40GbE.
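As a rough illustration (not from the brief), the efficiency gain of the two line codes and the resulting per-port data rate can be checked with a few lines of arithmetic; the 4x link width is an assumption based on standard InfiniBand port configurations:

```python
# Encoding efficiency of the two line codes discussed above.
eff_8b10b = 8 / 10    # QDR-era encoding: 20% overhead
eff_64b66b = 64 / 66  # FDR encoding: ~3% overhead

# FDR: 14Gb/s signaling per lane; a standard 4x port is assumed.
lanes = 4
raw_gbps = 14 * lanes              # 56 Gb/s raw per port
data_gbps = raw_gbps * eff_64b66b  # effective data rate after encoding

print(f"8b/10b efficiency:  {eff_8b10b:.1%}")   # 80.0%
print(f"64b/66b efficiency: {eff_64b66b:.1%}")  # 97.0%
print(f"FDR 4x port: {raw_gbps} Gb/s raw, {data_gbps:.1f} Gb/s data")
```

The switch from 8b/10b to 64b/66b is what lets FDR carry nearly all of its 56Gb/s signaling rate as usable data.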
Sustained Network Performance
Built with Mellanox's sixth-generation SwitchX®
InfiniBand switch device, the SX6036 provides up
to thirty-six 56Gb/s ports, each with full
bi-directional bandwidth. These stand-alone
switches are an ideal choice for top-of-rack leaf
connectivity or for building small to extremely
large clusters.
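A quick sanity check of the aggregate capacity figure, assuming all 36 ports run at the 56Gb/s FDR rate and counting both directions of each full-duplex link (a common convention for quoting switching capacity):

```python
ports = 36
port_gbps = 56  # FDR rate per port, per direction

# Count both directions of each full-duplex port.
aggregate_tbps = ports * port_gbps * 2 / 1000

print(f"Aggregate switching capacity: {aggregate_tbps:.3f} Tb/s")  # 4.032
```

This lands just above the "up to 4Tb/s" figure quoted in the key features.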
Why Software Defined Network (SDN)?
Data center networks have become exceedingly
complex, and IT managers cannot optimize the
networks for their applications, leading to
high CAPEX/OPEX, low ROI and IT headaches.
Mellanox InfiniBand SDN switches ensure
separation between the control and data planes:
InfiniBand enables centralized management and a
centralized view of the network, while
programmability of the network by external
applications enables a cost-effective, simple,
flat interconnect infrastructure.
The SX6036 enables efficient computing with
features such as static routing, adaptive routing,
and congestion control. These features ensure
the maximum effective fabric bandwidth by
eliminating congestion hot spots.
Virtual Protocol Interconnect® (VPI)
Virtual Protocol Interconnect (VPI) flexibility
enables any standard networking, clustering,
storage, and management protocol to seamlessly
operate over any converged network leveraging
a consolidated software stack. VPI simplifies
I/O system design and makes it easier for IT
managers to deploy infrastructure that meets the
challenges of a dynamic data center.
Management
The SX6036 comes with an onboard subnet manager,
enabling simple, out-of-the-box fabric bring-up
for up to 648 nodes. The SX6036 MLNX-OS®
software delivers complete chassis management
of the firmware, power supplies, fans,
ports and other interfaces.
The SX6036 can also be coupled with Mellanox's
Unified Fabric Manager™ (UFM™) software
for managing scale-out InfiniBand computing
environments. UFM enables data center operators
to efficiently provision, monitor and operate
the modern data center fabric. UFM boosts
application performance and ensures that the
fabric is up and running at all times.
The SX6036 switch system provides the highest-performing fabric solution
in a 1U form factor by delivering up to 4Tb/s of non-blocking bandwidth
with 170ns port-to-port latency.