Cisco Nexus 5010 Switch Design Guide
© 2010 Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information.
● Architectures that provide intrinsically lower latency than traditional LAN networks, so that a computing cloud can be built on the same LAN infrastructure as regular transactional applications.
● Architectures that provide the ability to distribute Layer 2 traffic on all available links.
● Simplified cabling: more efficient airflow, lower power consumption, and lower cost of deployment for high-bandwidth networks.
● Reduction of management points: it is important to limit the impact of the sprawl of switching points (software switches in the servers, multiple blade switches, and so on).
All Links Forwarding
The next-generation data center provides the ability to use all links in the LAN topology by taking advantage of
technologies such as virtual PortChannels (vPCs). vPCs enable full, cross-sectional bandwidth utilization among LAN
switches, as well as between servers and LAN switches.
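As a sketch of how this is configured, a vPC between two Nexus 5000 Series peers follows the general pattern below. The domain ID, port-channel numbers, and keepalive address are illustrative values, not taken from this guide:

```
! Enable vPC and LACP, and define the vPC domain on both peer switches
feature vpc
feature lacp
vpc domain 1
  peer-keepalive destination 10.0.0.2

! Port-channel carrying the vPC peer link between the two peers
interface port-channel 10
  switchport mode trunk
  vpc peer-link

! Member port-channel toward a downstream switch or server;
! the same vPC number must be configured on both peers
interface port-channel 20
  switchport
  vpc 20
```

The downstream device sees the two peers as a single port channel, which is what allows both uplinks to forward simultaneously instead of being blocked by spanning tree.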
Server Connectivity at 10 Gigabit Ethernet
Most rackable servers today include redundant LAN-on-motherboard (LOM) interfaces for management, an integrated lights-out (iLO) standards-based port, one or more Gigabit Ethernet interfaces, and redundant host bus adapters (HBAs). The adoption of 10 Gigabit Ethernet on the server simplifies server configuration by reducing the number of network adapters while providing enough bandwidth for virtualized servers. The data center design can be further optimized with the use of Fibre Channel over Ethernet (FCoE) to build a unified fabric.
Cost-effective 10 Gigabit Ethernet connectivity can be achieved by using copper twinax cabling with Small Form-Factor Pluggable Plus (SFP+) connectors.
A rackable server configured for 10 Gigabit Ethernet connectivity may have an iLO port, a dual LOM, and a dual-port 10 Gigabit Ethernet adapter (for example, a converged network adapter). This adapter would replace multiple quad Gigabit Ethernet adapters and, if it is also a converged network adapter, would replace an HBA as well.
Fabric Extender
Fabric extender technology simplifies the management of the many LAN switches in the data center by aggregating
them in groups of 10 to 12 under the same management entity. In its current implementation, Cisco Nexus 2000
Series Fabric Extenders can be used to provide connectivity across 10 to 12 racks that are all managed from a single
switching configuration point, thus bringing together the benefits of top-of-the-rack and end-of-the-row topologies.
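As an illustration of this single configuration point, attaching a Nexus 2000 Series Fabric Extender to a parent Nexus 5000 Series switch follows this general pattern; the FEX number (100) and uplink interface are illustrative values:

```
! Enable fabric extender support on the parent switch
feature fex

! Uplink toward the fabric extender; once associated, the FEX ports
! appear on the parent switch as interfaces such as ethernet100/1/1
interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 100
```

All host-facing ports on the fabric extender are then configured on the parent switch, which is why 10 to 12 racks can be managed as one device.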
Unified I/O Support
Significant cost reduction can be achieved by replacing Quad Gigabit Ethernet cards and dual HBAs with a dual-port
converged network adapter card connected to a data-center-bridging (DCB)-capable device. A device such as the
Cisco Nexus 5000 Series Switch also provides Fibre Channel Forwarder (FCF) capability.
The next-generation data center provides no-drop capabilities through priority flow control (PFC) and a no-drop
switch fabric for suitable traffic types.
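A minimal FCoE sketch on a Nexus 5000 Series switch in its FCF role might look like the following, assuming illustrative VLAN/VSAN numbers and interface names:

```
! Enable FCoE and map a dedicated FCoE VLAN to a VSAN
feature fcoe
vlan 200
  fcoe vsan 200

! Virtual Fibre Channel interface bound to the 10 Gigabit Ethernet
! port that connects to the server's converged network adapter
interface vfc 1
  bind interface ethernet 1/2
vsan database
  vsan 200 interface vfc 1
```

With PFC providing lossless behavior for the FCoE traffic class, the same 10 Gigabit Ethernet link then carries both LAN and storage traffic.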
Cut-through Operations and Latency
While not mandatory, using a cut-through-capable access layer enables low-latency communication between servers for any packet size. The Cisco Nexus 5000 Series supports a deterministic 3.2-microsecond latency for any packet size with all features enabled (with access control list [ACL] filtering applied, for example), and similarly the Cisco Nexus 4000 Series supports a 1.5-microsecond latency.
Using Cisco Nexus Switches and vPC to Design Data Centers
Figure 2 illustrates what a next-generation data center looks like with Cisco Nexus Switches and vPC.