Cisco Nexus 5010 Switch Design Guide

10 Gigabit Ethernet to the Server 
Servers connected at 10 Gigabit Ethernet benefit from a reduction in the number of Gigabit Ethernet adapters needed. They also gain increased network bandwidth and the ability to consolidate storage traffic with LAN traffic, as described in the section "Unified I/O."
Most servers equipped with at least four cores, enough memory, and 10 Gigabit Ethernet adapters capable of receive-side scaling (RSS) and large segment offload (LSO) can generate close to 9 Gbps of transmit (Tx) traffic with memory operations. (I/O operations can, of course, reduce this performance.) Servers with a greater number of cores can more easily take advantage of the available bandwidth in both transmit and receive.
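The adapter capabilities mentioned above can be verified from the OS. The following is a minimal sketch for a Microsoft Windows 2008 server; exact output fields vary by service pack and adapter driver:

    netsh interface tcp show global
    (displays the current state of receive-side scaling and the offload settings)
    netsh interface tcp set global rss=enabled
    (enables receive-side scaling globally)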
For many applications to take advantage of the available bandwidth, proper tuning is needed to increase the socket buffer size.
With the latest OS versions, this tuning is often not necessary. As an example, Microsoft Windows 2008 by default includes auto-tuning of the receive window, with various levels (a command-line example follows the list):
● Restricted (up to 64-KB TCP Rx [receive] window)
● Highly Restricted (up to 256-KB TCP Rx window)
● Normal (up to 16-MB TCP Rx window)
● Experimental (up to 1-GB TCP Rx window)
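The auto-tuning level can be inspected and changed with standard netsh commands, for example:

    netsh interface tcp show global
    (shows, among other parameters, the current Receive Window Auto-Tuning Level)
    netsh interface tcp set global autotuninglevel=normal
    (accepted levels are disabled, highlyrestricted, restricted, normal, and experimental)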
For more information on 10 Gigabit Ethernet tuning on Microsoft Windows servers, please see Chapter 7. 
Virtualized servers can more easily take advantage of the additional bandwidth. Even if individual applications are often throttled by the socket size, with virtualized servers the traffic of the individual virtual machines is aggregated onto the same NIC. For this reason, 10 Gigabit Ethernet proves useful even when no single application requires that much bandwidth.
In addition, VMware® VMotion™ migration is a pure memory operation; hence, it can take advantage of the additional bandwidth more easily, which, in turn, may enhance simultaneous VMotion migration of multiple virtual machines.
Teaming on the Server 
Servers provide several different options for teaming, with names that vary according to the vendor. The most common options include:
● Active-standby
● Active-active transmit load balancing: With this option, only one NIC can receive, while all NICs can transmit. This configuration enhances the server transmit performance but doesn't improve the receive bandwidth.
● Static port channeling: Equivalent to channel-group mode on; that is, PortChannels without any negotiation protocol in place (see the switch-side sketch after this list).
● IEEE 802.3ad port channeling: This option enables negotiation of the PortChannel between server and switch, thus allowing the server administrator to know whether the configuration was successful. Similarly, it gives the network administrator information on whether the server administrator configured teaming properly.
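On the Cisco Nexus 5000 side, the last two options differ only in the channel-group mode, as in the following sketch; the interface range and PortChannel number are arbitrary examples:

    feature lacp
    interface ethernet 1/1-2
      ! static port channeling: no negotiation protocol in place
      channel-group 10 mode on
      ! or, for IEEE 802.3ad negotiation (LACP):
      channel-group 10 mode active

Note that feature lacp is required only for the 802.3ad (mode active) variant.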
With vPC support on the switch, the last two options can be deployed with the network adapters split across two different access switches, thus achieving increased bandwidth and redundancy at the same time (a configuration sketch follows).
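As an illustrative sketch only, the switch-side vPC configuration resembles the following; the domain ID, peer-keepalive address, and interface numbers are placeholder assumptions, and the second access switch needs a matching configuration:

    feature vpc
    vpc domain 1
      peer-keepalive destination 10.0.0.2
    ! the peer link is a dedicated PortChannel between the two access switches
    interface port-channel 1
      switchport mode trunk
      vpc peer-link
    ! the server-facing PortChannel; the vpc number ties it to its twin on the peer switch
    interface port-channel 10
      vpc 10
    interface ethernet 1/1
      channel-group 10 mode active

With this in place, the server's 802.3ad team spans both switches while appearing as a single PortChannel.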
In addition, the teamed adapters can be virtualized with VLANs, with each VLAN appearing on the server as a physically separate adapter. This allows the consolidation of multiple adapters for increased aggregate bandwidth, as shown in the example below.
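On the switch side, this maps to an IEEE 802.1Q trunk on the teamed PortChannel; the VLAN IDs below are examples only:

    interface port-channel 10
      switchport mode trunk
      switchport trunk allowed vlan 10,20

Each allowed VLAN then appears on the server as a separate logical adapter sharing the teamed bandwidth.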
The choice of the teaming option depends on the topology and the configuration of the switching infrastructure. A vPC-based data center enables both static port channeling and IEEE 802.3ad port channeling, with or without IEEE 802.1Q VLAN partitioning on the NIC.