Cisco Nexus 5010 Switch Design Guide

Often a blade switch is preferred to a pass-through module in order to achieve greater cable reduction toward the access layer. In a classic aggregation and access design, this need not be the case, as the access layer can provide line-rate 10 Gigabit Ethernet connectivity for both LAN and SAN traffic for all possible blade server configurations. Given this fact, it is not uncommon for the access layer to serve as a "Layer 2 aggregation" mechanism for additional switching that may occur within a blade server.
These design choices are often intertwined with the cabling choice and with the definition of the pod. 
Data Center Pods 
Each access layer in the data center is divided into pods (building blocks), where the servers in each pod share similar characteristics, whether similar hardware, similar SLAs provided to the customer (internal or external), and so on. This is different from building separate silos for each business unit: in a pod design, multiple business units may share the same pod and run on similar server platforms because they require a similar level of service.
A data center pod consists of a certain number of racks of servers that all depend on the same set of access switch hardware. Switch cabinets may contain either top-of-rack switches with fabric extenders or a modular middle-of-the-row switch.
A pod can benefit from VLAN isolation to virtualize the LAN infrastructure, and from the ability to "rack and roll" new racks using fabric extender technology, whereby a rack can be precabled and then attached to a Cisco Nexus 5000 Series Switch, from which the fabric extender receives its code and configuration.
With fabric extenders, it is not uncommon for customers to build 10 to 12 rack-and-roll pods. 
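To illustrate this workflow, the following is a minimal NX-OS configuration sketch for associating a fabric extender with a Cisco Nexus 5000 Series Switch and placing one of its host ports into a server VLAN; the interface numbers, FEX ID (100), and VLAN ID (10) are illustrative assumptions rather than values taken from this guide.

  ! Enable fabric extender support on the parent Nexus 5000 Series Switch
  feature fex
  ! Hypothetical fabric uplinks toward the fabric extender, bundled in a port channel
  interface ethernet 1/17-18
    switchport mode fex-fabric
    fex associate 100
    channel-group 100
  interface port-channel 100
    switchport mode fex-fabric
    fex associate 100
  ! Example server VLAN for the pod
  vlan 10
    name Pod1-Servers
  ! Host port on the fabric extender (addressed as FEX-ID/slot/port once the extender is online)
  interface ethernet 100/1/1
    switchport access vlan 10

Once the precabled rack is attached and the fabric extender downloads its software and configuration from the parent switch, its host ports can be configured centrally in the same way as local switch ports.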
Cabling Considerations 
At the time of this writing, 10 Gigabit Ethernet connectivity to the server can be provided with the following options: 
● SFP+ copper over twinax CX1 cable: This is the most cost-effective option and currently the most common choice, available in 1-, 3-, or 5-meter passive cables, although additional lengths may become available with active cables. SFP+ preterminated twinax copper cables also provide the most power-efficient option for connecting servers at 10 Gigabit Ethernet today.
● X2 connectors with CX4 cables: These provide a form factor that is less suitable for servers (due to the space taken up by the X2 connector), and they also consume more power.
● SFP+ short reach (SR): This provides optical connectivity that can span longer distances (33m on OM1, 82m on OM2, and 300m on OM3 fiber), but it is less cost-effective than copper.
In light of these considerations, pods of servers connected using 10 Gigabit Ethernet tend to use twinax SFP+ cabling, as shown in Figure 11.