In the Typical Data Center, there are only two spine nodes, so each serves as a root. In the Extended Switched Data Center, there are multiple spine nodes; two of the dedicated L2 spines serve as roots for the FabricPath domain. Should a root fail, the switch with the next-highest priority takes over as root.
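The following is a minimal, illustrative sketch of how root priorities might be assigned to the two spine nodes; the priority values and device roles shown here are assumptions for illustration, not configuration taken from this release.

! Spine-1: highest root priority, so it is elected root for the first multidestination tree
install feature-set fabricpath
feature-set fabricpath
fabricpath domain default
  root-priority 255

! Spine-2: next-highest priority, so it takes over as root if Spine-1 fails
fabricpath domain default
  root-priority 254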
If devices that are part of non-FabricPath L2 domains (that is, spanning-tree dependent) are attached to FabricPath edge nodes using classical Ethernet, this design follows the best-practice recommendation of configuring the edge nodes as spanning-tree roots, to avoid inadvertent blocking of redundant paths.
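A minimal sketch of this recommendation is shown below, assuming an illustrative VLAN range of 2001-2100 for the classical Ethernet attachments; the VLAN numbers and priority value are placeholders.

! On each FabricPath edge node that attaches classical Ethernet (spanning-tree) devices,
! lower the bridge priority so the edge node is elected spanning-tree root for those VLANs
spanning-tree vlan 2001-2100 priority 4096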
Additional key design aspects of the FabricPath portion of the Typical Data Center design as deployed 
in this release are summarized below:
• Two spine nodes, aggregating multiple leaf nodes (i.e., mirroring commonly deployed hierarchical DC topologies).
• Leaf nodes (aka access-edge switches) and spine nodes (aka aggregation-edge nodes) provide pure Layer 2 functions, supplying transit VLANs for vCE to WAN Edge/PE connectivity. This is in contrast to the Typical Data Center model as implemented in VMDC 3.0-3.0.1, where the spine nodes performed routing functions.
• FabricPath core ports at the spine (F1s and/or F2/F2Es) provide bridging for East/West intra-VLAN traffic flows.
• Classical Ethernet edge ports face all hosts.
Note
A FabricPath core port faces the core of the fabric, always forwarding Ethernet frames encapsulated in 
a FabricPath header.
• L2 resilience design options in this infrastructure layer comprise ECMP and port-channels between aggregation-edge and access-edge nodes across the FabricPath core, and vPC+ on edge nodes for the following options (a configuration sketch follows this list):
1. Attaching servers with port-channels.
2. Attaching other classical Ethernet switches in vPC mode.
3. Attaching FEX in Active/Active mode.
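The sketch below illustrates the pieces described above on an assumed pair of edge nodes: FabricPath VLANs and core ports, a vPC+ domain, and a classical Ethernet edge port-channel. Interface numbers, VLAN ranges, the vPC domain, the emulated switch-id, and the keepalive addresses are all placeholders, not values from this design.

! FabricPath VLANs and core ports
feature-set fabricpath
vlan 2001-2100
  mode fabricpath
interface ethernet 1/1-4
  switchport mode fabricpath          ! FabricPath core ports toward the fabric

! vPC+ on an access-edge pair for classical Ethernet attachments
feature lacp
feature vpc
vpc domain 10
  fabricpath switch-id 100            ! emulated switch-id shared by the vPC+ pair
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
interface port-channel 1
  switchport mode fabricpath          ! vPC+ peer-link runs as a FabricPath core port
  vpc peer-link
interface port-channel 20
  switchport mode trunk
  switchport trunk allowed vlan 2001-2100
  vpc 20                              ! classical Ethernet edge port-channel toward a server, switch, or FEX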
Currently, the Nexus 7000 supports three types of FabricPath I/O modules: N7K-F132XP-15 (NX-OS 5.1), N7K-F248XP-25 (NX-OS 6.0), and the new N7K-F248XP-25E (NX-OS 6.1). Any of these can be used for FabricPath core ports; however, the F1 card supports only L2 forwarding, while the F2 and F2E cards support both L2 and L3 forwarding.
F2-only or F2E-only scenarios (that is, performing both L2 and L3 forwarding, as in VMDC 3.0.1) also provide benefits in terms of ease of deployment and lower power consumption, but as of this writing, the 16,000 maximum MAC address constraint applies to this model.
With respect to access-edge (leaf) nodes in the referenced models, Nexus 5548 (or Nexus 5596) switches with FEX 2200s for port expansion provide TOR access. Alternatively, Nexus 7000s with F1 (or F2) line cards (and 2232 FEX-based port expansion) can perform this function for end-of-row (EOR) fabric access.
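As a sketch of the TOR option, the following shows how a Nexus 2232PP might be associated with a Nexus 5500 parent over a fabric port-channel; the FEX number and uplink interfaces are illustrative.

! Associate a Nexus 2232PP (FEX 101) with the Nexus 5500 parent
feature fex
fex 101
  description Rack-1-2232PP
interface ethernet 1/1-2
  channel-group 101                   ! fabric uplinks to the FEX
interface port-channel 101
  switchport mode fex-fabric
  fex associate 101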
The Nexus 5500 supports up to 24 FEX modules; using the Nexus 2232PP, this supports 768 edge ports per Nexus 5500 edge pair. Traffic oversubscription can be greatly impacted by increased FEX usage. Currently, four FabricPath core-facing port-channels with four members each are supported on the Nexus 5500.
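A single core-facing port-channel might look like the following sketch (one of up to four, each with four members); the member interfaces and channel number are placeholders.

! One FabricPath core-facing port-channel with four members
feature lacp
interface ethernet 1/29-32
  channel-group 10 mode active
interface port-channel 10
  switchport mode fabricpath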
Currently, 6200 Series Fabric Interconnects connect to FabricPath edge nodes using vPC host mode 
(vPC-HM). FabricPath is on the roadmap but beyond the scope of this release.