Cisco Virtualized Multiservice Data Center (VMDC) Virtual Services Architecture (VSA) 1.0
Design Guide
Chapter 3 VMDC VSA 1.0 Design Details
System Level Design Considerations
Scalability
The following lists the most relevant scale concerns for the models discussed in this system release.
• BGP Scale—At this writing, the ASR 9000 supports 5,000 BGP peers and functions as the centralized PE router for the virtual CE routers in the pod. For non-redundant CSR scenarios, up to 5,000 virtual CE peers are supported per ASR 9000.
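As an illustration of the centralized-PE model, the many virtual CE sessions on the ASR 9000 would typically be templated with an IOS-XR neighbor-group; a minimal sketch (the AS numbers, peer address, and policy names below are hypothetical):

```
router bgp 65000
 address-family ipv4 unicast
 !
 ! Template shared by all virtual CE sessions
 neighbor-group VCE-PEERS
  remote-as 65001
  address-family ipv4 unicast
   route-policy PASS-ALL in
   route-policy PASS-ALL out
 !
 ! One CSR virtual CE peer; repeated per tenant CE
 neighbor 10.1.1.2
  use neighbor-group VCE-PEERS
```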
• VLAN Scale—At this writing (NX-OS releases 5.2.5 through 6.1), up to 2,000 FabricPath-encapsulated VLANs are supported. This constraint is expected to be relieved in NX-OS 6.2, which targets 4,000 transit VLANs. In the future, segmentation scale will increase with the use of alternative encapsulations such as VXLAN.
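The transit-VLAN figure applies to VLANs placed in FabricPath mode; a minimal NX-OS sketch (the VLAN range is hypothetical):

```
feature-set fabricpath
!
vlan 100-199
  mode fabricpath
```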
• Switches per FabricPath Domain—NX-OS 5.2 supports up to 64 switch IDs; NX-OS 6.0 supports up to 128.
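Each node in the FabricPath domain consumes one switch ID against these limits; the ID can be assigned statically on NX-OS (the value shown is hypothetical):

```
feature-set fabricpath
fabricpath switch-id 25
```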
• Port Density per FabricPath Node—At 48 ports per module, F2 line cards provide up to 768 10 GigE/1 GigE ports per switch (N7018), while F1 cards provide up to 512 10 GigE ports (N7018). These are one-dimensional figures, but they give a theoretical maximum for one measure of capacity. Currently, the Nexus 7000 FabricPath limit is 256 core ports or 256 edge ports.
• MAC Address (host) Scale—FabricPath VLANs use conversational MAC address learning, which comprises a three-way handshake. Each interface learns MAC addresses only for interested hosts, rather than for all MAC addresses in the VLAN. This selective learning enables the network to scale beyond the limits of individual switch MAC address tables. Classical Ethernet (CE) VLANs use traditional MAC address learning by default, but CE VLANs can be configured to use conversational MAC learning. MAC capacity on Nexus 5500 (L2) access-edge nodes is 24,000.
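On the Nexus 7000, CE VLANs can be moved to conversational learning explicitly; a minimal NX-OS sketch (the VLAN range is hypothetical):

```
mac address-table learning-mode conversational vlan 10-20
```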
• Tenancy—The tenancy scope for this validation was 2,000. However, this does not represent the maximum scale of the architecture models. For the models addressed, several factors constrain overall tenancy scale: BGP peers per PE router per DC pod (5,000); end-to-end VLAN support (currently, 2,000 transit VLANs); VLANs per UCS (1,000, although this constraint can be minimized through the use of VXLANs for host connectivity); and Nexus 1000V scale (4,000 ports/128 hosts in release 2.2).
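The per-UCS VLAN constraint can be relieved by carrying tenant segments as VXLANs on the Nexus 1000V; a minimal sketch (the bridge-domain name, segment ID, and multicast group are hypothetical):

```
feature segmentation
!
bridge-domain tenant-red
  segment id 5000
  group 239.1.1.1
```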
Availability
The following methods are used to achieve HA in the VMDC data center architecture:
• Routing and NV-edge clustered redundancy at the WAN/IP NGN infrastructure edge, including path and link redundancy, non-stop forwarding, and route optimization.
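Non-stop forwarding at the edge is complemented by BGP non-stop routing and graceful restart; a minimal IOS-XR sketch (the AS number is hypothetical):

```
router bgp 65000
 nsr
 bgp graceful-restart
```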
• L2 redundancy technologies implemented through the FabricPath domain and access tiers of the infrastructure. This includes Address Resolution Protocol (ARP) synchronization in vPC/vPC+-enabled topologies to minimize unknown unicast flooding and reconvergence; ECMP; and port-channel utilization between FabricPath edge/leaf and spine nodes to minimize L2 IS-IS adjacency recalculations and system reconvergence.
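vPC+ is configured by adding an emulated FabricPath switch ID under the vPC domain, with the peer link in FabricPath mode; a minimal NX-OS sketch (the domain ID, switch ID, and keepalive address are hypothetical):

```
vpc domain 10
  fabricpath switch-id 1000
  peer-keepalive destination 10.0.0.2
!
interface port-channel1
  switchport mode fabricpath
  vpc peer-link
```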
• Hardware and fabric redundancy throughout.
• VEM: Multi-Chassis EtherChannel (MCEC) uplink redundancy and VSM redundancy in the virtual access tier of the infrastructure.
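VEM uplink redundancy is typically expressed on the Nexus 1000V as an Ethernet port-profile using MAC pinning across the fabric uplinks; a minimal sketch (the profile name and VLAN range are hypothetical):

```
port-profile type ethernet SYSTEM-UPLINK
  switchport mode trunk
  switchport trunk allowed vlan 100-199
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled
```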
• In the compute tier of the infrastructure: HSRP (for CSR redundancy), port-channeling, NIC teaming, and intra-cluster HA through the use of VMware vMotion, along with Active/Standby redundant failover for SLB and ASA 1000V virtual appliances.
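CSR 1000V redundancy uses standard IOS-XE HSRP on the tenant-facing interface of each CSR pair; a minimal sketch (the interface, addresses, and group number are hypothetical):

```
interface GigabitEthernet2
 ip address 10.10.10.2 255.255.255.0
 standby 1 ip 10.10.10.1
 standby 1 priority 110
 standby 1 preempt
```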