Cisco Virtualized Multiservice Data Center (VMDC) Virtual Services Architecture (VSA) 1.0
Design Guide
Chapter 3 VMDC VSA 1.0 Design Details
Network
communicate reachability from and to the PE routers. In this model, WAN edge/PE routers effectively
function as an L3 autonomous system boundary router (ASBR) and MPLS VPN gateway, extending the
tenant virtual private cloud container in the public provider Data Center to their IP VPN.
The CSR 1000V and ASA 1000V are the default gateways for all hosts and routable virtual service
appliances within the tenant containers. The ASR 9000 WAN/PE is the gateway to the Internet and to
private customer networks for all devices in the data center. For the ASA 1000V in the Zinc container,
the ASR 9000 is the default gateway to the Internet, reached via static routing. For the CSR 1000V in
Silver/Bronze/Gold containers, the ASR 9000 is the gateway to the customer networks, which it
advertises to the CSR 1000V via eBGP. The ASR 9000 can inject specific prefixes via BGP to the
CSR for more granular control of tenant routing. For the CSR 1000V in a Gold container with Internet
access, the ASR 9000 is the Internet gateway and advertises a default route to the CSR 1000V via eBGP
on the Internet-facing link. The CSR does not have to learn the full Internet routing table; it can simply
forward Internet-bound traffic toward the default route. Tenant-to-tenant communication may be enabled
by leaking VRF routes at the centralized PE.
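As a minimal sketch of the eBGP relationship described above, the CSR 1000V side might look as follows. All names, autonomous system numbers, and addresses here are illustrative assumptions, not configuration from the validated system; the ASR 9000 peer would advertise either the customer prefixes or, on the Internet-facing link of a Gold container, only a default route.

```
! Hypothetical CSR 1000V (IOS XE) sketch -- values are illustrative only
router bgp 65001
 bgp log-neighbor-changes
 ! eBGP session toward the ASR 9000 WAN/PE (private ASN assumed)
 neighbor 192.0.2.1 remote-as 64512
 address-family ipv4
  ! Advertise the tenant container prefix toward the PE
  network 10.10.1.0 mask 255.255.255.0
  neighbor 192.0.2.1 activate
```

Because the PE advertises only a default route (or selected specific prefixes), the CSR's BGP table stays small regardless of the size of the Internet routing table.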
Alternative L3 logical models for addressing tenancy scale not addressed in this system release include
but are not limited to: 1) implementing MPLS Inter-AS Option B at the aggregation switching nodes,
functioning as intra-DC PEs in a traditional hierarchical DC design, and 2) a distributed Virtual PE (vPE)
model, described in .
It is important to note that the vCE and vPE models are not necessarily mutually exclusive; a Provider
might run both models concurrently within a Public Data Center to meet the differing needs of its
customers. A practical use case that might lead a Provider to implement a vPE model rather than a vCE
model is one in which the customer, or "tenant," requires sub-tenancy. For example, the customer might
be an ISV (Independent Software Vendor) that wishes to use its slice of the Public Cloud to provide
granular, differentiated services to its own customers. Other practical deployment considerations
include operational consistency and ease of use.
FabricPath
Cisco FabricPath provides an L2 data plane alternative to classical Ethernet. FabricPath encapsulates
frames entering the fabric with a header that contains routable source and destination switch addresses:
the address of the switch on which the frame was received and the address of the destination switch
toward which the frame is heading. For this reason, switch IDs must be unique within the FabricPath
domain; the IDs are either automatically assigned (the default) or set manually by the administrator
(recommended). The frame is routed until it reaches the remote switch, where it is de-encapsulated and
delivered in its original Ethernet format.
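A minimal NX-OS sketch of the switch-ID and port configuration described above might look as follows; the switch ID and interface are illustrative assumptions, and manual switch-ID assignment follows the recommendation above.

```
! Hypothetical Nexus (NX-OS) sketch -- values are illustrative only
install feature-set fabricpath
feature-set fabricpath
! Manually assigned switch ID; must be unique within the FabricPath domain
fabricpath switch-id 11
! FabricPath VLANs must be placed in fabricpath mode
vlan 100
  mode fabricpath
! Core-facing port carries FabricPath-encapsulated frames
interface Ethernet1/1
  switchport mode fabricpath
```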
FabricPath uses an IS-IS control plane to establish L2 adjacencies in the FabricPath core, so equal-cost
multipathing (ECMP) is supported and Spanning Tree Protocol (STP) is no longer required for loop
avoidance in this type of L2 fabric. Loop mitigation is addressed using a time-to-live (TTL) field,
decremented at each switch hop to prevent looping, and reverse path forwarding (RPF) checks for
multi-destination traffic. As previously noted, a common initial use case for FabricPath is as part of a
strategy to minimize reliance on STP in the Data Center.
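On a Nexus switch, the IS-IS adjacencies and the ECMP paths to remote switch IDs described above can typically be inspected with commands such as the following (shown as a sketch; exact output varies by platform and release):

```
! L2 IS-IS neighbors in the FabricPath core
show fabricpath isis adjacency
! Routes to remote switch IDs, including any equal-cost paths
show fabricpath route
```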
A FabricPath domain comprises one logical topology. As part of establishing L2 adjacencies across the
logical topology, FabricPath nodes create two multi-destination trees. IS-IS calculations compute the
trees automatically. The highest priority switch is chosen as the root for the first multi-destination tree
(FTAG1), which is used for broadcasts, flooding, and multicast. The second highest priority switch is
chosen as the root for the second multi-destination tree (FTAG2), which is used for multicast. The
designs described in this guide leverage the current best practice recommendation for root selection,
which is to manually define the roots for the FTAG trees. In this case, the logical choice is to set the
roots as the spine nodes, as they have the most direct connectivity across the span of leaf nodes. In the