Cisco Virtualized Multiservice Data Center (VMDC) Virtual Services Architecture (VSA) 1.0
Design Guide
Chapter 3 VMDC VSA 1.0 Design Details
network. In this context, the network is the unified fabric. FCoE, VM-FEX, vPCs, and FabricPath are
Ethernet technologies that have evolved data center fabric design options. These technologies can be used
concurrently over the VMDC Nexus-based infrastructure.
Note
FCoE uses FSPF (Fabric Shortest Path First) forwarding, which FabricPath does not yet support
(FabricPath uses an IS-IS control plane). FCoE must be transported on separate (classical Ethernet)
VLANs. In VMDC VSA 1.0, we assume that FCoE links are leveraged outside of the FabricPath
domain—such as within the ICS portions of the FabricPath-based pod—to reduce cabling and adapter
expenses and to realize power and space savings.
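As a sketch of the separation the note describes, an NX-OS configuration might keep FabricPath VLANs on fabric-facing links while mapping FCoE to a classical Ethernet VLAN on links outside the FabricPath domain. The VLAN, VSAN, and interface numbers below are illustrative assumptions, not values from the VMDC VSA 1.0 system:

```
! FabricPath VLANs ride the fabric core
feature-set fabricpath
vlan 100
  mode fabricpath
interface Ethernet1/1
  switchport mode fabricpath

! FCoE stays on a classical Ethernet VLAN mapped to a VSAN,
! carried on links outside the FabricPath domain (e.g., toward the ICS)
feature fcoe
vsan database
  vsan 200
vlan 200
  fcoe vsan 200
interface Ethernet1/10
  switchport mode trunk
  switchport trunk allowed vlan 200
```

Keeping the FCoE VLAN off FabricPath-mode interfaces preserves FSPF-based forwarding for storage traffic while FabricPath's IS-IS control plane handles the Ethernet fabric.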
Compute
The VMDC compute architecture assumes, as a baseline premise, a high degree of server virtualization,
driven by data center consolidation, the dynamic resource allocation requirements fundamental to a
"cloud" model, and the need to maximize operational efficiencies while reducing capital expense
(CAPEX). Therefore, the architecture is based upon three key elements:
1.
Hypervisor-based Virtualization—In VMDC VSA 1.0, as in previous VMDC releases, VMware
vSphere plays a key role, logically abstracting the server environment in terms of CPU, memory,
and network into multiple virtual software containers to enable VM creation on physical servers. In
this release, vSphere VMs provide the foundation for router and service node virtualization.
Note
Separate, interrelated documents address Microsoft Hyper-V and Nexus 1000V integration
for application workloads in VMDC FabricPath systems:
2.
UCS Network, Server, and I/O Resources in a Converged System—UCS provides a highly
resilient, low-latency unified fabric for integrating lossless 10 Gigabit Ethernet and FCoE functions
using x86 server architectures. UCS provides a stateless compute environment that abstracts I/O
resources, server personality, configuration, and connectivity to facilitate dynamic programmability.
Hardware state abstraction simplifies moving applications and operating systems across server
hardware.
3.
The Nexus 1000V—This virtual switch, which provides a feature-rich alternative to VMware
Distributed Virtual Switch, incorporates software-based VN-link technology to extend network
visibility, QoS, and security policy to the VM level. VMDC VSA 1.0 uses VMware vSphere 5.1 as
the compute virtualization operating system. A complete list of new vSphere 5.1 enhancements is
available in VMware's release documentation.
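On the Nexus 1000V, the VM-level network visibility, QoS, and security policy described above is typically expressed as port profiles, which vCenter consumes as port groups. The profile, VLAN, ACL, and policy-map names below are illustrative assumptions, not configuration from the VMDC SUT:

```
port-profile type vethernet APP-TIER
  switchport mode access
  switchport access vlan 300
  ip port access-group APP-ACL in
  service-policy type qos input APP-QOS
  no shutdown
  state enabled
```

Because the profile follows the vEthernet port, the same ACL and QoS policy stay attached to a VM as it vMotions between hosts in the cluster.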
Key "baseline" vSphere features leveraged by the system include ESXi boot from
SAN, VMware High Availability (HA), and Distributed Resource Scheduler (DRS). Basic to the
virtualized compute architecture is the notion of clusters; a cluster comprises two or more hosts with
their associated resource pools, VMs, and data stores. Working with vCenter as a compute domain
manager, vSphere advanced functionality, such as HA and DRS, is built around the management of
cluster resources. vSphere supports cluster sizes of up to 32 servers when HA or DRS features are
used. In practice, however, the larger the scale of the compute environment and the higher the
virtualization (VM, network interface, and port) requirements, the more advisable it is to use smaller
cluster sizes to optimize performance and virtual interface port scale and limit the intra-cluster
failure domain. Previously in VMDC large pod simulations, cluster sizes were limited to eight
servers; in smaller pod simulations, cluster sizes of 16 or 32 were used. For VMDC VSA 1.0, cluster
sizes of 16 servers are deployed in the system under test (SUT). As in previous VMDC releases,
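The cluster-sizing tradeoff above can be sketched arithmetically. The host counts and the N+1 HA admission-control reserve in this example are illustrative assumptions, not figures from the system under test:

```python
import math

def cluster_plan(total_hosts: int, cluster_size: int = 16, ha_host_failures: int = 1):
    """Illustrative cluster-sizing arithmetic for a virtualized compute pod.

    Assumes vSphere's 32-host limit for HA/DRS clusters and HA admission
    control reserving capacity for 'ha_host_failures' host failures.
    """
    assert 0 < cluster_size <= 32, "vSphere HA/DRS clusters are capped at 32 hosts"
    clusters = math.ceil(total_hosts / cluster_size)
    # Fraction of each cluster's capacity usable once HA holds back
    # headroom to restart VMs from the assumed number of failed hosts.
    usable = (cluster_size - ha_host_failures) / cluster_size
    return clusters, usable

# e.g., a hypothetical 256-server pod carved into 16-host clusters
print(cluster_plan(256))  # → (16, 0.9375)
```

Smaller clusters multiply the management units but shrink the intra-cluster failure domain and keep per-cluster virtual interface counts within platform limits, which is the tradeoff behind the 16-server clusters used in the SUT.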