Cisco Virtualized Multiservice Data Center (VMDC) Virtual Services Architecture (VSA) 1.0 Design Guide
Chapter 3      VMDC VSA 1.0 Design Details
  • Each UCS 6200 Fabric Interconnect aggregates via redundant 10 GigE EtherChannel connections into the leaf or “access-edge” switch (Nexus 5500). The number of uplinks provisioned depends on traffic engineering requirements. For example, to provide an eight-chassis system with an 8:1 oversubscription ratio of internal fabric bandwidth to FabricPath aggregation-edge bandwidth, a total of 160 Gbps (16 x 10 Gbps) of uplink bandwidth capacity must be provided per UCS system (see the worked calculation after this list).
  • Four ports from an FC GEM in each 6200 Expansion Slot provide 8 Gbps Fibre Channel connectivity to the Cisco MDS 9513 SAN switches (for example, 6200 chassis A, 4 x 8 Gbps Fibre Channel to MDS A and 6200 chassis B, 4 x 8 Gbps Fibre Channel to MDS B). To maximize IOPS, the aggregate link bandwidth from the UCS to the MDS should match the processing capability of the storage controllers.
  • The Nexus 1000V functions as the virtual access switching layer, providing per-VM policy and policy mobility.
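
The uplink-sizing arithmetic above can be expressed as a short worked calculation. The following Python sketch is illustrative only: the per-chassis fabric bandwidth of 160 Gbps (for example, two IOMs at 8 x 10 Gbps each) is an assumption inferred from the 8:1 example in the text, and all inputs should be replaced with values from the actual traffic engineering exercise.

# Illustrative uplink-sizing calculation (a sketch, not a Cisco tool).
# Assumed inputs: 8 chassis and 160 Gbps of server-facing fabric bandwidth
# per chassis (e.g., 2 IOMs x 8 x 10 Gbps links) -- the per-chassis figure
# is inferred from the 8:1 / 160 Gbps example above.
import math

def required_uplink_gbps(chassis_count, per_chassis_fabric_gbps, oversubscription_ratio):
    """Uplink bandwidth needed so that internal fabric bw / uplink bw == ratio."""
    internal_fabric_gbps = chassis_count * per_chassis_fabric_gbps
    return internal_fabric_gbps / oversubscription_ratio

def uplink_count(total_gbps, link_speed_gbps=10.0):
    """Number of uplinks of a given speed needed to carry the target bandwidth."""
    return math.ceil(total_gbps / link_speed_gbps)

uplink_bw = required_uplink_gbps(chassis_count=8,
                                 per_chassis_fabric_gbps=160.0,
                                 oversubscription_ratio=8.0)
print("Uplink bandwidth per UCS system: %.0f Gbps" % uplink_bw)   # 160 Gbps
print("10 GigE uplinks required: %d" % uplink_count(uplink_bw))   # 16

# The Fibre Channel side is sized differently: the aggregate FC bandwidth
# toward the MDS (e.g., 4 x 8 Gbps per fabric interconnect) should be matched
# to the storage controllers' processing capability rather than to a fixed
# oversubscription ratio.
fc_aggregate_gbps = 4 * 8.0
print("FC bandwidth per fabric toward MDS: %.0f Gbps" % fc_aggregate_gbps)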
Storage
The VMDC SAN architecture remains unchanged from previous (2.0 and 3.0) programs. It follows current best practice guidelines for scalability, high availability, and traffic isolation. Key design aspects of the architecture include:
  • Leveraging Cisco Data Center Unified Fabric to optimize and reduce LAN and SAN cabling costs.
  • HA through multi-level redundancy (link, port, fabric, Director, RAID).
  • Risk mitigation through fabric isolation (multiple fabrics, VSANs).
  • Data store isolation through N-Port Virtualization (NPV) and N-Port Identifier Virtualization (NPIV) techniques, combined with zoning and LUN masking (illustrated in the sketch after this list).
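
As an illustration of how dual fabrics, VSANs, and zoning combine to bound the failure domain, the following Python sketch builds a simple dual-fabric zoning plan. All host names, array names, pWWNs, and VSAN numbers are hypothetical placeholders, and the single-initiator/single-target zoning style shown is a common practice rather than a requirement stated by this guide.

# Sketch of a dual-fabric SAN zoning plan (illustrative; all names, WWPNs,
# and VSAN IDs below are hypothetical placeholders).
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Zone:
    fabric: str      # "A" or "B" -- physically separate fabrics
    vsan: int        # VSAN providing logical isolation within the fabric
    name: str
    members: tuple   # (initiator pWWN, target pWWN)

def build_zones(hosts, arrays, fabric, vsan):
    """Single-initiator/single-target zones: each host vHBA is zoned only
    to the storage ports it needs, limiting the fault domain."""
    zones = []
    for (host, hba_pwwn), (array, tgt_pwwn) in product(hosts.items(), arrays.items()):
        zones.append(Zone(fabric, vsan,
                          name="z_%s_%s_fab%s" % (host, array, fabric),
                          members=(hba_pwwn, tgt_pwwn)))
    return zones

# Hypothetical per-fabric inventories: vHBA-A pWWNs attach to Fabric A,
# vHBA-B pWWNs to Fabric B.
hosts_a  = {"esxi01": "20:00:00:25:b5:aa:00:01", "esxi02": "20:00:00:25:b5:aa:00:02"}
hosts_b  = {"esxi01": "20:00:00:25:b5:bb:00:01", "esxi02": "20:00:00:25:b5:bb:00:02"}
arrays_a = {"array1-spa": "50:06:01:60:3e:a0:00:01"}
arrays_b = {"array1-spb": "50:06:01:68:3e:a0:00:01"}

plan = build_zones(hosts_a, arrays_a, fabric="A", vsan=10) + \
       build_zones(hosts_b, arrays_b, fabric="B", vsan=20)

for z in plan:
    print(z.fabric, z.vsan, z.name, "->", z.members)

# LUN masking is then applied on the storage array itself, so that each
# initiator sees only the LUNs (data stores) assigned to it.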
In terms of VMDC validation, the focus to date has been on storage as a distributed, pod-based resource. This is based on the premise that it is more efficient, for performance and traffic flow optimization, to locate data store resources as close to the tenant hosts and vApps as possible. In this context, there are two methods of attaching Fibre Channel storage components to the infrastructure:
1. Models that follow the ICS model of attachment via the Nexus 5000 and Nexus 7000, depending upon ICS type.
2. Models that provide for attachment at the UCS Fabric Interconnect.