Cisco Catalyst Blade Switch 3030 for Dell White Paper

Design Guide 
fault tolerance may be achieved with Layer 3 technologies such as Hot Standby Router Protocol 
(HSRP) or Virtual Router Redundancy Protocol (VRRP). These protocols allow the gateways for 
servers or clients to be virtualized across the physical routing devices in the network. This 
virtualization mitigates the effect of a routing device failure on the availability of data center 
services. Load-balancing services deployed in the aggregation layer allow the network to monitor 
server health and application availability. These devices and features combined produce a more 
resilient application environment. 
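To illustrate the gateway virtualization described above, a minimal HSRP configuration on a pair of aggregation layer switches might look like the following sketch. The VLAN number, addresses, and priorities are illustrative assumptions, not values taken from this design; VRRP follows a similar pattern with the `vrrp` command family.

```
! Primary aggregation switch: higher priority, owns the virtual gateway
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
!
! Standby aggregation switch: takes over if the primary fails
interface Vlan10
 ip address 10.1.10.3 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 100
 standby 10 preempt
```

Servers use the virtual address 10.1.10.1 as their default gateway, so a failure of either physical routing device does not require any reconfiguration on the hosts.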
Dual-homing a server to separate access layer switches is another method of achieving a 
higher level of availability in the data center. NIC teaming removes the possibility of a single NIC 
failure isolating the server. It requires the server to have two separate NICs that support teaming 
software. Typically, teaming software detects failures over an external network probe between 
members of the team or by monitoring the local status of each NIC in the team. The combination of 
dual-homed servers and a network load balancer provides an even greater level of availability for 
the server and the applications it supports. 
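On the server side, the dual-homed NIC teaming described above can be sketched with Linux bonding in active-backup mode. This is only one possible teaming implementation; the interface names (eth0, eth1) and the address are assumptions, and actual teaming software varies by operating system and NIC vendor.

```
# Create an active-backup bond that monitors link state every 100 ms
ip link add bond0 type bond mode active-backup miimon 100
# Enslave both NICs, each cabled to a different access layer switch
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 10.1.10.50/24 dev bond0
```

Only one NIC carries traffic at a time; if its link fails, the bond fails over to the other member without changing the server's IP address.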
Data centers are the repository of critical business applications that support the continual operation 
of an enterprise. These applications must be accessible throughout the working day during peak 
times, and some on a 24-hour basis. The infrastructure of the data center, network devices, and 
servers must address these diverse requirements. The network infrastructure provides device and 
link redundancy combined with a deterministic topology design to achieve application availability 
requirements. Servers are typically configured with multiple NICs and dual-homed to the access 
layer switches to provide backup connectivity to the business application.  
High availability is an important design consideration in the data center. The Cisco Catalyst Blade 
Switch 3130 has a number of features and characteristics that contribute to a reliable, highly 
available network. 
High Availability for the Blade Server Switching Infrastructure 
High availability between the Cisco Catalyst Blade Switch 3130s in the blade server enclosure and 
the aggregation layer switches requires link redundancy. Each Cisco Catalyst Blade Switch 3130 
offers multiple ports for uplink connectivity to the external network, allowing redundant paths 
through links to each aggregation layer switch. However, this introduces the possibility of Layer 2 
loops; therefore, a mechanism is required to manage the physical topology. The implementation of 
RSTP helps ensure a fast-converging, predictable Layer 2 domain between the aggregation layer 
and the access switches (the Cisco Catalyst Blade Switch 3130s) when redundant paths are 
present. For customers who want to implement the access layer without spanning tree, the 
CBS3130 supports FlexLinks, which associates a backup interface with each forwarding interface. 
In this way, the customer can maintain a redundant topology without the need for spanning tree. 
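The two alternatives above correspond roughly to the following commands on the blade switch. The interface numbers are assumptions chosen for illustration.

```
! Option 1: run Rapid PVST+ (RSTP) across the redundant uplinks
spanning-tree mode rapid-pvst
!
! Option 2: FlexLinks instead of spanning tree -- pair a forwarding
! uplink with a backup uplink; spanning tree is disabled on the pair
interface GigabitEthernet1/0/25
 switchport backup interface GigabitEthernet1/0/26
```

With FlexLinks, GigabitEthernet1/0/26 stays in standby and begins forwarding only if the primary uplink fails.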
A recommended design is a triangle topology (as shown in Figure 5 earlier), which delivers a highly 
available environment through redundant links and spanning tree. It allows for multiple switch or 
link failures without compromising the availability of the data center applications.  
The access layer uplink EtherChannels support the publicly available subnets in the data center 
and traffic between servers. The server-to-server traffic that uses these uplinks is logically 
segmented through VLANs and may use network services available in the aggregation layer.  This 
path provides intra-enclosure connectivity between the servers for VLANs defined locally on the 
blade enclosure switches. Clustering applications that require Layer 2 communication may use this 
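An uplink EtherChannel carrying the server VLANs, as described above, might be configured as in the following sketch. The interface range, channel number, and VLAN IDs are illustrative assumptions.

```
! Bundle two uplinks into an LACP EtherChannel trunk
interface range GigabitEthernet1/0/25 - 26
 switchport mode trunk
 switchport trunk allowed vlan 10,20
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 10,20
```

The port channel is treated as a single logical link, so the loss of one member reduces bandwidth but does not trigger a topology change.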
 
© 2008 Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information. 
Page 11 of 29