Cisco Catalyst Blade Switch 3030 for Dell White Paper
Design Guide
traffic path, as well as mirrored traffic. Each of these port channels is composed of at least two
Gigabit Ethernet or two 10GE ports.
RPVST+ is recommended as the method for controlling the Layer 2 domain because of its
predictable behavior and fast convergence. A meshed topology combined with RPVST+ allows only
one active link from each blade switch to the root of the spanning tree domain. This design creates
a highly available server farm through controlled traffic paths and the rapid convergence of the
spanning tree. The details of the recommended design are discussed in a later section.
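As a minimal sketch, the RPVST+ behavior described above could be enabled with IOS configuration along these lines (the VLAN IDs are assumed examples, not values from this design):

```
! Enable Rapid PVST+ on each switch in the Layer 2 domain
spanning-tree mode rapid-pvst
!
! On the aggregation switch that should be the spanning tree root
! (VLANs 10 and 20 are hypothetical examples):
spanning-tree vlan 10,20 root primary
```

With the root fixed at the aggregation layer, RPVST+ blocks all but one uplink from each blade switch toward the root, which produces the single active path per switch that the design relies on.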
High Availability for the Blade Servers
Blade enclosures provide high availability to blade servers by multihoming each server to the Cisco
Catalyst Blade Switch 3130s. The two Cisco Catalyst Blade Switch 3130s housed in the
interconnect bays are connected to the blade server over the backplane. Six backplane Gigabit
Ethernet connections are available to every blade server: two for the LAN on Motherboard (LOMs)
and four via mezzanine cards.
Multihoming the server blades allows the use of a NIC teaming driver, which provides another high-availability mechanism for failover and load balancing at the server level. The Broadcom driver supports two modes of teaming:
●  Smart Load Balancing (SLB)
●  Link Aggregation (with IEEE 802.3ad LACP)
Smart Load Balancing is performed on Layer 3 addresses. On a per-flow basis, the Broadcom adapter assigns the MAC address of the particular NIC that the flow's traffic will use; load balancing is not done on a packet-by-packet basis.
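The per-flow selection described above can be illustrated with a short sketch: hash the Layer 3 flow identifiers to a NIC index, so every packet of a flow uses the same NIC. This is a hypothetical illustration of the concept, not the Broadcom driver's actual algorithm.

```python
import hashlib

def pick_nic(src_ip: str, dst_ip: str, num_nics: int) -> int:
    """Hash the L3 flow (source/destination IP) to a NIC index.

    All packets of one flow map to the same NIC, so balancing is
    per flow, never per packet. Hypothetical sketch only.
    """
    flow = f"{src_ip}->{dst_ip}".encode()
    digest = hashlib.md5(flow).digest()
    return int.from_bytes(digest[:4], "big") % num_nics

# The same flow always lands on the same NIC in a 2-NIC team:
assert pick_nic("10.0.0.1", "10.0.0.2", 2) == pick_nic("10.0.0.1", "10.0.0.2", 2)
```

Because the choice is deterministic per flow, packet ordering within a flow is preserved, which is why flow-based (rather than packet-based) balancing is used.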
LACP-based teaming extends this functionality by allowing the team to receive load-balanced traffic from the network, which requires that the switch be able to load balance traffic across the ports connected to the server NIC team. LACP-based load balancing is done on the Layer 2 address. The team of NICs looks like a single, larger NIC to the switch, much as an EtherChannel does between switches, and redundancy is built into the protocol. The Cisco Catalyst Blade Switch 3130 supports the IEEE 802.3ad standard and Gigabit port channels, so servers can operate in active/active configurations: each server team can provide 2 Gbps of Ethernet connectivity to the switching fabric. Failover mechanisms are built into LACP. Note that the pair of CBS 3130s must be in the same stack ring for the server to support LACP connections; in other words, the server must see the same logical switch on both interfaces. Otherwise, SLB mode should be used.
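A server-facing LACP port channel on the blade switch might be configured along these lines (interface and channel-group numbers are assumed examples; the load-balance method reflects the Layer 2 balancing described above):

```
! Hedged sketch: bundle the two server-facing ports into an LACP channel
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode active
!
! Balance across channel members on source/destination MAC address
port-channel load-balance src-dst-mac
```

Setting the channel mode to `active` makes the switch initiate LACP negotiation with the teamed server NICs, so the bundle forms only when both ends agree.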
For more information on NIC teaming, please visit:
Scalability
The capability of the data center to adapt to increased demands without compromising its
availability is a crucial design consideration. The aggregation layer infrastructure and the services it
provides must accommodate future growth in the number of servers or subnets it supports.
When deploying blade servers in the data center, there are two primary factors to consider:
●  Number of physical ports in the aggregation and access layers
●  Number of slots in the aggregation layer switches
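A quick sizing sketch shows how these two factors interact: the aggregation layer must terminate every uplink from every blade switch. The enclosure count and uplink widths below are assumed example values, not figures from this design.

```python
# Hypothetical sizing sketch for aggregation-layer port demand.
enclosures = 8               # blade enclosures (assumed example)
switches_per_enclosure = 2   # two CBS 3130s per enclosure
uplinks_per_switch = 2       # ports in each uplink port channel (assumed)

agg_ports_needed = enclosures * switches_per_enclosure * uplinks_per_switch
print(agg_ports_needed)  # 32 aggregation-layer ports for this example
```

Comparing the resulting port count against the line-card capacity per slot indicates how many aggregation-switch slots the blade deployment will consume, and how much headroom remains for growth.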
© 2008 Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information.
Page 12 of 29