HP Virtual Connect 4Gb Fibre Channel Module for c-Class BladeSystem 409513-B22 Datasheet
Product codes
409513-B22
Executive Summary
HP is revolutionizing the way IT thinks about networking and server management. When combined
with Virtual Connect, the BladeSystem architecture streamlines the typical change processes for
provisioning in the datacenter.
The HP ProLiant BladeSystem Generation 6 servers with Virtual Connect Flex-10 flexible networking
adapters are a well-suited platform for a VMware vSphere infrastructure. These servers include
virtualization-friendly features such as large memory capacity, dense server population, room for
additional mezzanine cards, and 8 to 24 processing cores (with Intel Hyper-Threading Technology enabled). The
following ProLiant BL Servers ship standard with a pair of Virtual Connect Flex-10 network adapters
(NC532i):
BL495 G5/G6
BL460 G6
BL490 G6
BL685 G6
Virtual Connect Flex-10 is the world's first technology to divide and fine-tune 10Gb Ethernet network
bandwidth at the server edge. It carves the capacity of a 10Gb Ethernet connection into four discrete
NIC ports, called FlexNICs, and adds the unique ability to fine-tune each connection to adapt to your
virtual server channels and workloads on-the-fly. The effect of using Flex-10 is a dramatic reduction in
the number of interconnect modules required to uplink outside of the enclosure, while still maintaining
full redundancy across the service console, VMkernel and virtual machine (VM) networks. This
translates to a lower cost infrastructure with fewer management points and cables that can still
achieve a per server increase in bandwidth.
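To make the bandwidth carving described above concrete, the sketch below (a hypothetical helper, not part of any HP tool) checks whether a proposed set of FlexNIC speeds is a legal carve-up of one 10Gb port, assuming the documented Flex-10 constraints: up to four FlexNICs per port, each tunable in 100Mb increments, with the total not exceeding the physical link.

```python
# Hypothetical sketch: validate a Flex-10 style bandwidth split.
# Assumptions (from HP Flex-10 documentation): each 10Gb port is carved
# into up to four FlexNICs, each FlexNIC speed is set in 100 Mb steps,
# and the combined allocation cannot exceed the 10 Gb physical link.

PORT_CAPACITY_MB = 10_000   # 10Gb physical port, expressed in Mb/s
INCREMENT_MB = 100          # FlexNIC speeds adjust in 100 Mb steps
MAX_FLEXNICS = 4            # up to four FlexNICs per 10Gb port

def validate_split(allocations_mb):
    """Return True if the FlexNIC allocations are a legal carve-up."""
    if len(allocations_mb) > MAX_FLEXNICS:
        return False
    # Every allocation must be a positive multiple of the 100 Mb step.
    if any(a < INCREMENT_MB or a % INCREMENT_MB for a in allocations_mb):
        return False
    return sum(allocations_mb) <= PORT_CAPACITY_MB

# Example: service console, VMkernel/vMotion, and two VM networks.
split = [500, 2_000, 4_000, 3_500]      # sums to exactly 10 Gb
print(validate_split(split))            # True
print(validate_split([6_000, 6_000]))   # False: oversubscribes the port
```

The per-FlexNIC speeds are the tunable knob the text refers to: shrinking the service console slice frees bandwidth for VM traffic without adding interconnect modules or uplinks.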
When designing a vSphere Network infrastructure with Virtual Connect Flex-10, there are two
frequent network architectures customers choose. This document describes how to design a highly
available Virtual Connect Flex-10 strategy with:
Virtual Connect Managed VLANs - In this design, we are maximizing the management features of
Virtual Connect, while providing customers with the flexibility to provide “any networking to any
host” within the Virtual Connect domain. Simply put, this design will not over-provision servers,
while keeping the number of uplinks used to a minimum. This helps reduce infrastructure cost and
complexity by trunking the necessary VLANs (IP Subnets) to the Virtual Connect domain, and
minimizing potentially expensive 10Gb uplink ports.
Virtual Connect Pass-through VLANs - This design addresses customer requirements to support a
significant number of VLANs for Virtual Machine traffic. The previous design has a limited
number of VLANs it can support. While providing similar server profile network connection
assignments as the previous design, more uplink ports are required, and VLAN Tunneling must be
enabled within the Virtual Connect domain.
Both designs provide a highly available network architecture, and also take into account enclosure-level
redundancy and vSphere cluster design. By spreading the cluster scheme across both enclosures,
each can provide local HA in case of network and enclosure failure.
Finally, this document will provide key design best practices for vSphere 4 network architecture with
Virtual Connect Flex-10, including:
Local vSwitch design for VMkernel functions
vDS design for Virtual Machine networking
vSwitch and dvPortGroup load balance algorithms