Cisco MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module White Paper
network convergence, regardless of the progress for end-to-end convergence. With the need for
redundancy, servers today have at least two Ethernet network interface controllers (NICs) and two FC host bus adapters (HBAs), and many more in virtualized environments. Reducing these to two converged network adapters (CNAs) and two cables can lead to significant cost savings, as well as faster and simpler server deployment and ongoing management. Secondary benefits of reducing cables can be significant where airflow is impeded by cable bulk: the energy used to drive cool air past those impediments can be reduced for entire racks and pods.
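The scale of the cabling reduction can be illustrated with a quick back-of-the-envelope calculation. All counts below are illustrative assumptions, not figures from this paper:

```python
# Illustrative cable-count comparison for a rack of servers.
# The server, NIC, HBA, and CNA counts are assumptions chosen
# for the example, not measurements from this paper.

def cables_per_rack(servers, nics, hbas, cnas):
    """Return (separate, converged) cable counts for one rack."""
    separate = servers * (nics + hbas)   # dedicated Ethernet + FC links
    converged = servers * cnas           # unified links over CNAs
    return separate, converged

# A rack of 40 servers, each with 2 NICs and 2 HBAs today,
# consolidated onto 2 CNAs with a converged network.
separate, converged = cables_per_rack(servers=40, nics=2, hbas=2, cnas=2)
print(separate, converged)  # 160 vs. 80 cables per rack
```

Under these assumed numbers, convergence halves the cable count per rack; virtualized servers with more than two NICs and two HBAs would see a proportionally larger reduction.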
Ethernet is better integrated than Fibre Channel. Fibre Channel almost always comes as a stand-up card or other specialized adapter. It is not integrated onto motherboards, because only components needed by 100% of a server's intended applications justify the real estate and expense on the motherboard. Ethernet is on 100% of motherboards, and for servers directed at virtualization and other high-performance tasks, 10 Gigabit Ethernet (10 GbE) is on the motherboard. FCoE, as a 10 GbE technology, is becoming increasingly available along with the 10 GbE controller, which dramatically reduces the price of FCoE relative to a specialty card. Similarly, FCoE drivers are becoming embedded in operating systems. The result is simpler support and a lower price for the end user.
FCoE provides better support for server virtualization. One of the more compelling features of server
virtualization is the ability to relocate a running application using virtual machine mobility. Users should
be interested in having the same capabilities available on every system where an application may reside.
“Wiring once” with 10 GbE, with FCoE as an Ethernet technology, makes it possible to support this requirement with minimal incremental infrastructure, and as a simpler data center standard than Fibre Channel technology.
The benefits above need to be balanced against the risks associated with changing the network to a converged
design. Any change in network configuration requires consideration, but the changes associated with network convergence are significant, requiring new topologies, new technologies, and new processes for management. While the benefits do resonate, some of the barriers seen in the market include the following:
Storage and network buyers are conservative. Network teams, especially for storage, are notoriously
conservative, and with good reason: Sustained operation of equipment and environments is critical.
Purchase decision-makers and implementers of these technologies find solutions that work and are
generally resistant to change. While the benefits are not trivial, they are big-picture benefits that will take
time to be realized. Many purchase decision-makers may see risk and high effort in getting to those
benefits — and take a pass.
Separate organization structures between storage and LAN make convergence harder. Most enterprise
organizations have separate teams that manage SANs and LANs, with different people, methodologies,
tools, and, possibly most importantly, budgets. Getting these teams, which historically have not gotten