Flyer for the Cisco Nexus 5010 Switch
Solution Overview
Network infrastructure ready for the cloud!
Challenge
The BMW data centre requires an upgrade in
order to implement a forward-looking, innovative
and efficient IT infrastructure and architecture.
This upgrade should incorporate the
virtualisation of the data centre and the
introduction of a private cloud, as well as
implementing measures to meet the ever
increasing availability and security requirements.
Solution
In the BMW data centre, the network
infrastructure was modernised using Nexus
2000, Nexus 5000 and Nexus 7000. The wiring in
the data centre was also improved to enable
more efficient use of available resources such as
electricity and space. Thanks to Nexus 7000 and
NX-OS, the infrastructure is now equipped for
new technologies such as FCoE and for the
cloud.
Benefits
• Structured wiring for improved use of
resources
• Infrastructure ready for future technologies
such as FCoE
• Less downtime thanks to in-service software
upgrades (ISSU) during live operation
• Modern technology at lower
operating costs than before
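The ISSU benefit refers to NX-OS's ability to upgrade the switch software while the data plane keeps forwarding traffic. A minimal sketch of what such an upgrade looks like on a Nexus switch (the image filenames are illustrative placeholders, not the versions used at BMW):

```
! First verify that the new images can be applied non-disruptively
show install all impact kickstart bootflash:n5000-kickstart.bin system bootflash:n5000-system.bin

! Perform the in-service upgrade; traffic forwarding continues during the update
install all kickstart bootflash:n5000-kickstart.bin system bootflash:n5000-system.bin
```

The `show install all impact` check reports in advance whether the upgrade can run without a reload, which is what makes updates during live operation possible.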
A data centre equipped for the future thanks to Cisco Nexus
Cloud computing is becoming an increasingly important topic in the data centres of large
companies. As such, it seems evident that current IT infrastructure measures should also pave the
way towards enabling private cloud computing. Cisco provided support to the BMW AG data
centre in revising the flexibility and modularisation of their network design in order to prepare the
infrastructure for the cloud. This work also included the virtualisation of the data centre and
implementing measures to meet the increasing availability and security requirements.
Future-proofing played an important role: in addition to optimising the existing platform, the data
centre was prepared for the new FCoE protocol (Fibre Channel over Ethernet), which enables the
transport of Fibre Channel data streams from storage systems over Ethernet.
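On the Nexus 5000 family, enabling FCoE amounts to mapping a VSAN onto a dedicated VLAN and binding a virtual Fibre Channel interface to a converged Ethernet port. A minimal sketch, with VLAN/VSAN numbers and the port chosen for illustration rather than taken from the BMW deployment:

```
feature fcoe                      ! enable the FCoE feature set

vlan 200
  fcoe vsan 200                   ! dedicate VLAN 200 to carrying VSAN 200 traffic

interface vfc10
  bind interface Ethernet1/10     ! virtual Fibre Channel interface rides on this 10GE port
  no shutdown

vsan database
  vsan 200 interface vfc10        ! place the vFC interface into the VSAN
```

With this mapping in place, storage traffic shares the same 10GE wiring as LAN traffic, which is what allows the restructured cabling to serve both networks.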
New structures for the network
The network structures in the data centre are divided into networks for live operation,
administration and backup in the conventional three-layer network design consisting of core,
distribution and access. The servers had five active interfaces in order to physically separate the
infrastructures. The central star topology of the server wiring resulted in long copper patch
runs between the access switches and the servers in the relevant areas of the data centre.
A review of the architecture and the design indicated that a comprehensive replacement of the
hardware was required, as Cisco Catalyst 6509 components, which had since been discontinued,
were still in use at the distribution and access levels. Furthermore, the wiring in the data centre was to
be restructured to make optimum use of the power supply, air conditioning systems and the
available space in the Enterprise Data Centre.
These prerequisites formed the basis of the concrete requirements for the new hardware: it
had to provide 10 gigabits in the infrastructure quickly and be scalable to 40 and 100
gigabits. At the access layer, the new hardware initially had to offer non-blocking 1GE
(Gigabit Ethernet) ports with the option of extension to 10GE. The new components also had to be
cheaper to operate, consume less power and be suitable for in-house
operation.