Cisco Unified Contact Center Enterprise 7.5 SRND

Chapter 12

Bandwidth Provisioning and QoS Considerations
This chapter presents an overview of the Unified CCE network architecture, deployment characteristics
of the network, and provisioning requirements of the Unified CCE network. Essential network
architecture concepts are introduced, including network segments, keep-alive (heartbeat) traffic, flow
categorization, IP-based prioritization and segmentation, and bandwidth and latency requirements.
Provisioning guidelines are presented for network traffic flows between remote components over the
WAN, including recommendations on how to apply proper Quality of Service (QoS) to WAN traffic
flows. For a more detailed description of the Unified CCE architecture and various component
internetworking, see
Cisco Unified CCE has traditionally been deployed over private, point-to-point leased-line network
connections for both its private (Central Controller or Peripheral Gateway, side-to-side) and its public
(Peripheral Gateway to Central Controller) WAN networks. Optimal network performance
characteristics (and route diversity for the fault-tolerant failover mechanisms) are provided to the Unified
CCE application only through dedicated private facilities, redundant IP routers, and appropriate priority
queuing.
Enterprises deploying networks that carry multiple traffic classes naturally prefer to maintain their
existing infrastructure rather than build an additional, dedicated network. Convergent networks
offer both cost and operational efficiency, and such support is a key aspect of Cisco Powered Networks.
Beginning with Cisco Unified CCE Release 7.0, Cisco supports Unified CCE deployments in a
convergent QoS-aware public network as well as in a convergent QoS-aware private network
environment, provided that the latency and bandwidth requirements inherent in the real-time nature of
this product are satisfied. This chapter presents QoS marking, queuing, and shaping recommendations
for both the Unified CCE public and private network traffic.
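To illustrate what such marking and queuing can look like at a WAN edge router, the following Cisco IOS Modular QoS CLI (MQC) fragment is a sketch only: the class name, access-list name, DSCP value, interface, and bandwidth percentage are placeholders for illustration, not recommendations from this guide. Consult the marking and provisioning guidance later in this chapter for the actual values to use.

```
! Sketch only: all names and values below are placeholders.
class-map match-any CCE-Public-High
 match access-group name CCE-PUBLIC-HIGH   ! ACL matching high-priority Unified CCE flows
policy-map CCE-WAN-EDGE
 class CCE-Public-High
  set dscp af31            ! mark the high-priority class
  bandwidth percent 20     ! reserve bandwidth for this class (CBWFQ)
 class class-default
  fair-queue               ! default treatment for remaining traffic
interface Serial0/0
 service-policy output CCE-WAN-EDGE
```

A shaping command (for example, under a parent policy) would typically be added when the physical interface rate exceeds the contracted WAN rate.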
Historically, two QoS models have been used: Integrated Services (IntServ) and Differentiated Services
(DiffServ). The IntServ model relies on the Resource Reservation Protocol (RSVP) to signal and reserve
the desired QoS for each flow in the network. Scalability becomes an issue with IntServ because state
information of thousands of reservations has to be maintained at every router along the path. DiffServ,
in contrast, categorizes traffic into different classes, and specific forwarding treatments are then applied
to the traffic class at each network node. As a coarse-grained, scalable, and end-to-end QoS solution,
DiffServ is more widely used and accepted. Unified CCE applications are not aware of RSVP, and the
QoS considerations in this chapter are based on DiffServ.
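Because DiffServ classification is driven by the DSCP codepoint carried in each packet's IP header rather than by per-flow signaling, an endpoint participates simply by marking the traffic it sends. The following minimal Python sketch (not part of Unified CCE; the AF31 value is illustrative only) shows this marking on a UDP socket:

```python
# Minimal sketch: marking traffic with a DSCP value at the endpoint,
# the DiffServ alternative to per-flow RSVP reservations.
# AF31 here is illustrative, not a Unified CCE recommendation.
import socket

AF31 = 26           # DSCP codepoint for Assured Forwarding class 31
TOS = AF31 << 2     # DSCP occupies the upper 6 bits of the IP ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)

# Datagrams sent on this socket now carry DSCP AF31 in the IP header,
# so DiffServ-aware routers can classify them without keeping per-flow
# reservation state.
sock.close()
```

The key contrast with IntServ is visible here: no reservation is signaled end to end; the router's per-class forwarding treatment is applied based solely on the codepoint.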
Adequate bandwidth provisioning and implementation of QoS are critical components in the success of
Unified CCE deployments. Bandwidth guidelines and examples are provided in this chapter to help with
provisioning the required bandwidth.
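The arithmetic behind such provisioning estimates is simple: multiply a message rate by a per-message size, convert to bits per second, and pad for overhead. The sketch below is a hypothetical back-of-the-envelope helper; the call rate, per-call byte count, and overhead factor are placeholders, not figures from this guide. Substitute the values from the bandwidth guidance in this chapter for a real estimate.

```python
# Hypothetical provisioning sketch; all numeric inputs are placeholders.
def wan_bandwidth_kbps(calls_per_second, bytes_per_call, overhead_factor=1.2):
    """Return an estimated WAN bandwidth in kbps for one traffic flow.

    overhead_factor pads the raw payload rate for framing and bursts.
    """
    bits_per_second = calls_per_second * bytes_per_call * 8 * overhead_factor
    return bits_per_second / 1000

# Example: 10 calls/s at 1,000 bytes of signaling per call (placeholders)
print(round(wan_bandwidth_kbps(10, 1000), 1))  # 96.0
```

Estimates of this kind should be computed per traffic class and per WAN link, since QoS queuing allocates bandwidth to each class separately.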