Cisco Unified Contact Center Enterprise 8.x SRND
Chapter 11
Bandwidth Provisioning and QoS Considerations
This chapter presents an overview of the Unified CCE network architecture, deployment characteristics of
the network, and provisioning requirements of the Unified CCE network. Essential network architecture
concepts are introduced, including network segments, keep-alive (heartbeat) traffic, flow categorization, IP-
based prioritization and segmentation, and bandwidth and latency requirements. Provisioning guidelines are
presented for network traffic flows between remote components over the WAN, including
recommendations on how to apply proper Quality of Service (QoS) to WAN traffic flows. For a more
detailed description of the Unified CCE architecture and various component internetworking, see the
Cisco Unified CCE has traditionally been deployed using private, point-to-point leased-line network
connections for both its private (Central Controller or Peripheral Gateway, side-to-side) as well as public
(Peripheral Gateway to Central Controller) WAN network structure. Optimal network performance
characteristics (and route diversity for the fault-tolerant failover mechanisms) are provided to the Unified
CCE application only through dedicated private facilities, redundant IP routers, and appropriate priority
queuing.
Enterprises that deploy networks carrying multiple traffic classes naturally prefer to maintain their
existing infrastructure rather than add an incremental, dedicated network. Convergent networks offer
both cost and operational efficiency, and such support is a key aspect of Cisco Powered Networks.
Provided that the latency and bandwidth requirements inherent in the real-time nature of this
product are satisfied, Cisco supports Unified CCE deployments in a convergent QoS-aware public network
as well as in a convergent QoS-aware private network environment. This chapter presents QoS marking,
queuing, and shaping recommendations for both the Unified CCE public and private network traffic.
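QoS marking is normally applied by classification policies on the routers themselves, but an endpoint can also set the Differentiated Services Code Point (DSCP) directly in the IP header so that DiffServ-aware routers along the path classify and queue its traffic. The sketch below is purely illustrative and is not part of the Unified CCE product; the specific DSCP-to-traffic-class assignments shown (EF and AF31) are common conventions, and which values apply to particular Unified CCE flows is deployment-specific.

```python
import socket

# Standard DSCP values (RFC 2474/RFC 4594). Mapping these to specific
# Unified CCE flows is an assumption made for illustration only.
DSCP_EF = 46    # Expedited Forwarding, typically voice bearer traffic
DSCP_AF31 = 26  # Assured Forwarding 31, typically call-signaling traffic

def dscp_to_tos(dscp: int) -> int:
    """The 6-bit DSCP occupies the upper six bits of the 8-bit TOS byte."""
    return dscp << 2

# Mark all datagrams sent on this socket with DSCP AF31.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(DSCP_AF31))
```

Marking at the endpoint only has effect if the routers trust the received DSCP; many networks re-mark traffic at the access edge, so router classification policy remains authoritative.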
Historically, two QoS models have been used: Integrated Services (IntServ) and Differentiated Services
(DiffServ). The IntServ model relies on the Resource Reservation Protocol (RSVP) to signal and reserve
the desired QoS for each flow in the network. Scalability becomes an issue with IntServ because state
information of thousands of reservations has to be maintained at every router along the path. DiffServ, in
contrast, categorizes traffic into different classes, and specific forwarding treatments are then applied to the
traffic class at each network node. As a coarse-grained, scalable, and end-to-end QoS solution, DiffServ is