Cisco Unified Contact Center Enterprise 7.5 SRND
Chapter 12 Bandwidth Provisioning and QoS Considerations
10,000 bytes (80 kbits) of data per second to be communicated to the Central Controller. The majority of
this data is sent on the low-priority path. The ratio of low to high path bandwidth varies with the
characteristics of the deployment (most significantly, the degree to which post-routing is performed), but
generally it is roughly 10% to 30%. Each post-route request generates between 200 and 300 additional
bytes of data on the high-priority path. Translation routes incur per-call data flowing in the opposite
direction (Central Controller to PG), and the size of this data is fully dependent upon the amount of call
context presented to the desktop.
A site that has an ACD as well as a VRU has two peripherals, and the bandwidth requirement
calculations should take both peripherals into account. As an example, a site that has 4 peripherals, each
taking 10 calls per second, should generally be configured to have 320 kbps of bandwidth. The
1,000 bytes per call is a rule of thumb, but the actual behavior should be monitored once the system is
operational to ensure that enough bandwidth exists. (Unified ICM meters data transmission statistics at
both the Central Controller and PG sides of each path.)
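The rule of thumb above can be written out as a small calculation. This is a hedged sketch only: the function and constant names are illustrative and are not part of any Cisco sizing tool, and the post-route figure uses the midpoint of the 200-to-300-byte range quoted earlier.

```python
# Rule-of-thumb PG-to-Central Controller bandwidth estimate for pre-5.0
# Unified ICM, per the figures in the text. Illustrative names only.

BYTES_PER_CALL = 1_000     # rule of thumb from the text
POST_ROUTE_BYTES = 250     # midpoint of the 200-300 byte high-priority range

def pg_link_kbps(peripherals: int, calls_per_sec: float,
                 post_routes_per_sec: float = 0.0) -> float:
    """Estimated PG-to-CC bandwidth in kbps (1 byte = 8 bits)."""
    base = peripherals * calls_per_sec * BYTES_PER_CALL
    post = post_routes_per_sec * POST_ROUTE_BYTES
    return (base + post) * 8 / 1_000

# The example from the text: 4 peripherals at 10 calls/s each -> 320 kbps.
print(pg_link_kbps(peripherals=4, calls_per_sec=10))  # 320.0
```

As in the text, this is only a starting estimate; the metered transmission statistics on both sides of each path should confirm actual usage once the system is operational.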
Again, the rule of thumb and example described here apply to Unified ICM releases prior to Release 5.0,
and they are stated here for reference purposes only. Bandwidth calculators and sizing formulas are
supplied for Unified ICM 5.0 and later releases, and they can project bandwidth requirements far more
accurately. See those calculators and sizing formulas for details.
As with bandwidth, specific latency requirements must be guaranteed in order for the Unified ICM to
function as designed. The side-to-side private network of duplexed Central Controller and PG nodes has
a maximum one-way latency of 100 ms (50 ms preferred). The PG-to-CC path has a maximum one-way
latency of 200 ms in order to perform as designed. Meeting these latency requirements is
particularly important in an environment using Unified ICM post-routing and/or translation routes.
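The latency budgets above can be checked mechanically against measured one-way delays. This is a hedged sketch: the thresholds come from the text, but the function and its messages are illustrative, not part of any Unified ICM tooling.

```python
# Validate measured one-way latencies against the limits stated in the text.
PRIVATE_MAX_MS = 100    # side-to-side private network, one-way maximum
PRIVATE_PREF_MS = 50    # preferred value for the private network
PG_TO_CC_MAX_MS = 200   # PG-to-Central Controller path, one-way maximum

def check_latency(private_ms: float, pg_to_cc_ms: float) -> list:
    """Return a list of latency-budget violations (empty if within limits)."""
    issues = []
    if private_ms > PRIVATE_MAX_MS:
        issues.append("private path %g ms exceeds %d ms max"
                      % (private_ms, PRIVATE_MAX_MS))
    elif private_ms > PRIVATE_PREF_MS:
        issues.append("private path %g ms above preferred %d ms"
                      % (private_ms, PRIVATE_PREF_MS))
    if pg_to_cc_ms > PG_TO_CC_MAX_MS:
        issues.append("PG-to-CC path %g ms exceeds %d ms max"
                      % (pg_to_cc_ms, PG_TO_CC_MAX_MS))
    return issues

print(check_latency(private_ms=40, pg_to_cc_ms=150))  # [] (within budget)
```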
As discussed previously, Unified ICM bandwidth and latency design is fully dependent upon an
underlying IP prioritization scheme. Without proper prioritization in place, WAN connections will fail.
The Cisco Unified ICM support team has custom tools (for example, Client/Server) that can be used to
demonstrate proper prioritization and to perform some level of bandwidth utilization modeling for
deployment certification.
Depending upon the final network design, an IP queuing strategy will be required in a shared network
environment to achieve Unified ICM traffic prioritization concurrent with other non-DNP traffic flows.
This queuing strategy is fully dependent upon traffic profiles and bandwidth availability, and success in
a shared network cannot be guaranteed unless the stringent bandwidth, latency, and prioritization
requirements of the product are met.
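Queuing strategies of this kind typically classify traffic on per-packet markings such as DSCP. As a hedged illustration of application-side marking (as opposed to edge classification), the sketch below sets the DSCP bits on a UDP socket; the value AF31 (decimal 26) is purely an example and is not asserted to be what Unified ICM uses.

```python
# Application-side DSCP marking via the standard IP_TOS socket option.
# The DSCP value used here is illustrative only.
import socket

def marked_udp_socket(dscp: int = 26) -> socket.socket:
    """UDP socket whose outgoing packets carry the given DSCP code point."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The IP TOS byte carries DSCP in its upper six bits.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock
```

Routers and switches that queue on DSCP can then place such traffic into the matching priority class, which is the mechanism a shared-network queuing strategy depends on.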
Quality of Service
This section covers the planning and configuration issues to consider when moving to a Unified ICM
QoS solution.
Where to Mark Traffic
In planning QoS, a question often arises about whether to mark traffic in Unified ICM or at the network
edge. Each option has its pros and cons. Marking traffic in Unified ICM eliminates the need for access
lists to classify traffic in IP routers and switches. Additionally, when deployed with Microsoft Windows
Packet Scheduler, Unified ICM supports traffic shaping and 802.1p marking. The traffic shaping
functionality mitigates the bursty nature of Unified ICM transmissions by smoothing transmission peaks