Cisco Unified Contact Center Enterprise 7.5 SRND
Chapter 3 Design Considerations for High Availability
Peripheral Gateway Design Considerations
• There is no impact to the agents, calls in progress, or calls in queue, because the agents stay connected to their already established CTI OS Server process connection. The system can continue to function normally; however, the PGs will run in simplex mode until the private network link is restored.
If the two private network connections are combined into one link, the failures follow the same path;
however, the system runs in simplex mode on both the Call Router and the Peripheral Gateway. If a
second failure were to occur at that point, the system could lose some or all of the call routing and ACD
functionality.
Peripheral Gateway Failover Enhancement
Prior to the 7.5(10) Maintenance Release, duplexed Peripheral Gateways (PGs) failed over to side A
when the private network was lost between PG side A and side B. For example:
• PG side B is active (has active PIMs, PGAgent)
• PG side A and side B lose the private network
• PG side B unconditionally fails over to side A
This failover caused unnecessary disruption to the contact center, especially to Unified CCE, because the ACD restarts and all agents must sign back in, losing reporting and call control. This failure condition is more likely to occur when the PGs are split over the WAN.
This enhancement, introduced in the 7.5(10) Maintenance Release, uses a side-weight approach that is similar, but not identical, to the device majority logic used by the Router component.
In the case of PGs, the OPC process uses a weighted value based on the number of active components
to decide the weight of both sides. This weight value determines which PG side takes over during a PG
private link outage, so that the disruption to the contact center is minimized.
The enhanced OPC process dynamically calculates and maintains the weights for both sides. OPC also keeps the MDS processes on both sides updated at all times with the cumulative device weights for both sides. This allows MDS on each side to take appropriate action in case of a private link failure.
The PG predetermines the device weights of the individual processes on the basis of the impact to the contact center in case of a failover. For example, Agent PIMs have a higher device weight than VRU PIMs and the CTI Server, because if the Agent PIM fails over, the system takes longer to recover and the disruption to the contact center is greater. The device weights for individual processes are not configurable.
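The side-weight decision described above can be sketched as follows. This is an illustrative model only: the component names and weight values are assumptions chosen to reflect the stated ordering (Agent PIM heaviest, then VRU PIM and CTI Server), since the actual weights used by OPC are internal and not configurable.

```python
# Illustrative sketch of the PG side-weight failover decision.
# Weight values below are assumptions, not the real OPC internals.
DEVICE_WEIGHTS = {
    "agent_pim": 10,   # highest impact if it fails over
    "vru_pim": 5,
    "cti_server": 3,
}

def side_weight(active_components):
    """Cumulative weight of one PG side from its active components."""
    return sum(DEVICE_WEIGHTS.get(c, 0) for c in active_components)

def surviving_side(side_a_components, side_b_components):
    """On a private-link outage, the side with the greater cumulative
    weight stays active, minimizing contact center disruption.
    A tie goes to side A, mirroring the pre-7.5(10) default."""
    if side_weight(side_a_components) >= side_weight(side_b_components):
        return "A"
    return "B"
```

With this model, if side B holds the active Agent PIM and VRU PIM while side A holds only the CTI Server, `surviving_side(["cti_server"], ["agent_pim", "vru_pim"])` returns `"B"`, so side B stays active instead of unconditionally failing over to side A as in earlier releases.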
Scenario 2: Visible Network Failure
The visible network in this design model is the network path between the data center locations where the
main system components (Unified CM subscribers, Peripheral Gateways, Unified IP IVR/Unified CVP
components, and so forth) are located. This network is used to carry all the voice traffic (RTP stream and
call control signaling), Unified ICM CTI (call control signaling) traffic, as well as all typical data
network traffic between the sites. In order to meet the requirements of Unified CM clustering over the
WAN, this link must be highly available with very low latency and sufficient bandwidth. This link is
critical to the Unified CCE design because it is part of the fault-tolerant design of the system, and it must
be highly resilient as well:
• The highly available (HA) WAN between the central sites must be fully redundant with no single point of failure. (For information regarding site-to-site redundancy options, refer to the WAN infrastructure and QoS design guides.) In case of partial failure of the highly available WAN, the redundant link must be capable of handling the full central-site load with all QoS parameters. For more information, see the related section of this guide.