Cisco Headend System Release 2.7 Installation Guide
Appendix E
Perform a DNCS Upgrade in a Disaster Recovery Enabled Network
168
4023737 Rev C
Process Overview
The following provides an overview of the tasks completed as part of the Disaster
Recovery upgrade process.
1 Perform a Disaster Recovery Full Sync. See Perform a Disaster Recovery Full
Sync (on page 172).
2 Place all Disaster Recovery jobs on hold. See Place Disaster Recovery Jobs on
Hold (on page 175).
3 Upgrade the Standby DNCS.
4 Re-install the Disaster Recovery triggers and tables. See Install Disaster
Recovery Triggers, Stored Procedures, and Tables (on page 176).
Note: This step can be completed immediately following the DNCS database
conversion step, bldDncsDb, of the DNCS upgrade process.
5 Log in to the Active monitoring computer (MC) on the Disaster Recovery
platform via command-line.
6 On the Active MC, type:
a cd /export/home/dradmin/dr/app/ui/webroot/reg/engine
b ./test_buildRoutes.php
Note: This step sets up all of the necessary network routes on the Standby DNCS.
The routes are configured to send:
- DNCS-generated network traffic for the emulated QAM modulators and Netcrypts to the Standby MC
- Local BFS Data QAM, Test QPSK modulator, and Test DHCT network traffic to the local standby DBDS isolation network switch
- Production QPSK modulator and DHCT network traffic to the Disaster Recovery bit-bucket (the default bit-bucket address is 192.168.1.4)
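The route changes made by test_buildRoutes.php can be spot-checked from the Standby DNCS before proceeding. The sketch below is a minimal, illustrative example only: the sample routing-table lines, subnet values, and the check_route helper are assumptions, not part of the DNCS tooling. On a live system you would inspect the output of netstat -rn instead of the embedded sample.

```shell
#!/bin/sh
# Spot-check that production QPSK/DHCT subnets route to the
# Disaster Recovery bit-bucket (default address 192.168.1.4).
# The routing table below is a HYPOTHETICAL sample standing in
# for `netstat -rn` output; subnets are illustrative only.

BITBUCKET="192.168.1.4"

ROUTES="10.10.16.0      192.168.1.4     UG    1  0
10.10.32.0      192.168.1.4     UG    1  0
172.16.4.0      172.16.4.1      UG    1  0"

# check_route SUBNET - report whether the subnet's gateway
# is the bit-bucket address.
check_route() {
    gw=`echo "$ROUTES" | awk -v net="$1" '$1 == net { print $2 }'`
    if [ "$gw" = "$BITBUCKET" ]; then
        echo "OK: $1 -> $gw"
    else
        echo "WARN: $1 -> ${gw:-no route}"
    fi
}

check_route 10.10.16.0
check_route 172.16.4.0
```

A production subnet that does not resolve to the bit-bucket gateway at this point would be a reason to re-run test_buildRoutes.php before continuing.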
7 Run a DNCS Doctor report and analyze it for issues or anomalies. The production
QPSK modulators and their respective RF subnets will not be reachable and will
be logged as failures in the Doctor PING report, because QPSK modulator and
DHCT traffic is redirected into the Disaster Recovery bit-bucket.
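When analyzing the Doctor PING results, it can help to separate the expected failures (production QPSK subnets whose traffic is now routed to the bit-bucket) from failures that need investigation. The sketch below is a hypothetical filter: the one-host-per-line FAIL format, the subnet prefixes, and the classify helper are all assumptions and do not reflect the actual Doctor report layout.

```shell
#!/bin/sh
# Classify PING failures as expected (production QPSK subnets
# redirected to the DR bit-bucket) or needing investigation.
# NOTE: report format and subnet values are HYPOTHETICAL.

# Production QPSK RF subnets expected to fail (illustrative).
EXPECTED_PREFIXES="10.10.16. 10.10.32."

# Sample "host FAIL" lines standing in for the PING section.
PING_FAILS="10.10.16.1 FAIL
10.10.32.1 FAIL
172.16.4.2 FAIL"

classify() {
    host=$1
    for p in $EXPECTED_PREFIXES; do
        case $host in
            "$p"*) echo "expected: $host"; return ;;
        esac
    done
    echo "INVESTIGATE: $host"
}

echo "$PING_FAILS" | while read host status; do
    [ "$status" = "FAIL" ] && classify "$host"
done
```

Any host flagged for investigation here would be something other than the known QPSK/DHCT redirection and should be resolved before moving on to step 8.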
8 Verify via the Standby DBDS System Test Hub that a DHCT can boot and can
receive advanced services (VOD, IPG, xOD, SDV, etc.). If you have SARA,
Aptive/Pioneer, and MDN/ODN DHCTs, boot and verify each of them. This is the
time to troubleshoot any issues discovered on the Standby DBDS system.