Configure Data Guard from Physical Standby database to OCI Base Database
Okay, let's migrate our database to the cloud, or to a new server in our network, or simply increase our high availability. Regardless of your reason for configuring Data Guard, there is a demanding process your DBA must go through.
Data Guard instantiation is the process of creating a standby database from a primary database in Oracle Database. During instantiation, data from the primary database is transferred to the standby database to synchronize their contents. This step is necessary when setting up a standby database, but it can increase the workload on the primary database due to network overhead, I/O activity, additional CPU usage, and so on.
Well, you may say this is only a temporary process that occurs during the initial setup. Once the standby database is fully synchronized, the ongoing workload stabilizes, and the primary database resumes normal operation with minimal additional overhead from Data Guard. Yes, that is right.
However, I'd argue this performance degradation is unnecessary and can be avoided entirely by using ZDM to perform the Data Guard instantiation from an active standby database.

By using ZDM to instantiate the new standby from an active standby database, you can significantly reduce the workload impact on the primary database. Since the active standby is already in sync with the primary, the data transfer for instantiation is served from the standby instead of the primary. This approach allows a smoother and more efficient Data Guard setup while minimizing downtime and workload impact.
Enable Target Database Instantiation from the Standby
To request instantiation of the target database from an existing standby, set the following parameters in the physical migration response file. We will set these parameters in our example rsp file.
ZDM_USE_EXISTING_STANDBY = TRUE (default FALSE)
ZDM_STANDBY_DB_CONNECT_STRING=connection string for existing standby
Check DG Broker status
We can verify the current Data Guard settings between the 'zdmprod' and 'zdmstby' servers.
[oracle@zdmstby ~]$ dgmgrl /
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Mon Jun 26 09:22:50 2023
Version 19.19.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected to "zdm_stby"
Connected as SYSDG.
DGMGRL> show configuration
Configuration - broker_conf
Protection Mode: MaxPerformance
Members:
zdm_prod - Primary database
zdm_stby - Physical standby database
Fast-Start Failover: Disabled
Configuration Status:
SUCCESS (status updated 58 seconds ago)
DGMGRL> exit
Check source standby listener
[oracle@zdmstby ~]$ lsnrctl status
LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 26-JUN-2023 09:23:01
Copyright (c) 1991, 2023, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date 26-JUN-2023 06:59:46
Uptime 0 days 2 hr. 23 min. 15 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/19.0.0.0/grid/network/admin/listener.ora
Listener Log File /u01/app/grid/diag/tnslsnr/zdmstby/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.0.239)(PORT=1521)))
Services Summary...
Service "+APX" has 1 instance(s).
Instance "+APX1", status READY, has 1 handler(s) for this service...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_RECO" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "zdm_CFG.onprem.example.com" has 1 instance(s).
Instance "zdm", status READY, has 2 handler(s) for this service...
Service "zdm_pdb1.onprem.example.com" has 1 instance(s).
Instance "zdm", status READY, has 2 handler(s) for this service...
Service "zdm_stby.onprem.example.com" has 1 instance(s).
Instance "zdm", status READY, has 2 handler(s) for this service...
Service "zdm_stbyXDB.onprem.example.com" has 1 instance(s).
Instance "zdm", status READY, has 1 handler(s) for this service...
The command completed successfully
We will use the 'zdm_CFG.onprem.example.com' service name as our connection string to start the instantiation from the standby. This migration option is only available when you use direct data transfer with RMAN restore from service.
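The connect string uses EZConnect format (host:port/service_name). As a quick sanity check before putting it in the response file, a small shell sketch like the following can verify that a string is well-formed (the host and service name below are just this article's example values):

```shell
#!/bin/sh
# Sanity-check an EZConnect string of the form host:port/service_name.
check_ezconnect() {
  case "$1" in
    *:*/*) ;;                                  # must contain ":" and then "/"
    *) echo "malformed: $1"; return 1 ;;
  esac
  port=${1#*:}                                 # strip "host:"
  port=${port%%/*}                             # strip "/service_name"
  case "$port" in
    ''|*[!0-9]*) echo "bad port in: $1"; return 1 ;;
  esac
  echo "ok: $1"
}

check_ezconnect "zdmstby:1521/zdm_CFG.onprem.example.com"
```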
Let's create the response file for the migration task.
MIGRATION_METHOD=ONLINE_PHYSICAL <---Online, no downtime
DATA_TRANSFER_MEDIUM=DIRECT
ZDM_RMAN_DIRECT_METHOD=RESTORE_FROM_SERVICE
PLATFORM_TYPE=VMDB
TGT_DB_UNIQUE_NAME=zdm_trg19 <--- Target Database Name
ZDM_USE_DG_BROKER=TRUE
ZDM_USE_EXISTING_STANDBY=TRUE <--- To enable instantiation from standby
ZDM_STANDBY_DB_NAME=zdm_stby <--- Standby Database Name
ZDM_STANDBY_DB_CONNECT_STRING=zdmstby:1521/zdm_CFG.onprem.example.com
I saved it as from_standby.rsp on the ZDM server, and now we are ready to test it.
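If you script your setup, the response file above can be generated and sanity-checked in one go. The sketch below writes the same parameters shown above (the path is a stand-in; the article uses /home/zdmuser/pilot/from_standby.rsp) and confirms the two parameters that enable instantiation from the standby:

```shell
#!/bin/sh
# Write the migration response file with the values from this walkthrough.
RSP=/tmp/from_standby.rsp        # stand-in path for this sketch
cat > "$RSP" <<'EOF'
MIGRATION_METHOD=ONLINE_PHYSICAL
DATA_TRANSFER_MEDIUM=DIRECT
ZDM_RMAN_DIRECT_METHOD=RESTORE_FROM_SERVICE
PLATFORM_TYPE=VMDB
TGT_DB_UNIQUE_NAME=zdm_trg19
ZDM_USE_DG_BROKER=TRUE
ZDM_USE_EXISTING_STANDBY=TRUE
ZDM_STANDBY_DB_NAME=zdm_stby
ZDM_STANDBY_DB_CONNECT_STRING=zdmstby:1521/zdm_CFG.onprem.example.com
EOF

# The two parameters that request instantiation from the existing standby.
grep -q '^ZDM_USE_EXISTING_STANDBY=TRUE$' "$RSP" && \
grep -q '^ZDM_STANDBY_DB_CONNECT_STRING=' "$RSP" && \
echo "standby instantiation parameters set"
```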
Evaluation Step
[zdmuser@zdm21 pilot]$ $ZDM_HOME/bin/zdmcli migrate database \
-rsp /home/zdmuser/pilot/from_standby.rsp -sourcenode zdmprod -sourcedb zdm_prod \
-srcauth zdmauth -srcarg1 user:opc -srcarg2 identity_file:/home/zdmuser/.ssh/zdm -srcarg3 sudo_location:/usr/bin/sudo -targetnode zdm19 \
-tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/zdm -tgtarg3 sudo_location:/usr/bin/sudo \
-sourcesyswallet /home/zdmuser/pilot/pilot_sys -tdekeystorewallet /home/zdmuser/pilot/pilot_tde -eval
zdm21.onprem.example.com: Audit ID: 3023
zdm21: 2023-06-26T10:15:13.789Z : Processing response file ...
Operation "zdmcli migrate database" scheduled with the job ID "330".
Check the status and logfile
[zdmuser@zdm21 pilot]$ $ZDM_HOME/bin/zdmcli query job -jobid 330
zdm21.onprem.example.com: Audit ID: 3027
Job ID: 330
User: zdmuser
Client: zdm21
Job Type: "EVAL"
Scheduled job command: "zdmcli migrate database -rsp /home/zdmuser/pilot/from_standby.rsp -sourcenode zdmprod -sourcedb zdm_prod -srcauth zdmauth -srcarg1 user:opc -srcarg2 identity_file:/home/zdmuser/.ssh/zdm -srcarg3 sudo_location:/usr/bin/sudo -targetnode zdm19 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/zdm -tgtarg3 sudo_location:/usr/bin/sudo -sourcesyswallet /home/zdmuser/pilot/pilot_sys -tdekeystorewallet /home/zdmuser/pilot/pilot_tde -eval"
Scheduled job execution start time: 2023-06-26T10:15:13Z. Equivalent local time: 2023-06-26 10:15:13
Current status: SUCCEEDED
Result file path: "/home/zdmuser/zdmbase/chkbase/scheduled/job-330-2023-06-26-10:15:37.log"
Metrics file path: "/home/zdmuser/zdmbase/chkbase/scheduled/job-330-2023-06-26-10:15:37.json"
Job execution start time: 2023-06-26 10:15:37
Job execution end time: 2023-06-26 10:21:47
Job execution elapsed time: 3 minutes 25 seconds
ZDM_GET_SRC_INFO ........... PRECHECK_PASSED
ZDM_GET_TGT_INFO ........... PRECHECK_PASSED
ZDM_PRECHECKS_SRC .......... PRECHECK_PASSED
ZDM_PRECHECKS_TGT .......... PRECHECK_PASSED
ZDM_SETUP_SRC .............. PRECHECK_PASSED
ZDM_SETUP_TGT .............. PRECHECK_PASSED
ZDM_PREUSERACTIONS ......... PRECHECK_PASSED
ZDM_PREUSERACTIONS_TGT ..... PRECHECK_PASSED
ZDM_VALIDATE_SRC ........... PRECHECK_PASSED
ZDM_VALIDATE_TGT ........... PRECHECK_PASSED
ZDM_POSTUSERACTIONS ........ PRECHECK_PASSED
ZDM_POSTUSERACTIONS_TGT .... PRECHECK_PASSED
ZDM_CLEANUP_SRC ............ PRECHECK_PASSED
ZDM_CLEANUP_TGT ............ PRECHECK_PASSED
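With this many phases in the output, a small grep sketch can confirm that nothing failed. It assumes you saved the `zdmcli query job` output to a file; the sample file created below just reuses two lines from the output above to keep the sketch self-contained:

```shell
#!/bin/sh
# Report whether every ZDM phase line in a saved query-job output
# reads PRECHECK_PASSED.
LOG=/tmp/job330.txt              # stand-in for the saved query output
printf '%s\n' \
  'ZDM_GET_SRC_INFO ........... PRECHECK_PASSED' \
  'ZDM_VALIDATE_TGT ........... PRECHECK_PASSED' > "$LOG"

if grep '^ZDM_' "$LOG" | grep -qv 'PRECHECK_PASSED$'; then
  echo "some phases did not pass"
else
  echo "all phases passed"
fi
```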
The ZDM evaluation step validates the readiness of the active standby database and the new target database for Data Guard instantiation. These checks ensure that the current standby database is in sync with the primary database and meets the necessary requirements.
Migration Step
$ZDM_HOME/bin/zdmcli migrate database \
-rsp /home/zdmuser/pilot/from_standby.rsp \
-sourcenode zdmprod -sourcedb zdm_prod -targetnode zdm19 \
-srcauth zdmauth -srcarg1 user:opc -srcarg2 identity_file:/home/zdmuser/.ssh/zdm -srcarg3 sudo_location:/usr/bin/sudo \
-tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/zdm -tgtarg3 sudo_location:/usr/bin/sudo \
-sourcesyswallet /home/zdmuser/pilot/pilot_sys -tdekeystorewallet /home/zdmuser/pilot/pilot_tde \
-pauseafter ZDM_CONFIGURE_DG_SRC
zdm21.onprem.example.com: Audit ID: 3028
zdm21: 2023-06-26T10:29:01.414Z : Processing response file ...
Operation "zdmcli migrate database" scheduled with the job ID "331".
There are some important things to observe from the log file:
- You must provide the correct TDE wallet password.
zdmprod: 2023-06-26T10:30:48.940Z : Validating Oracle TDE setup
zdmprod: 2023-06-26T10:30:56.847Z : Validating Oracle Password file
- The source database must be in the PRIMARY role. If you specify a physical standby database as the source, ZDM will fail.
zdmprod: 2023-06-26T10:31:03.452Z : Validating database zdm_prod role is PRIMARY...
zdmprod: 2023-06-26T10:31:03.753Z : Source environment validated successfully
- You must have SQL*Net connectivity between the source and target servers.
zdm19: 2023-06-26T10:31:18.992Z : Verifying SQL*Net connectivity to source standby database ...
- ZDM copies your TDE wallet to the target database.
zdm21: 2023-06-26T10:32:35.955Z : Setting up Oracle Transparent Data Encryption (TDE) keystore on the target node zdm19 ...
zdm19: 2023-06-26T10:32:46.740Z : target environment Oracle Transparent Data Encryption (TDE) set up successfully
- ZDM drops the current target database and re-creates it with the same name but a different SPFILE. This stage also verifies that your wallet password matches.
zdm19: 2023-06-26T10:32:58.877Z : Dropping database zdm_trg19 ...
zdm19: 2023-06-26T10:34:34.742Z : database zdm_trg19 dropped successfully
zdm19: 2023-06-26T10:35:17.496Z : Target database "zdm_trg19" credentials staged successfully on node "zdm19"
zdm19: 2023-06-26T10:35:29.384Z : Registering database "zdm_trg19" as a cluster resource...
zdm19: 2023-06-26T10:35:30.586Z : Restoring SPFILE ...
zdm19: 2023-06-26T10:36:09.214Z : SPFILE restored to +DATA/ZDM_TRG19/spfilezdm_trg19.ora successfully
- Observe and track the RMAN process. It restores the new datafiles and applies all the log files.
- ZDM modifies the existing DG Broker configuration and adds the new target server zdm19.
zdm21: 2023-06-26T10:46:24.657Z : Executing phase ZDM_CONFIGURE_DG_SRC
zdm21: 2023-06-26T10:46:24.658Z : Finalize steps done on the source node zdmprod for creating standby on the target node zdm19 ...
zdmprod: 2023-06-26T10:46:39.772Z : Configuring Oracle Data Guard Broker on "zdm_prod" ...
zdmprod: 2023-06-26T10:47:38.112Z : Oracle Data Guard Broker configured successfully on "zdm_prod"
zdmprod: 2023-06-26T10:47:38.413Z : Source database updated successfully
zdm21: 2023-06-26T10:47:38.422Z : Execution of phase ZDM_CONFIGURE_DG_SRC completed
We ran the migration task up to the ZDM_CONFIGURE_DG_SRC phase, and the job completed successfully. Now it is time to check the DG Broker settings.
[oracle@zdmstby ~]$ dgmgrl /
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Mon Jun 26 10:48:01 2023
Version 19.19.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected to "zdm_stby"
Connected as SYSDG.
DGMGRL> show configuration
Configuration - ZDM_zdm_prod
Protection Mode: MaxPerformance
Members:
zdm_prod - Primary database
zdm_stby - Physical standby database
zdm_trg19 - Physical standby database
Fast-Start Failover: Disabled
Configuration Status:
SUCCESS (status updated 27 seconds ago)
There are now two physical standby databases in the Data Guard Broker configuration, and we instantiated the zdm_trg19 database from zdm_stby.

Now only one command is left to execute: the switchover!
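Because we paused the job after ZDM_CONFIGURE_DG_SRC, completing the migration means resuming job 331; ZDM then runs the remaining phases, including the switchover to the new target. The sketch below only prints the resume command rather than executing it (the ZDM home path is an assumption; use your own installation path):

```shell
#!/bin/sh
# Print the zdmcli command that resumes the paused migration job.
ZDM_HOME=${ZDM_HOME:-/u01/app/zdmhome}   # assumed path; set to your ZDM home
JOBID=331                                # job ID from this walkthrough
echo "$ZDM_HOME/bin/zdmcli resume job -jobid $JOBID"
```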
Happy migrating, everyone! If you need help, reach out to us.