Zero Downtime Migration (ZDM) 21c is available for download! The new version was released in late March, although the software itself has been around since 2020.
As usual, I had been researching a migration path from on-premises to a database system in Oracle Cloud Infrastructure for a customer with the following requirements:
- Source database is Standard Edition
- Non-container database (non-CDB)
- Database version 12.2.0.1
- Datafile size of about 4 TB
- Ageing hardware and not enough storage
When your source database is Standard Edition, the Data Guard option is automatically off the table, which means the migration has to be offline, or use GoldenGate if you want it online.
A traditional export and import of a 4 TB database is not ideal, and arguably should not even be an option for a production database: Standard Edition cannot speed up the export with parallelism, and if your on-premises network bandwidth is limited due to budget considerations, you cannot easily upload the dump files to Object Storage.
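To get a feel for why bandwidth matters here, a quick back-of-the-envelope calculation helps. The 4 TB size is from this scenario; the 100 Mbps uplink is purely an assumed example value:

```shell
# Rough upload-time estimate; both values are example inputs, adjust to your site.
SIZE_GB=4096   # ~4 TB of datafiles
BW_MBPS=100    # assumed on-premises uplink in megabits per second
# 1 GB = 8192 megabits; divide by bandwidth, convert seconds to hours
awk -v gb="$SIZE_GB" -v mbps="$BW_MBPS" \
    'BEGIN { printf "%.1f hours\n", gb * 8192 / mbps / 3600 }'
```

At 100 Mbps that is roughly four days of pure transfer time before compression, which is why the transfer medium and compression settings matter so much.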
Thanks to a good network within Oracle, I was able to reach experienced migration experts such as Daniel Overby Hansen and Ricardo Gonzalez from Oracle's Cloud Migration PM team. Their advice was valuable and helped me choose ZDM over other migration methods.
There are 4 different options for your consideration:
- Online logical migration: uses a combination of GoldenGate and Data Pump. Because it is a logical migration, you cannot migrate AWR, SQL profiles, baselines, database links, etc.
- Offline logical migration: pretty much the same as the previous option, minus the GoldenGate replication.
- Offline physical migration: essentially cloning a database with an RMAN backup and restore process.
- Online physical migration: ZDM harnesses Oracle Data Guard to perform an online physical migration, so this was not an option in my case.
I decided to use "Offline Physical" migration because it is a simple RMAN backup and restore without the complexity of GoldenGate. I did some research in the ZDM documentation and needed several attempts to succeed, because you have to read the documentation extremely carefully.
ZDM offline physical migration does the following, nothing fancy:
- Backs up the source database to the specified data transfer medium
- Instantiates a new database from this backup in the target environment
Prerequisites:
- Oracle Cloud Account.
- Download ZDM software.
- A Compute instance for ZDM software to run (optional, can be installed on the same database machine).
- A supported source Oracle Database version, of course.
- A target cloud Oracle Database machine with the same OS, database version, patch set, etc.
- OCI Object Storage as the intermediate data transfer medium for storing backups.
- A fully working OCI CLI on all servers, each configured with the same OCI access.
1. ZDM Server
CREATE ZDM USER and Groups using ROOT
As mentioned in the prerequisites, the ZDM software can run on a dedicated server or be installed on the same database machine. For example, I use a small VM.Standard2.1 compute instance in OCI to isolate the ZDM software's workload. In either case, do the following:
groupadd zdm
useradd zdmuser
passwd zdmuser
usermod -g zdm zdmuser
id zdmuser
yum install -y glibc-devel expect
SUDOER and SSH FIX using ROOT
Add the following line to the file /etc/sudoers.d/90-cloud-init-users:
zdmuser ALL=(ALL) NOPASSWD:ALL
Also add this line (the % prefix refers to the zdm group created above) to /etc/sudoers:
%zdm ALL=(ALL) NOPASSWD: ALL
After that, restart the sshd service for the changes to take effect:
/sbin/service sshd restart
Add the SOURCE and TARGET addresses to the /etc/hosts file
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
xxx.xxx.xxx.xxx myzdmserver.cloudness.net myzdmserver
xxx.xxx.xxx.xxx sourcedb.cloudness.net source
xxx.xxx.xxx.xxx sub08260912110.frankfurt.oraclevcn.com target_fra1mg
Edit the Security List of your VCN
Add ingress and egress rules for ICMP and for ports 22, 443, and 1521.
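After updating the security list, you can verify reachability from the ZDM server with a plain bash TCP probe; no extra tools are needed. The hostname below is the example one from my /etc/hosts:

```shell
#!/bin/bash
# Probe the required ports on the source host using bash's /dev/tcp feature.
# "sourcedb.cloudness.net" is the example hostname from this walkthrough.
host=sourcedb.cloudness.net
for port in 22 443 1521; do
  if (echo > "/dev/tcp/$host/$port") 2>/dev/null; then
    echo "port $port open"
  else
    echo "port $port blocked"
  fi
done
```

Run it once from the ZDM server against both the source and target hosts; ICMP can be checked with a plain ping.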
Enable linux repository
The Linux repository may be disabled on your ZDM server; the following will enable it.
wget https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/oci_dbaas_ol7repo -O /tmp/oci_dbaas_ol7repo
wget https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/versionlock_ol7.list -O /tmp/versionlock.list
sudo mv /tmp/oci_dbaas_ol7repo /etc/yum.repos.d/ol7.repo
sudo mv /tmp/versionlock.list /etc/yum/pluginconf.d/versionlock.list
yum repolist
Install OCI CLI for ZDMUSER
The OCI CLI is a command-line interface for interacting with your cloud account. Install and configure it. The oci setup config
command is self-explanatory; in case you are not familiar with it, follow this guide.
sudo su - zdmuser
sudo yum install python36-oci-cli
oci setup config
Download ZDM software
Download the Zero Downtime Migration software kit from https://www.oracle.com/database/technologies/rac/zdm-downloads.html to the Zero Downtime Migration service host.
mkdir -p /home/zdmuser/app/zdmbase
mkdir -p /home/zdmuser/app/zdmhome
export ZDM_HOME=/home/zdmuser/app/zdmhome
cd zdm_download_directory
unzip zdm21.1.zip
./zdminstall.sh setup oraclehome=/home/zdmuser/app/zdmhome oraclebase=/home/zdmuser/app/zdmbase ziploc=/home/zdmuser/zdm21.1/zdm_home.zip
$ZDM_HOME/bin/zdmservice start
Server started successfully.
2. Source on-premises database server
Organizations usually do not disable SSH access for the oracle user; if yours does, you will need to enable SSH access and give sudo permission to the oracle user. On a standard Oracle Linux 7 server the following steps should work, but a more hardened environment may differ, so tailor them to your needs.
SUDOER and SSH FIX using ROOT
Add the following line to the file /etc/sudoers.d/90-cloud-init-users:
oracle ALL=(ALL) NOPASSWD:ALL
Then add this line to /etc/sudoers:
%oracle ALL=(ALL) NOPASSWD: ALL
Finally, restart the sshd service:
/sbin/service sshd restart
Enable SSH access for ORACLE
sudo su - oracle
ssh-keygen
Add the SSH public key of the ZDM server to the authorized_keys file; your ZDM server will then have access to the source database server using its SSH private key.
vi /home/oracle/.ssh/authorized_keys
Enable linux repository
You may have a different Linux repository configured; that is fine, just skip this step.
wget https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/oci_dbaas_ol7repo -O /tmp/oci_dbaas_ol7repo
wget https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/versionlock_ol7.list -O /tmp/versionlock.list
sudo mv /tmp/oci_dbaas_ol7repo /etc/yum.repos.d/ol7.repo
sudo mv /tmp/versionlock.list /etc/yum/pluginconf.d/versionlock.list
yum repolist
Install OCI CLI for ORACLE user
The OCI CLI is a command-line interface for interacting with your cloud account. Install and configure it. The oci setup config
command is self-explanatory; in case you are not familiar with it, follow this guide.
sudo yum -y update
sudo yum -y groupinstall "Development Tools"
sudo yum -y install gcc wget openssl-devel bzip2-devel libffi-devel
sudo yum install python36
bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
oci setup config
SOURCE DB preparation scripts
The source database must be running in ARCHIVELOG mode and you need to set streams pool size correctly. For offline logical migrations, for optimal Data Pump performance, it is recommended that you set STREAMS_POOL_SIZE to a minimum of 256MB-350MB. For online logical migrations, set STREAMS_POOL_SIZE to at least 2GB.
[oracle@source ~]$ sqlplus / as sysdba
ALTER SYSTEM SET STREAMS_POOL_SIZE=2147483647 SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
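The odd-looking 2147483647 is simply the 32-bit signed integer maximum, i.e. one byte short of the recommended 2 GB; a one-liner confirms what the SPFILE value works out to:

```shell
# Convert the STREAMS_POOL_SIZE value from bytes to GB (2^31 - 1 bytes)
awk 'BEGIN { printf "%.2f GB\n", 2147483647 / 1024 / 1024 / 1024 }'
```

You could equally specify the value as 2G in the ALTER SYSTEM command.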
If RMAN is not already configured to automatically back up the control file and SPFILE, then set CONFIGURE CONTROLFILE AUTOBACKUP to ON and revert the setting back to OFF after migration is complete.
[oracle@source ~]$ rman target /
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
For Oracle Database 12c Release 2 and later, if the source database does not have Transparent Data Encryption (TDE) enabled, then it is mandatory that you configure the TDE keystore before migration begins. The wallet status on both source and target databases must be set to OPEN, also WALLET_TYPE can be AUTOLOGIN, for an auto-login keystore (preferred), or PASSWORD, for a password-based keystore.
[oracle@source ~]$ sqlplus / as sysdba
select * from v$encryption_wallet;
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/' IDENTIFIED BY password;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY password;
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY password WITH BACKUP;
ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/' IDENTIFIED BY password;
ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY password;
select status, wallet_type from v$encryption_wallet;
STATUS WALLET_TYPE
------------------------------ --------------------
OPEN AUTOLOGIN
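Note that on 12.2 the keystore location used by the commands above is normally taken from sqlnet.ora. If it is not set yet, an entry along these lines is needed before creating the keystore; the directory here is assumed to match the path used in the commands above:

```
ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/)))
```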
In case you are planning to use online logical migration, you need to enable the GoldenGate configuration on the source database:
alter database force logging;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
alter system set enable_goldengate_replication = true scope=both;
create user ggadmin identified by password default tablespace users temporary tablespace temp;
grant connect, resource to ggadmin;
alter user ggadmin quota 100M ON USERS;
grant unlimited tablespace to ggadmin;
grant select any dictionary to ggadmin;
grant create view to ggadmin;
grant execute on dbms_lock to ggadmin;
exec dbms_goldengate_auth.GRANT_ADMIN_PRIVILEGE('ggadmin');
grant DATAPUMP_EXP_FULL_DATABASE to ggadmin;
grant DATAPUMP_IMP_FULL_DATABASE to ggadmin;
3. Target Oracle Cloud Database server
SUDOER and SSH FIX using ROOT
As on the ZDM server, add the following line to /etc/sudoers.d/90-cloud-init-users:
oracle ALL=(ALL) NOPASSWD:ALL
Then add this line to /etc/sudoers:
%oracle ALL=(ALL) NOPASSWD: ALL
Then restart the sshd service:
/sbin/service sshd restart
Enable SSH access for ORACLE
We also need to enable SSH access for the oracle user.
sudo su - oracle
ssh-keygen
Now add the public SSH key from the ZDM server to the authorized_keys file, so that your ZDM software can access the target database server.
vi /home/oracle/.ssh/authorized_keys
Enable linux repository
Linux repositories are disabled by default on Virtual Machine DB systems in OCI, so you must enable them.
wget https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/oci_dbaas_ol7repo -O /tmp/oci_dbaas_ol7repo
wget https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/versionlock_ol7.list -O /tmp/versionlock.list
sudo mv /tmp/oci_dbaas_ol7repo /etc/yum.repos.d/ol7.repo
sudo mv /tmp/versionlock.list /etc/yum/pluginconf.d/versionlock.list
yum repolist
Install OCI CLI for ORACLE user
The OCI CLI is a command-line interface for interacting with your cloud account. Install and configure it. The oci setup config
command is self-explanatory; in case you are not familiar with it, follow this guide.
sudo yum -y update
sudo yum -y groupinstall "Development Tools"
sudo yum -y install gcc wget openssl-devel bzip2-devel libffi-devel
sudo yum install python36
bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
oci setup config
TARGET DB Scripts
There are not many scripts to run on the target database, because everything is already hardened and secure enough for migration. Just for your reference: if you want an online logical migration, the GoldenGate admin user needs to be set up on the target as well.
[oracle@target ~]$ sqlplus / as sysdba
alter system set enable_goldengate_replication = true scope=both;
show pdbs;
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 TARGET_PDB1 READ WRITE NO
alter session set container=target_pdb1;
create user ggadmin identified by PA##w0rd12345 default tablespace users temporary tablespace temp;
grant connect, resource to ggadmin;
alter user ggadmin quota 100M ON USERS;
grant unlimited tablespace to ggadmin;
grant select any dictionary to ggadmin;
grant create view to ggadmin;
grant execute on dbms_lock to ggadmin;
exec dbms_goldengate_auth.GRANT_ADMIN_PRIVILEGE('ggadmin');
grant DATAPUMP_EXP_FULL_DATABASE to ggadmin;
grant DATAPUMP_IMP_FULL_DATABASE to ggadmin;
4. Response file creation
As described in the official documentation, you can edit the response file template at $ZDM_HOME/rhp/zdm/template/zdm_template.rsp.
Many parameters already have defaults that fulfil your needs in most cases; those are not covered here. However, I would like to mention one parameter, SHUTDOWN_SRC, which specifies whether or not to shut down the source database after the migration completes. Leaving it at the default means both your source and target will be up and running in the end.
Here is what my rsp file looks like:
MIGRATION_METHOD=OFFLINE_PHYSICAL
DATA_TRANSFER_MEDIUM=OSS
HOST=https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/my123cloud
OPC_CONTAINER=ZDM
ZDM_LOG_OSS_PAR_URL=https://objectstorage.eu-frankfurt-1.oraclecloud.com/p/pre_authenticated_object_storage_access/n/my123cloud/b/ZDM/o/
PLATFORM_TYPE=VMDB
TGT_DB_UNIQUE_NAME=target_fra1mg
NONCDBTOPDB_CONVERSION=FALSE
- MIGRATION_METHOD is offline physical, a typical RMAN backup and restore.
- DATA_TRANSFER_MEDIUM is OSS, meaning the migration ships the RMAN backup files through Object Storage and restores from there.
- HOST is your Object Storage URL.
- OPC_CONTAINER is the name of the bucket in Object Storage that holds your RMAN backup and log files.
- ZDM_LOG_OSS_PAR_URL is a pre-authenticated request (PAR) URL for your Object Storage bucket, where ZDM can store its server logs for easy access. Why? If an error occurs, you would otherwise need to access at least 3 servers to gather logs.
- PLATFORM_TYPE is your target database type; in my case this is a Virtual Machine DB system.
- TGT_DB_UNIQUE_NAME is the unique name of the CDB in your target database.
- NONCDBTOPDB_CONVERSION converts the source database from non-CDB to PDB, or skips the conversion. I kept it FALSE to do a like-for-like migration.
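A small pre-flight check I find handy is to confirm the mandatory parameters are actually set before launching zdmcli. This is only a sketch: the sample file is written to /tmp for illustration (it deliberately omits one key to show the failure path), and the key list reflects my setup:

```shell
# Write a sample response file to /tmp for illustration, then verify its keys.
cat > /tmp/physical.rsp <<'EOF'
MIGRATION_METHOD=OFFLINE_PHYSICAL
DATA_TRANSFER_MEDIUM=OSS
TGT_DB_UNIQUE_NAME=target_fra1mg
EOF
for key in MIGRATION_METHOD DATA_TRANSFER_MEDIUM TGT_DB_UNIQUE_NAME OPC_CONTAINER; do
  if grep -q "^${key}=" /tmp/physical.rsp; then
    echo "$key set"
  else
    echo "$key MISSING"   # OPC_CONTAINER is deliberately absent from the sample
  fi
done
```

Point it at your real response file and extend the key list to whatever your chosen migration method requires.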
5. Evaluation step
ZDM provides options and tools for evaluating the migration job before you run it against the production database. ZDM performs the migration in phases; however, this evaluation only checks certain phases, so do not take a "PRECHECK_PASSED" for granted. There may be later phases that still raise errors, such as TDE-related ones; I spent a lot of time on one small mistake :) in the actual migration step.
Okay, here is my evaluation command, run with the -eval flag:
[zdmuser@myzdmserver ~]$ zdmcli migrate database -rsp /home/zdmuser/physical.rsp -sourcenode source -tdekeystorepasswd -sourcedb orcl -srcauth zdmauth -srcarg1 user:opc -srcarg2 identity_file:/home/zdmuser/.ssh/oci -srcarg3 sudo_location:/usr/bin/sudo -targetnode target_fra1mg -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/oci -tgtarg3 sudo_location:/usr/bin/sudo -backupuser oracleidentitycloudservice/myemail@oracle.com -eval
As you can see, the migrate command has many flags, and they are mandatory; please check this link.
You can check the result with the $ZDM_HOME/bin/zdmcli query job -jobid <number>
command.
The result is shown below:
[zdmuser@myzdmserver ~]$ $ZDM_HOME/bin/zdmcli query job -jobid 35
zdm-instance.sub03260912110.frankfurt.oraclevcn.com: Audit ID: 198
Job ID: 35
User: zdmuser
Client: zdm-instance
Job Type: "EVAL"
Scheduled job command: "zdmcli migrate database -rsp /home/zdmuser/physical.rsp -sourcenode source -tdekeystorepasswd -sourcedb orcl -srcauth zdmauth -srcarg1 user:opc -srcarg2 identity_file:/home/zdmuser/.ssh/oci -srcarg3 sudo_location:/usr/bin/sudo -targetnode target_fra1mg -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/oci -tgtarg3 sudo_location:/usr/bin/sudo -backupuser oracleidentitycloudservice/myemail@oracle.com -eval"
Scheduled job execution start time: 2021-04-13T07:21:14Z. Equivalent local time: 2021-04-13 07:21:14
Current status: SUCCEEDED
Result file path: "/home/zdmuser/app/zdmbase/chkbase/scheduled/job-35-2021-04-13-07:21:44.log"
Metrics file path: "/home/zdmuser/app/zdmbase/chkbase/scheduled/job-35-2021-04-13-07:21:44.json"
Job execution start time: 2021-04-13 07:21:44
Job execution end time: 2021-04-13 07:25:08
Job execution elapsed time: 3 minutes 24 seconds
ZDM_GET_SRC_INFO ........... PRECHECK_PASSED
ZDM_GET_TGT_INFO ........... PRECHECK_PASSED
ZDM_PRECHECKS_SRC .......... PRECHECK_PASSED
ZDM_PRECHECKS_TGT .......... PRECHECK_PASSED
ZDM_SETUP_SRC .............. PRECHECK_PASSED
ZDM_SETUP_TGT .............. PRECHECK_PASSED
ZDM_PREUSERACTIONS ......... PRECHECK_PASSED
ZDM_PREUSERACTIONS_TGT ..... PRECHECK_PASSED
ZDM_VALIDATE_SRC ........... PRECHECK_PASSED
ZDM_VALIDATE_TGT ........... PRECHECK_PASSED
ZDM_POSTUSERACTIONS ........ PRECHECK_PASSED
ZDM_POSTUSERACTIONS_TGT .... PRECHECK_PASSED
ZDM_CLEANUP_SRC ............ PRECHECK_PASSED
ZDM_CLEANUP_TGT ............ PRECHECK_PASSED
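Since every run produces one of these phase reports, a tiny filter helps when scanning result files for anything that did not pass. The sample data is inline here; in practice point it at the "Result file path" shown by zdmcli query job, and note I am assuming failed phases report a status other than PRECHECK_PASSED:

```shell
# Write a small sample report to /tmp, then print only the non-passing phases.
cat > /tmp/job-eval.log <<'EOF'
ZDM_GET_SRC_INFO ........... PRECHECK_PASSED
ZDM_VALIDATE_TGT ........... PRECHECK_FAILED
ZDM_CLEANUP_TGT ............ PRECHECK_PASSED
EOF
awk 'NF > 1 && $NF != "PRECHECK_PASSED" { print $1, $NF }' /tmp/job-eval.log
# prints: ZDM_VALIDATE_TGT PRECHECK_FAILED
```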
6. Migration step
As I said earlier, ZDM has many default parameters that meet most people's needs, for example the disk group name, retry attempts, etc. I checked the official documentation; the defaults looked good for me, and my precheck result was successful.
It is the same command without the -eval flag, and you will be prompted to enter the source SYS password, an auth token for the CLI user, and lastly the TDE keystore password:
[zdmuser@myzdmserver ~]$ $ZDM_HOME/bin/zdmcli migrate database -rsp /home/zdmuser/physical.rsp -sourcenode source -tdekeystorepasswd -sourcedb orcl -srcauth zdmauth -srcarg1 user:opc -srcarg2 identity_file:/home/zdmuser/.ssh/oci -srcarg3 sudo_location:/usr/bin/sudo -targetnode target_fra1mg -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/oci -tgtarg3 sudo_location:/usr/bin/sudo -backupuser oracleidentitycloudservice/myemail@oracle.com
zdm-instance.sub03260912110.frankfurt.oraclevcn.com: Audit ID: 241
Enter source database orcl SYS password:
Enter user "oracleidentitycloudservice/bilegt.bat.ochir@oracle.com" password:
Enter source database orcl TDE keystore password:
I was watching my migration phases closely; you can get the details with the command below:
[zdmuser@myzdmserver ~]$ $ZDM_HOME/bin/zdmcli query job -jobid 40
zdm-instance.sub03260912110.frankfurt.oraclevcn.com: Audit ID: 257
Job ID: 40
User: zdmuser
Client: zdm-instance
Job Type: "MIGRATE"
Scheduled job command: "zdmcli migrate database -rsp /home/zdmuser/physical.rsp -sourcenode source -tdekeystorepasswd -sourcedb orcl -srcauth zdmauth -srcarg1 user:opc -srcarg2 identity_file:/home/zdmuser/.ssh/oci -srcarg3 sudo_location:/usr/bin/sudo -targetnode target_fra1mg -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/oci -tgtarg3 sudo_location:/usr/bin/sudo -backupuser oracleidentitycloudservice/myemail@oracle.com"
Scheduled job execution start time: 2021-04-13T13:00:05Z. Equivalent local time: 2021-04-13 13:00:05
Current status: SUCCEEDED
Result file path: "/home/zdmuser/app/zdmbase/chkbase/scheduled/job-40-2021-04-13-13:00:18.log"
You should tail the log file and check the details; for example, you would see the following in the source full backup phase:
STARTTIME ENDTIME BACKUPTIME(MINS) INPUT(GB) OUTPUT(GB) OUTPUT BYTES/SEC
-----------------------------------------------------------------------------------
2021-04-13T13:04:16Z 2021-04-13T13:05:03Z .78 1.79 .5 10.89M
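From that one summary line you can already derive the two numbers that drive any time estimate: the compression ratio and the effective throughput. The values below are copied from the sample log line above:

```shell
# Compression ratio and throughput from the backup summary figures above.
awk 'BEGIN {
  input_gb = 1.79; output_gb = 0.5; minutes = 0.78
  printf "compression ratio: %.0f%%\n", output_gb / input_gb * 100
  printf "throughput: %.2f MB/s\n", output_gb * 1024 / (minutes * 60)
}'
```

That works out to roughly 28% of the input size and about 10.9 MB/s, matching the 10.89M bytes/sec the log reports; scale those two numbers up to your real datafile size for a first estimate.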
And of course, all the phases completed without an issue:
ZDM_GET_SRC_INFO ............... COMPLETED
ZDM_GET_TGT_INFO ............... COMPLETED
ZDM_PRECHECKS_SRC .............. COMPLETED
ZDM_PRECHECKS_TGT .............. COMPLETED
ZDM_SETUP_SRC .................. COMPLETED
ZDM_SETUP_TGT .................. COMPLETED
ZDM_PREUSERACTIONS ............. COMPLETED
ZDM_PREUSERACTIONS_TGT ......... COMPLETED
ZDM_VALIDATE_SRC ............... COMPLETED
ZDM_VALIDATE_TGT ............... COMPLETED
ZDM_OBC_INST_SRC ............... COMPLETED
ZDM_OBC_INST_TGT ............... COMPLETED
ZDM_BACKUP_FULL_SRC ............ COMPLETED
ZDM_BACKUP_INCREMENTAL_SRC ..... COMPLETED
ZDM_DISCOVER_SRC ............... COMPLETED
ZDM_COPYFILES .................. COMPLETED
ZDM_PREPARE_TGT ................ COMPLETED
ZDM_SETUP_TDE_TGT .............. COMPLETED
ZDM_OSS_RESTORE_TGT ............ COMPLETED
ZDM_BACKUP_DIFFERENTIAL_SRC .... COMPLETED
ZDM_OSS_RECOVER_TGT ............ COMPLETED
ZDM_FINALIZE_TGT ............... COMPLETED
ZDM_POST_DATABASE_OPEN_TGT ..... COMPLETED
ZDM_DATAPATCH_TGT .............. COMPLETED
ZDM_POST_MIGRATE_TGT ........... COMPLETED
ZDM_POSTUSERACTIONS ............ COMPLETED
ZDM_POSTUSERACTIONS_TGT ........ COMPLETED
ZDM_CLEANUP_SRC ................ COMPLETED
ZDM_CLEANUP_TGT ................ COMPLETED
7. Conclusion
The actual datafile backup, upload to Object Storage, and restore took about 4 minutes according to the alert log and the backup log. The whole migration step for this 2 GB database took 26 minutes in total.
So what does that mean for my 4 TB database? Roughly 5 days, based on my 10 Mbps bandwidth and MEDIUM compression. If I set compression to HIGH, it should be even faster.
To reduce the migration downtime, we can also break the migration step into phases with the -pauseafter flag, which takes a valid phase name to pause after. If I run it with -pauseafter ZDM_BACKUP_DIFFERENTIAL_SRC, the job waits once the full and incremental backups have been taken and restored on the target database, and can be resumed later with the zdmcli resume job -jobid command when the downtime window starts. Imagine the whole process takes 5 days and you still have some days until your downtime begins: this is a good approach for exactly that. Just make sure that all of the archived logs generated during and after the ZDM_BACKUP_INCREMENTAL_SRC phase remain available until then.
If your source database is in ARCHIVELOG mode, ZDM can take the RMAN backup of the datafiles to Object Storage and then take archived log backups afterwards. I was discussing my experience with Daniel, and he said: "This approach was actually quite smart - if you want to do a test migration of production database - even with offline mode - you can still keep the source database up and running".
Yes, you read that correctly: even in an OFFLINE PHYSICAL migration your source production database does not have to shut down, because it is all RMAN, so the source can stay ONLINE during the entire migration or a mock migration. Beware that there can be data inconsistency if write operations continue on the source. Still, if you can afford downtime over a weekend, I would highly recommend this OFFLINE PHYSICAL migration method, because it is very easy and efficient.
Lastly, the error documentation on My Oracle Support is not very clear, so I would say start with the known issues page.
That is it for today. I cannot wait for migration day.
The cloud is the future, and Oracle is your choice.