
Creating a Physical Standby with RMAN Active Duplicate in 11.2.0.3

Other DBAs have written about this topic, but I wanted a version of it to be available on Pythian's blog. When I searched for how to do this, other sites were either unclear about the steps, assumed you already knew what you were doing, or went through the steps too quickly. If this is your first time building a standby, there is some terminology you should know before going into any of the steps of creating your physical standby. It will help you better understand your Data Guard environment and what is being done, instead of simply copying a list of steps. These are just the definitions from Oracle's documentation, but they will save you the arduous search; a short example of how they fit together follows the list.
  • LOG_ARCHIVE_DEST_n: Controls different aspects of how redo transport services transfer redo data from the primary database to a standby destination. This parameter has several attributes that are needed to set up your Data Guard environment; I will only mention the critical ones:
    • ASYNC: The default. Redo data generated by a transaction need not have been received at a destination that has this attribute before that transaction can commit.
    • SYNC: Redo data generated by a transaction must have been received by every enabled destination that has this attribute before that transaction can commit.
    • AFFIRM and NOAFFIRM: Control whether a redo transport destination acknowledges received redo data before or after writing it to the standby redo log. The default is NOAFFIRM.
    • DB_UNIQUE_NAME: Specifies a unique name for the database at this destination. You must specify a name; there is no default value.
    • VALID_FOR: Identifies when redo transport services can transmit redo data to destinations, based on the following factors:
      • redo_log_type: whether online redo log files, standby redo log files, or both are currently being archived on the database at this destination
      • database_role: whether the database is currently running in the primary or the standby role
  • FAL_SERVER: Specifies the FAL (fetch archive log) server for a standby database. The value is an Oracle Net service name.
  • FAL_CLIENT: Specifies the FAL (fetch archive log) client name that is used by the FAL service, configured through the FAL_SERVER initialization parameter, to refer to the FAL client. The value is an Oracle Net service name, which is assumed to be configured properly on the FAL server system to point to the FAL client (standby database).
  • LOG_ARCHIVE_CONFIG: Enables or disables the sending of redo logs to remote destinations and the receipt of remote redo logs. This parameter has several attributes; the most important one for this exercise is:
    • DG_CONFIG: Specifies a list of up to 30 unique database names (defined with the DB_UNIQUE_NAME initialization parameter) for all of the databases in the Data Guard configuration.
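To see how these pieces fit together, here is a minimal sketch of what they might look like in a primary's parameter file. The database names testgg1 and testgg2 are the ones used later in this post; the values here are illustrative only, and the actual parameter file I used appears further down.

[code language="sql"]
# Illustrative sketch only: the real pfile used in this exercise is shown later in the post.
# Ship redo asynchronously from the primary (testgg1) to the standby (testgg2).
LOG_ARCHIVE_CONFIG='DG_CONFIG=(testgg1,testgg2)'
LOG_ARCHIVE_DEST_2='SERVICE=testgg2 ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=testgg2'
FAL_SERVER=testgg2
FAL_CLIENT=testgg1
[/code]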
Now that we have the definitions out of the way (you can find them in the Oracle 11.2 documentation), we will continue with the setup of our Physical Standby. For this exercise, I have the following:
  • Primary: testgg1, Server: dlabvm13
  • Standby: testgg2, Server: dlabvm14
The first thing we need to do is find where the redo logs and datafiles reside in the Primary and where they will reside in the Standby, so that you can set your LOG_FILE_NAME_CONVERT and DB_FILE_NAME_CONVERT parameters properly. Make sure that these directories have the necessary space to hold the Primary database. If you don't have this space, do not continue. Next, make sure that your primary is in archivelog mode and that force logging is enabled. If your database is not in archivelog mode, do the following:

[code language="sql"]
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 534462464 bytes
Fixed Size                 2230072 bytes
Variable Size            163580104 bytes
Database Buffers         364904448 bytes
Redo Buffers               3747840 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> alter database force logging;

Database altered.

SQL> select log_mode,force_logging from v$database;

LOG_MODE     FOR
------------ ---
ARCHIVELOG   YES
[/code]

Now that the primary is running in archivelog mode with force logging enabled, make sure that the listener and TNS entries are set up correctly and that you can tnsping both databases from both the primary and the standby. For more details on how to set them up, see Kamran's blog entry (Step 2). The result on both the primary and standby servers should be similar to the output below:

[code language="sql"]
oracle@dlabvm13.dlab.pythian.com [testgg1] /home/oracle/bin
oracle $ tnsping testgg1

TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 14-MAY-2013 09:44:34

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/product/11.2.0/dbhome_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = dlabvm13.dlab.pythian.com)(PORT = 1521))) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = testgg1)))
OK (0 msec)

oracle@dlabvm13.dlab.pythian.com [testgg1] /home/oracle/bin
oracle $ tnsping testgg2

TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 14-MAY-2013 09:44:40

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/product/11.2.0/dbhome_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = dlabvm14.dlab.pythian.com)(PORT = 1521))) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = testgg2)))
OK (10 msec)
[/code]

Then create the password file in the primary's $ORACLE_HOME/dbs, copy it to the standby server, and rename it for the standby instance. The password file name must match the ORACLE_SID used at the standby site, not the DB_NAME.

[code language="bash"]
oracle@dlabvm13.dlab.pythian.com [testgg1] /u01/app/oracle/product/11.2.0/dbhome_1/dbs
oracle $ orapwd file=orapwtestgg1 password=test entries=5

oracle@dlabvm13.dlab.pythian.com [testgg1] /u01/app/oracle/product/11.2.0/dbhome_1/dbs
oracle $ scp orapwtestgg1 dlabvm14:/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwtestgg2
orapwtestgg1                                  100% 2048     2.0KB/s   00:00
[/code]
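Before putting together the primary's init file, it can also help to double-check the exact datafile and online redo log paths on the primary, since these are the paths your DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT settings will have to map to the standby's directories. A quick sanity check (not part of the original steps, just a sketch) could be:

[code language="sql"]
-- Run on the primary: list the paths that the *_FILE_NAME_CONVERT parameters must translate.
SQL> select name from v$datafile;
SQL> select name from v$tempfile;
SQL> select member from v$logfile;
[/code]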
[code language="sql"] SQL> !cat dg.ora *.audit_file_dest='/u01/app/oracle/oradata/dump/testgg1/adump' *.audit_trail='db' *.compatible='11.2.0' *.control_files='/u01/app/oracle/oradata/testgg1/control/control01.ctl','/u01/app/oracle/oradata/testgg1/control/control02.ctl' *.db_block_size=8192 *.db_cache_size=128M *.db_domain='' *.db_name='testgg1' *.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area' *.db_recovery_file_dest_size=2G *.db_unique_name='testgg1' *.diagnostic_dest='/u01/app/oracle/oradata/dump/testgg1' *.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=testgg1' *.log_archive_dest_2='SERVICE=testgg2 ASYNC valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=testgg2' *.log_archive_dest_state_1='enable' *.log_archive_dest_state_2='enable' *.open_cursors=300 *.pga_aggregate_target=128M *.processes=150 *.remote_login_passwordfile='EXCLUSIVE' *.sga_target=512M testgg11.UNDO_TABLESPACE='UNDOTBS1' FAL_SERVER=testgg2; FAL_CLIENT=testgg1; DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/','/u01/app/oracle/oradata/testgg1/TESTGG1/datafile/' LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/testgg2/TESTGG2/onlinelog/','/u01/app/oracle/oradata/testgg1/TESTGG1/onlinelog/' LOG_ARCHIVE_CONFIG='DG_CONFIG=(testgg1,testgg2)' [/code] One of the coolest things about this method is that almost all of the work we will do will be in the Primary database server. The only thing you have to do in the Standby Server is create the locations of my diagnostic files/redo/datafiles/control files, verify the connectivity between the Primary and the Standby and just start the Standby Instance, which is our next step. The next step is to set the ORACLE_SID, ORACLE_HOME, and ORACLE_BASE for the Standby Instance and open it with minimal options: [code language="sql"] oracle@dlabvm14.dlab.pythian.com [testgg2] /home/oracle/bin oracle $ export ORACLE_HOME="/u01/app/oracle/product/11.2.0/dbhome_1" oracle@dlabvm14.dlab.pythian.com [testgg2] /home/oracle/bin oracle $ export ORACLE_BASE=/u01/app/oracle oracle@dlabvm14.dlab.pythian.com [testgg2] /home/oracle/bin oracle $ export ORACLE_SID=testgg2 oracle@dlabvm14.dlab.pythian.com [testgg2] /home/oracle/bin oracle $ cat dg.ora DB_NAME=testgg1 DB_UNIQUE_NAME=testgg2 DB_BLOCK_SIZE=8192 oracle@dlabvm14.dlab.pythian.com [testgg2] /home/oracle/bin oracle $ sqlplus SQL*Plus: Release 11.2.0.3.0 Production on Tue May 14 09:53:30 2013 Copyright (c) 1982, 2011, Oracle. All rights reserved. Enter user-name: /as sysdba Connected to an idle instance. SQL> startup nomount pfile='/home/oracle/bin/dg.ora'; ORACLE instance started. Total System Global Area 238034944 bytes Fixed Size 2227136 bytes Variable Size 180356160 bytes Database Buffers 50331648 bytes Redo Buffers 5120000 bytes SQL> exit Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options [/code] Now let's get back to the Primary database server, where the next thing to do is add the standby logs. In this case, I created two with the same size as the Online Redo Logs that I have in my Primary. 
[code language="sql"] SQL> select instance_name,host_name from v$instance; INSTANCE_NAME HOST_NAME ---------------- ---------------------------------------------------------------- testgg1 dlabvm13.dlab.pythian.com SQL> select name from v$database; NAME --------- TESTGG1 SQL> select GROUP#,THREAD#,MEMBERS,BYTES FROM V$LOG; GROUP# THREAD# MEMBERS BYTES ---------- ---------- ---------- ---------- 1 1 1 104857600 2 1 1 104857600 SQL> ALTER DATABASE ADD STANDBY LOGFILE SIZE 104857600; Database altered. SQL> ALTER DATABASE ADD STANDBY LOGFILE SIZE 104857600; Database altered. SQL> SELECT GROUP#, BYTES FROM V$STANDBY_LOG; GROUP# BYTES ---------- ---------- 3 104857600 4 104857600 [/code] Once we have created the Standby Logs in my Primary Database, we will create the following RMAN run commands and the changes to the standby's spfile in this RMAN block. [code language="sql"] oracle@dlabvm13.dlab.pythian.com [testgg1] /home/oracle/bin oracle $ cat duplicate.rmn run { allocate channel prmy1 type disk; allocate channel prmy2 type disk; allocate channel prmy3 type disk; allocate channel prmy4 type disk; allocate auxiliary channel stby type disk; duplicate target database for standby from active database spfile parameter_value_convert 'testgg1','testgg2' set db_unique_name='testgg2' set db_file_name_convert='/u01/app/oracle/oradata/testgg1/TESTGG1/datafile/','/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/' set log_file_name_convert='/u01/app/oracle/oradata/testgg1/TESTGG1/onlinelog/','/u01/app/oracle/oradata/testgg2/TESTGG2/onlinelog/' set control_files='/u01/app/oracle/oradata/testgg2/control/control01.ctl','/u01/app/oracle/oradata/testgg2/control/control02.ctl' set log_archive_max_processes='10' set fal_client='testgg2' set fal_server='testgg1' set standby_file_management='AUTO' set log_archive_config='dg_config=(testgg1,testgg2)' set log_archive_dest_2='service=testgg1 ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=testgg1' ; } [/code] Now that the the RMAN command file has been created, you just need to run this command in the target database, which is the Primary Database , connecting to the Standby as the auxiliary. [code language="sql"] oracle@dlabvm13.dlab.pythian.com [testgg1] /home/oracle/bin oracle $ rman target sys/test@testgg1 auxiliary sys/test@testgg2 Recovery Manager: Release 11.2.0.3.0 - Production on Thu May 16 08:42:46 2013 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. 
connected to target database: TESTGG1 (DBID=839852638)
connected to auxiliary database: TESTGG1 (not mounted)

RMAN> @duplicate.rmn

RMAN>
RMAN> run {
2> allocate channel prmy1 type disk;
3> allocate channel prmy2 type disk;
4> allocate channel prmy3 type disk;
5> allocate channel prmy4 type disk;
6> allocate auxiliary channel stby type disk;
7> duplicate target database for standby from active database
8> spfile
9> parameter_value_convert 'testgg1','testgg2'
10> set db_unique_name='testgg2'
11> set db_file_name_convert='/u01/app/oracle/oradata/testgg1/TESTGG1/datafile/','/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/'
12> set log_file_name_convert='/u01/app/oracle/oradata/testgg1/TESTGG1/onlinelog/','/u01/app/oracle/oradata/testgg2/TESTGG2/onlinelog/'
13> set control_files='/u01/app/oracle/oradata/testgg2/control/control01.ctl','/u01/app/oracle/oradata/testgg2/control/control02.ctl'
14> set log_archive_max_processes='10'
15> set fal_client='testgg2'
16> set fal_server='testgg1'
17> set standby_file_management='AUTO'
18> set log_archive_config='dg_config=(testgg1,testgg2)'
19> set log_archive_dest_2='service=testgg1 ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=testgg1'
20> ;
21> }
using target database control file instead of recovery catalog
allocated channel: prmy1
channel prmy1: SID=32 device type=DISK

allocated channel: prmy2
channel prmy2: SID=33 device type=DISK

allocated channel: prmy3
channel prmy3: SID=34 device type=DISK

allocated channel: prmy4
channel prmy4: SID=35 device type=DISK

allocated channel: stby
channel stby: SID=19 device type=DISK

Starting Duplicate Db at 16-MAY-13

contents of Memory Script:
{
   backup as copy reuse
   targetfile '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwtestgg1'
   auxiliary format '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwtestgg2'
   targetfile '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfiletestgg1.ora'
   auxiliary format '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfiletestgg2.ora' ;
   sql clone "alter system set spfile= ''/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfiletestgg2.ora''";
}
executing Memory Script

Starting backup at 16-MAY-13
Finished backup at 16-MAY-13

sql statement: alter system set spfile= ''/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfiletestgg2.ora''

contents of Memory Script:
{
   sql clone "alter system set audit_file_dest = ''/u01/app/oracle/oradata/dump/testgg2/adump'' comment= '''' scope=spfile";
   sql clone "alter system set diagnostic_dest = ''/u01/app/oracle/oradata/dump/testgg2'' comment= '''' scope=spfile";
   sql clone "alter system set log_archive_dest_1 = ''location=use_db_recovery_file_dest valid_for=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=testgg2'' comment= '''' scope=spfile";
   sql clone "alter system set db_unique_name = ''testgg2'' comment= '''' scope=spfile";
   sql clone "alter system set db_file_name_convert = ''/u01/app/oracle/oradata/testgg1/TESTGG1/datafile/'', ''/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/'' comment= '''' scope=spfile";
   sql clone "alter system set log_file_name_convert = ''/u01/app/oracle/oradata/testgg1/TESTGG1/onlinelog/'', ''/u01/app/oracle/oradata/testgg2/TESTGG2/onlinelog/'' comment= '''' scope=spfile";
   sql clone "alter system set control_files = ''/u01/app/oracle/oradata/testgg2/control/control01.ctl'', ''/u01/app/oracle/oradata/testgg2/control/control02.ctl'' comment= '''' scope=spfile";
   sql clone "alter system set log_archive_max_processes = 10 comment= '''' scope=spfile";
   sql clone "alter system set fal_client = ''testgg2'' comment= '''' scope=spfile";
   sql clone "alter system set fal_server = ''testgg1'' comment= '''' scope=spfile";
   sql clone "alter system set standby_file_management = ''AUTO'' comment= '''' scope=spfile";
   sql clone "alter system set log_archive_config = ''dg_config=(testgg1,testgg2)'' comment= '''' scope=spfile";
   sql clone "alter system set log_archive_dest_2 = ''service=testgg1 ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=testgg1'' comment= '''' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set audit_file_dest = ''/u01/app/oracle/oradata/dump/testgg2/adump'' comment= '''' scope=spfile
sql statement: alter system set diagnostic_dest = ''/u01/app/oracle/oradata/dump/testgg2'' comment= '''' scope=spfile
sql statement: alter system set log_archive_dest_1 = ''location=use_db_recovery_file_dest valid_for=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=testgg2'' comment= '''' scope=spfile
sql statement: alter system set db_unique_name = ''testgg2'' comment= '''' scope=spfile
sql statement: alter system set db_file_name_convert = ''/u01/app/oracle/oradata/testgg1/TESTGG1/datafile/'', ''/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/'' comment= '''' scope=spfile
sql statement: alter system set log_file_name_convert = ''/u01/app/oracle/oradata/testgg1/TESTGG1/onlinelog/'', ''/u01/app/oracle/oradata/testgg2/TESTGG2/onlinelog/'' comment= '''' scope=spfile
sql statement: alter system set control_files = ''/u01/app/oracle/oradata/testgg2/control/control01.ctl'', ''/u01/app/oracle/oradata/testgg2/control/control02.ctl'' comment= '''' scope=spfile
sql statement: alter system set log_archive_max_processes = 10 comment= '''' scope=spfile
sql statement: alter system set fal_client = ''testgg2'' comment= '''' scope=spfile
sql statement: alter system set fal_server = ''testgg1'' comment= '''' scope=spfile
sql statement: alter system set standby_file_management = ''AUTO'' comment= '''' scope=spfile
sql statement: alter system set log_archive_config = ''dg_config=(testgg1,testgg2)'' comment= '''' scope=spfile
sql statement: alter system set log_archive_dest_2 = ''service=testgg1 ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=testgg1'' comment= '''' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 534462464 bytes
Fixed Size                 2230072 bytes
Variable Size            163580104 bytes
Database Buffers         364904448 bytes
Redo Buffers               3747840 bytes
allocated channel: stby
channel stby: SID=18 device type=DISK

contents of Memory Script:
{
   backup as copy current controlfile for standby auxiliary format '/u01/app/oracle/oradata/testgg2/control/control01.ctl';
   restore clone controlfile to '/u01/app/oracle/oradata/testgg2/control/control02.ctl' from
 '/u01/app/oracle/oradata/testgg2/control/control01.ctl';
}
executing Memory Script

Starting backup at 16-MAY-13
channel prmy1: starting datafile copy
copying standby control file
output file name=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_testgg1.f tag=TAG20130516T084319 RECID=7 STAMP=815561001
channel prmy1: datafile copy complete, elapsed time: 00:00:07
Finished backup at 16-MAY-13

Starting restore at 16-MAY-13
channel stby: copied control file copy
Finished restore at 16-MAY-13

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database

contents of Memory Script:
{
   set newname for tempfile 1 to "/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_temp_8qbok7fk_.tmp";
   switch clone tempfile all;
   set newname for datafile 1 to "/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_system_8ocl7bho_.dbf";
   set newname for datafile 2 to "/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_sysaux_8ocl7k6n_.dbf";
   set newname for datafile 3 to "/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_sys_undo_8ocl7q6c_.dbf";
   set newname for datafile 4 to "/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_users_8qbolh3g_.dbf";
   backup as copy reuse
   datafile 1 auxiliary format "/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_system_8ocl7bho_.dbf"
   datafile 2 auxiliary format "/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_sysaux_8ocl7k6n_.dbf"
   datafile 3 auxiliary format "/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_sys_undo_8ocl7q6c_.dbf"
   datafile 4 auxiliary format "/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_users_8qbolh3g_.dbf" ;
   sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME
renamed tempfile 1 to /u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_temp_8qbok7fk_.tmp in control file

executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME

Starting backup at 16-MAY-13
channel prmy1: starting datafile copy
input datafile file number=00001 name=/u01/app/oracle/oradata/testgg1/TESTGG1/datafile/o1_mf_system_8ocl7bho_.dbf
channel prmy2: starting datafile copy
input datafile file number=00002 name=/u01/app/oracle/oradata/testgg1/TESTGG1/datafile/o1_mf_sysaux_8ocl7k6n_.dbf
channel prmy3: starting datafile copy
input datafile file number=00003 name=/u01/app/oracle/oradata/testgg1/TESTGG1/datafile/o1_mf_sys_undo_8ocl7q6c_.dbf
channel prmy4: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/testgg1/TESTGG1/datafile/o1_mf_users_8qbolh3g_.dbf
output file name=/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_sys_undo_8ocl7q6c_.dbf tag=TAG20130516T084335
channel prmy3: datafile copy complete, elapsed time: 00:00:45
output file name=/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_users_8qbolh3g_.dbf tag=TAG20130516T084335
channel prmy4: datafile copy complete, elapsed time: 00:00:55
output file name=/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_system_8ocl7bho_.dbf tag=TAG20130516T084335
channel prmy1: datafile copy complete, elapsed time: 00:01:05
output file name=/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_sysaux_8ocl7k6n_.dbf tag=TAG20130516T084335
channel prmy2: datafile copy complete, elapsed time: 00:01:05
Finished backup at 16-MAY-13

sql statement: alter system archive log current

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=7 STAMP=815561103 file name=/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_system_8ocl7bho_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=8 STAMP=815561104 file name=/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_sysaux_8ocl7k6n_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=9 STAMP=815561104 file name=/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_sys_undo_8ocl7q6c_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=10 STAMP=815561104 file name=/u01/app/oracle/oradata/testgg2/TESTGG2/datafile/o1_mf_users_8qbolh3g_.dbf
Finished Duplicate Db at 16-MAY-13
released channel: prmy1
released channel: prmy2
released channel: prmy3
released channel: prmy4
released channel: stby

RMAN> **end-of-file**

RMAN> exit
[/code]

As you can see, that was as easy as pie. Now we can just start the recovery process in the Standby database. In this case, I opened the standby read-only and used Active Data Guard so that I could show you that it is actually working, but be aware that Active Data Guard is a separately licensed option.

[code language="sql"]
SQL> alter database open;

Database altered.

SQL> alter database recover managed standby database disconnect;

Database altered.

SQL> select protection_mode,protection_level,database_role,name from v$database;

PROTECTION_MODE      PROTECTION_LEVEL     DATABASE_ROLE    NAME
-------------------- -------------------- ---------------- ---------
MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE  PHYSICAL STANDBY TESTGG1

SQL> select instance_name,host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
testgg2          dlabvm14.dlab.pythian.com

SQL> select * from sender.test;
select * from sender.test
              *
ERROR at line 1:
ORA-00942: table or view does not exist
[/code]

Now I go back to the primary and create a table called test in the sender schema. As you can see above, this table does not yet exist in the Standby database.

[code language="sql"]
SQL> select instance_name,host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
testgg1          dlabvm13.dlab.pythian.com

SQL> select protection_mode,protection_level,database_role,name from v$database;

PROTECTION_MODE      PROTECTION_LEVEL     DATABASE_ROLE    NAME
-------------------- -------------------- ---------------- ---------
MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE  PRIMARY          TESTGG1

SQL> create table sender.test(id number);

Table created.

SQL> insert into sender.test values(1);

1 row created.

SQL> commit;

Commit complete.

SQL> alter system switch logfile;

System altered.
[/code]

Back on the Standby, we can see the table we just created.

[code language="sql"]
SQL> select protection_mode,protection_level,database_role,name from v$database;

PROTECTION_MODE      PROTECTION_LEVEL     DATABASE_ROLE    NAME
-------------------- -------------------- ---------------- ---------
MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE  PHYSICAL STANDBY TESTGG1

SQL> select instance_name,host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
testgg2          dlabvm14.dlab.pythian.com

SQL> select * from sender.test;

        ID
----------
         1
[/code]

There are various ways to check whether redo is being shipped and applied. On the Standby database, the following query shows the last archived log received and the last one applied.

[code language="sql"]
SQL> SELECT 'Last Applied : ' Logs, TO_CHAR(next_time,'DD-MON-YY:HH24:MI:SS') TIME,thread#,sequence#
     FROM v$archived_log
     WHERE sequence# = (SELECT MAX(sequence#) FROM v$archived_log WHERE applied='YES')
     UNION
     SELECT 'Last Received : ' Logs, TO_CHAR(next_time,'DD-MON-YY:HH24:MI:SS') TIME,thread#,sequence#
     FROM v$archived_log
     WHERE sequence# = (SELECT MAX(sequence#) FROM v$archived_log);

LOGS             TIME                  THREAD#  SEQUENCE#
---------------- ------------------ ---------- ----------
Last Applied :   16-MAY-13:11:42:52          1         65
Last Received :  16-MAY-13:11:42:52          1         65
[/code]
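Another quick check, not part of the original steps but a commonly used one, is to look at the Data Guard processes on the standby. As a sketch, the MRP0 (managed recovery) process should be applying or waiting for the current sequence, and the RFS processes are the ones receiving redo from the primary:

[code language="sql"]
-- Run on the standby: MRP0 is the managed recovery process,
-- RFS the processes receiving redo from the primary.
SQL> select process, status, thread#, sequence#
     from v$managed_standby
     where process in ('MRP0','RFS');
[/code]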
[code language="sql"] SQL> SELECT (SELECT name FROM V$DATABASE ) name, (SELECT MAX (sequence#) FROM v$archived_log WHERE dest_id = 1 ) Current_primary_seq, (SELECT MAX (sequence#) FROM v$archived_log WHERE TRUNC(next_time) > SYSDATE - 1 AND dest_id = 2 ) max_stby, (SELECT NVL ( (SELECT MAX (sequence#) - MIN (sequence#) FROM v$archived_log WHERE TRUNC(next_time) > SYSDATE - 1 AND dest_id = 2 AND applied = 'NO' ), 0) FROM DUAL ) "To be applied", ( (SELECT MAX (sequence#) FROM v$archived_log WHERE dest_id = 1 ) - (SELECT MAX (sequence#) FROM v$archived_log WHERE dest_id = 2 )) "To be Shipped" FROM DUAL; NAME CURRENT_PRIMARY_SEQ MAX_STBY To be applied To be Shipped --------- ------------------- ---------- ------------- ------------- TESTGG1 65 65 0 0 [/code] I hope this little guide helps you out when you are trying to build your Physical Standby from an Active Duplicate. As always, test anything that I have said or mentioned before trying it in a production environment.
