Monday 4 July 2011

RESTORING A PHYSICAL STANDBY TO A DIFFERENT LOCATION

Restoring a PHYSICAL STANDBY database to a different location …

First, check the memory usage of your system; you do this in case we later need to remove shared memory segments or semaphores.
# ipcs -b

Assuming that you have already used RMAN to back up your standby database into the flash recovery area (db_recovery_file_dest).
                             
So, if your db_recovery_file_dest='/recovery_area', then by default your backups will be at the following locations. On the new server, create the recovery area and restore the backup.

Default:
/recovery_area/<db_unique_name>/backupset
/recovery_area/<db_unique_name>/autobackup

On the empty server, we assume you have already cloned the Oracle binaries and set your global variables such as ORACLE_HOME, ORACLE_SID, etc.

Create the password file:
# orapwd file=$ORACLE_HOME/dbs/orapw password=your-sys-pass entries=3
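To verify that the password file was created, list it; by convention the file is usually named orapw<ORACLE_SID>, so the wildcard below is just a convenience:
# ls -l $ORACLE_HOME/dbs/orapw*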

Run the RMAN tool and log in as SYS. Then set your DBID to restore the spfile and controlfiles. If you don't know where your controlfiles are located, first restore the spfile, then create a pfile, check the controlfile locations in it, and create the directories if they do not exist.

# rman target /
RMAN> SET DBID xxxxxxxxxxxx;
RMAN> STARTUP FORCE NOMOUNT;
RMAN> show all;    --  to display default rman configurations.

To restore spfile:
RMAN> restore spfile from autobackup
            db_recovery_file_dest='/recovery_area' db_unique_name='xxxx_stdby';
--or--
RMAN> restore spfile from autobackup
            db_recovery_file_dest='/recovery_area' db_name='xxxx';
Now you can go to SQL*Plus and create your parameter file.
--or--
If your spfile and controlfile were backed up under $ORACLE_HOME/dbs, then do the following:
RMAN> restore spfile from autobackup;
SQL> create pfile from spfile;


To restore controlfiles:
Shut down, make sure all the required directories exist (including "adump"), and then start up in NOMOUNT:
RMAN> shutdown immediate;
RMAN> startup nomount;
RMAN> restore controlfile from autobackup
           db_recovery_file_dest='/recovery_area' db_unique_name='xxxx_stdby';
--or--
RMAN> restore controlfile from autobackup
          db_recovery_file_dest='/recovery_area' db_name='xxxx';
--or--
If your spfile and controlfile were backed up under $ORACLE_HOME/dbs, then do the following:
RMAN> restore controlfile from autobackup;

Mount the instance:
SQL> shutdown immediate;
SQL> startup mount;

Locate the database file destinations:
Note: if your controlfile destinations were not set, RMAN will restore one to $ORACLE_HOME/dbs/cntrl.dbf.
Now, from the parameter file, check diagnostic_dest, audit_file_dest, db_recovery_file_dest, etc.
Back up the controlfile to trace and use the resulting trace file to locate the other datafile destinations.
SQL> ALTER DATABASE BACKUP controlfile TO TRACE; -- located at udump.
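If you are not sure where that trace file landed, the udump location can be checked first; a minimal check:
SQL> show parameter user_dump_dest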

Restore the database:
If the database is not mounted:
SQL> alter database mount;
RMAN> restore database;

If you are missing logs, do the following:
Go to your archivelog location and check your last archivelog sequence number. It should be greater than the sequence numbers returned by the following query.
SQL> SELECT * FROM v$log;

Then recover until the last sequence number:
RMAN> recover database until sequence your-number;
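Note that UNTIL SEQUENCE stops before the sequence you name, so to apply everything through log 52 you would recover until sequence 53. A hedged sketch for a single-thread (non-RAC) database:
RMAN> run {
        set until sequence 53 thread 1;
        recover database;
      }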

If you get a flashback recovery on/off error message, try turning flashback off:
SQL> ALTER DATABASE flashback OFF;
SQL> shutdown immediate;
SQL> startup mount;
--or--
SQL> recover database;  -- if you have all the logs
--or--
RMAN> recover database until sequence 52 thread 1;  -- UNTIL SEQUENCE is RMAN-only syntax
--or--
SQL> recover database using backup controlfile until cancel;

Important:
After the recovery, make sure that you have all your redo logs plus the standby logs.
SQL> SELECT * FROM gv$logfile;
To drop one while in MOUNT mode:
SQL> ALTER DATABASE DROP LOGFILE GROUP nnn;

Add them if they are not there in the directory; they may show up in the query but not exist physically on disk.
Example:
SQL> alter database add standby logfile
  2  '/oradata/your-db_name/stndby_redo/stndby04.log' size 50m reuse;


SQL> ALTER DATABASE OPEN resetlogs;

If you have all your archivelogs:
Go to the primary database, switch the logfile (see the commands below), and copy all archivelogs to the new server. Then create a trace file with the NORESETLOGS option.
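To force the switch and archive the current redo on the primary, for example:
SQL> alter system archive log current;
SQL> archive log list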
# scp arch*.log oracle@yournew-server:$DB_RECOVERY_FILE_AREA/<db_unique_name>/archivelog/.

On SQLPLUS:
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE as '/home/oracle/myprim-cntl.sql' NORESETLOGS;

Copy the myprim-cntl.sql file to your new server and edit it to replace the existing controlfiles.

Shut down the database and run the edited script:
SQL> shutdown immediate;
SQL> @myprim-cntl.sql

Notice:
If the instance hangs, make sure you check the memory usage:
# ipcs -b

Use the following commands to remove shared memory segments or semaphores if needed.
# ipcrm -m <shmid>
# ipcrm -s <semid>
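To identify which IDs belong to your instance, Oracle ships the sysresv utility; a minimal sketch (run it with ORACLE_SID set to the instance in question):
# $ORACLE_HOME/bin/sysresv
Then pass the reported shared memory ID or semaphore ID to ipcrm.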

Friday 1 July 2011

SOME ASM COMMANDS


The ASM packages install a service called oracleasm, which can be invoked with either of these commands:

# /etc/init.d/oracleasm
# service oracleasm



The /etc/init.d/oracleasm script takes the following options:
configure: Configure ASM.
start: Start the ASM service.
stop: Stop the ASM service.
status: Print the status of ASM service.
createdisk: Create an ASM disk.
scandisks: Search for ASM disks.
listdisks: List ASM disks.
querydisk: Query the current status of an ASM disk.
deletedisk: Delete an ASM disk.
restart: Restart the ASM service.
link: Load the oracleasm module.
enable: Enable the ASM service.
disable: Disable the ASM service.
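The configure option is interactive; a typical session looks roughly like this (the exact prompts vary slightly between oracleasm versions):

# /etc/init.d/oracleasm configure
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y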



Starting the ASM service:

# /etc/init.d/oracleasm start

Stopping the ASM service:

# /etc/init.d/oracleasm stop

Checking the status of the ASM service:
# /etc/init.d/oracleasm status

CREATEDISK
# /etc/init.d/oracleasm createdisk VOL1 /dev/sdb1
Marking disk "/dev/sdb1" as an ASM disk [ OK ]
 # /etc/init.d/oracleasm createdisk VOL2 /dev/sdc1
Marking disk "/dev/sdc1" as an ASM disk [ OK ]
 # /etc/init.d/oracleasm createdisk VOL3 /dev/sdd1
Marking disk "/dev/sdd1" as an ASM disk [ OK ]


SCANDISK
# /etc/init.d/oracleasm scandisks


LISTDISK to verify the disks are available
# /etc/init.d/oracleasm listdisks

To query the status of an existing disk
# /etc/init.d/oracleasm querydisk VOL1

DELETEDISK
# /etc/init.d/oracleasm deletedisk VOL1


Deleting an ASM Instance Manually
Delete the ASM instance from the OCR:

 $ srvctl remove asm -n DB1 -i +ASM1
 $ srvctl remove asm -n DB2 -i +ASM2


On each node, delete the ASM instance from /etc/oratab. Then, on each node, delete the
parameter file, as in the following example:
[oracle@london1 oracle] $ rm $ORACLE_HOME/dbs/init+ASM.ora

Finally, on each node, delete the password file:
[oracle@london1 oracle] $ rm $ORACLE_HOME/dbs/orapw+ASM1


-- List the disks and failure groups in a given diskgroup:
col name format a30
col failgroup format a30
select name, failgroup from v$asm_disk
where group_number = (
  select group_number from v$asm_diskgroup
  where name = 'G3_RECOVERY_AREA')
/

Check the progress of rebalancing operations:
select * from gv$asm_operation;
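A narrower variant that keeps only the columns useful for tracking progress (column names as of 10g/11g):
select inst_id, group_number, operation, state, power, sofar, est_work, est_minutes
from gv$asm_operation;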



################################################################
# Adding/Removing/Managing configuration about existing ASM instances
################################################################

--Use the following syntax to add configuration information about an existing ASM instance:
srvctl add asm -n node_name -i +asm_instance_name -o oracle_home

--Use the following syntax to remove an ASM instance:
srvctl remove asm -n node_name [-i +asm_instance_name]

--Use the following syntax to enable an ASM instance:
srvctl enable asm -n node_name [-i +asm_instance_name]

--Use the following syntax to disable an ASM instance:
srvctl disable asm -n node_name [-i +asm_instance_name]

--Use the following syntax to start an ASM instance:
srvctl start asm -n node_name [-i +asm_instance_name] [-o start_options]

--Use the following syntax to stop an ASM instance:
srvctl stop asm -n node_name [-i +asm_instance_name] [-o stop_options]

--Use the following syntax to show the configuration of an ASM instance:
srvctl config asm -n node_name

--Use the following syntax to obtain the status of an ASM instance:
srvctl status asm -n node_name

--P.S.:
--For all of the SRVCTL commands in this section for which the -i
--option is not required, if you do not specify an instance name, then
--the command applies to all of the ASM instances on the node.
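--Putting these together, registering and starting a hypothetical +ASM3 instance
--might look like this (node name and ORACLE_HOME path are assumptions):
srvctl add asm -n node3 -i +ASM3 -o /u01/app/oracle/product/11.1.0/asm
srvctl enable asm -n node3 -i +ASM3
srvctl start asm -n node3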

###################################
# Managing DiskGroup inside ASM:
###################################

--Note that adding or dropping disks will initiate a rebalance of the data on the disks.
--The status of these processes can be shown by selecting from v$asm_operation.


--Querying ASM Disk Groups
col name format a25
col DATABASE_COMPATIBILITY format a10
col COMPATIBILITY format a10
select * from v$asm_diskgroup;
--or
select name, state, type, total_mb, free_mb from v$asm_diskgroup;

--Querying ASM Disks
col PATH format a55
col name format a25
select name, path, group_number, TOTAL_MB, FREE_MB, READS, WRITES, READ_TIME,
WRITE_TIME from v$asm_disk order by 3,1;
--or
col PATH format a50
col HEADER_STATUS  format a12
col name format a25
--select INCARNATION,
select name, path, MOUNT_STATUS,HEADER_STATUS, MODE_STATUS, STATE, group_number,
OS_MB, TOTAL_MB, FREE_MB, READS, WRITES, READ_TIME, WRITE_TIME, BYTES_READ,
BYTES_WRITTEN, REPAIR_TIMER, MOUNT_DATE, CREATE_DATE from v$asm_disk;

###################################
# TUNING and Analysis
###################################
--Only Performance Statistics
--N.B. Times are in hundredths of a second!
col READ_TIME format 9999999999.99
col WRITE_TIME format 9999999999.99
col BYTES_READ format 99999999999999.99
col BYTES_WRITTEN  format 99999999999999.99
select name, STATE, group_number, TOTAL_MB, FREE_MB,READS, WRITES, READ_TIME,
 WRITE_TIME, BYTES_READ, BYTES_WRITTEN, REPAIR_TIMER,MOUNT_DATE
from v$asm_disk order by group_number, name;


--Check the number of extents in use per disk inside one diskgroup.
select max(substr(name,1,30)) group_name, count(PXN_KFFXP) extents_per_disk,
DISK_KFFXP, GROUP_KFFXP from x$kffxp, v$ASM_DISKGROUP gr
where GROUP_KFFXP=&group_nr and GROUP_KFFXP=GROUP_NUMBER
group by GROUP_KFFXP, DISK_KFFXP order by GROUP_KFFXP, DISK_KFFXP;


--Find The File distribution Between Disks
SELECT * FROM v$asm_alias  WHERE  name='PWX_DATA.272.669293645';

SELECT GROUP_KFFXP Group#,DISK_KFFXP Disk#,AU_KFFXP AU#,XNUM_KFFXP Extent#
FROM   X$KFFXP WHERE  number_kffxp=(SELECT file_number FROM v$asm_alias
WHERE name='PWX_DATA.272.669293645');

--or

SELECT GROUP_KFFXP Group#,DISK_KFFXP Disk#,AU_KFFXP AU#,XNUM_KFFXP Extent#
FROM X$KFFXP WHERE  number_kffxp=&DataFile_Number;

--or
select d.name, XV.GROUP_KFFXP Group#, XV.DISK_KFFXP Disk#,
XV.NUMBER_KFFXP File_Number, XV.AU_KFFXP AU#, XV.XNUM_KFFXP Extent#,
XV.ADDR, XV.INDX, XV.INST_ID, XV.COMPOUND_KFFXP, XV.INCARN_KFFXP,
XV.PXN_KFFXP, XV.XNUM_KFFXP,XV.LXN_KFFXP, XV.FLAGS_KFFXP,
XV.CHK_KFFXP, XV.SIZE_KFFXP from v$asm_disk d, X$KFFXP XV
where d.GROUP_NUMBER=XV.GROUP_KFFXP and d.DISK_NUMBER=XV.DISK_KFFXP
and number_kffxp=&File_NUM order by 2,3,4;


--List the hierarchical tree of files stored in the diskgroup
SELECT concat('+'||gname, sys_connect_by_path(aname, '/')) full_alias_path FROM
(SELECT g.name gname, a.parent_index pindex, a.name aname,
a.reference_index rindex FROM v$asm_alias a, v$asm_diskgroup g
WHERE a.group_number = g.group_number)
START WITH (mod(pindex, power(2, 24))) = 0
CONNECT BY PRIOR rindex = pindex;
################################################################

###################################
#Create and Modify Disk Group
###################################

create diskgroup FRA1 external redundancy disk '/dev/vx/rdsk/oraASMdg/fra1'
ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';

alter diskgroup FRA1  check all;

--on +ASM2 :
alter diskgroup FRA1 mount;


--Add a second disk:
alter diskgroup FRA1 add disk '/dev/vx/rdsk/oraASMdg/fra2';


--Add several disks with a wildcard:
alter diskgroup FRA1 add disk '/dev/vx/rdsk/oraASMdg/fra*';


--Remove a disk from a diskgroup:
alter diskgroup FRA1 drop disk 'FRA1_0002';


--Drop the entire DiskGroup
drop diskgroup DATA1 including contents;

--How to DROP the entire DiskGroup when it is in NOMOUNT status:
--Generate the dd commands that will reset the header of all the
--disks belonging to GROUP_NUMBER=0!!!!
select 'dd if=/dev/zero of=''' ||PATH||''' bs=8192 count=100' from v$asm_disk
where GROUP_NUMBER=0;


select * from v$asm_operation;

---------------------------------------------------------------------------------------

alter diskgroup FRA1 drop disk 'FRA1_0002';
alter diskgroup FRA1 add disk '/dev/vx/rdsk/fra1dg/fra3';

alter diskgroup FRA1 drop disk 'FRA1_0003';
alter diskgroup FRA1 add disk '/dev/vx/rdsk/fra1dg/fra4';
--When a new diskgroup is created, it is only mounted on the local instance,
--and only the instance-specific entry for the asm_diskgroups parameter is updated.
--By manually mounting the diskgroup on the other instances, the asm_diskgroups
--parameter on those instances is updated.

--on +ASM1 :
create diskgroup FRA1 external redundancy disk '/dev/vx/rdsk/fradg/fra1'
ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';

--on +ASM2 :
alter diskgroup FRA1 mount;


--This works even for ongoing rebalances!!!
alter diskgroup DATA1 rebalance power 10;
################################################################
# New ASM Command Line Utility (ASMCMD) Commands and Options
################################################################

ASMCMD Command Reference:

Command Description
--------------------
- cd Command Changes the current directory to the specified directory.
- cp Command Enables you to copy files between ASM disk groups on a local instance and remote instances.
- du Command Displays the total disk space occupied by ASM files in the specified ASM directory and all of its subdirectories, recursively.
- exit Command Exits ASMCMD.
- find Command Lists the paths of all occurrences of the specified name (with wildcards) under the specified directory.
- help Command Displays the syntax and description of ASMCMD commands.
- ls Command Lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all disk groups.
- lsct Command Lists information about current ASM clients.
- lsdg Command Lists all disk groups and their attributes.
- lsdsk Command Lists disks visible to ASM.
- md_backup Command Creates a backup of all of the mounted disk groups.
- md_restore Command Restores disk groups from a backup.
- mkalias Command Creates an alias for system-generated filenames.
- mkdir Command Creates ASM directories.
- pwd Command Displays the path of the current ASM directory.
- remap Command Repairs a range of physical blocks on a disk.
- rm Command Deletes the specified ASM files or directories.
- rmalias Command Deletes the specified alias, retaining the file that the alias points to.

--------
-- kfed tool, run from the Unix prompt, for reading an ASM disk header.
kfed read /dev/vx/rdsk/fra1dg/fra1
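To pull just the most useful header fields out of the kfed output, filter it; the field names below come from the kfdhdb header structure and may vary by version:
kfed read /dev/vx/rdsk/fra1dg/fra1 | grep -E 'dskname|grpname|hdrsts'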


################################################################
# CREATE and Manage Tablespaces and Datafiles on ASM
################################################################

CREATE TABLESPACE my_ts DATAFILE '+disk_group_1' SIZE 100M AUTOEXTEND ON;

ALTER TABLESPACE sysaux ADD DATAFILE '+disk_group_1' SIZE 100M;

ALTER DATABASE DATAFILE '+DATA1/dbname/datafile/audit.259.668957419' RESIZE 150M;


-------------------------
create diskgroup DATA1 external redundancy disk '/dev/vx/rdsk/oraASMdg/fra1'
ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';


select 'alter diskgroup DATA1 add disk ''' || PATH || ''';' from v$asm_disk
where GROUP_NUMBER=0 and rownum<=&Num_Disks_to_add;

select 'alter diskgroup FRA1 add disk ''' || PATH || ''';' from v$asm_disk
where GROUP_NUMBER=0 and rownum<=&Num_Disks_to_add;


--Remove ASM header
select 'dd if=/dev/zero of=''' ||PATH||''' bs=8192 count=100' from v$asm_disk
where GROUP_NUMBER=0;







-- Report: ASM disk groups and per-disk space usage
SET LINESIZE  200
SET PAGESIZE  9999
SET TRIMSPOOL on
SET VERIFY    off

COLUMN disk_group_number      FORMAT 99999         HEAD 'Group #'
COLUMN disk_file_path         FORMAT a30           HEAD 'Path'
COLUMN disk_file_name         FORMAT a10           HEAD 'File Name'
COLUMN disk_file_fail_group   FORMAT a10           HEAD 'Fail Group'
COLUMN total_mb               FORMAT 999,999,999   HEAD 'File Size (MB)'
COLUMN used_mb                FORMAT 999,999,999   HEAD 'Used Size (MB)'
COLUMN pct_used               FORMAT 999.99        HEAD 'Pct. Used'

break on report on disk_group_number skip 1
compute sum label ""              of total_mb used_mb on disk_group_number
compute sum label "Total: "       of total_mb used_mb on report

SELECT
  a.group_number,
  a.name,
  a.state
FROM
    v$asm_diskgroup a
ORDER BY
    a.group_number
;

SELECT
  b.group_number                                   disk_group_number,
  b.path                                           disk_file_path,
  b.name                                           disk_file_name,
  b.failgroup                                      disk_file_fail_group,
  b.total_mb                                       total_mb,
  (b.total_mb - b.free_mb)                         used_mb,
  decode(b.total_mb,0,0,(ROUND((1- (b.free_mb / b.total_mb))*100, 2)))      pct_used
FROM
    v$asm_disk b
WHERE b.header_status = 'MEMBER'
ORDER BY
  b.group_number
;


Grid Control Repository: Steps to Migrate the 10g Repository from One Machine to Another on the Same Platform [Oracle Doc ID 1068841.1]

Modified 14-APR-2011     Type HOWTO     Status PUBLISHED
In this Document
  Goal
  Solution
     Pre-Migration EM-Specific Steps
     Repository Database Migration
     Post-Migration EM-Specific Steps
  References




Applies to:

Enterprise Manager Grid Control - Version: 10.2.0.1 to 10.2.0.5 - Release: 10.2 to 10.2
Information in this document applies to any platform.

Goal

This note describes the steps to be followed for migrating the Grid Control 10G Repository Database from one machine to another machine, both having the same Operating System and version.

The scenario outlined assumes the following:

- The source and destination Operating System (platform) and version are the same.
- Valid Operating System level backups are taken for the repository database and for the OMS home (on the OMS machine).
- Adequate down-time is scheduled for the Grid Control setup during the migration.
- The following steps have been pre-attempted on a Test setup before being tried with a Production Grid Control setup.

  • For Cross-platform migration of the 10g Repository, refer to:
    Note 388090.1: Grid Control Repository: Steps for Cross Platform Migration of the Grid Control Repository/OMS
  • For the 11g Grid Control repository migration, refer to
    Note 1302281.1: 11g Grid Control: Steps for Migrating the 11g Grid Control Repository from One Database to Another.


Solution

Pre-Migration EM-Specific Steps

  • Stop the HTTP Server component of the OMS, to stop the Agents from uploading any further xml files during the Repository Database migration:
    cd <OMS_HOME>/opmn/bin
    opmnctl stopproc process-type=HTTP_Server

    If there are multiple OMSes, then the above step should be performed on all OMS machines.

    Wait for some time and check whether there are any pending xml files in the <OMS_HOME>/sysman/recv directory, or in the directory given by the ReceiveDir parameter in the <OMS_HOME>/sysman/config/emoms.properties file (if this has been manually modified from the default value).
    Wait for all the xml files to be cleared up. A small xml backlog is acceptable, but a large number of files may cause performance issues when the OMS is restarted after the repository database migration (pending backlog before migration + backlog created during migration).

    After this, stop the OMS:

    cd <OMS_HOME>/opmn/bin
    opmnctl stopall

  • Login to the repository database and run the following SQL statements:

    $ sqlplus "/as sysdba"
    SQL> show parameter job_queue_processes
    SQL> alter system set job_queue_processes=0;
           Make a note of the value that was set for the job_queue_processes initialization parameter.
           Verify from the database alert.log file that the database is running fine without any major issues.
  • Login to the database as the sysman user and execute:

    SQL> exec emd_maintenance.partition_maintenance;  -- this will drop the old partitions
    SQL> exec emd_maintenance.remove_em_dbms_jobs;
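    To confirm the EM jobs are gone before migrating, a quick sanity check (assumes the jobs run under the SYSMAN schema):
    SQL> select count(*) from dba_jobs where schema_user = 'SYSMAN';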


Repository Database Migration

  • Install the RDBMS software on the destination machine. Ensure that the RDBMS software version and one-off patches applied are the same on both the destination and source machines. Also, install a Grid Agent on this machine if one is not already present.

  • Clone / Migrate the repository database to the new machine using any of the database cloning methods (RMAN, DBCA, etc.).

    References:
    Note 259694.1: Oracle10G RMAN Database Duplication
    Note 388431.1: Creating a Duplicate Database on a New Host.

    Note 733530.1: Grid Control Repository: How to Move Grid Control Repository from One Machine to Another Using DBCA (on the same Platform)
    Note 954615.1: Grid Control Repository: How to use RMAN to Move and Upgrade the Repository Database in a RAC Cluster Configuration?
    Note 602955.1: Grid Control Repository: Steps To Move the Repository Using an Inconsistent (Hot) Database Backup

  • If DBCA is used for the new database, then all the schema users will be locked and their passwords expired, so the accounts need to be unlocked and their passwords reset manually.
    Execute the following for the EM-specific users:

    $ sqlplus "/as sysdba"
    SQL> alter user SYSMAN identified by <password> account unlock;
    SQL> alter user DBSNMP identified by <password> account unlock;
    SQL> alter user MGMT_VIEW account unlock;

    Note:
    - The passwords for the sysman and dbsnmp users should be the same as the passwords in the original repository database.
    - For the list of other users, run "select username from dba_users;"
    - Unlock and reset the passwords for the other users using 'alter user' commands like the above; see the sketch below for a way to generate them in bulk.
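    A sketch for generating the unlock statements in bulk; review the output before running it:
    SQL> select 'alter user ' || username || ' account unlock;'
         from dba_users where account_status like '%LOCKED%';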

Post-Migration EM-Specific Steps

  • Login to new Repository database as sysman and execute:

    SQL> exec emd_maintenance.analyze_emd_schema('SYSMAN');

  • Login to the repository database as sysdba, restore the original value of the job_queue_processes initialization parameter, and submit the EM jobs:
    SQL> alter system set job_queue_processes=10 scope=both;  -- use the value noted earlier
    SQL> connect sysman/<password>
    SQL> exec emd_maintenance.submit_em_dbms_jobs;

  • On the OMS machine:
    - Take a backup of the <OMS_HOME>/sysman/config/emoms.properties file.
    - In the emoms.properties file, look for the parameter oracle.sysman.eml.mntr.emdRepConnectDescriptor.
    - Update the details: host name / listener port / Repository SID in this parameter value with the connection details of the new repository.
    - The above steps should be performed on all the OMSes connecting to the repository database.

  • Start up the OMS:

    cd <OMS_HOME>/opmn/bin
    opmnctl startall
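    To confirm all the OMS processes came back up, opmnctl status can be used:
    cd <OMS_HOME>/opmn/bin
    opmnctl status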

  • Connect to the Grid Console from a browser. Navigate to Setup -> Management Services and Repository -> Monitoring Configuration. Update the connection details for the "Management Services and Repository" target to point to the new Repository.

  • From a machine where emcli is configured, relocate the repository database target from the old machine to the new database machine:

    emcli relocate_targets -src_agent="<agent on the old host>"
    -dest_agent="<agent on the new host>" -target_name="<repository database target name>" -target_type=oracle_database -copy_from_src -force

    Reference:
    Note 577443.1: How to Setup and Configure Target Relocate using EMCLI 

References

NOTE:781314.1 - Grid Control Repository: Steps to Rename the Repository Database Name using NID Utility
NOTE:1302281.1 - 11g Grid Control: Steps for Migrating the 11g Grid Control Repository from One Database to Another
NOTE:388090.1 - Grid Control Repository: Steps for Cross Platform Migration of the Grid Control Repository


Creating a Duplicate Database on a New Host. [Oracle Document ID 388431.1]

  Modified 01-FEB-2011     Type HOWTO     Status PUBLISHED  
In this Document
  Goal
  Solution
     Duplicate database from server A to server B (Non ASM)
     1. Backup of the primary database.
     2. Determine how much disk space will be required.
     3. Ensuring you have enough space on your target server.
     4. Making the backup available for the duplicate process.
     5. Creating the init.ora & administration directories for the duplicate database.
     6. Ensuring SQL*NET connections to primary database and RMAN catalog are working.
     7. Prepare RMAN duplicate script.
     8. Execute the RMAN script.
  References





Applies to:

Oracle Server - Enterprise Edition - Version: 9.0.1.4 to 11.2.0.0 - Release: 9.0.1 to 11.2
Information in this document applies to any platform.
***Checked for relevance on 06-Aug-2009***
***Checked for relevance on 01-Feb-2011***

Goal

The following note will guide a user in creating a duplicate database on a new host (non-ASM). The scenario refers to a recovery catalog; however, a catalog is not mandatory. The following example was completed on a Linux operating system.

The note is applicable for 9i, 10g and 11g; however, 11g has new features related to DUPLICATE.
For more details on the 11g DUPLICATE new features, see Note 452868.1: RMAN 'Duplicate Database' Feature in 11G

Solution

Duplicate database from server A to server B (Non ASM)


Assumed database names:
Primary Database SID:       PROD
Duplicate Database SID:    AUX
RMAN Catalog SID:          RMAN


====================================================================================
Steps

1.  Backup the primary database.

2.  Determine how much disk space will be required.

3.  Ensuring you have enough space on your target server.

4.  Making the backup available for the duplicate process.

5.  Creating the init.ora & administration directories for the duplicate database.

6.  Ensuring SQL*NET connections to primary database and RMAN catalog are working.

7.  Prepare RMAN duplicate script.

8.  Execute the RMAN script.

=====================================================================================

1. Backup of the primary database.


Host A (Target)

Prior to creating a duplicate database you must have a backup of the target database. In this example we assume the backup media is disk. Find appropriate disk space on host A and take a full backup, including archive logs and the current controlfile (Figure 1a). If you are planning to duplicate a subset of the database, refer to Figure 1b, which illustrates the RMAN command for backing up specific tablespaces.

[oracle@linux] export ORACLE_SID=PROD

[oracle@linux] rman target=/ catalog=rman/rman@RMAN

RMAN> run {
        allocate channel d1 type disk;
        backup format '/backups/PROD/df_t%t_s%s_p%p' database;
        sql 'alter system archive log current';
        backup format '/backups/PROD/al_t%t_s%s_p%p' archivelog all;
        release channel d1;
      }

       Figure 1a - This command will perform a full database backup including archivelogs and the current controlfile.


[oracle@linux] export ORACLE_SID=PROD

[oracle@linux] rman target=/ catalog=rman/rman@RMAN

RMAN> run {
        allocate channel d1 type disk;
        backup format '/backups/PROD/df_t%t_s%s_p%p'
               tablespace SYSTEM, SYSAUX, UNDO, USERS;
        sql 'alter system archive log current';
        backup format '/backups/PROD/al_t%t_s%s_p%p' archivelog all;
        release channel d1;
     }
    Figure 1b- This command will perform a tablespace backup ( SYSTEM, SYSAUX, UNDO & USERS) including archive logs and the current controlfile.

2. Determine how much disk space will be required.


Host A(Target) -

After deciding what you will be duplicating, calculate the entire space this will require on the new host. The full database calculation (Figure 2a) will calculate the entire space required, whereas Figure 2b allows you to enter the tablespace names in order to calculate the space required. Figure 2c provides a sample output.

Note: this SQL is valid for version 10g only.
SQL> select DF.TOTAL/1048576 "DataFile Size Mb",
            LOG.TOTAL/1048576 "Redo Log Size Mb",
            CONTROL.TOTAL/1048576 "Control File Size Mb",
            (DF.TOTAL + LOG.TOTAL + CONTROL.TOTAL)/1048576 "Total Size Mb"
     from dual,
    (select sum(a.bytes) TOTAL from dba_data_files a) DF,
    (select sum(b.bytes) TOTAL from v$log b) LOG,
    (select sum((cffsz+1)*cfbsz) TOTAL from x$kcccf c) CONTROL;
     Figure 2a - Calculate total space for all datafiles within database.

SQL> select DF.TOTAL/1048576 "DataFile Size Mb",
            LOG.TOTAL/1048576 "Redo Log Size Mb",
            CONTROL.TOTAL/1048576 "Control File Size Mb",
            (DF.TOTAL + LOG.TOTAL + CONTROL.TOTAL)/1048576 "Total Size Mb" from dual,
    (select sum(a.bytes) TOTAL from dba_data_files a
     where tablespace_name in('SYSTEM','UNDOTBS1', 'SYSAUX', 'USERS')) DF,
    (select sum(b.bytes) TOTAL from v$log b) LOG,
    (select sum((cffsz+1)*cfbsz) TOTAL from x$kcccf c) CONTROL;
     Figure 2b - Calculate space for list of tablespaces within primary database.

DataFile Size Mb Redo Log Size Mb Control File Size Mb Total Size Mb
---------------- ---------------- -------------------- -------------
             900              150             20.34375    1070.34375
     Figure 2c - Sample output of space calculation.

3. Ensuring you have enough space on your target server.


Host B (Aux)

Prior to starting the duplicate process you must ensure you have enough disk space within file system to hold the database.

[oracle@linux] df -kh
Filesystem         Size     Used     Avail     Use%     Mounted on
/dev/vg01/root     9.9G     2.8G      6.6G      30%              /
/dev/sda1          145M      16M      122M      12%          /boot
none               999M        0      999M       0%       /dev/shm
/dev/vg01/tmp      2.0G     383M      1.5G      20%           /tmp
/dev/vg01/u01       20G      12G      7.0G      62%           /u01
/dev/vg01/u02      4.9G    1010M      3.6G      22%           /u02
/dev/vg01/backups  5.9G     1.2G      4.4G      22%       /backups
/dev/vg01/oradata   15G      13G      2.0G      87%       /oradata
     Figure 3a - Sample output showing the space available on your filesystem.

Compare the results received from this query with the output from 2a or 2b and ensure you have enough disk space for your duplicate database.


4. Making the backup available for the duplicate process.


If your backup resides on disk you will need to copy this backup from host A to host B. Ensure you place it in the same directory as where it was created. In the example below (Figure 4a) the backup pieces reside in '/backups/PROD'; these files need to be copied into the same directory on host B.

Also ensure that all archive log backups have been moved, along with any archive logs in the archive log directory that may be required for the recovery. The archive logs required will depend on the point in time of the recovery.

RMAN> list backup;

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
22 Full 529M DISK 00:00:51 2006/05/16 11:12:54
BP Key: 22 Status: AVAILABLE Compressed: NO Tag: TAG20060516T111203
Piece Name: /backups/PROD/df_t590584323_s23_p1
List of Datafiles in backup set 22
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- ------------------- ----
1 Full 1393845 2006/05/16 11:12:03 /oradata/PROD/system01.dbf
2 Full 1393845 2006/05/16 11:12:03 /oradata/PROD/undotbs01.dbf
3 Full 1393845 2006/05/16 11:12:03 /oradata/PROD/sysaux01.dbf
4 Full 1393845 2006/05/16 11:12:03 /oradata/PROD/users01.dbf


BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
24 48M DISK 00:00:06 2006/05/16 11:13:07
BP Key: 24 Status: AVAILABLE Compressed: NO Tag: TAG20060516T111301
Piece Name: /backups/PROD/al_t590584381_s25_p1

List of Archived Logs in backup set 24
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 78 1344750 2006/05/15 04:55:34 1353208 2006/05/15 10:00:19
1 79 1353208 2006/05/15 10:00:19 1353213 2006/05/15 10:00:20
1 80 1353213 2006/05/15 10:00:20 1371564 2006/05/15 22:00:11
1 81 1371564 2006/05/15 22:00:11 1373291 2006/05/15 22:00:59
1 82 1373291 2006/05/15 22:00:59 1381066 2006/05/16 03:00:05
1 83 1381066 2006/05/16 03:00:05 1390685 2006/05/16 09:03:00
1 84 1390685 2006/05/16 09:03:00 1393870 2006/05/16 11:13:00
1 85 1393870 2006/05/16 11:13:00 1393929 2006/05/16 11:13:00
     Figure 4a - A list backup showing the backup pieces that need to be copied across Host B

5. Creating the init.ora & administration directories for the duplicate database.


Host B(AUX)


Create the pfile [initAUX.ora] parameter file in the $ORACLE_HOME/dbs directory for the auxiliary database. An example is shown below (Figure 5a) with bare-minimum settings. Note: you may need to set additional parameters to match your production database (refer to your primary init.ora).

# +----------------------------------------+
# | FILE : initAUX.ora                     |
# | DATABASE NAME : AUX                    |
# +----------------------------------------+
# Set the below to location of the clone Duplicate database / name of
# clone database.

audit_file_dest =/oradata/AUX/adump
background_dump_dest =/oradata/AUX/bdump
core_dump_dest =/oradata/AUX/cdump
user_dump_dest =/oradata/AUX/udump
db_name ="AUX"
instance_name =AUX

cluster_database=false            # in case the production database is a RAC environment


# Set the below to the location of the duplicate clone control file.

control_files =('/oradata/AUX/control01.ctl','/oradata/AUX/control02.ctl','/oradata/AUX/control03.ctl')



# Set the below for the from and to location for all data files / redo
# logs to be cloned.

db_file_name_convert =("/oradata/PROD/", "/oradata/AUX/")
log_file_name_convert =("/oradata/PROD/", "/oradata/AUX/")
#Set the below to the same as the production target

undo_management =AUTO
undo_retention =10800
undo_tablespace =UNDOTBS1
db_block_size = 8192
compatible = 10.2.0.1.0
     Figure 5a - Sample initAUX.ora with minimal settings

Following the creation of initAUX.ora, start the auxiliary instance in NOMOUNT:

[oracle@linux]export ORACLE_SID=AUX

[oracle@linux] sqlplus '/as sysdba'

SQL> startup nomount;
Figure 5b - startup nomount the AUX instance.


6. Ensuring SQL*NET connections to primary database and RMAN catalog are working.


Host B(AUX)

Ensure the production target database is open or at least mounted. If using a catalog database, this must also be open. Test your SQL*NET connections: from Host B you must be able to connect to the primary DB as sysdba and make an RMAN connection to the RMAN catalog. Perform a basic test using sqlplus. When you have successfully connected, exit sqlplus and move on to the next step.

[oracle@linux]% sqlplus 'sys/oracle@PROD as sysdba'

[oracle@linux]% sqlplus rman/rman@RMAN   (not mandatory)
     Figure 6a - SQL*NET connections
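If host B cannot yet resolve the PROD alias, a tnsnames.ora entry along these lines is needed on host B (the host name and port here are assumptions for this example):

PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hosta.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PROD))
  )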

7. Prepare RMAN duplicate script.


In a working directory on Host B, create an RMAN script file duplicate.rcv. The example below (Figure 7a) shows the command for a complete duplicate; Figure 7b skips the tablespaces which are not required in the duplicate; and Figure 7c provides the syntax required for a point-in-time duplicate.

run {
  allocate auxiliary channel C1 device type disk;
  duplicate target database to AUX;
}
     Figure 7a - Sample duplicate command.

run {
  allocate auxiliary channel C1 device type disk;
  duplicate target database to AUX skip tablespace ABC, XYZ;
}
     Figure 7b- Sample duplicate script omitting optional tablespaces;

run {
  set until time "to_date('Jan 01 2000 12:00:00','Mon DD YYYY HH24:MI:SS')";
  allocate auxiliary channel C1 device type disk;
  duplicate target database to AUX;
}
      Figure 7c- Sample duplicate script to a point in time.

8. Execute the RMAN script.


Start RMAN and connect to the production target, the catalog instance, and also the auxiliary clone. Run the RMAN duplicate script as shown below. Before doing this, ensure that the ORACLE_SID environment variable is set to the duplicate clone database.

[oracle@linux] export ORACLE_SID=AUX

[oracle@linux] rman target sys/sys@PROD catalog rman/rman@rman auxiliary /

RMAN> @duplicate.rcv
    Figure 8a - Execute the RMAN duplicate script.
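Alternatively, the script can be run non-interactively and logged; cmdfile and log are standard RMAN command-line arguments:

[oracle@linux] rman target sys/sys@PROD catalog rman/rman@rman auxiliary / cmdfile=duplicate.rcv log=duplicate.log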

References

NOTE:452868.1 - RMAN 'Duplicate Database' Feature in 11G