Tuesday, December 29, 2009

BeSt Of LuCk

Dear Friends

Please go through this and just follow the rules.

Copy the block below into Notepad, press Ctrl+H, type 6 in the Find box and an
underscore (_) in the Replace box, then click Replace All. You will be
thrilled.

666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666
666996666699669966999999996699666669966666669966666699669999999966996666996666666666666696666666666666666666666
666996666699669966999999996699666669966666666996666996669999999966996666996666666666666999666666666666666666666

666996666699669966996666666699666669966666666699669966669966669966996666996666666666669969966666666666666666666
666996696699669966999999996699999999966666666669999666669966669966996666996666666666699666996666666666666666666
666996999699669966999999996699999999966666666666996666669966669966996666996666666666999999999666666666666666666

666999969999669966666666996699666669966666666666996666669966669966996666996666666669999999999966666666666666666
666999666999669966999999996699666669966666666666996666669999999966996666996666666699666666666996666666666666666
666996666699669966999999996699666669966666666666996666669999999966999999996666666996666666666699666666666666666

666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666
666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666
666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666
669966666996666666669666666666999999996699999999669966666699666666669966666996699999999669966666996666666666666
669966666996666666699966666666999999996699999999666996666996666666669996666996699999999669966666996666666666666

669966666996666666996996666666996666996699666699666699669966666666669999666996699666666669966666996666666666666
669999999996666669966699666666999999996699999999666669999666666666669969966996699999999669966966996666666666666
669999999996666699999999966666999999996699999999666666996666666666669966996996699999999669969996996666666666666
669966666996666999999999996666999666666699666666666666996666666666669966699996699666666669999699996666666666666

669966666996669966666666699666996666666699666666666666996666666666669966669996699999999669996669996666666666666
669966666996699666666666669966996666666699666666666666996666666666669966666996699999999669966666996666666666666
666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666
666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666
666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666
666996666669966999999996666666669666666666999999996666666699999996699999999666999666699999999666666666666666666
666699666699666999999996666666699966666666999999996666666699999999699666699666699666699666699666666666666666666
666669966996666996666666666666996996666666996666996666666666666699699966699666699666699966699666666666666666666
666666999966666999999996666669966699666666999999996666666666666996699696699666699666699696699666666666666666666

666666699666666999999996666699999999966666999999996666666666669966699669699666699666699669699666666666666666666
666666699666666996666666666999999999996666996996666666666666996666699666999666699666699666999666666666666666666
666666699666666999999996669966666666699666996699666666666699999999699666699666699666699666699666666666666666666
666666699666666999999996699666666666669966996666996666666699999999699999999666999966699999999666666666666666666
666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666

BEST OF LUCK




Monday, August 3, 2009

Dropping oracle10g Database

Deleting two databases (PROD/RMAN)

1. Deleting Target database using RMAN (PROD using RMAN)

Login as PROD user:

$ sqlplus /nolog

Sql> connect sys/pwd@connect_string as sysdba

Sql> startup mount exclusive

Sql> alter system enable restricted session;

Sql> exit

Login as rman user

$ rman target sys/pwd@connect_string catalog rman/pwd@rman

RMAN> drop database including backup;

2. Deleting RMAN Database:

i. Connect to the recovery catalog and drop it

$ rman catalog rman/pwd@rman

RMAN> drop catalog;

RMAN> exit

ii. Check datafiles, controlfiles and redo logs:

SQL> select name from v$datafile;
SQL> select member from v$logfile;
SQL> select name from v$controlfile;

iii. Drop Database

SQL> startup mount exclusive restrict;
SQL> drop database;

3. Removing Oracle10g Software:

Login as software owner

Run /bin/runinstaller and uninstall oracle software

Restrictions and Usage Notes (FROM ORACLE DOCS)

  • This command can only be run from RMAN.
  • You must be connected to the target database from RMAN while the database is mounted in EXCLUSIVE mode with RESTRICTED SESSION enabled.
  • When using the "DROP DATABASE" command, RMAN drops the target database including the following files at the operating system level:
    • Datafiles
    • Online Redo Log Files
    • Controlfiles
    • SPFILE (if it exists)
  • When including the "INCLUDING BACKUPS" clause, RMAN will delete the files listed above as well as the following files at the operating system level:
    • Archive Redo Logs
    • Backup pieces generated by RMAN for the target database
  • When using the "DROP DATABASE" command with RMAN connected to a recovery catalog, RMAN will unregister the target database.
  • The "DROP DATABASE" command does not delete the following files:
    • init.ora (text version of the Oracle initialization file)
    • password file
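Since DROP DATABASE leaves the text init.ora and the password file behind, they have to be removed by hand. A minimal sketch; the file names are illustrative and a scratch directory stands in for $ORACLE_HOME/dbs:

```shell
# DROP DATABASE leaves the text init.ora and the password file behind;
# remove them by hand. A scratch directory stands in for $ORACLE_HOME/dbs,
# and the file names are illustrative.
DBS=./dbs_demo
mkdir -p "$DBS"
touch "$DBS/initPROD.ora" "$DBS/orapwPROD"    # pretend leftovers from PROD
rm -f "$DBS/initPROD.ora" "$DBS/orapwPROD"    # the manual cleanup step
```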

Dropping 9i Database (PROD/RMAN)

1. Drop backups using rman

rman target=sys/pwd@connect_string catalog rman/pwd@rman

RMAN> LIST BACKUP SUMMARY;

RMAN> DELETE BACKUP;

2. Unregister database

SQL> CONNECT rman/rman@rman
Connected.

SQL> SELECT db_key, db_id FROM db;


DB_KEY      DB_ID
------ ----------
     1 1465981354

SQL> EXECUTE dbms_rcvcat.unregisterdatabase(1, 1465981354);

PL/SQL procedure successfully completed.

3. Drop rman database

Use dbca to drop database

4. Drop prod database

Use dbca to drop database

Note: If the database was created manually without using the DBCA utility, add an entry in ORATAB so that DBCA can identify the database
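Adding the entry can be scripted; a sketch. The real file is /var/opt/oracle/oratab on Solaris (or /etc/oratab on Linux); a local copy is used here for illustration:

```shell
# Add the SID entry DBCA needs. The real file is /var/opt/oracle/oratab
# (Solaris) or /etc/oratab (Linux); a local copy is used here for illustration.
ORATAB=./oratab.demo
ENTRY='prod:/u00/app/oracle/product/9.2.0.1.0:N'
# only append if an entry for this SID is not already present
grep -q '^prod:' "$ORATAB" 2>/dev/null || echo "$ENTRY" >> "$ORATAB"
cat "$ORATAB"
```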

5. Removing 9i Software:

Login as software owner

Run /install/runinstaller and uninstall oracle software

Registering Manual Database with DBCA

A database created manually with the traditional CREATE DATABASE command cannot be identified by the DBCA utility for the following purposes:

Create or Manage templates

Drop database

Change database options

DBCA will not identify the SID of a manually created database unless an entry is added to the ORATAB file.

Sample:

$ vi /var/opt/oracle/oratab

prod:/u00/app/oracle/product/9.2.0.1.0:N

Drop Database using DBCA

When you drop a database using DBCA, it creates ShutdownInst.log in $ORACLE_HOME/assistants/dbca/logs. If this file already exists, the drop may fail, so move the old file aside and run DBCA again.
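Moving the old log aside can be done as below; a sketch, where LOGDIR is a scratch stand-in for $ORACLE_HOME/assistants/dbca/logs:

```shell
# Move an existing ShutdownInst.log aside so the DBCA drop does not fail.
# LOGDIR is a scratch stand-in for $ORACLE_HOME/assistants/dbca/logs.
LOGDIR=./dbca_logs_demo
mkdir -p "$LOGDIR"
touch "$LOGDIR/ShutdownInst.log"          # pretend an old run left it behind
if [ -f "$LOGDIR/ShutdownInst.log" ]; then
    # timestamp the old copy instead of deleting it
    mv "$LOGDIR/ShutdownInst.log" \
       "$LOGDIR/ShutdownInst.log.$(date '+%Y%m%d%H%M%S')"
fi
```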

Sample:

$ more ShutdownInst.log

/u00/app/oracle/oradata/prod/control01.ctl

/u00/app/oracle/oradata/prod/control02.ctl

/u00/app/oracle/oradata/prod/control03.ctl

/u01/prod/database/drsys_prod.dbf

/u01/prod/database/log1a_prod.rdo

/u01/prod/database/log2a_prod.rdo

/u01/prod/database/log3a_prod.rdo

/u01/prod/database/log4a_prod.rdo

/u01/prod/database/system_prod.dbf

/u01/prod/database/tools_prod.dbf

/u01/prod/database/xdb_prod.dbf

/u02/prod/example_prod.dbf

/u02/prod/users01_prod.dbf

/u02/prod/users02_prod.dbf

/u03/prod/log1b_prod.rdo

/u03/prod/log2b_prod.rdo

/u03/prod/log3b_prod.rdo

/u03/prod/log4b_prod.rdo

/u04/prod/control04.ctl

/u04/prod/index01_prod.dbf

/u05/prod/temp01_prod.dbf

/u05/prod/undotbs01_prod.dbf

Database closed.

Database dismounted.

ORACLE instance shut down.

Monday, July 20, 2009

Switching Undo Tablespace

CREATE UNDO TABLESPACE "UNDOTBS"

DATAFILE '/u05/abc/oradata/undotbs.dbf' SIZE 4000M;

SQL> select name from v$tablespace where name like 'UNDO%';
NAME

-------------------------

UNDOTBS1

UNDOTBS

ALTER SYSTEM SET undo_tablespace='UNDOTBS' SCOPE=BOTH;

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     21600
undo_tablespace                      string      UNDOTBS

SQL> select status from v$rollstat;

STATUS

ONLINE

ONLINE

PENDING OFFLINE

PENDING OFFLINE

PENDING OFFLINE

PENDING OFFLINE

ONLINE

ONLINE

ONLINE

ONLINE

ONLINE

ONLINE

If any segment's status is PENDING OFFLINE, you cannot drop the old undo tablespace UNDOTBS1 yet:

SQL> drop tablespace undotbs1 including contents and datafiles;

If you drop it while it is still in use, you will get ORA-30013: undo tablespace 'UNDOTBS1' is currently in use.
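Before dropping the old undo tablespace, you can poll V$ROLLSTAT until no rollback segment is left in PENDING OFFLINE; a sketch, to be run from the same SYSDBA session:

```sql
-- wait until this count reaches 0 ...
SELECT COUNT(*) FROM v$rollstat WHERE status = 'PENDING OFFLINE';

-- ... then the old undo tablespace can be dropped safely
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
```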

Note: You can find messages like the following in alert.log after issuing the ALTER SYSTEM SET command:

Sat Jul 18 15:38:38 2009

Successfully onlined Undo Tablespace 7.

Undo Tablespace 1 moved to Pending Switch-Out state.

*** active transactions found in undo tablespace 1 during switch-out.

Sat Jul 18 15:46:28 2009

Undo Tablespace 1 successfully switched out.

Monday, July 13, 2009

Working on Datapump Export

Create a directory at O/S Level


$ mkdir dump_dir (for dump directory)

$ mkdir log_dir (for log directory)

$ chmod 775 dump_dir log_dir

Connect sqlplus and execute:

Sql> create directory datapump_dir as '/u01/dump_dir';

Sql> create directory datapump_log as '/u01/log_dir';

Sql> grant read, write on directory datapump_dir to public; -- to allow expdp of any schema

Sql> grant read, write on directory datapump_log to public;

$ more expdp.sh

#!/bin/ksh

export ORACLE_HOME="/u01/app/oracle/product/10.2.0"

export ORACLE_SID="abc"

echo export started at `date` >> /u05/abc/export/dailyexpdp_abc.log

$ORACLE_HOME/bin/expdp system/password dumpfile=datapump_dir:abc-expdp-`date '+%Y%m%d'`.dmp logfile=datapump_log:abc-expdp-`date '+%Y%m%d'`.log schemas=aet3

echo export stopped at `date` >> /u05/abc/export/dailyexpdp_abc.log

echo tape archiving started at `date` >> /u05/abc/export/dailyexpdp_abc.log

tar Ecvf /dev/rmt/0 /u05/abc/export

echo tape archiving stopped at `date` >> /u05/abc/export/dailyexpdp_abc.log

$ crontab -l

50 23 * * 0,1,2,3,4,5,6 /u06/abc/scripts/expdp.sh

It will generate a date-stamped dump file each day.
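The naming convention produced by the backticked `date` in expdp.sh can be seen in isolation; a sketch:

```shell
# The script names its dump and log files with the current date,
# matching dumpfile=...`date '+%Y%m%d'`.dmp in expdp.sh above.
STAMP=$(date '+%Y%m%d')
DUMPFILE="abc-expdp-${STAMP}.dmp"
LOGFILE="abc-expdp-${STAMP}.log"
echo "$DUMPFILE $LOGFILE"
```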

Sunday, July 12, 2009

Enable Crontab

Giving Crontab permission to new os user

Login as root
Goto /etc/cron.d directory
Add the user entry (ex. ota ) in cron.allow file
Note: if the cron.allow file doesn't exist, create it

Login as o/s user
Set the editor in .profile or .bash_profile
Example: EDITOR=vi;export EDITOR
Now you can schedule cron jobs.


To set up a cron job:

crontab -l (list current cron jobs)
crontab -e (edit current cron jobs)

_1_ _2_ _3_ _4_ _5_ executable_or_job

Where:
1 – Minutes (0-59)
2 – Hours (0-23)
3 – Day of month (1-31)
4 – Month (1-12)
5 – Day of week (0-6); 0 = Sunday, 1 = Monday

e.g. 0 3 * * 6 means run the job at 3 AM every Saturday.

This is useful for regularly scheduling tablespace threshold checks, FTP transfers, RMAN backups, removal of old log files, and other scripts.
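The field layout can be illustrated by splitting an entry apart; a sketch (the job path is made up):

```shell
# The five fields of a crontab entry, split apart for the example
# "0 3 * * 6" (3 AM every Saturday); the script path is illustrative.
ENTRY='0 3 * * 6 /u01/scripts/weekly_job.sh'
set -f            # keep the shell from expanding the asterisks
set -- $ENTRY     # word-split the entry into positional parameters
set +f
echo "minute=$1 hour=$2 day_of_month=$3 month=$4 day_of_week=$5 job=$6"
```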
Sample scheduled backup:

$ crontab -l

OTA Database:

50 23 * * 0,2,3,6 /u01/ota/dailyexp_ota.sh
50 23 * * 1,4 /u01/ota/offbkup_ota.sh
15 14 * * 0,1,2,3,4,6 /u01/ota/morning_arch.sh

Upgrade Oracle from 10.2.0.1 To 10.2.0.4 on Linux x86 AS4 / Solaris

Screenshots attached below are from a production upgrade on Solaris 64-bit.
Download patch 6810189 [p6810189_10204_Linux-x86]
$ unzip p6810189_10204_Linux-x86.zip
Shut down all databases, listeners, services, and Enterprise Manager.
Backup your database
Start Patching:
cd patchset_directory/Disk1
./runInstaller
OUI starts and patch gets installed; When prompted, run the $ORACLE_HOME/root.sh script as the root user.
Upgrading a Release 10.2 Database using Oracle Database Upgrade Assistant
After you install the patch set, you must perform the following steps on every database:
If you do not run the Oracle Database Upgrade Assistant then the following errors are displayed:
ORA-01092: ORACLE instance terminated.
ORA-39700: database must be opened with UPGRADE option.
Start the listener as follows:
$ lsnrctl start
Run Oracle Database Upgrade Assistant
$ dbua
§ Complete the following steps displayed in the Oracle DBUA
§ On the Welcome screen, click Next.
§ On the Databases screen, select the name of the Oracle Database that you want to update, then click Next.
§ On the Recompile Invalid Objects screen, select the Recompile the invalid objects at the end of upgrade option, then click Next.
§ If you have not backed up the database earlier, on the Backup screen, select the I would like this tool to back up the database option.
§ On the Summary screen, check the summary, and then click Finish.

Upgrade the Database Manually

After you install the patch set, you must perform the following steps on every database associated with the upgraded Oracle home:

Start Listener
Connect as sys user

Sql> startup upgrade

Sql> spool patch.log

Sql> @$ORACLE_HOME/rdbms/admin/catupgrd.sql

Sql> spool off

Review the patch.log file for errors and inspect the list of components that is displayed at the end of catupgrd.sql script.

This list provides the version and status of each SERVER component in the database.
If necessary, rerun the catupgrd.sql script after correcting any problems.

4. Restart the database:
Sql> Shutdown
Sql> Startup

5. Compile Invalid Objects

Run the utlrp.sql script to recompile all invalid PL/SQL packages now instead of when the packages are accessed for the first time. This step is optional but recommended.
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
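After utlrp.sql finishes, you can verify that the recompilation worked and review the component registry; a sketch:

```sql
-- should return 0 once recompilation is complete
SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';

-- version and status of each installed component after the upgrade
SELECT comp_name, version, status FROM dba_registry;
```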

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Release 10.2.0.4.0 - Production
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for 32-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

----Screenshots----
Enable X Manager

Install Xmanager Enterprise Edition on any client machine;
log in on the server as root,
run the gdmsetup utility,
and enable XDMCP.


Now you can login remotely using X browser

Delete archive log without catalog

Connect rman

$ rman target sys/password@connect_string

RMAN> crosscheck archivelog all;

RMAN> delete archivelog until time 'SYSDATE-7';

OR

RMAN> delete archivelog until sequence <sequence_number>;




Thursday, June 25, 2009

LOG APPLY SERVICES (LAS)

LOG APPLY SERVICE:

i. Applying redo immediately

ii. Time delay for redo apply

Applying redo data to Physical Standby Database

  1. start redo apply
  2. stop redo apply
  3. monitor redo apply

Applying redo data to Logical Standby Database

iii. start sql apply

iv. stop sql apply

v. monitor sql apply

LOG APPLY SERVICES (LAS): Process is Automatic

1. Redo Apply (Physical Standby Database only)

· Uses Media Recovery to keep Primary Database & Standby Database synchronized.

· Kept in mounted state & can be open for reporting.

2. SQL Apply (Logical Standby Database only)

· Reconstructs SQL statements from the redo data received from the Primary Database & applies them to the Logical Standby Database.

· Can be opened in R/W mode.

The Remote File Server process on the Standby Database receives the redo data and writes it to standby redo log files or archived redo log files.

RFS - Remote file server process

MRP - Managed recovery process (performs recovery, i.e. starts applying redo data)

LSP - Logical Standby Process.

FIG 6-1: Oracle Dataguard (B14239-04)

1. Applying Redo Data Immediately: (Real-Time Apply)

In this process the redo data is applied immediately as it is received, without waiting for the current standby redo log file to be archived.

Enabling real-Time Apply for Physical Standby Database

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE

Enabling real-Time Apply for Logical Standby Database

ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE

2. Specifying a time delay for applying Redologs:

Parameter used: LOG_ARCHIVE_DEST_n

Attribute: DELAY

Specified in: minutes

Default value: 30 minutes

The delay is used to prevent corrupted data from being applied to the Standby Database.

The delay timer starts after the redo is received and completely archived.

If real-time apply is enabled and a delay is specified, the delay is ignored.

Cancel the delay using NODELAY:

Ex:

Physical Standby Database

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY;

Logical Standby Database

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NODELAY;

Alternate option for delaying:

Using Flashback Database on the standby database.

Applying Redo data to Physical Standby Database:

· By default redo is always applied from archived redo logs of standby db.

· In case of real-time apply; redo applied directly from standby redo log files before they are archived.

· Redo data cannot be applied while the Physical Standby Database is open in read-only mode. Therefore, start the Physical Standby Database and keep it in the mounted state to apply redo.

Applying redo as a foreground process (control is not returned):

Sql> Alter database recover managed standby database;

Applying redo as a background process (control is returned):

Sql> Alter database recover managed standby database disconnect;

Using Real-time Apply:

Sql> Alter database recover managed standby database using current logfile;

Cancel Real-time Apply:

Sql> Alter database recover managed standby database cancel;

Monitoring:

Use OEM for monitoring log apply services.

Applying redo data to Logical Standby Database:

SQL Apply converts the redo data from archived redo log files or standby redo log files on the Logical Standby Database into SQL statements, and these SQL statements are then applied to the Logical Standby Database.

The Logical Standby Database always remains open, since SQL statements have to be executed.

It is used for reporting, summations, and querying purposes.

Starting sql apply

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;

Realtime:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

Stopping:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

Note: This command may take a while to complete, as SQL Apply waits to apply all committed transactions.

To stop immediately, use:

SQL> ALTER DATABASE ABORT LOGICAL STANDBY APPLY;

Monitoring:

Use OEM for monitoring log apply services.

REDO TRANSPORT SERVICES (RTS)

Redo Transport Service:

§ Automates the transfer of redo to one or more destinations

§ Resolve gaps in Redo Transport in case of network failures

FIG 5-1: Oracle Dataguard (B14239-04)

Destinations types for Redo Transport Service:

§ Oracle Data Guard Standby Database

§ Archived redo log repository

§ Oracle Streams real-time downstream capture database

§ Oracle Change Data Capture staging database

LOG_ARCHIVE_DEST_n parameter:

§ Number of destinations maximum (10)

§ Number of Standby Database configured max (10)

Attributes:

§ Location= specifies the local destinations

§ Service= specifies remote destinations

LOG_ARCHIVE_DEST_n is used along with LOG_ARCHIVE_DEST_STATE_n parameter.

Attributes: of LOG_ARCHIVE_DEST_STATE_n parameter

ENABLE:

Redo transport services can transmit redo data to this destination. This is the default.

DEFER:

This is a valid but unused destination (Redo transport services will not transmit redo data to this destination.)

ALTERNATE:

This destination is not enabled, but it will become enabled if communication to its associated destination fails.

RESET:

Functions the same as DEFER

Example 5–1 Specifying a Local Archiving Destination

LOG_ARCHIVE_DEST_1=’LOCATION=/arch1/chicago/’

LOG_ARCHIVE_DEST_STATE_1=ENABLE

Example 5–2 Specifying a Remote Archiving Destination

LOG_ARCHIVE_DEST_1=’LOCATION=/arch1/chicago/’

LOG_ARCHIVE_DEST_STATE_1=ENABLE

LOG_ARCHIVE_DEST_2='SERVICE=boston'

LOG_ARCHIVE_DEST_STATE_2=ENABLE

We can change the destination attributes

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2=’SERVICE=boston

VALID_FOR= (ONLINE_LOGFILES,PRIMARY_ROLE)’;

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;

(This commands defers the Redo Transport Service)

§ The modifications take effect after the next log switch on the primary database.
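To confirm the state of each destination after such a change, you can query V$ARCHIVE_DEST; a sketch:

```sql
-- shows VALID / DEFERRED / ALTERNATE etc. per destination
SELECT dest_id, status, destination
FROM   v$archive_dest
WHERE  dest_id IN (1, 2);
```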

§ Parameter for configuring Flash Recovery Area is DB_RECOVERY_FILE_DEST = ‘/……’

§ If no destinations for local archiving are specified then LOG_ARCHIVE_DEST_10 is implicitly mapped to DB_RECOVERY_FILE_DEST location by Oracle Data Guard.

§ A Primary Database cannot write the redo data to the Flash Recovery Area of Logical Standby Database

Note: Flash Recovery Area is the directory to stores the files related to recovery.

§ To configure archiving to the Flash Recovery Area through a destination other than LOG_ARCHIVE_DEST_10, use, for example:

LOG_ARCHIVE_DEST_9='LOCATION=USE_DB_RECOVERY_FILE_DEST ARCH MANDATORY REOPEN=5'

§ Specifying the Flash Recovery Area on a Physical Standby Database:

STANDBY_ARCHIVE_DEST='LOCATION=USE_DB_RECOVERY_FILE_DEST'

§ When sharing a Flash Recovery Area between a Physical Standby Database and the Primary Database, DB_UNIQUE_NAME must be specified for each database and must be unique.

Example 5–3 Primary Database Initialization Parameters for a Shared Recovery Area

DB_NAME=PAYROLL

LOG_ARCHIVE_DEST_1=’LOCATION=USE_DB_RECOVERY_FILE_DEST’

DB_RECOVERY_FILE_DEST=’/arch/oradata’

DB_RECOVERY_FILE_DEST_SIZE=20G

Example 5–4 Standby Database Initialization Parameters for a Shared Recovery Area

DB_NAME=PAYROLL

DB_UNIQUE_NAME=boston

LOG_ARCHIVE_DEST_1=’LOCATION=USE_DB_RECOVERY_FILE_DEST’

STANDBY_ARCHIVE_DEST=’LOCATION=USE_DB_RECOVERY_FILE_DEST’

DB_RECOVERY_FILE_DEST=’/arch/oradata’

DB_RECOVERY_FILE_DEST_SIZE=5G

Sending Redo:

Redo can be transmitted by the archiver process (ARCn) or the log writer process (LGWR), but both cannot be used for the same destination; i.e. ARCn can send redo to one destination and LGWR to another.

Using ARCn to send redo

§ Default method & 4 processes are used by default

§ Supports only Maximum Performance level of data protection

§ Specify LOCATION attribute for local archiving and SERVICE attribute for remote archiving.

EX:

LOG_ARCHIVE_DEST_1=’LOCATION=/arch1/chicago/’

LOG_ARCHIVE_DEST_2=’SERVICE=boston

Another parameter

LOG_ARCHIVE_MAX_PROCESSES (Dynamic parameter; Maximum is 30 process)

Archival Processing:

FIG 5-3: Oracle Dataguard (B14239-04)

Note: use v$archived_log to verify that redo data has been received on the Standby Database

A minimum of 2 archiver processes is required; the default is 4 and the maximum is 30

RFS: On the remote destination, the remote file server process (RFS) will, in turn, write the redo data to an archived redo log file from a standby redo log file. Log apply services use Redo Apply (MRP process1) or SQL Apply (LSP process2) to apply the redo to the standby database.

MRP: The managed recovery process applies archived redo log files to the physical standby database, and automatically determines the optimal number of parallel recovery processes at the time it starts. The number of parallel recovery slaves spawned is based on the number of CPUs available on the standby server.

LSP: The logical standby process uses parallel execution (Pnnn) processes to apply archived redo log files to the logical standby database, using SQL interfaces.

Using LGWR to Send Redo

- LGWR SYNC

- LGWR ASYNC

LGWR SYNC archival processing:

Parameter: LOG_ARCHIVE_DEST_n

Attributes: LGWR, SYNC,SERVICE

Example 5–5 Initialization Parameters for LGWR Synchronous Archival

LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago'

LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR SYNC NET_TIMEOUT=30'

LOG_ARCHIVE_DEST_STATE_1=ENABLE

LOG_ARCHIVE_DEST_STATE_2=ENABLE

SYNC: Network I/O is synchronous (default)

Waits until each write operation is completed

Note: if the LGWR process cannot archive for some reason, redo transport will automatically switch to the ARCn process.

NET_TIMEOUT: waits for specified seconds over the network & give error if write operation does not complete

LGWR ASYNC archival processing:

Ex: Same as above, without the SYNC & NET_TIMEOUT attributes.

Use ASYNC instead of SYNC.

NET_TIMEOUT is not necessary in Oracle 10.2.

Diagram showing SYNC & ASYNC LGWR archival process:


FIG 5-4: Oracle Dataguard (B14239-04)

FIG 5-5: Oracle Dataguard (B14239-04)

Note: LOG_ARCHIVE_DEST & LOG_ARCHIVE_DUPLEX_DEST should not be used for configuring Flash Recovery Area.

Providing security while transmitting redo:

$ orapwd file=orapw password=xyz entries=10

Note: Make the SYS user password identical for all databases in the Oracle Data Guard configuration. Also set REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE (or SHARED).

VALID_FOR attribute of LOG_ARCHIVE_DEST_n parameter

VALID_FOR=(redo_log_type,database_role)

redo_log_type: ONLINE_LOGFILE, STANDBY_LOGFILE, or ALL_LOGFILES

database_role: PRIMARY_ROLE, STANDBY_ROLE, or ALL_ROLES.

The VALID_FOR attribute is required for role transitions:

- It configures destination attributes for both the Primary Database and the Standby Database in one SPFILE.

- If VALID_FOR is not used, two SPFILEs are needed each time a role transition is done.

- This attribute makes switchover and failover easy.

Ex

LOG_ARCHIVE_DEST_1='LOCATION=/ARCH1/CHICAGO/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'

DB_UNIQUE_NAME: Specifies a unique database name in the Oracle Data Guard configuration.

Used along with LOG_ARCHIVE_CONFIG.

Ex:

DB_NAME=chicago

DB_UNIQUE_NAME=chicago

LOG_ARCHIVE_CONFIG='DG_CONFIG= (chicago, boston)'

LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES)'

LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE)

DB_UNIQUE_NAME=boston'

The LOG_ARCHIVE_CONFIG parameter also has SEND, NOSEND, RECEIVE, and NORECEIVE attributes:

- SEND enables a database to send redo data to remote destinations

- RECEIVE enables the standby database to receive redo from another database

To disable these settings, use the NOSEND and NORECEIVE keywords

Ex: LOG_ARCHIVE_CONFIG='NORECEIVE, DG_CONFIG= (chicago,boston)'

Use of these parameters can affect role transitions; therefore, try to remove them before doing any role transition.

Handling Errors while transmitting redo:

Options when archiving fails

Retry the archival operation (you can control the number of retries)

Use an alternate destination

Ex: LOG_ARCHIVE_DEST_1='LOCATION=/arc_dest REOPEN=60 MAX_FAILURE=3'

Other parameters used:

REOPEN: default value is 300 seconds; 0 turns off this option

MAX_FAILURE: maximum number of failures allowed

ALTERNATE: alternate destination

Note: ALTERNATE takes precedence over the MANDATORY attribute; i.e. even if an archiving destination is MANDATORY and it fails, archiving automatically moves to the alternate destination.

DATA PROTECTION MODES:

MAXIMUM PROTECTION

- No data loss if the Primary Database fails

- Redo data needed for recovery has to be written both to the online redo log files and to the standby redo log files before a commit completes

- At least one Standby Database should be available

- If any fault happens, the Primary Database will shut down

- Configure the LGWR, SYNC & AFFIRM attributes of the LOG_ARCHIVE_DEST_n parameter for the Standby Database

MAXIMUM AVAILABILITY

- Provides the highest level of data protection possible without compromising the availability of the Primary Database

- The Primary Database doesn't shut down; it continues to work in maximum performance mode until the fault is corrected and all gaps in the redo log files are resolved, then it goes back to maximum availability mode

- At least one Standby Database should be available

- Configure the LGWR, SYNC & AFFIRM attributes of the LOG_ARCHIVE_DEST_n parameter for the Standby Database

MAXIMUM PERFORMANCE (Default)

- Provides the highest level of data protection possible without affecting the performance of the Primary Database

- As soon as the redo data is written to the online redo log file, the transaction is committed

- Redo is also written to at least one Standby Database, asynchronously

- Use network links with sufficient bandwidth to get maximum availability with minimal impact on the performance of the Primary Database

- Set the LGWR and ASYNC (or ARCn) attributes of the LOG_ARCHIVE_DEST_n parameter for at least one Standby Database

Setting the Data Protection Mode of a Data Guard Configuration

At least one standby database should meet the following minimum requirements:

Requirement                  MAXIMUM PROTECTION  MAXIMUM AVAILABILITY  MAXIMUM PERFORMANCE
Redo archival process        LGWR                LGWR                  LGWR or ARCH
Network transmission mode    SYNC                SYNC                  SYNC or ASYNC with LGWR; SYNC with ARCH
Disk write option            AFFIRM              AFFIRM                AFFIRM or NOAFFIRM
Standby redo log required?   Yes                 Yes                   No, but recommended

Note: Oracle recommends that a Data Guard configuration running in maximum protection mode contain at least two Standby Databases meeting the above requirements, so that the Primary Database can continue processing without shutting down if one of the Standby Databases cannot receive redo data from the Primary Database.

Managing log files:

1. Specify alternate directory for archived redologs.

- Redo received from Primary Database is identified by location attribute of the parameter LOG_ARCHIVE_DEST_n.

- Alternate directory can be specified by using parameter STANDBY_ARCHIVE_DEST.

- If both parameters are specified then STANDBY_ARCHIVE_DEST overrides LOG_ARCHIVE_DEST_n parameter.

- Query v$archive_dest to check the value of the STANDBY_ARCHIVE_DEST parameter:

SQL> SELECT DEST_NAME, DESTINATION FROM V$ARCHIVE_DEST WHERE DEST_NAME='STANDBY_ARCHIVE_DEST';

- Filenames are generated in the format specified by LOG_ARCHIVE_FORMAT, e.g. LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc

Note: Redo Transport Service stores the fully qualified file names in the Standby Database control file, and Redo Apply uses this information to perform recovery.

- view v$archived_log

- Checking the archived redo log files that are on the standby system:

SQL> SELECT NAME FROM V$ARCHIVED_LOG;
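How the LOG_ARCHIVE_FORMAT tokens expand into a filename can be shown outside the database; a sketch with made-up values:

```shell
# How LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc expands: %t is the thread number,
# %s the log sequence number, %r the resetlogs id (values here are made up).
THREAD=1
SEQUENCE=245
RESETLOGS=689812341
NAME=$(printf 'log%d_%d_%d.arc' "$THREAD" "$SEQUENCE" "$RESETLOGS")
echo "$NAME"
```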


2. Reusing Online Redolog file:

For reusing the online redolog files we have to set optional or mandatory option with LOG_ARCHIVE_DEST_n parameter

Ex: LOG_ARCHIVE_DEST_3=’LOCATION=/arch_dest MANDATORY’

Note: By Default remote destinations are set to optional.

By Default one local destination is mandatory.

§ If MANDATORY is specified, the online redo log files are not overwritten until they have been successfully archived to that destination

§ If OPTIONAL is specified, the online redo log files are overwritten even if the redo has not been archived to that destination.

3. Managing Standby Redo Log Files:

Check the RFS process trace file or the database alert log to determine whether there are adequate standby redo log files.

I.e., if these files indicate that the RFS process frequently has to wait for a group because archiving is not completing, add more log file groups to the standby redo log.

Note: whenever an online redo log file group is added to the Primary Database, the corresponding standby redo log file group must be added to the Standby Database.

If the number of standby redo log file groups is inadequate, the Primary Database will shut down if it is in maximum protection mode, or switch to maximum performance mode if it is in maximum availability mode.

Ex: Adding a member to the standby redo log group

Sql> alter database add standby logfile member '/disk1/oracle/dbs/log2b.rdo' to group 2;

4. Planning for growth & reuse of control files:

§ The maximum control file size is 20,000 database blocks

§ If the block size is 8K (8,192 bytes) then the maximum control file size will be about 156 MB (20,000 × 8,192 bytes ≈ 156 MB).

§ As archived redo logs are generated or RMAN backups are taken, records are added to the control file. When the control file reaches its maximum size, these records are reused.

§ The parameter used to specify how long control file records are kept is CONTROL_FILE_RECORD_KEEP_TIME; its value ranges from 0-365 days (default is 7 days)

§ Note: Keep the CONTROL_FILE_RECORD_KEEP_TIME value at least as long as the period covered by the last 2 full backups.

§ If redo is planned to be applied with a delay, then set this value to a larger number of days.
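The parameter is dynamic and can be changed with ALTER SYSTEM; the value of 30 days below is only an illustration:

SQL> ALTER SYSTEM SET CONTROL_FILE_RECORD_KEEP_TIME=30;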

5. Sharing a log file destinations among multiple Standby Databases:

Ex:

LOG_ARCHIVE_DEST_1='LOCATION=disk1 MANDATORY'

LOG_ARCHIVE_DEST_2='SERVICE=standby1 OPTIONAL'

LOG_ARCHIVE_DEST_3='SERVICE=standby2 OPTIONAL DEPENDENCY=LOG_ARCHIVE_DEST_2'

In this case the DEPENDENCY attribute is set on the second standby database's destination, so it takes the redo data already archived to LOG_ARCHIVE_DEST_2 instead of receiving its own copy over the network.

This kind of setup can be used if

§ Primary Database & Standby Database resides on the same system.

§ Physical Standby Database & Logical Standby Database resides on the same system.

§ When clustered file system is used.

§ When network file system is used

MANAGING ARCHIVE GAPS:

§ Oracle Data Guard resolves gaps automatically

§ Gaps can happen due to network failure or an archiving problem on the Primary Database.

§ The Primary Database polls the Standby Database every minute to detect gaps [polling mechanism]

§ In case the Primary Database is not available, we have to resolve the gaps manually by applying redo from one of the Standby Databases.

§ No extra configuration is required to resolve gaps automatically.

1. Using FAL [Fetch Archive Log mechanism] to resolve gaps.

Set the parameters FAL_SERVER and FAL_CLIENT to Oracle Net service names, for example:

FAL_SERVER=standby2_db, standby3_db

FAL_CLIENT=standby1_db
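Both parameters are dynamic and can be set with ALTER SYSTEM on the Standby Database; the service names below are the ones from this example:

SQL> ALTER SYSTEM SET FAL_SERVER='standby2_db','standby3_db';

SQL> ALTER SYSTEM SET FAL_CLIENT='standby1_db';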

2. Manually resolving archive gaps

We have to resolve the gaps manually if the Primary Database is not available, or if we are using a Logical Standby Database; the same procedure applies in some other cases as well.

Resolving gaps on a Physical Standby Database:

  1. Query the gap on Physical Standby Database:

SQL> SELECT * FROM V$ARCHIVE_GAP;

   THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#

---------- ------------- --------------

         1             7             10

  2. Find the missing logs on the Primary Database and copy them to the Physical Standby Database.

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND SEQUENCE# BETWEEN 7 AND 10;

NAME

-------------------------------------------------------------------------

/primary/thread1_dest/arcr_1_7.arc

/primary/thread1_dest/arcr_1_8.arc

/primary/thread1_dest/arcr_1_9.arc

  3. Once these log files are copied to the Physical Standby Database, we have to register them with the Physical Standby Database

SQL> ALTER DATABASE REGISTER LOGFILE

'/physical_standby1/thread1_dest/arcr_1_7.arc';

SQL> ALTER DATABASE REGISTER LOGFILE

'/physical_standby1/thread1_dest/arcr_1_8.arc';

  4. Restart Redo Apply.
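Redo Apply can be restarted in the background with, for example:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;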

Resolving gaps on a Logical Standby Database:

Same procedure as Physical Standby Database but the view used is dba_logstdby_log instead of v$archive_gap

Steps:

a.

SQL> COLUMN FILE_NAME FORMAT a55

SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L

WHERE NEXT_CHANGE# NOT IN

(SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#) ORDER BY THREAD#,SEQUENCE#;

THREAD# SEQUENCE# FILE_NAME

---------- ---------- -----------------------------------------------

1 6 /disk1/oracle/dbs/log-1292880008_6.arc

1 10 /disk1/oracle/dbs/log-1292880008_10.arc

Note: If there is no gap, only the most recent file is shown for each thread. A gap is indicated by an extra row for that thread, as above.

In the above example the missing files are 7, 8 and 9.

b. Copy these files to the Logical Standby Database location.

c. Register these files with the Logical Standby Database

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE 'file_name';

d. Restart SQL Apply
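SQL Apply can be restarted with, for example:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;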

Verification:

1. Check the status of the online redo log files on the Primary Database

SQL> SELECT THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG;

2. Determine the most recent archived redo log file on the Primary Database

SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;

3. Use the following query on the Primary Database to check the most recently transmitted archived redo log file for each destination

SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#

FROM V$ARCHIVE_DEST_STATUS

WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';

4. Use the following query on the Primary Database to find the archived redo log files not received at each destination

SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM

(SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1)

LOCAL WHERE LOCAL.SEQUENCE# NOT IN

(SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND

THREAD# = LOCAL.THREAD#);

5. Set the LOG_ARCHIVE_TRACE parameter on the Primary Database & Standby Database to trace the transmission of redo data
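The parameter is dynamic; trace levels are additive, and the level below is only one example:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_TRACE=8;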

Monitoring the Performance of Redo Transport Services

View: v$system_event

Parameter: log_archive_dest_n

Attributes: ARCH

LGWR ( SYNC/ASYNC)

Wait Events to monitor:

  1. ARCn Wait Events
  2. LGWR SYNC wait events
  3. LGWR ASYNC Wait Events
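These wait events can be examined in v$system_event; exact event names vary between Oracle releases, so a pattern match is a reasonable sketch:

SQL> SELECT EVENT, TOTAL_WAITS, TIME_WAITED FROM V$SYSTEM_EVENT WHERE EVENT LIKE 'ARC%' OR EVENT LIKE 'LGWR%';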

Note: Use OEM to Monitor in GUI for Oracle Data Guard