Tuesday, August 17, 2010

Oracle Block Corruptions


There are several layers at which a corruption can occur:

1. OS layer
2. DB layer
3. SAP Pool / Cluster table layer
4. Application layer

The best way to begin the investigation is with a file system check.
The SAP pool/cluster table layer can be checked with the tool R3check.

DB layer corruptions
A corrupted block can occur in datafiles or in redo log files.

Consistency checks can be performed with the following tools:

1. dbverify - checks whether database blocks fulfill predefined rules
2. export - reads the data stored in table blocks
3. analyze - reads table and index data and performs cross-checks

Note: If you use RMAN to back up the database, an implicit consistency check is performed for every block saved, so an explicit dbverify check is not needed.
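If you want to run such a check explicitly with RMAN, without producing backup pieces, something along these lines works on recent releases (a sketch; CHECK LOGICAL adds logical checks on top of the physical ones, and findings are recorded in V$DATABASE_BLOCK_CORRUPTION):

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;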

The above three methods can only be used for datafiles.
Corruptions in redo log files can only be found when the redo logs are applied during recovery. Dumping a redo log file without getting an error is not sufficient to prove that the redo log is OK.

We basically dump the contents of a redo log file to a trace file and then read that trace file to understand the content.
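The dump is written to a trace file of your session in the user dump directory; in SQL*Plus you can locate that directory via the standard parameter:

SHOW PARAMETER user_dump_dest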

The following ways of dumping a redo log file are covered

1. To dump records based on DBA (Data Block Address)
2. To dump records based on RBA (Redo Block Address)
3. To dump records based on SCN
4. To dump records based on time
5. Dump the file header information
6. Dump an entire log file

1. To dump records based on DBA (Data Block Address)

Connect to the database as sysdba and execute the command below:
ALTER SYSTEM DUMP LOGFILE 'filename' DBA MIN fileno . blockno DBA MAX fileno . blockno;

Example:

ALTER SYSTEM DUMP LOGFILE '/u01/oracle/V7323/dbs/arch1_76.dbf' DBA MIN 5 . 31125 DBA MAX 5 . 31150;
This will cause all the changes to the specified range of data blocks to be dumped to the trace file. In the example given, all redo records for file #5, blocks 31125 through 31150, are dumped.
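If you first need to translate a suspect segment into fileno/blockno values for DBA MIN and DBA MAX, a dictionary query along these lines can help (owner and segment name are placeholders):

SELECT file_id, block_id, blocks
FROM dba_extents
WHERE owner = 'SAPR3' AND segment_name = 'MYTABLE';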

2. To dump records based on RBA (Redo Block Address)

This will dump all redo records within the range of redo addresses specified, each address given as a sequence number and a block number.

Syntax:
ALTER SYSTEM DUMP LOGFILE 'filename' RBA MIN seqno blockno RBA MAX seqno blockno;

Example:
ALTER SYSTEM DUMP LOGFILE '/u01/oracle/V7323/dbs/arch1_76.dbf' RBA MIN 2050 13255 RBA MAX 2255 15555;
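To find out which log sequence numbers (the seqno values above) cover the period you are interested in, you can query the standard V$LOG_HISTORY view, for example:

SELECT sequence#, first_change#, first_time
FROM v$log_history
ORDER BY sequence#;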

3. To dump records based on SCN

Using this option will cause redo records owning changes within the specified SCN range to be dumped to the trace file.

ALTER SYSTEM DUMP LOGFILE 'filename' SCN MIN minscn SCN MAX maxscn;

Example:
ALTER SYSTEM DUMP LOGFILE '/u01/oracle/V7323/dbs/arch1_76.dbf' SCN MIN 103243 SCN MAX 103294;
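If you do not know the SCN range contained in a given archived log, the standard V$ARCHIVED_LOG view lists the first and next change numbers per log (the sequence number below is just an example):

SELECT name, sequence#, first_change#, next_change#
FROM v$archived_log
WHERE sequence# = 76;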

4. To dump records based on time

Using this option will cause redo records created within the time range specified to be dumped to the trace file.
ALTER SYSTEM DUMP LOGFILE 'filename' TIME MIN value TIME MAX value;

Example:
ALTER SYSTEM DUMP LOGFILE '/u01/oracle/V7323/dbs/arch1_76.dbf' TIME MIN 299425687 TIME MAX 299458800;

Please note: the time values are given in REDO DUMP TIME format.
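REDO DUMP TIME is an integer, not a calendar date. The commonly documented conversion formula (months counted with 31 days, epoch 1988; verify it against your Oracle version before relying on it) can be evaluated in SQL, with the timestamp below being an arbitrary example:

SELECT (((((TO_NUMBER(TO_CHAR(ts, 'YYYY')) - 1988) * 12
         + TO_NUMBER(TO_CHAR(ts, 'MM')) - 1) * 31
         + TO_NUMBER(TO_CHAR(ts, 'DD')) - 1) * 24
         + TO_NUMBER(TO_CHAR(ts, 'HH24'))) * 60
         + TO_NUMBER(TO_CHAR(ts, 'MI'))) * 60
         + TO_NUMBER(TO_CHAR(ts, 'SS')) AS redo_dump_time
FROM (SELECT TO_DATE('2010-08-17 12:00:00', 'YYYY-MM-DD HH24:MI:SS') ts FROM dual);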

5. Dump the file header information

This will dump file header information for every online redo log file.

alter session set events 'immediate trace name redohdr level 10';

6. Dump an entire log file:

ALTER SYSTEM DUMP LOGFILE 'filename';

Please note: Fully qualify the filename, and include the single quotes.

Example:
ALTER SYSTEM DUMP LOGFILE '/u01/oracle/V7323/dbs/arch1_76.dbf';

Note: Please be aware that brconnect -f check is not a consistency check. It only checks remaining free space, parameter settings, the last successful backup, and so on.

A proper consistency check can prevent data loss caused by corruption. If you check for corruptions sufficiently often, you can restore the file containing the corrupted blocks from a backup that does not yet contain the corruption and recover up to the current point in time. It is quite unlikely that the same corruption is present in the archive logs as well.

Recommendation: Check the database for consistency at least once a week.

Analyze and export read the blocks to be checked into the SGA, which is why the buffer quality of the DB block buffer is adversely affected for a short while.

Analyze – checks both tables and indexes and may run for a long time; it is better to run it when the system has a low workload. Buffer quality is adversely affected.
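For example, a single table together with its indexes can be cross-checked like this (schema and table name are placeholders):

ANALYZE TABLE sapr3.mytable VALIDATE STRUCTURE CASCADE;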
Export – only checks tables; performance loss results from the export processes. Certain kinds of corruptions are exported without an error; if you later try to reorganize such a table (export/import), you will run into problems during the import. Buffer quality is adversely affected.
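A classic way to make export read every table block without keeping a dump file is to export to the null device (a Unix sketch; user, password and paths are placeholders):

exp system/<password> full=y file=/dev/null log=/tmp/exp_check.log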

Dbverify – feasible during live operation; it checks table and index blocks as well as empty DB blocks, but does not check cross-references. It reads the blocks without loading them into the SGA, so buffer quality is not affected. Dbverify can also be run on data files restored from a backup (without these files having to belong to a database), and it also checks the data in LOB columns.
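A typical invocation against a single datafile looks like this (path, block size and log file name are placeholders; blocksize must match the tablespace block size):

dbv file=/oracle/C11/sapdata1/system_1/system.data1 blocksize=8192 logfile=dbv_system.log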

To be continued...
