Tracing SPM and Datapump (Tracing Series-I)

Some of the complex issues that we face in our day-to-day operations can be resolved by tracing, with or without Oracle Support. It depends on how much you want to grind yourself, as understanding some of the trace files is not for the faint of heart. I always end up spending time googling for the appropriate syntax or trace number. Sometimes you are lucky enough to get to the intended document, but sometimes you end up in the wrong place. So here I am compiling a one-stop shop for all tracing needs, as a series. In Part I, I will be discussing SPM tracing and Datapump tracing. SPM and Datapump have nothing in common; I just picked them randomly.


SPM Tracing: There may be many reasons why one would trace SPM. A few I can think of are to diagnose SQL plan baselines not being created, or to see more information about plan evolution. SPM tracing can be enabled in either of the following ways:

  •     exec dbms_spm.configure('spm_tracing',1);

I could not find documentation for the spm_tracing option in 11gR2. The two documented parameters supported by the dbms_spm.configure procedure are space_budget_percent and plan_retention_weeks. The only references to the spm_tracing parameter I found were in Metalink, with values 0 (disable tracing) and 1 (enable tracing). The actual command, however, accepts any value you provide. You can query sys.smb$config to verify that tracing is enabled or disabled:

SQL> exec dbms_spm.configure('spm_tracing',1); 

PL/SQL procedure successfully completed.

SQL> 
SQL>  select parameter_name, parameter_value from sys.smb$config;

PARAMETER_NAME                 PARAMETER_VALUE
------------------------------ ---------------
SPACE_BUDGET_PERCENT                        25
PLAN_RETENTION_WEEKS                       523
SPM_TRACING                                  1
  •     alter session set events 'trace [sql_planmanagement.*]';

Notice the 2nd command; if it does not look familiar, you are not alone. This is the new syntax introduced with 11g, also referred to as universal tracing. Unfortunately I was not able to get it working after trying different options. I will follow up with Oracle on this command.
SQL>
SQL> alter session set events 'trace [sql_planmanagement.*]';
ERROR:
ORA-49100: Failed to process event statement [trace [sql_planmanagement.*]]
ORA-48904: String [SQL_PLANMANAGEMENT] is not a valid component name

SQL> alter session set events 'trace {sql_plan_management.*}';
ERROR:
ORA-49100: Failed to process event statement [trace {sql_plan_management.*}]
ORA-49109: Library Name [SQL_PLAN_MANAGEMENT] not found

SQL>
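If you run into the same component-name errors, one way to list the component names that the tracing framework actually recognizes is oradebug (requires SYSDBA; the available components vary by version):

SQL> connect / as sysdba
SQL> oradebug setmypid
SQL> oradebug doc component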

DataPump Tracing: Datapump can be traced in multiple ways: at the database level, at the command level, or with a 10046 trace. With Datapump, enabling tracing is as complex as analyzing the trace files. I never understood the reasoning behind exposing hexadecimals to users, unless this was intended for Oracle Support. Datapump tracing is enabled with the TRACE parameter using a 7-digit hexadecimal value. The first three digits select what to trace, while the last four digits are usually "0300".
At the database level, Datapump tracing can be enabled with initialization parameter EVENT 39089. Again, you have to find the correct hexadecimal value and bounce the database.

Example (init.ora setting):

EVENT="39089 trace name context forever, level 0x300"
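If bouncing the database is not an option, the same event can usually be set dynamically with ALTER SYSTEM (a sketch; verify the behavior on a test system first):

alter system set events '39089 trace name context forever, level 0x300';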

Example (alter system, 10046 trace):

alter system set events 'sql_trace {process : pname = dw | pname = dm} level=12';

Just replace level=12 with off to disable tracing. Be careful with this command; it will accept any argument for the process name, as shown below.


SQL> alter system set events 'sql_trace {process : pname = dw } level=12';

System altered.

SQL> alter system set events 'sql_trace {process : pname =djsljl} level=12';  ==> Incorrect value

System altered.

SQL>
—-

Example (Command level)
expdp system/manager DIRECTORY=mydata_dir DUMPFILE=full_db.dmp LOGFILE=full_db.log full=y TRACE=480300

Here is information about the supported hexadecimal trace levels. Oracle also supports combinations of these values.

================================================================================
Hex     | Purpose
================================================================================
10300   |  SHDW: To trace the Shadow process (API) (expdp/impdp)
20300   |  KUPV: To trace Fixed table
40300   |  'div' To trace Process services
80300   |  KUPM: To trace Master Control Process (MCP,DM)
100300  |  KUPF: To trace File Manager
200300  |  KUPC: To trace Queue services
400300  |  KUPW: To trace Worker process(DW)
800300  |  KUPD: To trace Data Package
1000300 |  META: To trace Metadata Package
1FF0300 |  'all' To trace all components(full tracing)
================================================================================
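For example, the TRACE=480300 used in the expdp command above combines Master Control Process and Worker process tracing. To build such a value, combine the leading digits of each component and keep the 0300 suffix:

  080300   KUPM: Master Control Process (DM)
  400300   KUPW: Worker process (DW)
  ------
  480300   combined: leading digits 08 + 40 = 48; the 0300 suffix stays unchanged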

Unnesting Nested Loops

Lately I have been trying to help a non-DBA colleague understand optimizer operations. Since I am putting in this effort anyway, why not add it to my blog? I will be writing a series of posts discussing optimizer operations. Let me do the honors and start with NESTED LOOPS.

NESTED LOOPS: As the name implies, this operation works like nested loops in a programming language, i.e., a loop within a loop. For example, let's use PL/SQL, a language most of us are familiar with:

SQL> BEGIN
  2       FOR OuterLoop IN 1..2 LOOP
  3            FOR InnerLoop IN 1..3 LOOP
  4                 DBMS_OUTPUT.PUT_LINE('OuterLoop=> '|| OuterLoop ||'    '|| 'InnerLoop=> '|| InnerLoop);
  5            END LOOP;
  6       END LOOP;
  7  END;
  8  /
OuterLoop=> 1    InnerLoop=> 1
OuterLoop=> 1    InnerLoop=> 2
OuterLoop=> 1    InnerLoop=> 3
OuterLoop=> 2    InnerLoop=> 1
OuterLoop=> 2    InnerLoop=> 2
OuterLoop=> 2    InnerLoop=> 3

PL/SQL procedure successfully completed.

In this example, the outer loop executes 2 times, and for each execution of the outer loop, the inner loop executes 3 times. Switching gears to a nested loops join between, say, tables A and B: assume table A has 50 rows and table B has 1000 rows. If table A is used as the outer table, then for every row accessed in table A, the inner table B is probed; without an index on the join column, each probe can scan all 1000 rows of B.

So who decides which table is the outer and which is the inner? The Oracle optimizer does. The general rule of thumb is that the table with fewer rows is used in the outer loop and the table with more rows is used as the inner table. This is where table statistics play an important role, up-to-date statistics in particular. The number of I/O operations and blocks retrieved also influences the choice of outer and inner tables. Nothing mentioned in this article is always true, as most of us don't have access to the actual Oracle logic.

A nested loops join is very effective if the outer input is small and the inner input is large and indexed. Generally, nested loops perform better in OLTP environments, where the result set tends to be small.
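If you ever want to force this join method to see it in action, here is a minimal sketch with optimizer hints (tables A and B joined on OBJECT_ID, as in the listings further below):

SELECT /*+ LEADING(a) USE_NL(b) */ a.object_name, b.created
FROM   a, b
WHERE  b.object_id = a.object_id;

LEADING picks the outer (driving) table and USE_NL requests a nested loops join to the inner table.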

Changes in 11g

Execution plans in 11g might look different because of changes in the nested loops implementation. The change was introduced to reduce overall latency for physical I/O when the data required for the nested loops join is not in the buffer cache. Oracle reduces the overall physical I/O request latency by batching multiple physical I/O requests, (most likely) improving performance. You can see the new batching operation with a 10053 trace or with DBMS_XPLAN.DISPLAY_CURSOR.

You will notice two NESTED LOOPS operations in the execution plan instead of one because of this implementation change. Two nested loops are required:

  1.   The 1st nested loop joins data from the outer table and the index on the inner table.
  2.   The 2nd nested loop joins the result set of the 1st nested loop and the table data on the inner side of the join.

This implementation change can be controlled with the init.ora parameter optimizer_features_enable or with the new NO_NLJ_BATCHING hint, as sketched below.
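A quick sketch of both approaches (b is assumed to be the alias on the inner side of the join):

-- Revert the session to pre-11g optimizer behavior
ALTER SESSION SET optimizer_features_enable = '10.2.0.3';

-- Or disable batching for just one statement
SELECT /*+ NO_NLJ_BATCHING(b) */ a.object_name, b.created
FROM   a, b
WHERE  b.object_id = a.object_id;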

For example, let us run a query with optimizer_features_enable set to 11.2.0.2 and then rerun the same query with it set to 10.2.0.3.

Case-1 : optimizer_features_enable=11.2.0.2

In this case you will see two NESTED LOOPS operations in the execution plan and the new NLJ_BATCHING hint in the outline data.

SQL> SELECT *   FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'ADVANCED'));
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  apzagcxrnwka8, child number 0
-------------------------------------
SELECT   A.OBJECT_NAME ,  A. LAST_DDL_TIME , B.CREATED FROM A, B WHERE
b.OBJECT_ID=a.OBJECT_ID  AND  a.OBJECT_ID BETWEEN 1 AND 30

Plan hash value: 1521750179

----------------------------------------------------------------------------------------
| Id  | Operation                     | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |        |       |       |     5 (100)|          |
|   1 |  NESTED LOOPS                 |        |       |       |            |          |
|   2 |   NESTED LOOPS                |        |     3 |   135 |     5   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| A      |    16 |   512 |     3   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | IDX_A  |    16 |       |     2   (0)| 00:00:01 |
|*  5 |    INDEX UNIQUE SCAN          | INDX_B |     1 |       |     0   (0)|          |
|   6 |   TABLE ACCESS BY INDEX ROWID | B      |     1 |    13 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1
   3 - SEL$1 / A@SEL$1
   4 - SEL$1 / A@SEL$1
   5 - SEL$1 / B@SEL$1
   6 - SEL$1 / B@SEL$1

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.2')
      DB_VERSION('11.2.0.2')
      OPT_PARAM('query_rewrite_enabled' 'false')
      FIRST_ROWS(10)
      OUTLINE_LEAF(@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "A"@"SEL$1" ("A"."OBJECT_ID"))
      INDEX(@"SEL$1" "B"@"SEL$1" ("B"."OBJECT_ID"))
      LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1")
      USE_NL(@"SEL$1" "B"@"SEL$1")
      NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("A"."OBJECT_ID">=1 AND "A"."OBJECT_ID"<=30)
   5 - access("B"."OBJECT_ID"="A"."OBJECT_ID")
       filter(("B"."OBJECT_ID"<=30 AND "B"."OBJECT_ID">=1))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "A"."OBJECT_NAME"[VARCHAR2,128], "A"."LAST_DDL_TIME"[DATE,7],
       "B"."CREATED"[DATE,7]
   2 - "A"."OBJECT_NAME"[VARCHAR2,128], "A"."LAST_DDL_TIME"[DATE,7],
       "B".ROWID[ROWID,10]
   3 - "A"."OBJECT_NAME"[VARCHAR2,128], "A"."OBJECT_ID"[NUMBER,22],
       "A"."LAST_DDL_TIME"[DATE,7]
   4 - "A".ROWID[ROWID,10], "A"."OBJECT_ID"[NUMBER,22]
   5 - "B".ROWID[ROWID,10]
   6 - "B"."CREATED"[DATE,7]

Case-2 : optimizer_features_enable=10.2.0.3

In this case you will see one NESTED LOOPS operation in the execution plan, and NLJ_BATCHING is missing from the outline data.

SQL> SELECT *   FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'ADVANCED'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  apzagcxrnwka8, child number 1
-------------------------------------
SELECT   A.OBJECT_NAME ,  A. LAST_DDL_TIME , B.CREATED FROM A, B WHERE
b.OBJECT_ID=a.OBJECT_ID  AND  a.OBJECT_ID BETWEEN 1 AND 30

Plan hash value: 1900124691

---------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |       |       |     5 (100)|          |
|   1 |  NESTED LOOPS                |        |     3 |   135 |     5   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| A      |    16 |   512 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | IDX_A  |    16 |       |     2   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID| B      |     1 |    13 |     1   (0)| 00:00:01 |
|*  5 |    INDEX UNIQUE SCAN         | INDX_B |     1 |       |     0   (0)|          |
---------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1
   2 - SEL$1 / A@SEL$1
   3 - SEL$1 / A@SEL$1
   4 - SEL$1 / B@SEL$1
   5 - SEL$1 / B@SEL$1

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('10.2.0.3')
      DB_VERSION('11.2.0.2')
      OPT_PARAM('query_rewrite_enabled' 'false')
      FIRST_ROWS(10)
      OUTLINE_LEAF(@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "A"@"SEL$1" ("A"."OBJECT_ID"))
      INDEX_RS_ASC(@"SEL$1" "B"@"SEL$1" ("B"."OBJECT_ID"))
      LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1")
      USE_NL(@"SEL$1" "B"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("A"."OBJECT_ID">=1 AND "A"."OBJECT_ID"<=30)
   5 - access("B"."OBJECT_ID"="A"."OBJECT_ID")
       filter(("B"."OBJECT_ID"<=30 AND "B"."OBJECT_ID">=1))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "A"."OBJECT_NAME"[VARCHAR2,128], "A"."LAST_DDL_TIME"[DATE,7],
       "B"."CREATED"[DATE,7]
   2 - "A"."OBJECT_NAME"[VARCHAR2,128], "A"."OBJECT_ID"[NUMBER,22],
       "A"."LAST_DDL_TIME"[DATE,7]
   3 - "A".ROWID[ROWID,10], "A"."OBJECT_ID"[NUMBER,22]
   4 - "B"."CREATED"[DATE,7]
   5 - "B".ROWID[ROWID,10]

63 rows selected.
 

Oracle SCN Bug

The Oracle community seems to be in a somewhat confused state after InfoWorld published an article about the Oracle SCN flaw. Questions like "Am I affected?" or "When should I apply the patch?" are common. Personally I think this is a low-risk bug, but applying the patch to mitigate the risk is highly recommended for the following reasons:

  1. The patch also fixes other unpublished SCN bugs, so it is highly recommended even if you only do hot backups with RMAN.
  2. It is best practice to apply the latest security patch.
  3. The patch introduces new checks/thresholds against SCN contamination from db links. Oracle has not disclosed the threshold, but this is very useful if you are not running the SCN health check as a daily routine. With the patch you are protected, via db links, from databases with SCNs close to the soft limit.
Oracle SCN
System Change Number, or SCN, is an internal timestamp used by the Oracle database to comply with the ACID properties. Every transaction in the database is assigned an SCN value. The SCN assigned to each transaction is supposed to be unique, but quite often we have noticed transactions with duplicate SCN values, mostly under high concurrency.
The Oracle SCN soft limit is incremented by 16384 for every second since 1/1/1988, so it grows every second. The maximum value, or hard limit, of the SCN supported by Oracle is 281 trillion, good enough to last 500 years. At any point in time, the database SCN cannot be greater than the SCN soft limit. Hitting the soft limit might temporarily make the database unavailable or may cause the database to crash with ORA-600 errors; the database will start to function normally again as the soft limit grows with time. Without any bugs it is possible, but rare, to use up 16384 SCNs in 1 second. In unusual circumstances, and if required, you can also fool the database by changing the system date to some time in the future, but this may introduce data issues.
The SCN soft limit can be calculated with the following query:

COLUMN SCN FORMAT 999999999999999999999999999

SELECT (SYSDATE - TO_DATE('01-JAN-1988','DD-MON-YYYY')) * 24*60*60*16384 AS SCN FROM DUAL;
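To see how much headroom a database actually has, here is a sketch comparing the current database SCN against the soft limit (CURRENT_SCN comes from V$DATABASE, available in 10g and later):

SELECT d.current_scn,
       (SYSDATE - TO_DATE('01-JAN-1988','DD-MON-YYYY')) * 24*60*60*16384 AS soft_limit
FROM   v$database d;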


Here is a simple example, shown in the diagram, to understand the working of the SCN. For the sake of simplicity, let's assume the current value of the SCN at 10:00:01 AM is X.



Now let's talk about the hot backup bug that impacts 11g databases.

The hot backup bug (12371955) causes SCN values to jump very high, causing the database to reach the soft limit earlier than Oracle intended. This bug is not exposed with hot RMAN backups. As per Oracle, the bug is categorized as a hot backup bug, which seems to indicate that both ALTER DATABASE .... BEGIN/END BACKUP and ALTER TABLESPACE .... BEGIN/END BACKUP are affected, not just ALTER DATABASE BEGIN/END BACKUP. There may be other unknown triggers that cause this jump too.

Database links and SCN Jump

Now introduce databases interconnected via database links. When databases are interconnected via database links, Oracle synchronizes the SCN of the database with the highest SCN to all the other interconnected databases whenever they communicate with each other. So even if only one of your databases is affected by this bug, it impacts all interconnected databases, irrespective of database version, by propagating the inflated SCN values.

In the diagram above, databases db1 to db8 are interconnected by db links. So even if only one of the databases gets hit by the bug, the inflated SCN propagates to all of them.

The Fix

Apply the Oracle-recommended PSU or CPU patch. The patch resolves the hot backup bug and many other unknowns, but you are still not fully protected from SCN hacks or other unknown triggers, so it is recommended to run the SCN health check at regular intervals to prevent any runaway growth. The patch also collects additional diagnostic information if the issue occurs for some other undocumented reason. Refer to Metalink doc 1393363.1 to download the SCN health check.

Please refer to https://swervedba.wordpress.com/2011/05/30/oracle-patch-sets-psu-and-cpu  for differences between CPU and PSU

Improving performance of GoldenGate Replicat

It has been quite a few years that I have been working with GoldenGate, and I am really impressed with this replication tool. It is very easy to install, manage, and learn. In this blog, I am writing about how to improve the performance of Replicat.

As time permits, I will be adding more information on this topic.

Replicat can operate in three modes:

  1. Source mode: Transactions are applied in the same order as they occurred in the source database.
  2. Normal mode: Multiple transactions are grouped together and applied.
  3. Batch mode: Multiple similar SQL statements from different transactions are grouped together and applied.

You get better performance with batch mode, though of course there are some caveats. The performance improvement is larger for small row changes than for large row changes. It basically follows the law of diminishing returns, wherein the performance improvement decreases as the size of the row change increases.

Batch mode should be avoided when the chance of data collisions is high, because Replicat reverts to normal mode when errors are encountered. If errors still persist, Replicat will apply transactions as they occurred in the source database.
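As a rough sketch, batch mode is enabled with the BATCHSQL parameter in the Replicat parameter file (the group name, login, and MAP statement below are made-up placeholders; the two BATCHSQL options shown are optional tuning knobs):

REPLICAT rep1
USERID ogguser, PASSWORD ********
-- Group similar statements from different transactions into batches
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 1200
MAP scott.emp, TARGET scott.emp;

Without the BATCHSQL line, Replicat runs in normal mode.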

Trace Analyzer Explained

I am writing this blog to discuss my experience with Trace Analyzer. I frequently use this tool due to the nature of my job. Sometimes Trace Analyzer is a silver bullet, and sometimes it is your DBA experience or instinct. The tool is basically an enhanced version of tkprof: it not only formats Oracle trace files generated by setting the 10046 event, it also provides diagnostic information such as top SQL, hot I/O blocks, I/O wait summary, and SQL genealogy, to list a few. The output is generated in both text and HTML format.

Installation

Download trca.zip from Metalink. Refer to Metalink doc ID 224270.1 for detailed instructions. Once you unzip trca.zip, you will find the following folders along with "instructions.txt":

  1. dict
  2. doc
  3. install
  4. instructions.txt
  5. run
  6. utl

All you need to do is "cd install" and execute "tacreate.sql". The script will create a user called "trcanlzr". You will be prompted for a password, default tablespace, and temporary tablespace. The tool can be uninstalled by executing "tadrop.sql", i.e., the user "trcanlzr" is dropped. More detailed instructions can be found in "instructions.txt".

One key advantage of the "trcanlzr" schema is that it stores the diagnostic/detailed information of the trace files in its own schema, so you can write your own queries to mine whatever data you want. The trcanlzr schema also captures data dictionary information and stores it within the schema. The downside of storing all that data in the schema is that running diagnostics on a trace file takes longer as the data volume increases. Talk about having to tune the tuning tool itself. I have resolved this slowness by backing up the existing data, uninstalling, and then installing the tool once again.

How to generate formatted output?

Log on to the database as trcanlzr and execute "start trcanlzr.sql tracefile_name" from the run directory of Trace Analyzer. The tool will create formatted text and HTML output in the database udump directory. A zip file with the formatted text, HTML, error, and tkprof output will also get created in the run folder.
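For example (the trace file name below is made up; use a 10046 trace file from your udump/trace directory):

SQL> connect trcanlzr/your_password
SQL> start trcanlzr.sql mydb_ora_12345.trc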

As for using the tool itself, I generally start with Top SQL and look at each of the queries: their execution plans, top wait events for the query, bind variables used, any segment contention, etc.

Sample output (screenshot): Top SQL

Hints , Outlines, SQL Profiles & SQL Plan Management Compared

My last blog was a comparison of Dataguard, GoldenGate, Streams, and RAC. While writing it, the idea of comparing Hints, Outlines, SQL Profiles, and SQL Plan Management struck me. I am kind of in a compare-maniac mode, so here goes a comparison of some key features. I also think you understand a technology better when you compare it with something you know.

I will add more content to this blog as I get more time. The table format had to be changed to fit the data.

Definition
Hints With hints, the optimizer is not making the decisions regarding the execution plan; you are. The pros are that hints provide a stable execution plan and also let you tune vendor client applications without rewriting the SQL. The cons are that you might end up with a sub-optimal execution plan, and you may not be able to take advantage of newer optimizer logic as you upgrade.
Outlines A collection of hints stored in the OL$, OL$HINTS, and OL$NODES tables. Oracle uses the information in these tables to come up with the execution plan.
SQL Profiles Conceptually, SQL profiles and outlines are the same: just like outlines, SQL profiles are collections of hints. The main difference is that outlines have limited artificial intelligence embedded into the feature compared to SQL profiles, which means a SQL profile may wander away from the original purpose of "providing a stable plan" based on additional data collected. As always, data distribution is (generally) the key culprit.
SPM SPM, or SQL Plan Management, supports plan stability by more or less guaranteeing that your query performance will not degrade because of a change in execution plan. SPM also supports evolution of execution plans.
Oracle Version
Hints Since Oracle 7
Outlines Since Oracle 8i
SQL Profiles Since Oracle 10g
SPM Since Oracle 11g
Lock Execution Plan
Hints Varies, Oracle can ignore hints
Outlines Yes, Most of the time
SQL Profiles No, may deviate from the originally intended plan based on OPT_ESTIMATE, COLUMN_STATS, or TABLE_STATS
SPM Yes
How to create?
Hints Not Applicable
Outlines CREATE_STORED_OUTLINES (session/system setting), DBMS_OUTLN.CREATE_OUTLINE, and CREATE OUTLINE
SQL Profiles DBMS_SQLTUNE, SQL  Tuning Advisor, AWR
SPM OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES, DBMS_SPM, AWR
How to Use?
Hints /*+ hints */
Outlines Set USE_STORED_OUTLINES at session or system level. This is not an init.ora parameter.
SQL Profiles Enable/Disable the SQL profile
SPM Set OPTIMIZER_USE_SQL_PLAN_BASELINES at system or session level
Support for private plans
Hints N/A
Outlines YES
SQL Profiles N/A
SPM YES
Overhead
Hints None
Outlines YES, Additional checks for every statement Oracle parses.
SQL Profiles YES, Minimal
SPM No
Support for Multiple Executions at any point in time
Hints NO
Outlines NO
SQL Profiles NO
SPM YES
May still result in unstable execution plans
Hints YES
Outlines YES
SQL Profiles YES
SPM NO
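As a quick illustration of the SPM rows above, baseline capture and use are controlled by two parameters (shown at the session level here; both can also be set system-wide):

-- Capture a baseline for each repeatable statement
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;

-- Use accepted baselines at optimization time (TRUE by default in 11g)
ALTER SESSION SET optimizer_use_sql_plan_baselines = TRUE;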


Dataguard, Streams, GoldenGate and RAC Compared

Dataguard: A copy of the primary database used mainly for disaster recovery. There are two types of Dataguard standbys:

  1. Physical dataguard: Physically identical to the primary database on a block-by-block basis.
  2. Logical dataguard: Same logical data as the primary; physical organization and structure can be different.

Streams: Propagates data and events from one database to another, or within the same database.

GoldenGate: Captures, transforms, and delivers data. Supports heterogeneous databases.

RAC: Real Application Clusters is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches. In simple terms, it is multiple instances of Oracle sharing the same database. To avoid confusion, please note that an instance is defined as the SGA plus the Oracle background processes.

==========================================================================================================
                         | Dataguard          | Streams           | GoldenGate        | RAC
==========================================================================================================
Main Objective           | Disaster Recovery  | Data Replication  | Data Replication  | High Availability
                         | Data Protection    | Data Sharing      | Data Sharing      | Scalability
                         | High Availability  |                   |                   |
----------------------------------------------------------------------------------------------------------
Platform**/OS/Oracle SW  | Must be same       | Can be different  | Can be different  | Must be same
----------------------------------------------------------------------------------------------------------
Full Replication         | Yes                | Yes               | Yes               | No
----------------------------------------------------------------------------------------------------------
Heterogeneous DB Support | No                 | No                | Yes               | No
----------------------------------------------------------------------------------------------------------
Replicate Data Subset    | Yes (Logical only) | Yes               | Yes               | No
----------------------------------------------------------------------------------------------------------
Near Zero Upgrade        | Yes                | Yes               | Yes               | No
==========================================================================================================

**Platform: Depending on the definition of platform, they can be different. For example, a P550 can be used instead of a P560 for Dataguard.

Want to know when there is a change in query execution plan?

Want to get alerted whenever there is a change in execution plan? Try the query below.

COLUMN SQL_TEXT FORMAT A100
SET LINESIZE 150 PAGESIZE 2000
SET LONG 300000

SELECT DISTINCT A.SQL_ID, A.PLAN_HASH_VALUE, B.SQL_TEXT
FROM V$SQL_PLAN A, V$SQL B
WHERE A.SQL_ID IN (
SELECT SQL_ID FROM V$SQL_PLAN
WHERE OBJECT_OWNER='SCOTT'
AND TIMESTAMP>SYSDATE-1
GROUP BY SQL_ID
HAVING COUNT (DISTINCT PLAN_HASH_VALUE) >1)
AND A.SQL_ID=B.SQL_ID;

You can choose to run the query every hour or once a day, depending on how sensitive your applications are to plan changes. It is good to adjust the TIMESTAMP predicate in the query accordingly. You can also add an ORDER BY SQL_ID if required.
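If you want this automated, here is a rough sketch with DBMS_SCHEDULER; check_plan_changes is a hypothetical procedure you would write to wrap the query above (for example, inserting hits into an alert table or mailing them out):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PLAN_CHANGE_CHECK',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN check_plan_changes; END;', -- hypothetical wrapper procedure
    repeat_interval => 'FREQ=HOURLY',
    enabled         => TRUE);
END;
/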

Redo Logs on RAID-6

We migrated our storage from DMX-4 to VMAX. I was somewhat uncomfortable with one of our main architectural decisions, going with RAID-6, because of our millisecond SLAs. Of course I was wrong, and our excellent storage gurus were right. I did a lot of research about Oracle installations on RAID-6, especially for redo logs; unfortunately I did not find any. The whole purpose of this article is to provide some insight into the performance of redo logs on RAID-6. The intention of this article is not to explain the different RAID levels, so I will just cover some basics before introducing redo logs.

RAID stands for "redundant array of inexpensive disks" and is used to provide highly available disks by mirroring or striping data across multiple hard drives. Storing data across multiple disks also improves read performance. I will talk about a few important RAID levels.

RAID-0:  Striping with no redundancy. Improved performance but no fault tolerance.

RAID-1: Disk mirroring, i.e., data is duplicated. Improved read performance with minimal impact to writes.

RAID-10: Combination of RAID-0 and RAID-1.

RAID-5: Striping with parity information stored in a cyclic/rotating fashion. Some impact to write operations.

RAID-6: Striping with double parity information stored in a cyclic/rotating fashion. Some impact to write operations.


Now coming to redo logs. Typically, Oracle recommends RAID-1 for redo logs. As mentioned earlier, RAID-1 provides increased read performance with minimal impact on writes. However, this always comes at additional cost because of the disk mirroring. Most of the time, apart from high concurrency, redo log writes happen to be the major bottleneck for a tuned database, because a transaction is considered committed only when the change is written to the redo log files.

With the advancement of storage technologies, RAID-1 may not necessarily be required for good redo performance. Most vendors provide a storage cache, with the cache itself mirrored for fault tolerance. This means the modified data is considered committed by the storage array when it is written to cache; all of this is transparent to Oracle. Therefore the performance gain with RAID-1 may be insignificant compared to RAID-5 or RAID-6. We did not notice any performance impact with RAID-6, even with synchronous SRDF, because of the storage cache.
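If you want to verify this on your own storage, here is a simple sketch that looks at the latency distribution of the two redo-related wait events (V$EVENT_HISTOGRAM exists in 10g and later):

SELECT event, wait_time_milli, wait_count
FROM   v$event_histogram
WHERE  event IN ('log file parallel write', 'log file sync')
ORDER  BY event, wait_time_milli;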

Recreating your Oracle Inventory

From 10g onwards, you can reverse engineer and recreate your Oracle inventory if it gets corrupted or accidentally deleted, thereby avoiding a time-consuming re-installation of the Oracle software or other unsupported tricks.

If the Oracle inventory is corrupted or missing, you generally get the error below when an opatch command is issued.

=============================================

oracle@myhost:/app/oracle$ opatch lsinventory
Invoking OPatch 11.2.0.1.6

Oracle Interim Patch Installer version 11.2.0.1.6
Copyright (c) 2011, Oracle Corporation.  All rights reserved.

Oracle Home       : /app/oracle/product/10.2/db
Central Inventory : /app/oracleai/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.6
OUI version       : 10.2.0.3.0
Log file location : /app/oracle/product/10.2/db/cfgtoollogs/opatch/opatch2011-12-27_13-19-08PM.log

OPatch failed to locate Central Inventory.
Possible causes are:
The Central Inventory is corrupted
The oraInst.loc file specified is not valid.
LsInventorySession failed: OPatch failed to locate Central Inventory.
Possible causes are:
The Central Inventory is corrupted
The oraInst.loc file specified is not valid.

OPatch failed with error code 73
oracle@myhost:/app/oracle$

=============================================


You may also get this error because of an incorrect inventory location, so it is a good idea to make sure the location of the inventory is specified correctly in one of the following files, depending on your OS:


  1. /var/opt/oracle/oraInst.loc
  2. /etc/oraInst.loc


Contents of oraInst.loc

bash-3.2$ cat /etc/oraInst.loc
inventory_loc=/app/oraInventory
inst_group=dba


If the error occurred due to missing or corrupt inventory, then you can recreate the inventory following the steps below.

  1. Back up your existing corrupted Oracle inventory if it exists.
  2. Run the following OUI command from the Oracle home whose inventory is corrupt or missing.

cd $ORACLE_HOME/oui/bin

./runInstaller -silent -attachHome ORACLE_HOME="/app/oracle/product/10.2/db" ORACLE_HOME_NAME="Ora10202Home"

Note: Even though -attachHome was introduced with OUI version 10.1, it is documented only with OUI 10.2 and higher.
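Once the attach completes, rerun opatch lsinventory; it should now locate the central inventory and list the Oracle home.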