Monthly Archives: September 2011

Resolving performance issue with GoldenGate Extract

In general, Extract does not require much tuning. Tuning is mainly required for Replicat, which must reproduce the source operations by constructing SQL statements, resulting in random I/O activity. So we were surprised to notice slow Extract performance after a couple of hours, and Extract hanging issues, after upgrading from release 10.4.

Slow performance of Extract

The slow performance was traced to a new feature where the ADD EXTRACT command by default registers the Extract's recovery SCN with RMAN, to prevent RMAN from deleting archived log files that Extract may need for restart and recovery. A very good feature, but it looks like there is a bug that causes the slow performance. According to Oracle, Extract is supposed to retry the registration every 10 ms for up to 4 hours, but after 4 hours the wait time overflows, which causes Extract to issue the registration SQL thousands of times per second.

This new feature is disabled with the Extract parameter TRANLOGOPTIONS LOGRETENTION DISABLED.
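As a sketch, the parameter goes in the Extract parameter file; the Extract name, login, trail path, and table spec below are placeholders:

```
EXTRACT ext1
USERID ggadmin, PASSWORD *****
-- Stop Extract from registering its recovery SCN with RMAN,
-- working around the registration-retry overflow bug
TRANLOGOPTIONS LOGRETENTION DISABLED
EXTTRAIL ./dirdat/aa
TABLE scott.*;
```

Note that with log retention disabled, you become responsible for making sure RMAN does not delete archived logs that Extract still needs.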

Extract Hanging

We occasionally noticed that Extract stopped extracting data from the online redo log until the log became full and an archived log file was created. Once the archived log file was created, Extract caught up on the backlog and resumed processing normally.

This issue was traced to a new read-ahead feature implemented to improve the queue-reading logic. It lacks some synchronization around checking the queue status (empty, full, etc.) and causes intermittent hang issues.

This new feature is disabled with the Extract parameter TRANLOGOPTIONS _NOREADAHEAD ANY. This is an underscore parameter, similar to hidden initialization parameters in Oracle.
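As with the log-retention workaround, this goes in the Extract parameter file (Extract name and objects below are placeholders):

```
EXTRACT ext1
USERID ggadmin, PASSWORD *****
-- Underscore parameter: disables the read-ahead queue logic
-- that causes the intermittent hang
TRANLOGOPTIONS _NOREADAHEAD ANY
EXTTRAIL ./dirdat/aa
TABLE scott.*;
```

Since this is an undocumented underscore parameter, it is best set only under guidance from Oracle Support.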

11g Compression Uncompressed

In 11g, there is compression for everyone. I am really impressed with Oracle's direction with respect to compression. The main catch with Advanced Compression is that it requires additional option licensing. Not a bad deal if you are trying to save on storage costs; you just share the savings with Oracle.

This blog will discuss the key features and implementation of compression. To start with, compression is supported at the following levels:

1. Storage

  • OLTP compression for DML (structured data)
  • SecureFiles compression (unstructured data)

2. Network

  • Data Guard redo compression

3. Backup

  • RMAN compression
  • Data Pump compression
OLTP Compression

  • The new compression algorithm uses a deferred, or batched, approach
  • Data is inserted as is, without compression, until the block's PCTFREE threshold is reached
  • Compression of the block's data starts once the PCTFREE threshold is reached
  • Can be enabled at the table, partition, or tablespace level
  • No need to decompress the data during reads
  • Recommended for tables with low update activity
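As a minimal sketch, OLTP compression is enabled with the COMPRESS FOR OLTP clause (the table and column names here are made up for illustration):

```sql
-- Enable OLTP compression on a new table
CREATE TABLE orders (
  order_id  NUMBER,
  customer  VARCHAR2(100),
  amount    NUMBER
) COMPRESS FOR OLTP;

-- Or on an existing table; applies to newly written blocks
ALTER TABLE orders COMPRESS FOR OLTP;
```

Remember that COMPRESS FOR OLTP requires the Advanced Compression option license.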


Data guard Compression

  • Redo is compressed as it is transmitted over the network
  • Helps utilize network bandwidth efficiently when the Data Guard standby is in another data center
  • Faster resynchronization of the Data Guard standby during gap resolution
  • Recommended for low-bandwidth networks
  • Implemented with the COMPRESSION attribute of the initialization parameter LOG_ARCHIVE_DEST_n
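A minimal sketch of enabling it on an archive destination; the destination number and service name are placeholders for your environment:

```sql
-- Enable redo compression on the standby destination
-- (requires the Advanced Compression option)
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=standby_db ASYNC COMPRESSION=ENABLE'
  SCOPE=BOTH;
```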


RMAN Compression

  • Supports compression of backups using the ZLIB algorithm
  • Faster compression and lower CPU utilization compared to the default BZIP2 (10g)
  • Lower compression ratio compared to BZIP2
  • Implemented with the CONFIGURE COMPRESSION ALGORITHM 'value' command, where value can be HIGH, MEDIUM (ZLIB), or LOW (LZO)
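For example, switching to the faster ZLIB-based algorithm and taking a compressed backup might look like this in RMAN:

```sql
-- Persistently select the ZLIB-based algorithm
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';

-- Compression is applied when the backup is taken as a compressed backupset
BACKUP AS COMPRESSED BACKUPSET DATABASE;
```

Note that the MEDIUM and LOW algorithms require the Advanced Compression option; only the default BASIC compression is included with the database license.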


Data Pump Compression

  • Compression of metadata was introduced in 10g
  • Compression of data was introduced in 11g
  • Both are inline operations
  • Saves on storage allocation
  • No need to uncompress before import
  • Implemented with the COMPRESSION parameter; supported values are ALL, DATA_ONLY, METADATA_ONLY, and NONE
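A sample export with both data and metadata compressed; the credentials, directory object, and dump file name are placeholders:

```
# Data Pump export with full compression
# (COMPRESSION=ALL requires the Advanced Compression option)
expdp system/***** DIRECTORY=dpump_dir DUMPFILE=full_compressed.dmp \
      FULL=Y COMPRESSION=ALL
```

On import, no extra parameter is needed; the data is uncompressed inline as it is read from the dump file.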