Side effects of Index Fast Full Scan on OLTP

The goal of this blog is not to analyze why the Oracle optimizer behaves the way it does, as every database implementation is unique. The main objective is to understand what happens when the Oracle optimizer chooses an Index Fast Full Scan over a Unique Scan. Significantly more occurrences of this behavior were observed after upgrading to Oracle 12c.

As per the Oracle documentation, an index fast full scan reads the index blocks in unsorted order, as they exist on disk. This scan does not use the index to probe the table as a range scan does; it reads the index instead of the table, essentially treating the index itself as a table.

The optimizer generally considers an index fast full scan when the query can be satisfied by the index itself, i.e. the index contains all the columns that are needed for the query. You can also force an Index Fast Full Scan with the INDEX_FFS hint.
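As a sketch of the hint syntax (the table and index names here are made up for illustration), INDEX_FFS takes the table alias and, optionally, the index name:

```sql
-- Hypothetical objects: table emp with index emp_name_idx on (ename).
-- The hint asks the optimizer to use a fast full scan of emp_name_idx.
SELECT /*+ INDEX_FFS(e emp_name_idx) */ COUNT(ename)
FROM   emp e;
```

Because the query references only ename, which is fully contained in the index, the fast full scan can answer it without ever touching the table.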

Note: since an index fast full scan reads the index blocks in unsorted order, a fast full scan cannot eliminate a sort operation.

So what happens when Oracle chooses Index Fast Full Scan over Unique Scan?

With respect to performance, you will not see any noticeable difference initially. However, with increased concurrency, you may see CPU usage climb toward 100% and performance degrade accordingly. You will feel lucky if you are able to execute even a simple troubleshooting query.

Your “Top 10 Foreground Events by Total Wait Time” might look something like the following:




The #1 wait event is “latch: cache buffers chains”, and apart from that you can see several more latch-related wait events in the Top 10.

Now what is wait event “latch: cache buffers chains”?

This event is generally caused by block contention (hot blocks) when multiple sessions repeatedly access one or more blocks that are protected by the same cache buffers chains latch. Why did this happen suddenly to my database? No suspense here: further investigation revealed a change in execution plan from a unique scan to an Index Fast Full Scan.

The new plan with the Index Fast Full Scan caused 1219 more buffer gets in my database, and those additional buffer gets combined with the high concurrency produced the latch contention.
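A sketch of how such a plan change can be spotted from AWR history, assuming the Diagnostics Pack is licensed and the SQL_ID is known (the SQL_ID below is a placeholder):

```sql
-- Compare buffer gets per execution across plan hash values for one statement.
-- 'abcd1234efgh5' is a placeholder SQL_ID.
SELECT   plan_hash_value,
         SUM(executions_delta)                    AS execs,
         ROUND(SUM(buffer_gets_delta) /
               NULLIF(SUM(executions_delta), 0))  AS gets_per_exec
FROM     dba_hist_sqlstat
WHERE    sql_id = 'abcd1234efgh5'
GROUP BY plan_hash_value
ORDER BY gets_per_exec;
```

A sudden jump in gets_per_exec for a new plan_hash_value is the signature of the unique scan flipping to a fast full scan.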

To summarize, Index Fast Full Scans may be a good fit for DSS applications, more so because index fast full scans can be parallelized. For OLTP, it looks OK as long as the SQL statement is not frequently executed. But once concurrency kicks in, you will notice performance degradation and 100% CPU usage.

My initial experience upgrading a database from Oracle 11g to Oracle 12c (Part 1)

This is a work-in-progress document.

I just wanted to share some experience upgrading a database from Oracle 11g to Oracle 12c. One thing I want to avoid is simply going over best practices; instead I want to go over some of the issues we encountered. Some of them may be because of bugs; some may be because of undocumented changes in Oracle 12c; some may be because of a lack of in-depth research.

The first thing we noticed after the upgrade was that some of our home-grown scripts failed. Further investigation revealed that Oracle 12c does not support the combined sqlplus -sl option; instead it has to be sqlplus -s -l. As per Oracle Support, the Oracle 12c behavior is the correct one, and the Oracle 10g/11g behavior was a bug that was fixed in Oracle 12c.
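A minimal before/after sketch of the invocation (the connect string and script name are placeholders):

```
# Pre-12c scripts often combined the flags; this fails on 12c:
sqlplus -sl scott/tiger @report.sql

# On 12c, silent mode (-s) and logon-once (-l) must be separate flags:
sqlplus -s -l scott/tiger @report.sql
```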


The second issue was that one of our internal applications failed with the following error: “Caused by: java.sql.SQLException: ORA-28040: No matching authentication protocol”. There are multiple ways to fix or work around this issue. One of them is to set SQLNET.ALLOWED_LOGON_VERSION_SERVER in the sqlnet.ora file to a lower Oracle version like 8, 9 or 10. This parameter actually specifies the authentication protocol that a client is allowed to use, not the actual version of that client. Therefore, even though the parameter value implies an Oracle version, the internal check is really against the authentication protocol.
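For example, the server-side sqlnet.ora entry might look like the following sketch (weigh the security implications of allowing older authentication protocols before relaxing the default):

```
# $ORACLE_HOME/network/admin/sqlnet.ora on the 12c database server:
# allow clients using the authentication protocol of Oracle 8 and later.
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8
```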

Now some background about the authentication protocol behavior. In earlier Oracle versions there was a 1-1 relationship between the authentication protocol and the Oracle client version. Starting with Oracle 10g, this is no longer a 1-1 relationship. So what is the problem? The problem lies in the fact that both Oracle 10g and 11g use a SHA-1 based protocol whereas Oracle 12c uses SHA-2. While the SHA-2 protocol by itself is not causing the error, it is the default setting for SQLNET.ALLOWED_LOGON_VERSION_SERVER that is. In earlier versions the default was 8, whereas it is 11 in Oracle 12c; therefore all client versions 10 and below may get the ORA-28040 error.

Please note that the SQLNET.ALLOWED_LOGON_VERSION parameter is deprecated in Oracle Database 12c and replaced with SQLNET.ALLOWED_LOGON_VERSION_SERVER and SQLNET.ALLOWED_LOGON_VERSION_CLIENT.

The other way to fix the above issue is to upgrade your client, e.g. the JDBC drivers, to 12c so that the authentication protocols match.


Oracle 12c: DataPump import in NOLOGGING mode

I have not been blogging for quite some time now and thought of ending the year with one last post. I picked a very simple yet powerful Data Pump topic for blogging. Read on…

As is the norm, after exhausting all tuning options we kind of hope that redo is the bottleneck for all performance problems :). So if your Data Pump performance issues still exist after multiple tuning iterations, as DBAs you explore ways to run the import in NOLOGGING mode to help improve performance. There aren’t too many easy options prior to Oracle 12c. Let us look at some of the options available before 12c.

A few of the ways to implement NOLOGGING are:

1) Set the database to NOARCHIVELOG mode. Hmm, this requires a bounce, and I don’t have the luxury of a scheduled outage in production. Next option…

2) Set all tables to NOLOGGING mode (after pre-creating them if needed)

3) If both the above options are not possible, then for some of us when our back is against the wall… we either quit or move forward. Since quitting is not an option, we tend to try undocumented stuff like “_disable_logging”. I finally found the silver bullet to better performance, and of course it is OK if the database gets corrupted 🙂

It looks like Oracle Support had enough of dealing with database corruption :). Starting with Oracle 12c, you have an option to import the data while disabling redo logging only for the import operation. You also have the option of disabling logging only for tables or only for indexes. Again, some amount of redo is always generated even with NOLOGGING; that is the norm rather than the exception.

Also, this option will only work if FORCE LOGGING is turned off. This could be a pain point if you have Data Guard.

It is highly recommended to back up the database after running an import in NOLOGGING mode so that media recovery is possible. Otherwise you may have to rerun the import operation, which is easier said than done because of the dependencies.

A few examples of usage are given below.
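Sketches of the 12c TRANSFORM syntax follow; the schema, directory and dump file names are placeholders:

```
# Disable redo logging for the entire import operation:
impdp scott DIRECTORY=dp_dir DUMPFILE=scott.dmp \
      TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y

# Disable logging only for tables; index builds are still logged:
impdp scott DIRECTORY=dp_dir DUMPFILE=scott.dmp \
      TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:TABLE

# Disable logging only for indexes:
impdp scott DIRECTORY=dp_dir DUMPFILE=scott.dmp \
      TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:INDEX
```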


Oracle 12c: Advanced Network Compression

Most of you are aware that, prior to Oracle 12c, Oracle does simple compression when you have an ordered result set, sending duplicate data once along with a count. Starting with Oracle 12c, Oracle supports Advanced Network Compression. Too bad it is available only as part of the Advanced Compression option.

Let's forget Oracle for some time and look at the main constraints for networking: generally, network bandwidth and data volume. Think of network bandwidth as a pipe and data volume as water. If you need to send more water across the pipe, you either need a bigger pipe or you convert the water to a different form that occupies less space, so that more water in that form can be sent across the same pipe. The best example of data volume is YouTube and Netflix, which gobble up more than 60% of internet traffic.

Now back to Oracle. With Advanced Network Compression, Oracle does something similar to what I described above: it compresses the data to be transmitted over the network on the sending side and decompresses it back to its original form on the receiving side, reducing network traffic; the end result is transparent to the user. As with the other Advanced Compression features, Oracle supports GZIP or LZO for compressing the data.

You can notice a significant improvement in performance if the network is the bottleneck. If you have a CPU-bound system, you are most likely going to make things worse.
How to implement?

1. Using SQL*Net parameters


  • SQLNET.COMPRESSION: This parameter enables or disables data compression. Compression will be used only if this parameter is set to ON at both the server and the client side. Also note that SQLNET.COMPRESSION is not applicable to Oracle Data Guard streaming redo or to SecureFiles LOBs.


  • SQLNET.COMPRESSION_LEVELS: This parameter specifies the compression level. A value of LOW consumes less CPU for compression, resulting in a lower compression ratio, whereas a value of HIGH consumes more CPU, resulting in a higher compression ratio.


  • SQLNET.COMPRESSION_THRESHOLD: This parameter specifies, in bytes, the minimum data size for which compression will be done. Compression is not done for any payload below this size.
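Put together, a server-side sqlnet.ora might look like the following sketch (the threshold value is just an illustration; the client side needs compatible settings too):

```
# sqlnet.ora (server side); the client needs SQLNET.COMPRESSION=on as well.
SQLNET.COMPRESSION=on
SQLNET.COMPRESSION_LEVELS=(low,high)
# Compress only payloads of at least 1024 bytes (an illustrative value).
SQLNET.COMPRESSION_THRESHOLD=1024
```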


2. Using TNSNAMES parameters

  • Compression can also be enabled in the connect descriptor for an individual connection using COMPRESSION and COMPRESSION_LEVELS. These parameters have the same meaning as the SQL*Net compression parameters.


  • Example:
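A hypothetical connect descriptor (the alias, host and service names are placeholders):

```
SALESDB =
  (DESCRIPTION =
    (COMPRESSION = on)
    (COMPRESSION_LEVELS = (LEVEL = high))
    (ADDRESS = (PROTOCOL = tcp)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = salesdb.example.com))
  )
```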



Oracle 12c: Truncate Table Cascade

Prior to Oracle 12c, truncating tables with dependent children was a pain. We had to disable the constraint before truncating the parent table, then re-enable it afterwards. The performance gain achieved by using TRUNCATE instead of DELETE almost disappeared if the constraints were enabled with the VALIDATE option. If enabling the constraint with NOVALIDATE was not acceptable, then DELETE seemed to be the only way to go.

Starting with Oracle 12c, Oracle supports truncating tables with the CASCADE option, similar to other RDBMSs like MySQL and PostgreSQL. It took some time for Oracle to introduce this option, but glad we finally have it.

Just as with DELETE, the foreign key constraints have to be defined with ON DELETE CASCADE; otherwise TRUNCATE will fail just like DELETE does. Also, the cascading effect will impact all children, grandchildren and so on.

Let us look at an example. See the scripts to create the tables towards the end of this blog.

In this example we have 3 tables: DEPT (parent), EMP (child) and EMPPROJ (grandchild). The child/grandchild tables are created without ON DELETE CASCADE, so obviously the TRUNCATE command with the CASCADE option will fail. Do note the difference in the error messages with and without the CASCADE option.

SQL> truncate table emp;

truncate table emp
ERROR at line 1:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys
SQL> truncate table emp cascade ;
truncate table emp cascade
ERROR at line 1:
ORA-14705: unique or primary keys referenced by enabled foreign keys in table


Now let's go ahead and modify the tables to add the ON DELETE CASCADE option. To do this, we have to drop the existing constraints and add them back with ON DELETE CASCADE.
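Based on the constraint names used in the scripts at the end of this blog, the change might look like this sketch; the FK on EMPPROJ is the one that must cascade for the TRUNCATE of EMP to succeed:

```sql
-- Re-create the grandchild FK (EMPPROJ -> EMP) with ON DELETE CASCADE.
ALTER TABLE empproj DROP CONSTRAINT fk_empno_emp;
ALTER TABLE empproj ADD CONSTRAINT fk_empno_emp
  FOREIGN KEY (empno) REFERENCES emp (empno) ON DELETE CASCADE;

-- Optionally do the same for the child FK (EMP -> DEPT), so that
-- TRUNCATE TABLE dept CASCADE would also work down the whole chain.
ALTER TABLE emp DROP CONSTRAINT fk_deptno_dept;
ALTER TABLE emp ADD CONSTRAINT fk_deptno_dept
  FOREIGN KEY (deptno) REFERENCES dept (deptno) ON DELETE CASCADE;
```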






Before we truncate the table, let's take a count of the emp & empproj tables.


SQL> select count(*) from emp;


SQL> select count(*) from empproj;



Now let us truncate the table with and without the CASCADE option and then take a count of the emp and empproj tables. The key thing to remember is that the CASCADE option will remove all data from empproj when the emp table is truncated, because of the child/grandchild relationship.

SQL> truncate table emp;

truncate table emp
ERROR at line 1:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys
SQL> truncate table emp cascade;

Table truncated.

SQL> select count(*) from emp;


SQL> select count(*) from empproj;





Scripts to create the tables 
create table dept (
  deptno number(4) constraint pk_dept primary key,
  dname  varchar2(20) not null,
  loc    varchar2(30)
);

create table emp (
  empno  number(4) constraint pk_emp primary key,
  ename  varchar2(20) not null,
  salary number(10,2),
  deptno number(4) constraint fk_deptno_dept references dept(deptno)
);

create table empproj (
  empno number(4),
  proj  varchar2(20) not null,
  hours number(4),
  constraint pk_empproj primary key (empno, proj),
  constraint fk_empno_emp foreign key (empno) references emp(empno)
);

-- (The INSERTs for DEPT and EMPPROJ are omitted here; DEPT rows for
-- deptno 10, 20 and 30 must exist before the EMP rows below are inserted.)
INSERT INTO EMP VALUES (1001, 'TIM',  5800, 10);
INSERT INTO EMP VALUES (1002, 'MIKE', 5800, 30);
INSERT INTO EMP VALUES (1003, 'SAM',  5800, 30);
INSERT INTO EMP VALUES (1004, 'JOE',  5800, 20);
INSERT INTO EMP VALUES (1005, 'SEAN', 5800, 10);




Oracle 12c RMAN New Features

In Oracle 12c, there are a lot of great new RMAN features focused on reducing recovery time; some of them provide a better DBA experience too 🙂. Let us glance at some of them in this blog.

  1. Support for point-in-time recovery of tables and partitions. I would rate this as one of the best options of Oracle 12c.
  2. Option to duplicate a database with the NOOPEN option so that the duplicated/cloned database remains in mount state. Prior to 12c, the cloned database was automatically opened in RESETLOGS mode. The NOOPEN option is very useful when you want to clone a database as part of an upgrade.
  3. In Oracle 12c, duplicate database supports pull-based restore from backup sets; prior to Oracle 12c, the support was only for push-based restore from image copies.
  4. Support for recovering the database from storage snapshots taken using 3rd-party vendors like EMC, Hitachi etc. The RECOVER command now supports the SNAPSHOT TIME clause to implement this feature.
  5. Transporting a database across platforms using backup sets. Prior to 12c, only image copies were supported.
  6. Transporting a database across platforms using inconsistent backups, i.e. backups taken without putting the tablespaces in read-only mode. A new clause, ALLOW INCONSISTENT, is introduced to implement this.
  7. Support for execution of SQL commands from the RMAN prompt without the SQL prefix.
  8. Support for the SQL*Plus DESCRIBE command from the RMAN prompt.
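As a sketch of feature 1, recovering a single table to a prior point in time might look like the following; the table name, timestamp and auxiliary destination are placeholders:

```
RMAN> RECOVER TABLE scott.emp
        UNTIL TIME "TO_DATE('2015-12-01 09:00:00','YYYY-MM-DD HH24:MI:SS')"
        AUXILIARY DESTINATION '/u01/aux'
        REMAP TABLE scott.emp:emp_recovered;
```

RMAN creates a temporary auxiliary instance under the given destination, recovers the table there, and imports it back; the REMAP TABLE clause keeps the existing table intact by restoring under a new name.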