
Significance of De-normalization

De-normalization is the addition of redundant data to an already normalized data set to improve performance, ideally without incurring extra data-maintenance overhead or sacrificing data integrity.

Why should you De-normalize?

While normalization makes it easy to put correct data in, it also makes it harder to get data out. This is why you should de-normalize your data. The advantages of de-normalization are faster selects and fewer joins, at the cost of slower updates and additional storage.

Personally, I have seen up to a 300% performance improvement from de-normalization when complex joins are involved. I fully encourage DBAs and developers to explore de-normalization.
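The trade-off described above can be sketched with a toy schema. This is a minimal illustration, not from the original post; the table and column names (customers, orders, orders_denorm) are hypothetical, and SQLite stands in for Oracle:

```python
import sqlite3

# In-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized schema: the customer name lives only in customers.
cur.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER,
                         amount REAL);
""")
cur.execute("INSERT INTO customers VALUES (1, 'Acme')")
cur.execute("INSERT INTO orders VALUES (100, 1, 250.0)")

# Normalized read: every report query pays for the join.
normalized = cur.execute("""
    SELECT o.id, c.name, o.amount
    FROM orders o JOIN customers c ON c.id = o.customer_id
""").fetchall()

# De-normalized variant: redundantly store the name on the order row,
# trading slower updates and extra storage for join-free selects.
cur.executescript("""
    CREATE TABLE orders_denorm (id INTEGER PRIMARY KEY,
                                customer_id INTEGER,
                                customer_name TEXT,   -- redundant copy
                                amount REAL);
    INSERT INTO orders_denorm
    SELECT o.id, o.customer_id, c.name, o.amount
    FROM orders o JOIN customers c ON c.id = o.customer_id;
""")
denormalized = cur.execute(
    "SELECT id, customer_name, amount FROM orders_denorm").fetchall()

assert normalized == denormalized  # same answer, no join needed
```

The cost shows up on writes: renaming a customer now means updating every matching row in orders_denorm as well, which is exactly the slower-updates penalty mentioned above.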

Raw Devices Still Rule the Roost

Note : The focus of this blog is not Data Guard.

We found ourselves in a complex situation while implementing Data Guard. Since our SLAs are in milliseconds, our response times jumped 400% when we implemented synchronous physical Data Guard, even though all of Oracle's best practices were followed. See Metalink Note 387343.1 and "Data Guard Redo Apply and Media Recovery Best Practices 10gR2".

The introduction of Data Guard brought Data Guard-related wait events, but log file sync increased more than tenfold, indicating that writes to the standby redo logs were consuming more time. We have a very talented storage team. In the first phase, we striped the standby redo log files (1 GB) across 8 disks in such a way that the data resides only on the tracks closer to the drive head. This reduced the impact of Data Guard considerably and helped us maintain our SLAs.

In the next phase, we used raw devices for the redo logs and control files. With raw devices, our response times were close to those of the architecture without Data Guard.

Our Environment

Database:

OS: AIX 5.3

Storage: DMX