Side effects of Index Fast Full Scan on OLTP
The goal of this blog is not to analyze the reasoning behind the Oracle optimizer's choices, as every database implementation is unique. The main objective is to understand what happens when the Oracle optimizer chooses an Index Fast Full Scan over a Unique Scan. Significantly more occurrences of this behavior were observed after upgrading to Oracle 12c.
As per the Oracle documentation, an index fast full scan reads the index blocks in unsorted order, as they exist on disk. This scan does not use the index to probe the table like a range scan does; instead it reads the index in place of the table, essentially treating the index itself as a table.
The optimizer generally considers an index fast full scan when the query can be satisfied by the index itself, i.e. the index contains all the columns needed by the query. You can also force an Index Fast Full Scan with the INDEX_FFS hint.
Note: Since an index fast full scan reads the index blocks in unsorted order, a fast full scan cannot eliminate a sort operation.
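The hint takes the usual table-alias and index-name arguments. As a hypothetical illustration (the ORDERS table and its index below are made up for this sketch):

```sql
-- Force an index fast full scan; assumes IDX_ORDERS_STATUS contains
-- every column the query needs, so the table itself is never touched.
SELECT /*+ INDEX_FFS(o idx_orders_status) */ COUNT(*)
FROM   orders o
WHERE  o.status = 'OPEN';
```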
So what happens when Oracle chooses Index Fast Full Scan over Unique Scan?
With respect to performance, you will not see any noticeable difference initially. However, with increased concurrency, you may notice CPU usage go through the roof, with subsequent performance degradation. You will feel lucky if you can execute even a simple troubleshooting query.
Your “Top 10 Foreground Events by Total Wait Time” might look something like the following:
The #1 wait event is “latch: cache buffers chains”, and apart from that you can see several more latch-related wait events in the Top 10.
Now, what is the wait event “latch: cache buffers chains”?
This event is generally caused by block contention (hot blocks), when multiple sessions repeatedly access one or more blocks protected by the same cache buffers chains latch. Why did this happen suddenly to my database? No suspense here: further investigation revealed a change in execution plan from a unique scan to an Index Fast Full Scan.
The new plan with the Index Fast Full Scan caused 1219 more buffer gets in my database, and those additional buffer gets, multiplied by the concurrency, resulted in the latch contention.
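As a sketch, contention like this can be confirmed with the standard V$ views (the latch name below is exactly as it appears in the wait event):

```sql
-- Sessions currently waiting on the cache buffers chains latch
SELECT sid, sql_id, p1 AS latch_addr
FROM   v$session
WHERE  event = 'latch: cache buffers chains';

-- Cumulative gets, misses and sleeps for the latch since instance startup
SELECT name, gets, misses, sleeps
FROM   v$latch
WHERE  name = 'cache buffers chains';
```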
To summarize, Index Fast Full Scans may be a good fit for DSS applications, particularly because index fast full scans can be parallelized. For OLTP, it looks OK as long as the SQL statement is not frequently executed. But once concurrency kicks in, you will notice performance degradation and 100% CPU usage.
My initial experience upgrading database from Oracle 11g to Oracle 12c (Part 1)
This is a work-in-progress document.
Just wanted to share some experience upgrading a database from Oracle 11g to Oracle 12c. One thing I want to avoid is simply going over best practices; instead I want to go over some of the issues that we encountered. Some may be because of bugs, some because of undocumented changes in Oracle 12c, and some because of a lack of in-depth research.
The first thing we noticed after the upgrade was that some of our home-grown scripts failed. Further investigation revealed that Oracle 12c does not support the sqlplus -sl option; instead it has to be sqlplus -s -l. As per Oracle Support, the Oracle 12c behavior is the correct one, and the Oracle 10g/11g behavior was a bug that was fixed in Oracle 12c.
The second issue was that one of our internal applications failed with the following error: “Caused by: java.sql.SQLException: ORA-28040: No matching authentication protocol”. There are multiple ways to fix or work around this issue. One is to set SQLNET.ALLOWED_LOGON_VERSION_SERVER in the sqlnet.ora file to a lower Oracle version such as 8, 9, or 10. This parameter actually specifies the authentication protocol a client is allowed to use, not the actual version of that client. Therefore, even though the parameter value implies an Oracle version, the internal check is really against the authentication protocol.
Now some background on the authentication protocol behavior. In earlier Oracle versions there was a 1-1 relation between authentication protocol and Oracle client version. This changed in Oracle 10g: starting with 10g, it is no longer a 1-1 relation. So what is the problem? The problem lies in the fact that both Oracle 10g and 11g use the SHA-1 protocol, whereas Oracle 12c uses SHA-2. The SHA-2 protocol by itself is not causing the error; it is the default setting of SQLNET.ALLOWED_LOGON_VERSION_SERVER that is. In earlier versions the default was 8, whereas in Oracle 12c it is 11; therefore clients at version 10 and below may get the ORA-28040 error.
Please note that SQLNET.ALLOWED_LOGON_VERSION parameter is deprecated in Oracle Database 12c and replaced with SQLNET.ALLOWED_LOGON_VERSION_SERVER and SQLNET.ALLOWED_LOGON_VERSION_CLIENT.
Another way to fix the above issue is to upgrade your client software, such as the JDBC drivers, to 12c so that the authentication protocols match.
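For reference, a minimal server-side sqlnet.ora sketch of the workaround described above (the value 10 is illustrative; pick the lowest client version you actually need to support):

```
# Allow clients that use the Oracle 10g authentication protocol
SQLNET.ALLOWED_LOGON_VERSION_SERVER = 10
```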
Oracle 12c: DataPump import in NOLOGGING mode
I have not been blogging for quite some time now and thought of ending the year with one last blog. I picked a very simple yet powerful datapump topic for blogging. Read on …
As is the norm, after exhausting all tuning options we kind of hope that redo is the bottleneck for all performance problems :). So if your DataPump performance issues still exist after multiple tuning iterations, as DBAs we explore ways to run the import in NOLOGGING mode to help improve performance. There aren't too many easy options prior to Oracle 12c. Let us look at some of the options available before 12c.
A few of the ways to implement NOLOGGING are:
1) Set the database to NOARCHIVELOG mode. Hmm, this requires a bounce, and I don't have the luxury of a scheduled outage in production. Next option …
2) Set all tables to NOLOGGING mode (after pre-creating them if needed)
3) If both of the above options are not possible, then for some of us, when our back is against the wall, we either quit or move forward. Since quitting is not an option, we tend to try undocumented stuff like “_disable_logging”. I finally found the silver bullet to better performance, and of course it is OK if the database gets corrupted 🙂
It looks like Oracle Support had enough of dealing with database corruption :). Starting with Oracle 12c, you have an option to disable archival logging only for the import operation. You also have the option of disabling archival logging only for tables or only for indexes. Note that some amount of redo is always generated even in NOLOGGING mode; that is the norm rather than the exception.
Also, this option will only work if FORCE LOGGING is turned off. This could be a pain point if you have Data Guard.
It is highly recommended to back up the database after running an import in NOLOGGING mode so that media recovery is possible. Otherwise you may have to rerun the import operation, which is easier said than done because of the dependencies.
A few examples of usage are given below:
impdp … TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
impdp … TRANSFORM=DISABLE_ARCHIVE_LOGGING:N:TABLE
impdp … TRANSFORM=DISABLE_ARCHIVE_LOGGING:N:INDEX
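Putting one of the above together, a sketch of a complete command line (the directory object, dump file, and schema names are hypothetical):

```
impdp system DIRECTORY=dp_dir DUMPFILE=hr_exp.dmp SCHEMAS=hr \
      TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
```

Remember to take a backup once the import completes, since the skipped redo means media recovery cannot rebuild those segments.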
Oracle 12c: Advanced Network Compression
Most of you are aware that, prior to Oracle 12c, Oracle does simple compression when you have an ordered result set, by sending duplicate data once along with a count. Starting with Oracle 12c, Oracle supports Advanced Network Compression. Too bad it is available only as part of the Advanced Compression option.
Let's forget Oracle for a moment and look at the main constraints in networking: generally, network bandwidth and data volume. Think of network bandwidth as a pipe and data volume as water. If you need to send more water across the pipe, you either need a bigger pipe or you convert the water to a different form that occupies less space, so that more of it can be sent across the same pipe. The best example of data volume is YouTube and Netflix, which gobble up more than 60% of internet traffic.
Now back to Oracle. With Advanced Network Compression, Oracle does something similar to what I described above: it compresses the data to be transmitted on the sending side and decompresses it back to its original form on the receiving side, reducing network traffic; the end result is transparent to the user. As with the other Advanced Compression features, Oracle supports GZIP or LZO for compressing the data.
You can notice significant improvement in performance if the network is the bottleneck. If you have a CPU-bound system, however, you are most likely going to make things worse.
How to implement?
1. Using SQL*Net parameters
- SQLNET.COMPRESSION: This parameter enables or disables data compression. Compression is used only if this parameter is set to ON on both the server and the client side. Also note that SQLNET.COMPRESSION is not applicable to Oracle Data Guard streaming redo or to SecureFiles LOBs.
- SQLNET.COMPRESSION_LEVELS: This parameter specifies the compression level. A value of LOW consumes less CPU and yields a lower compression ratio, whereas HIGH consumes more CPU and yields a higher compression ratio.
- SQLNET.COMPRESSION_THRESHOLD: This parameter specifies, in bytes, the minimum data size for which compression is attempted; compression is not done for any payload below this size.
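Putting the three parameters together, a minimal sqlnet.ora sketch (values are illustrative, and the same COMPRESSION settings must also be present on the client side for compression to take effect):

```
# Enable Advanced Network Compression
SQLNET.COMPRESSION = ON
SQLNET.COMPRESSION_LEVELS = (HIGH)
SQLNET.COMPRESSION_THRESHOLD = 1024
```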
2. Using TNSNAMES parameters
- Compression can also be enabled in the connect descriptor for an individual connection using COMPRESSION and COMPRESSION_LEVELS. These parameters have the same meaning as the SQL*Net compression parameters.
- Example:
- mydb =
  (DESCRIPTION =
    (COMPRESSION = on)
    (COMPRESSION_LEVELS = (LEVEL = high))
    (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = mydb)))
Oracle 12c: Truncate Table Cascade
Prior to Oracle 12c truncating tables with dependent children was a pain. We had to disable the constraint before truncating the parent table followed by enabling the constraint. The performance gain achieved using TRUNCATE instead of DELETE almost disappeared if the constraints were enabled with VALIDATE option. If enabling the constraint with NOVALIDATE option was not acceptable, then DELETE seemed to be the only way to go.
Starting with Oracle 12c, Oracle supports truncating tables with the CASCADE option, similar to other RDBMSs like MySQL and PostgreSQL. Well, it took some time for Oracle to introduce this option, but glad we finally have it.
Just as with DELETE, the constraints have to be defined with ON DELETE CASCADE; otherwise, just like DELETE, TRUNCATE will also fail. Also, the cascading effect impacts all children, grandchildren, and so on.
Let us look at an example. See the scripts to create the tables towards the end of this blog.
In this example we have three tables: DEPT (parent), EMP (child) and EMPPROJ (grandchild). The child/grandchild tables are created without ON DELETE CASCADE, so obviously the TRUNCATE command with the CASCADE option will fail. Do note the difference in the error messages with and without the CASCADE option.
SQL> truncate table emp;
truncate table emp
*
ERROR at line 1:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys
SQL> truncate table emp cascade ;
truncate table emp cascade
*
ERROR at line 1:
ORA-14705: unique or primary keys referenced by enabled foreign keys in table
"SHAN"."EMPPROJ"
Now let's go ahead and modify the tables to add the ON DELETE CASCADE option; to do this, we have to drop the existing constraints and re-add them with ON DELETE CASCADE.
ALTER TABLE EMP DROP CONSTRAINT fk_deptno_dept ;
ALTER TABLE EMPPROJ DROP CONSTRAINT fk_empno_emp ;
ALTER TABLE EMP ADD CONSTRAINT fk_deptno_dept FOREIGN KEY (DEPTNO) REFERENCES dept(deptno) ON DELETE CASCADE;
ALTER TABLE EMPPROJ ADD CONSTRAINT fk_empno_emp foreign key (empno) REFERENCES emp(empno) ON DELETE CASCADE;
Before we truncate the table, let's take a count of the emp and empproj tables.
SQL>
SQL> select count(*) from emp;
COUNT(*)
———-
5
SQL> select count(*) from empproj;
COUNT(*)
———-
5
Now let us truncate the table with and without the CASCADE option and then take a count of the emp and empproj tables. The key thing to remember is that the CASCADE option will truncate all data from empproj when the emp table is truncated, because of the child and grandchild relationship.
SQL> truncate table emp;
truncate table emp
*
ERROR at line 1:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys
SQL> truncate table emp cascade;
Table truncated.
SQL> select count(*) from emp;
COUNT(*)
———-
0
SQL> select count(*) from empproj;
COUNT(*)
———-
0
x==x==x==x==x==x==x==x==x==x==x=x=x==x==x==x==x==x==x==x==x==x==x==x=x=x==x
Scripts to create the tables
create table dept
(
deptno number(4) constraint pk_dept primary key,
dname varchar2(20) not null,
loc varchar2(30)
);
create table emp
(
empno number(4) constraint pk_emp primary key,
ename varchar2(20) not null,
salary number(10,2) ,
deptno number(4) constraint fk_deptno_dept references dept(deptno)
);
create table empproj
(
empno number(4) ,
proj varchar2(20) not null,
hours number(4) ,
constraint pk_empproj primary key (empno, proj),
constraint fk_empno_emp foreign key (empno) references emp(empno)
);
INSERT INTO DEPT VALUES (10,'FINANCE','RESTON');
INSERT INTO DEPT VALUES (20,'ACCOUNTING','CHICAGO');
INSERT INTO DEPT VALUES (30,'RESEARCH','DALLAS');
INSERT INTO EMP VALUES ( 1001,'TIM', 5800, 10);
INSERT INTO EMP VALUES ( 1002,'MIKE', 5800, 30);
INSERT INTO EMP VALUES ( 1003,'SAM', 5800, 30);
INSERT INTO EMP VALUES ( 1004,'JOE', 5800, 20);
INSERT INTO EMP VALUES ( 1005,'SEAN', 5800, 10);
INSERT INTO EMPPROJ VALUES (1001,'RED',20);
INSERT INTO EMPPROJ VALUES (1001,'PINK',30);
INSERT INTO EMPPROJ VALUES (1001,'BLUE',10);
INSERT INTO EMPPROJ VALUES (1002,'PINK',10);
INSERT INTO EMPPROJ VALUES (1002,'BLUE',40);
Oracle 12c RMAN New Features
In Oracle 12c there are a lot of great new RMAN features focused on reducing recovery time, and some of them provide a better DBA 🙂 experience too. Let us glance at some of them in this blog.
- Support for point-in-time recovery of tables and partitions. I would rate this as one of the best options in Oracle 12c.
- Option to duplicate a database with the NOOPEN option so that the duplicated/cloned database remains in mount state. Prior to 12c, the cloned database was automatically opened in RESETLOGS mode. The NOOPEN option is very useful when you want to clone a database as part of an upgrade.
- In Oracle 12c, duplicate database supports pull-based restore from backup sets; prior to Oracle 12c, the support was only for push-based restore from image copies.
- Support for recovering the database from snapshots taken using third-party storage vendors like EMC, Hitachi, etc. The RECOVER command now supports the SNAPSHOT TIME clause to implement this feature.
- Transporting a database across platforms using backup sets. Prior to 12c, only image copies were supported.
- Transporting a database across platforms using inconsistent backups, i.e. backups taken without putting the tablespaces in read-only mode. The new ALLOW INCONSISTENT clause is introduced to implement this.
- Support for executing SQL commands from the RMAN prompt without the SQL prefix.
- Support for the SQL*Plus DESC command from the RMAN prompt.
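As a sketch of the first feature above, table point-in-time recovery (the schema, SCN, and auxiliary destination below are hypothetical):

```
RMAN> RECOVER TABLE shan.emp
        UNTIL SCN 1234567
        AUXILIARY DESTINATION '/u01/aux'
        REMAP TABLE shan.emp:emp_recovered;
```

RMAN restores the needed pieces into a temporary auxiliary instance under the given destination, extracts the table with Data Pump, and imports it under the remapped name.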
Oracle 12c : Partial Indexes
Let us assume you have a table with ten columns and four B*Tree indexes. If you analyze the space usage, chances are very high that the size of all the indexes together is nearly equal to, or more than, the size of the table. If you end up creating more indexes to support complex business requirements, the combined size of the indexes could be 2-4 times (or more) the size of the table. So what is the big deal, space is cheap, right? That is what you get to read, but when your database is in the tens or hundreds of terabytes, every byte matters. So how would you feel if you were told that there is an option in Oracle 12c that lets you create partial indexes on partitioned tables to save storage and improve performance?
Let us explore more on this topic.
We know that the main purpose of an index is to provide fast access to selective data, and that this purpose is defeated when you retrieve large amounts of data (relative to the size of the table). Moreover, some applications need indexes only for the current or recent months, whereas others need no index on the current or recent months. For reporting or data warehouse databases, full table scans in parallel are generally many times faster than sequential index range scans. But when you create a global or local index on a partitioned table, the index covers all of the data or partitions. Prior to Oracle 12c you could not create indexes on selected partitions; an index always covered all of the data. With Oracle 12c, however, you can create partial indexes that contain index data from selected partitions only.
So how would you implement partial indexes?
- Enable or disable the indexing property at the table level or partition/subpartition level using the INDEXING clause.
- Create indexes as FULL or PARTIAL. When you specify PARTIAL, only partitions that have the indexing property turned on are part of the index.
Example
In the example below, I disable the indexing property at the table level and enable it for selected partitions. When I then create partial indexes, only those partitions with the indexing property turned on are part of the index.
SQL> CREATE TABLE TRANSACTION(
2 TRANSACTION_ID NUMBER(12) PRIMARY KEY,
3 OPERATION_ID NUMBER(3) NOT NULL,
4 TRANSACTION_AMT NUMBER(9,2) NOT NULL,
5 TRANSACTION_DATE DATE,
6 TRANSACTION_TYPE VARCHAR2(2))
7 INDEXING OFF
8 PARTITION BY RANGE (TRANSACTION_DATE)
9 (
10 PARTITION TRANSACTIONDATE_P1 VALUES LESS THAN (TO_DATE('01-MAR-2014','DD-MON-YYYY')) INDEXING OFF,
11 PARTITION TRANSACTIONDATE_P2 VALUES LESS THAN (TO_DATE('01-APR-2014','DD-MON-YYYY')) INDEXING OFF,
12 PARTITION TRANSACTIONDATE_P3 VALUES LESS THAN (TO_DATE('01-MAY-2014','DD-MON-YYYY')) INDEXING OFF,
13 PARTITION TRANSACTIONDATE_P4 VALUES LESS THAN (TO_DATE('01-JUN-2014','DD-MON-YYYY')) INDEXING ON,
14 PARTITION TRANSACTIONDATE_P5 VALUES LESS THAN (TO_DATE('01-JUL-2014','DD-MON-YYYY')) INDEXING ON
15 ) ;
Table created.
In the example below, only data from partitions TRANSACTIONDATE_P4 and TRANSACTIONDATE_P5 is part of the index.
SQL> CREATE INDEX IDX_TRANS_OID ON TRANSACTION(OPERATION_ID)
2 GLOBAL INDEXING PARTIAL;
Index created.
In this example, all partitions are part of the index.
SQL> CREATE INDEX IDX_TRANS_TYPE ON TRANSACTION(TRANSACTION_TYPE)
2 GLOBAL INDEXING FULL;
Index created.
In the example below, only data from partitions TRANSACTIONDATE_P4 and TRANSACTIONDATE_P5 is part of the index.
SQL> CREATE INDEX IDX_TRANS_DATE ON TRANSACTION(TRANSACTION_DATE)
2 LOCAL INDEXING PARTIAL;
Index created.
You can enable or disable the indexing property of a table, partition, or subpartition using the ALTER TABLE command:
SQL> alter table transaction modify partition TRANSACTIONDATE_P3 indexing on ;
Table altered.
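To verify the results, the indexing property is exposed in the data dictionary; to the best of my knowledge the relevant 12c columns are USER_INDEXES.INDEXING and USER_TAB_PARTITIONS.INDEXING (confirm the column names against your release):

```sql
-- FULL vs PARTIAL at the index level
SELECT index_name, indexing
FROM   user_indexes
WHERE  table_name = 'TRANSACTION';

-- Indexing property per partition
SELECT partition_name, indexing
FROM   user_tab_partitions
WHERE  table_name = 'TRANSACTION';
```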
Oracle 12c Extended Datatypes
Prior to Oracle 12c, the VARCHAR2/NVARCHAR2 datatypes allowed storing variable-length character data of up to 4000 bytes, whereas the RAW data type allowed storing variable-length binary data of up to 2000 bytes. In Oracle 12c, the maximum size for the data types VARCHAR2, NVARCHAR2 and RAW is increased to 32767 bytes. These data types are now referred to as extended data types. Extended data types are implicitly implemented using LOBs; just like LOBs, any data stored in excess of 4000 bytes for VARCHAR2/NVARCHAR2/RAW is stored out-of-line. If the data is within the pre-Oracle 12c limits, it is stored in-line. So be aware of the restrictions on LOBs, since almost all of them apply to extended data types (at least for now).
The first thing that came to mind about extended data types is what purpose they actually serve, since they are implemented as LOBs; why would you introduce LOB restrictions to your columns by increasing the limits of VARCHAR2, NVARCHAR2 and RAW? As most of you are aware, no database version, including Oracle 12c, supports changing VARCHAR2/NUMBER/RAW columns to LOBs directly with an ALTER … MODIFY command (it results in ORA-22858: invalid alteration of datatype). I thought that instead of extended data types, Oracle could have removed the restriction on converting VARCHAR2/NUMBER/RAW to LOBs directly and provided some restricted form of index support.
Being a new feature, I did not find much useful information about it, so I decided to be one of the earliest bloggers to explore this topic.
So let us make sure that the database is enabled for extended datatypes.
SQL> show parameter max_string_size
NAME TYPE VALUE
———————————— ———– ——————————
max_string_size string EXTENDED
Please refer to the following blog for instructions on how to set up your database for extended datatypes: https://twelvec.com/2014/02/08/oracle-12c-how-to-setup-your-database-to-support-extended-datatypes/
Now let's create a table with two columns, one a VARCHAR2 and the other a LOB.
SQL> CONN SHAN/SHAN
Connected.
SQL> CREATE TABLE EXT_DATATYPE (
2 EMPLOYEE_NO VARCHAR2(32767) ,
3 EMPLOYEE_COMMENTS BLOB);
Table created.
OK, now let's try adding a primary key to this table. Oops, it failed because of the restriction on index key lengths.
SQL> ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_NO);
ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_NO)
*
ERROR at line 1:
ORA-01450: maximum key length (6398) exceeded
Now let's modify the column size to 6398 bytes and add the primary key. The command still fails; so what is the secret column length that supports indexes? It is actually 6389 bytes.
SQL> ALTER TABLE EXT_DATATYPE MODIFY (EMPLOYEE_NO VARCHAR2(6398));
Table altered.
SQL> ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_NO);
ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_NO)
ERROR at line 1:
ORA-01450: maximum key length (6398) exceeded
For VARCHAR2/NVARCHAR2/RAW columns of more than 6389 bytes, you can create indexes using function-based indexes or virtual columns. With both approaches, you shorten the indexed value using SUBSTR, the new STANDARD_HASH function, or something creative of your own.
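For instance, here is a sketch of both workarounds against the table above (the index names are made up; note that such an index helps only queries rewritten in terms of the same expression):

```sql
-- Index a fixed-length prefix of the long column
CREATE INDEX idx_emp_no_substr
    ON ext_datatype (SUBSTR(employee_no, 1, 100));

-- Or index a fixed-length hash of the full value
CREATE INDEX idx_emp_no_hash
    ON ext_datatype (STANDARD_HASH(employee_no, 'SHA1'));
```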
Let us revert the column length to the time-tested 4000 bytes, add the index, and then try to increase the column length. You get a different error, but the underlying cause is the same.
SQL> ALTER TABLE EXT_DATATYPE MODIFY (EMPLOYEE_NO VARCHAR2(4000));
Table altered.
SQL> ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_NO);
Table altered.
SQL> ALTER TABLE EXT_DATATYPE MODIFY (EMPLOYEE_NO VARCHAR2(6390));
ALTER TABLE EXT_DATATYPE MODIFY (EMPLOYEE_NO VARCHAR2(6390))
*
ERROR at line 1:
ORA-01404: ALTER COLUMN will make an index too large
Now let us drop our primary key constraint and try adding the LOB column as a primary key; it will fail, as LOB columns cannot be part of a primary or unique key.
SQL> alter table EXT_DATATYPE drop primary key;
Table altered.
SQL> ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_COMMENTS);
ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_COMMENTS)
ERROR at line 1:
ORA-02329: column of datatype LOB cannot be unique or a primary key
Summary: In essence, it would have been better if Oracle had increased the data type sizes to about 6000 bytes instead of 32K, so that indexing and primary key constraints were all supported. There is always a learning curve with new features, and it gets a little more complicated with enhancements. With 32K sizes, the feature is initially going to introduce new risks and create confusion if implemented without understanding the actual consequences, even though there are workarounds for creating indexes using virtual columns or function-based indexes. I just picked one restriction of LOBs; more testing is needed to verify the behavior of AFTER UPDATE DML triggers, INSERT AS SELECT, and many more restrictions.
Oracle 12c: How to setup your database to support extended datatypes?
This blog discusses the setup required to support extended datatypes in Oracle 12c. For more information about extended datatypes, please refer to https://twelvec.com/2014/02/08/oracle12c_extended_datatypes/
In a nutshell, the following steps are required to enable extended datatypes. Most of the steps are familiar except for steps 3 and 4. In step 3 we modify the initialization parameter MAX_STRING_SIZE from STANDARD to EXTENDED; once changed, this is an irreversible action. Step 4 increases the sizes of VARCHAR2, NVARCHAR2 and RAW in the required views.
1. SHUTDOWN IMMEDIATE
2. STARTUP UPGRADE;
3. ALTER SYSTEM SET max_string_size=extended scope=spfile;
4. @?/rdbms/admin/utl32k.sql
5. SHUTDOWN IMMEDIATE;
6. STARTUP;
Example:
SQL> select name from v$database;
NAME
———
ORCL12C
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL>
SQL>
SQL> startup upgrade
ORACLE instance started.
Total System Global Area 2505338880 bytes
Fixed Size 2291472 bytes
Variable Size 671090928 bytes
Database Buffers 1811939328 bytes
Redo Buffers 20017152 bytes
Database mounted.
Database opened.
SQL>
SQL>
SQL> ALTER SYSTEM SET max_string_size=extended;
System altered.
SQL> @?/rdbms/admin/utl32k.sql
Session altered.
DOC>#######################################################################
DOC>#######################################################################
DOC> The following statement will cause an “ORA-01722: invalid number”
DOC> error if the database has not been opened for UPGRADE.
DOC>
DOC> Perform a “SHUTDOWN ABORT” and
DOC> restart using UPGRADE.
DOC>#######################################################################
DOC>#######################################################################
DOC>#
no rows selected
DOC>#######################################################################
DOC>#######################################################################
DOC> The following statement will cause an “ORA-01722: invalid number”
DOC> error if the database does not have compatible >= 12.0.0
DOC>
DOC> Set compatible >= 12.0.0 and retry.
DOC>#######################################################################
DOC>#######################################################################
DOC>#
PL/SQL procedure successfully completed.
Session altered.
0 rows updated.
Commit complete.
System altered.
PL/SQL procedure successfully completed.
Commit complete.
System altered.
Session altered.
PL/SQL procedure successfully completed.
No errors.
Session altered.
PL/SQL procedure successfully completed.
Commit complete.
Package altered.
TIMESTAMP
——————————————————————————–
COMP_TIMESTAMP UTLRP_BGN 2014-01-28 14:51:31
DOC> The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC> objects in the database. Recompilation time is proportional to the
DOC> number of invalid objects in the database, so this command may take
DOC> a long time to execute on a database with a large number of invalid
DOC> objects.
DOC>
DOC> Use the following queries to track recompilation progress:
DOC>
DOC> 1. Query returning the number of invalid objects remaining. This
DOC> number should decrease with time.
DOC> SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC> 2. Query returning the number of objects compiled so far. This number
DOC> should increase with time.
DOC> SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC> This script automatically chooses serial or parallel recompilation
DOC> based on the number of CPUs available (parameter cpu_count) multiplied
DOC> by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC> On RAC, this number is added across all RAC nodes.
DOC>
DOC> UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC> recompilation. Jobs are created without instance affinity so that they
DOC> can migrate across RAC nodes. Use the following queries to verify
DOC> whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC> 1. Query showing jobs created by UTL_RECOMP
DOC> SELECT job_name FROM dba_scheduler_jobs
DOC> WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC> 2. Query showing UTL_RECOMP jobs that are running
DOC> SELECT job_name FROM dba_scheduler_running_jobs
DOC> WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#
PL/SQL procedure successfully completed.
TIMESTAMP
——————————————————————————–
COMP_TIMESTAMP UTLRP_END 2014-01-28 14:51:33
DOC> The following query reports the number of objects that have compiled
DOC> with errors.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#
OBJECTS WITH ERRORS
——————-
0
DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#
ERRORS DURING RECOMPILATION
—————————
0
Function created.
PL/SQL procedure successfully completed.
Function dropped.
…Database user “SYS”, database schema “APEX_040200”, user# “98” 14:51:48
…Compiled 0 out of 2998 objects considered, 0 failed compilation 14:51:48
…263 packages
…255 package bodies
…453 tables
…11 functions
…16 procedures
…3 sequences
…458 triggers
…1322 indexes
…207 views
…0 libraries
…6 types
…0 type bodies
…0 operators
…0 index types
…Begin key object existence check 14:51:48
…Completed key object existence check 14:51:48
…Setting DBMS Registry 14:51:48
…Setting DBMS Registry Complete 14:51:48
…Exiting validate 14:51:48
PL/SQL procedure successfully completed.
SQL> SHUTDOWN IMMEDIATE
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL>
SQL>
SQL> startup
ORACLE instance started.
Total System Global Area 2505338880 bytes
Fixed Size 2291472 bytes
Variable Size 671090928 bytes
Database Buffers 1811939328 bytes
Redo Buffers 20017152 bytes
Database mounted.
Database opened.
SQL> show parameter max_string_size
NAME TYPE VALUE
———————————— ———– ——————————
max_string_size string EXTENDED