1. Change the cluster_database parameter so the database can be mounted in exclusive mode, which is required to enable archive logging
alter system set cluster_database=false scope=spfile;
2. Shut down the database using srvctl
srvctl stop database -d ORA
3. Start one of the instances up to the mount state
sqlplus / as sysdba
startup mount
4. Enable archivelog mode
alter database archivelog;
5. Change the cluster_database parameter back to true in the spfile
alter system set cluster_database=true scope=spfile;
6. Shutdown the instance
shutdown immediate
7. Startup the database using srvctl
srvctl start database -d ORA
8. Once the database is back up, you can verify the change by connecting to one of the database instances
sqlplus / as sysdba
archive log list
For example:
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /archlogs/ORA/
Oldest online log sequence 1
Next log sequence to archive 3
Current log sequence 3
Simple as that.
Notes:
You don’t need to set log_archive_dest_1 as it defaults to the flash recovery area (i.e. USE_DB_RECOVERY_FILE_DEST), although you’ll need to make sure it is large enough for your needs.
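For example, a quick check on the recovery area (a sketch; the 20G value is purely illustrative, size it for your own environment):
SQL> show parameter db_recovery_file_dest
SQL> select name, space_limit/1024/1024 limit_mb, space_used/1024/1024 used_mb from v$recovery_file_dest;
SQL> alter system set db_recovery_file_dest_size=20G scope=both sid='*';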
Wednesday, 28 September 2011
Table Partitioning in Oracle
Nowadays enterprises run databases of hundreds of gigabytes in size. These databases are known as Very Large Databases (VLDBs). Since version 8.0, Oracle has provided the feature of table partitioning, i.e. you can partition a table according to some criterion. For example, suppose you have a SALES table with the structure shown in the CREATE TABLE statement below.
Suppose this table contains millions of records, but all the records belong to only four years: 1991, 1992, 1993 and 1994. Most of the time you are concerned with only one year, i.e. you issue queries like the following:
select sum(amt) from sales where year=1991;
select product,sum(amt) from sales where year=1992 Group by product;
Whenever you issue queries like these, Oracle will scan the whole table. If you partition the table by year, performance improves because Oracle will scan only a single partition instead of the whole table.
CREATING PARTITION TABLES
To create a partitioned table, give the following statement:
create table sales (year number(4),
product varchar2(10),
amt number(10,2))
partition by range (year)
(partition p1 values less than (1992) tablespace u1,
partition p2 values less than (1993) tablespace u2,
partition p3 values less than (1994) tablespace u3,
partition p4 values less than (1995) tablespace u4,
partition p5 values less than (MAXVALUE) tablespace u5);
In the above example the sales table is created with 5 partitions. Partition p1 will contain rows for year 1991 and will be stored in tablespace u1; partition p2 will contain rows for year 1992 and will be stored in tablespace u2; similarly for p3 and p4.
In the above example, if you don’t specify partition p5 with VALUES LESS THAN (MAXVALUE), then you will not be able to insert any row with a year of 1995 or above.
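As an illustration (a sketch, assuming the table above had been created without the p5 MAXVALUE partition; the product value is made up), an insert whose year falls outside every defined range fails:
SQL> insert into sales values (1996, 'WIDGET', 100);
ORA-14400: inserted partition key does not map to any partition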
Although it is not required, you can place partitions in different tablespaces. If you do, you can isolate problems due to failures: only the affected partition becomes unavailable while the rest of the partitions remain available.
In the above example the table is partitioned by range.
In Oracle you can partition a table by
Range Partitioning
Hash Partitioning
List Partitioning
Composite Partitioning
Range Partitioning
This type of partitioning is useful when dealing with data that has logical ranges into which it can be distributed;
for example, the value of a year. Performance is best when the data distributes evenly across the ranges.
Hash partitioning
Use hash partitioning if your data does not easily lend itself to range partitioning, but you would like to partition for
performance and manageability reasons. Hash partitioning provides a method of evenly distributing data across a
specified number of partitions. Rows are mapped into partitions based on a hash value of the partitioning key.
The following example creates a hash-partitioned table. The partitioning column is partno; four partitions are created
and assigned system-generated names, and they are placed in four named tablespaces (tab1, tab2, ...).
CREATE TABLE products
(partno NUMBER,
description VARCHAR2 (60))
PARTITION BY HASH (partno)
PARTITIONS 4
STORE IN (tab1, tab2, tab3, tab4);
List Partitioning
Use list partitioning when you require explicit control over how rows map to partitions. You can specify a list of discrete
values for the partitioning column in the description for each partition. This is different from range partitioning, where a
range of values is associated with a partition, and from hash partitioning, where the user has no control of the row to
partition mapping.
List partitioning allows unordered and unrelated sets of data to be grouped and organized together very naturally.
The following example creates a table with list partitioning:
Create table customers (custcode number(5),
Name varchar2(20),
Addr varchar2(100),
City varchar2(20),
Bal number(10,2))
Partition by list (city)
(Partition north_India values ('DELHI','CHANDIGARH'),
Partition east_India values ('KOLKATA','PATNA'),
Partition south_India values ('HYDERABAD','BANGALORE','CHENNAI'),
Partition west_India values ('BOMBAY','GOA'));
When a row is inserted into the above table, Oracle maps the value of the city column to the partition whose value list contains that value, and the row is stored in that partition.
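For instance (a small sketch using the customers table above; the row values are made up):
SQL> insert into customers values (101, 'RAVI', 'MG ROAD', 'DELHI', 5000);
SQL> -- the row lands in north_India because 'DELHI' appears in that partition's value list
SQL> select custcode, city from customers partition (north_India);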
COMPOSITE PARTITIONING
Composite partitioning partitions data using the range method, and within each partition, subpartitions it using
the hash method. Composite partitions are ideal for both historical data and striping, and provide improved
manageability of range partitioning and data placement, as well as the parallelism advantages of hash partitioning.
When creating composite partitions, you specify the following:
Partitioning method: range
Partitioning column(s)
Partition descriptions identifying partition bounds
Subpartitioning method: hash
Subpartitioning column(s)
Number of subpartitions for each partition or descriptions of subpartitions
The following statement creates a composite-partitioned table. In this example, three range partitions are created, each
containing eight subpartitions. Because the subpartitions are not named, system generated names are assigned, but the
STORE IN clause distributes them across the 4 specified tablespaces (tab1, ...,tab4).
CREATE TABLE PRODUCTS (partno NUMBER,
description VARCHAR(32),
costprice NUMBER)
PARTITION BY RANGE (partno)
SUBPARTITION BY HASH(description)
SUBPARTITIONS 8 STORE IN (tab1, tab2, tab3, tab4)
(PARTITION p1 VALUES LESS THAN (100),
PARTITION p2 VALUES LESS THAN (200),
PARTITION p3 VALUES LESS THAN (MAXVALUE));
ALTERING PARTITION TABLES
To add a partition
You can add a new partition at the "high" end (the point after the last existing partition). To add a partition
at the beginning or in the middle of a table, use the SPLIT PARTITION clause.
For example, to add a partition to the sales table, give the following command. (Note that if the table already has a MAXVALUE partition, as in the earlier example, you cannot add a partition above it; split the MAXVALUE partition instead, as shown later.)
alter table sales add partition p6 values less than (1996);
To add a partition to a Hash Partition table give the following command.
Alter table products add partition;
Then Oracle adds a new partition whose name is system generated and it is created in the default tablespace.
To add a partition with a user-defined name and in a specified tablespace, give the following command.
Alter table products add partition p5 tablespace u5;
To add a partition to a List partition table give the following command.
alter table customers add partition central_India
values (‘BHOPAL’,’NAGPUR’);
Any value in the set of literal values that describe the partition(s) being added must not exist in any of the other partitions
of the table.
Coalescing Partitions
Coalescing partitions is a way of reducing the number of partitions in a hash-partitioned table, or the number of subpartitions in a composite-partitioned table. When a hash partition is coalesced, its contents are redistributed into one or more remaining partitions determined by the hash function. The specific partition that is coalesced is selected by Oracle, and is dropped after its contents have been redistributed.
To coalesce a hash partition give the following statement.
Alter table products coalesce partition;
This reduces by one the number of partitions in the table products.
DROPPING PARTITIONS
To drop a partition from a range-, list- or composite-partitioned table, give the following command.
Alter table sales drop partition p5;
Once you have dropped the partition, any global index created on the table becomes unusable. You then have to rebuild the global index by giving the following statement.
Alter index sales_ind rebuild;
To avoid rebuilding indexes after dropping a partition, you can also first delete all the records and then drop
the partition, like this:
Delete from sales where year=1994;
Alter table sales drop partition p4;
This method is most appropriate for small tables, or for large tables when the partition being dropped contains a small percentage of the total data in the table.
Another method of dropping a partition is to give the following statement.
ALTER TABLE sales DROP PARTITION p5 UPDATE GLOBAL INDEXES;
This causes the global index to be updated at the time the partition is dropped.
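You can confirm that the global index stayed usable with a quick dictionary query (a sketch; sales_ind is the index name used earlier in this post):
SQL> select index_name, status from user_indexes where index_name = 'SALES_IND';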
Exchanging a Range, Hash, or List Partition
To exchange a partition of a range, hash, or list-partitioned table with a nonpartitioned table, or the reverse, use the ALTER TABLE ... EXCHANGE PARTITION statement. An example of converting a partition into a nonpartitioned table follows. In this example, table stocks can be range, hash, or list partitioned.
ALTER TABLE stocks
EXCHANGE PARTITION p3 WITH TABLE stock_table_3;
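Note that the nonpartitioned table must already exist with the same column structure as the partitioned table before the exchange. One way to create such an empty table (a sketch) is:
SQL> create table stock_table_3 as select * from stocks where 1=0;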
Merging Partitions
Use the ALTER TABLE ... MERGE PARTITIONS statement to merge the contents of two partitions into one partition. The two original partitions are dropped, as are any corresponding local indexes.
You cannot use this statement for a hash-partitioned table or for hash subpartitions of a composite-partitioned table.
You can only merge two adjacent partitions; you cannot merge non-adjacent partitions.
For example, to merge partitions p2 and p3 into one partition p23, give the following statement.
Alter table sales merge partitions p2, p3 into
partition p23;
Modifying Partitions: Adding Values
Use the MODIFY PARTITION ... ADD VALUES clause of the ALTER TABLE statement to extend the value list of an existing partition. Literal values being added must not have been included in any other partition's value list. The partition value list for any corresponding local index partition is correspondingly extended, and any global index, or global or local index partitions, remain usable.
The following statement adds a new set of cities ('KOCHI', 'MANGALORE') to an existing partition list.
ALTER TABLE customers
MODIFY PARTITION south_india
ADD VALUES ('KOCHI', 'MANGALORE');
Modifying Partitions: Dropping Values
Use the MODIFY PARTITION ... DROP VALUES clause of the ALTER TABLE statement to remove literal values from the value list of an existing partition. The statement is always executed with validation, meaning that it checks to see if any rows exist in the partition that correspond to the set of values being dropped. If any such rows are found then Oracle returns an error message and the operation fails. When necessary, use a DELETE statement to delete corresponding rows from the table before attempting to drop values.
You cannot drop all literal values from the value list describing the partition. You must use the ALTER TABLE ... DROP PARTITION statement instead.
The partition value list for any corresponding local index partition reflects the new value list, and any global index, or global or local index partitions, remain usable.
The statement below drops a set of cities ('KOCHI' and 'MANGALORE') from an existing partition value list.
ALTER TABLE customers
MODIFY PARTITION south_india
DROP VALUES ('KOCHI','MANGALORE');
SPLITTING PARTITIONS
You can split a single partition into two partitions. For example, to split partition p5 of the sales table into two partitions, give the following command.
Alter table sales split partition p5 at (1996) into
(Partition p6,
Partition p7);
TRUNCATING A PARTITION
Truncating a partition will delete all rows from the partition.
To truncate a partition, give the following statement:
Alter table sales truncate partition p5;
LISTING INFORMATION ABOUT PARTITION TABLES
To see which partitioned tables are in your schema, give the following statement:
Select * from user_part_tables;
To see partition-level partitioning information:
Select * from user_tab_partitions;
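For a more readable listing, you can pick a few columns from USER_TAB_PARTITIONS (a sketch against the sales table from this post):
SQL> select table_name, partition_name, high_value, tablespace_name
     from user_tab_partitions
     where table_name = 'SALES'
     order by partition_position;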
Sunday, 25 September 2011
ORA-16005: database requires recovery
For some reason I had to bring one of our test databases into read-only mode, but the following error was generated while doing so: "ORA-16005: database requires recovery".
I'm sure that we haven't made any changes to the database recently. Well, the problem was resolved by shutting the database down cleanly first. Please take a look at the steps below.
SQL> startup force open read only;
ORA-16005: database requires recovery
SQL> shutdown immediate;
SQL> startup restrict;
SQL> shutdown;
SQL> startup mount;
SQL> alter database open read only;
SQL> select open_mode from v$database;
OPEN_MODE
----------
READ ONLY
Conclusion: If you have got the error "ORA-16005: database requires recovery" due to other reasons, I request you to take a look at the metalink note ID: 316154.1
Creating a Recovery Catalog – RMAN in Oracle 10g
As we all know, RMAN maintains metadata about the target database and its backup and
recovery operations in the RMAN repository. The RMAN repository data is always in the control file of the target database. The CONTROL_FILE_RECORD_KEEP_TIME initialization parameter controls how long backup records are kept in the control file before those records are re-used to hold information about more recent backups. By default this parameter is set to 7 days.
Another copy of the RMAN repository data can also be saved in the recovery catalog.
Using a recovery catalog preserves RMAN repository information if the control file is lost, making it much easier to restore and recover following the loss of the control file. (A backup control file may not contain complete information about recent available backups.) The recovery catalog can also store a much more extensive history of your backups than the control file, due to limits on the number of control file records.
In addition to RMAN repository records, the recovery catalog can also hold RMAN stored scripts, sequences of RMAN commands for common backup tasks. Centralized storage of scripts in the recovery catalog can be more convenient than working with command files.
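For example, a stored script might look like this (a sketch; the script name and its body are made up for illustration):
RMAN> CREATE SCRIPT nightly_backup {
        BACKUP DATABASE PLUS ARCHIVELOG;
      }
RMAN> RUN { EXECUTE SCRIPT nightly_backup; }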
Create a new database for RMAN – Recovery catalog database
Note: You can create a small database with minimally sized tablespaces, and you can name it CATDB by convention, to avoid confusion between your production and RMAN catalog databases.
Create a new tablespace in the new database (CATDB)
$ sqlplus /nolog
CONNECT SYS/passwd@catdb AS SYSDBA;
CREATE TABLESPACE rman
DATAFILE '/u02/app/oradata/rman/rman01.dbf' size 100m;
Create the Recovery Catalog Owner in the new database (CATDB)
CREATE USER rman IDENTIFIED BY rman
DEFAULT TABLESPACE rman
QUOTA UNLIMITED ON rman;
Grant the necessary privileges to the schema owner
SQL> GRANT connect, resource, recovery_catalog_owner TO rman;
Here the role "RECOVERY_CATALOG_OWNER" provides the user with all privileges required to maintain and query the recovery catalog.
Creating the Recovery Catalog
Connect to the database which will contain the catalog as the catalog owner. For example:
$ rman catalog rman/passwd@catdb
Recovery Manager: Release 10.2.0.3.0 - Production on Sun Apr 1 14:22:13 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to recovery catalog database
RMAN>
Run the CREATE CATALOG command to create the catalog
RMAN> CREATE CATALOG;
recovery catalog created
Registering a Database in the Recovery Catalog
Connect to the target database and recovery catalog database.
$ ORACLE_SID=prod; export ORACLE_SID
$ rman target / catalog rman/passwd@catdb
Recovery Manager: Release 10.2.0.3.0 - Production on Sun Apr 1 14:25:30 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: PROD (DBID=3677528376)
connected to recovery catalog database
RMAN> REGISTER DATABASE;
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
Make sure that the registration was successful by running REPORT SCHEMA:
RMAN> REPORT SCHEMA;
Report of database schema
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 500 SYSTEM YES /u02/app/oradata/prod/system01.dbf
2 200 UNDOTBS1 YES /u02/app/oradata/prod/undotbs01.dbf
3 325 SYSAUX NO /u02/app/oradata/prod/sysaux01.dbf
4 100 EXAMPLE NO /u02/app/oradata/prod/example01.dbf
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 200 TEMP 200 /u02/app/oradata/prod/temp01.dbf
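With the target registered, backups taken through this connection are recorded in the catalog; for example (a sketch):
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> LIST BACKUP SUMMARY;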
Thursday, 22 September 2011
Enable Automatic Compilation of JSP pages in R12
1. Log in to E-Business Suite and select the System Administrator responsibility
2. Select function AutoConfig (under Oracle Applications Manager)
For each web tier server perform the following:
Click on pencil icon under Edit Parameters
Select tab System
Expand section jtff_server
3. Change value for the entry s_jsp_main_mode from justrun to recompile
Confirm the change by clicking Save button
4. Run AutoConfig to propagate the changes to the configuration files
Verify that $INST_TOP/ora/10.1.3/j2ee/oacore/application-deployments/oacore/html/orion-web.xml reflects the change:
Check the param-name "main_mode" under the init-param variables
It should now be set to "recompile" (see the check after these steps)
5. Restart the web tier services
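A quick way to run that check from the command line might be (a sketch; it simply searches for the parameter changed in step 4):
$ grep -A 2 "main_mode" $INST_TOP/ora/10.1.3/j2ee/oacore/application-deployments/oacore/html/orion-web.xml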
Wednesday, 21 September 2011
Start/Stop Oracle Apps Instance. Is it that simple ??
Start/Stop Oracle Apps Instance.
Just run adstrtal.sh/adstpall.sh, addbctl.sh and addlnctl.sh.
Starting is Simple.
addbctl.sh start
addlnctl.sh start SID
adstrtal.sh apps/password
Stopping is also fairly simple but “small care” needs to be taken to avoid critical issues.
I start my preparation some time before the scheduled downtime, to let the concurrent requests finish. Following are the steps to bring down the middle-tier services.
1. Bring down the concurrent managers before maintenance, say 20 minutes beforehand.
adcmctl.sh stop apps/Password
2. Check if any concurrent request is still running. If so, check what it is doing: which SQL it is running and whether the session is active.
3. Check how long previous executions of the same program took. Is it worth waiting, or better to cancel the request?
4. If it affects the downtime window, log in from the front end, terminate the concurrent program, and make a note of the request id (communicate with the user who submitted it so they can submit it again).
5. Check whether the OS process got terminated or not. If it is still running then it is a runaway process: kill it. I don't like killings but… (see the commands after the query below).
SQL> select oracle_process_id from fnd_concurrent_requests where request_id=&Request_id;
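With the OS process id from that query, checking and killing the runaway process might look like this (a sketch; 12345 is a placeholder pid):
$ ps -ef | grep 12345          # confirm the process is still there and what it is
$ kill 12345                   # polite kill first
$ kill -9 12345                # only if it refuses to die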
For bringing down the database tier:
1. Check whether a hot backup is in progress.
To check, look at the alert log file $ORACLE_HOME/admin/CONTEXT_NAME/bdump/alert_sid.log
and also from sqlplus
SQL> select distinct status from v$backup;
If it returns a row containing “ACTIVE” then a hot backup is in progress.
Wait till it gets over.
Otherwise the next startup will create problems. There are ways and means to overcome that, but why go there? (One such way is sketched below.)
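One such way, if you truly cannot wait (a sketch; understand the recovery implications before using it), is to take the datafiles out of backup mode yourself:
SQL> alter database end backup;
SQL> select distinct status from v$backup;    -- should now return NOT ACTIVE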
2. Conditional: if you are using DR, please take care of the following steps.
a. Check which archive destination is used for DR and make sure its state is enabled.
Running show parameter log_archive_dest will tell you which one; say you are using the 3rd destination, then run:
SQL>alter system set log_archive_dest_state_3=enable;
b.Check if standby is performing managed recovery.
SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;
PROCESS   STATUS
-------   ------------
ARCH CLOSING
ARCH CONNECTED
MRP0 WAIT_FOR_LOG
RFS WRITING
RFS RECEIVING
RFS RECEIVING
c.Cancel managed recovery operations.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
d.Shut down the standby database.
SQL> SHUTDOWN IMMEDIATE;
3. Stop the database.
4. Now stop the listener.
5. If the database still isn't coming down, check the alert log to see what exactly is going on.
6. Check whether any processes with LOCAL=NO are still running. If yes, kill them (see the sketch below).
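Finding and killing the leftover LOCAL=NO shadow processes might look like this (a sketch; replace PROD with your instance name and review the list before killing anything):
$ ps -ef | grep "LOCAL=NO" | grep PROD | grep -v grep
$ ps -ef | grep "LOCAL=NO" | grep PROD | grep -v grep | awk '{print $2}' | xargs kill -9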
Tuesday, 20 September 2011
How to Restore the Controlfile from RMAN Backup.
If you lose all copies of the control file, or they are all corrupted, and you have a backup of the control file, then you need to restore the control file from that backup.
Restore control file to default location:
----------------------------------------------
The default location is defined by the CONTROL_FILES parameter of the pfile/spfile. If you don't specify any location while restoring your control file, the control file will be restored to the location set by the CONTROL_FILES parameter.
RMAN>SET DBID 3386862249
RMAN> RUN {
RESTORE CONTROLFILE FROM AUTOBACKUP;
}
Restore of the Control File from Control File Autobackup
-------------------------------------------------------------
If you are not using a recovery catalog, you must restore your control file from an autobackup. The database must be in the NOMOUNT state, and you have to set the DBID. RMAN uses the autobackup format and DBID to determine where to look for the control file autobackup.
RMAN>SET DBID 3386862249
RMAN> RUN {
SET CONTROLFILE AUTOBACKUP FORMAT
FOR DEVICE TYPE DISK TO 'autobackup_format';
RESTORE CONTROLFILE FROM AUTOBACKUP;
}
Restore of the Control File When Using a Flash Recovery Area
---------------------------------------------------------------------
Suppose you restored a backup of the control file. The backup information in that control file may not be up to date or complete; it may contain only the backup information known at the time the backup was taken. If you use a flash recovery area, RMAN automatically catalogs the backups in the flash recovery area. As a result, the restored control file has a complete and accurate record of all backups in your flash recovery area and any other backups that were known to the control file at the time of the backup.
Restoring a Control File When Using a Recovery Catalog
------------------------------------------------------------------
The recovery catalog contains a complete record of your backups, including backups of the control
file. Therefore, you do not have to specify your DBID or control file autobackup format.
Just use,
$rman TARGET / CATALOG catdb/catdb
RMAN> RESTORE CONTROLFILE;
Restore of the Control File From a Known Location
-----------------------------------------------------
If you know the backup piece that contains the control file, or any control file copy, then you can simply use:
RMAN> RESTORE CONTROLFILE from 'filename';
Restore of the Control File to a New Location
---------------------------------------------------
In the previous cases RMAN restores the control file to the location specified by the CONTROL_FILES parameter of the spfile or pfile.
If you want to restore the control file to another location use,
RMAN>RESTORE CONTROLFILE TO 'give_here_new_location';
You can also change CONTROL_FILES parameter and then perform RESTORE CONTROLFILE to change location.
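For example, pointing CONTROL_FILES at the new location first and then restoring might look like this (a sketch; the path is a placeholder):
SQL> alter system set control_files='/u02/app/oradata/prod/control01.ctl' scope=spfile;
SQL> shutdown immediate;
SQL> startup nomount;
RMAN> restore controlfile from autobackup;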
Limitations When Using a Backup Control File
------------------------------------------------
After you restore your database using a backup control file, you must run RECOVER DATABASE and perform an OPEN RESETLOGS on the database.
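Put together, the sequence after restoring the control file is roughly (a sketch):
RMAN> ALTER DATABASE MOUNT;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;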
How to find Master Node in Oracle RAC
There are two types of masters in Oracle RAC: one is the master node at the Oracle Clusterware level and the other is the master node for a specific resource, block or object.
The node which gets the active state during startup is authorized to be a master node by Cluster Synchronization Service.
Run the below command to find which node is master at Clusterware level
$cat $ORA_CRS_HOME/log/`hostname`/cssd/ocssd* |grep master
or
$ for x in `ls -tr $ORA_CRS_HOME/log/`hostname`/cssd/ocssd* `; do grep -i "master node" $x ; done | tail -1
The automatic OCR backups are taken only by the master node. If the master fails, the OCR backups will be created on the new master. If the master node that holds the OCR backups goes down due to a failure, we cannot recover the OCR from them; that is why Oracle recommends also taking backups using “ocrconfig” and integrating OCR backups into your backup strategy.
Run the below command to find which node is OCR Master and taking automatic backups.
$ ocrconfig -showbackup
rac02 2010/08/30 16:29:52 /oracle/crs/cdata/crs
rac02 2010/08/30 16:29:52 /oracle/crs/cdata/crs
rac02 2010/08/30 12:29:49 /oracle/crs/cdata/crs
rac02 2010/08/30 08:29:46 /oracle/crs/cdata/crs
rac02 2010/08/29 00:29:23 /oracle/crs/cdata/crs
Block-level masters are used by Cache Fusion while transferring blocks. Any node can become the master of a particular block, and you can see which node is acting as master in the V$GES_RESOURCE view (MASTER_NODE column).
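For example, to see roughly how resource mastership is spread across the nodes (a sketch using the view and column mentioned above):
SQL> select master_node, count(*) from v$ges_resource group by master_node order by master_node;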
You can manually remaster an object with oradebug command:
SQL> oradebug lkdebug -m pkey "object_id"
In function `lcdprm':: warning: the `gets' function is dangerous and should not be used. :failed
Many users have experienced patch failures on node 2 or a remote node in a RAC environment, and in some cases were not able to start the RAC database instances on node 2 or the remote nodes (if there are more than 2 nodes).
You may receive the errors/warnings below when you apply a patch:
$ opatch apply
. . . . . . .
You may see the following warnings during “OPatch” execution:
OUI-67212
WARNING for re-link on remote node 'rac02':
.........
/oracle/v10202/bin/oracle/oracle/v10202/lib//libcore10.a(lcd.o)(.text+0xb71): In function `lcdprm':: warning: the `gets' function is dangerous and should not be used. :failed
OPatch Session completed with warnings.
OPatch completed with warnings.
Solution:-
If you are able to start up the database then no action is required; please ignore the message. It is an internal code bug message, a reference for developers to fix the code in future versions. This issue is fixed in 11g.
If you are not able to start up the database, here are two common reasons:
1.Bug 5128575 - RAC install of 10.2.0.2 does not update libknlopt.a on all nodes
Check “Unable to start RAC instance after applying patch” link to fix “Bug 5128575”
2.Re-link failed on remote nodes.
Relink the Oracle libraries again on node 2 or the remote node (see the sketch below).
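Relinking on the remote node might look like this (a sketch; run it there as the Oracle software owner, with ORACLE_HOME set to the home shown in the warning):
$ cd $ORACLE_HOME/bin
$ relink all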
Monday, 19 September 2011
Time difference between the RAC nodes is out of sync
If the time between the RAC nodes is out of sync (time difference > 30 sec) then it will result in one of the following issues:
1. CRS installation failure on remote node
2. RAC node reboots periodically
3. CRS Application status UNKNOWN or OFFLINE
To avoid these issues, configure NTP (Network Time Protocol) on both nodes using any one of the following methods:
1. system-config-time or system-config-date or dateconfig
Type the command system-config-time, system-config-date or dateconfig at a terminal --> click “Network Time Protocol” --> check “Enable Network Time Protocol” and select an NTP server --> click OK
2. date MMDDhhmmYY
Type command date with current date and time
3. /etc/ntp.conf
Update the /etc/ntp.conf file with the time servers' IP addresses and start or restart the ntp daemon (a sketch follows the commands below)
$ /etc/init.d/ntp start
or
$ /etc/rc.d/init.d/ntp start
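For example, on Linux this might look like the following (a sketch; the server address is a placeholder for your own time source):
$ echo "server 192.168.10.1" >> /etc/ntp.conf     # placeholder NTP server address
$ /etc/init.d/ntp restart
$ ntpq -p                                         # verify the node is tracking the server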
Once the RAC nodes are in time sync, you might need to shut down and start up the CRS applications manually.
$ crs_stop ora.testrac1.ons
$ crs_start ora.testrac1.ons
If you encounter CRS-0223: Resource 'ora.testrac1.ons' has placement error, then stop and start all CRS applications to resolve the issue.
$ crs_stop -all
$ crs_start -all
Saturday, 17 September 2011
ORA-00257: archiver error. Connect internal only, until freed.
Cause:
******
The archiver process received an error while trying to archive a redo log. If the problem is not resolved soon, the database will stop executing transactions. The most likely cause of this message is the destination device is out of space to store the redo log file.
Action:
*******
Check archiver trace file for a detailed description of the problem. Also verify that the device specified in the initialization parameter ARCHIVE_LOG_DEST is set up properly for archiving.
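On 10g and later the archive destination is often the flash recovery area, so a quick check and a typical fix might be (a sketch; the 20G value is only illustrative):
SQL> select * from v$recovery_file_dest;               -- compare space_limit with space_used
SQL> alter system set db_recovery_file_dest_size=20G;  -- or back up / delete archived logs with RMAN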
What is PSP process and How to resolve ORA-00490 & ORA-27301 Errors?
The PSP process is an undocumented background process of Oracle. It was introduced in Oracle 10g (specifically in 10.2.0.1) and is called the Process SPawner. It runs as the PSP0 background process of the Oracle instance, and its job is to create and manage other Oracle background processes.
When the PSP process itself terminates due to any error, the whole instance crashes with the ORA-00490 error, as in the following message.
ORA-00490: PSP process terminated with error
When the operating system encounters an error such as insufficient space in the temp or swap area, or insufficient system resources, Oracle throws the following errors.
ORA-27300: OS system dependent operation:fork failed with status: 12
ORA-27301: OS failure message: Not enough space
ORA-27302: failure occurred at: skgpspawn3
At the same time, PMON terminates the instance with the following error, which includes the process id of the PMON process, because the Oracle processes of the instance can no longer be managed.
PMON: terminating instance due to error 490
Instance terminated by PMON, pid = 20094
Cause
*****
ORA-00490: Process SPawner abnormally terminated with error.
The root cause of this error is that there is no free space available in the system swap area for spawning new Oracle processes. Because of this, the Process SPawner (PSP0) terminates (reporting ORA-00490), as it is unable to manage or create Oracle processes. As a result, the Oracle instance is crashed by the PMON process with errorstack 490 (which points to ORA-00490). A lack of other system resources can lead to the same situation.
Solution:
*********
The first and main solution is to check your swap space and increase the swap area on the system, because when swap runs out Oracle is unable to create new processes and the PSP0 Process SPawner is unable to manage them. The second solution is to check the "ulimit" settings for the Oracle user; "ulimit" controls the user's shell resource limits, and if the maximum is reached the PSP0 process also becomes unable to manage other Oracle processes. Increase the "ulimit" settings for the Oracle user. (A quick check is sketched below.)
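On Linux the quick checks might look like this (a sketch; the equivalent commands differ on other platforms, e.g. swap -l on Solaris):
$ free -m        # swap total vs. used
$ ulimit -u      # maximum user processes for the oracle user's shell
$ ulimit -a      # all shell limits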
Thursday, 15 September 2011
Patch 12419353 - 11.2.0.2.3 GI Patch Set Update (Includes Database PSU 11.2.0.2.3)
1 Patch Information
GI Patch Set Update (PSU) patches are cumulative. That is, the content of all previous PSUs (if any) is included in the latest GI PSU 11.2.0.2.3 patch.
Table 1 describes installation types and security content. For each installation type, it indicates the most recent PSU that includes new security fixes pertinent to that installation type. If there are no security fixes to be applied to an installation type, then "None" is indicated. If a specific PSU is listed, then apply that or any later PSU patch to be current with security fixes.
Table 1 Installation Types and Security Content
Installation Type: Latest PSU with Security Fixes
Server homes: 11.2.0.2.3 GI PSU
Client-Only Installations: None
Instant Client Installations: None
(The Instant Client installation is not the same as the client-only Installation. For additional information about Instant Client installations, see Oracle Database Concepts.)
2 Patch Installation and Deinstallation
This section includes the following sections:
Section 2.1, "Patch Installation Prerequisites"
Section 2.2, "OPatch Automation for GI"
Section 2.3, "One-off Patch Conflict Detection and Resolution"
Section 2.4, "Patch Installation"
Section 2.5, "Patch Post-Installation Instructions"
Section 2.6, "Patch Post-Installation Instructions for Databases Created or Upgraded after Installation of PSU 11.2.0.2.3 in the Oracle Home"
Section 2.7, "Patch Deinstallation"
Section 2.8, "Unmounting ACFS File Systems"
Section 2.9, "Mounting ACFS File Systems"
Section 2.10, "Patch Post-Deinstallation Instructions for a RAC Environment"
2.1 Patch Installation Prerequisites
You must satisfy the conditions in the following sections before applying the patch:
OPatch Utility Information
OCM Configuration
Validation of Oracle Inventory
Downloading OPatch
Unzipping the GI PSU 11.2.0.2.3 Patch
2.1.1 OPatch Utility Information
You must use the OPatch utility version 11.2.0.1.5 or later to apply this patch. Oracle recommends that you use the latest released OPatch for 11.2 releases, which is available for download from My Oracle Support patch 6880880 by selecting the ARU link for the 11.2.0.0.0 release. It is recommended that you download the OPatch utility and the GI PSU 11.2.0.2.3 patch to a shared location so they can be accessed from any node in the cluster when applying the patch on each node.
Note:
When patching the GI Home, a shared location on ACFS only needs to be unmounted on the node where the GI Home is being patched.
The new OPatch utility should be updated in all the Oracle RAC database homes and the GI home that are being patched. To update OPatch, use the following instructions.
Download the OPatch utility to a temporary directory.
For each Oracle RAC database home and the GI home being patched, run the following commands as the home owner to extract the OPatch utility.
unzip -d
/OPatch/opatch version
The version output of the previous command should be 11.2.0.1.5 or later.
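For example, if the downloaded OPatch zip is /tmp/p6880880_112000_Linux-x86-64.zip and the Grid home is /u01/app/11.2.0/grid (both names are illustrative and will differ in your environment), the home owner would run:
% unzip /tmp/p6880880_112000_Linux-x86-64.zip -d /u01/app/11.2.0/grid
% /u01/app/11.2.0/grid/OPatch/opatch version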
For information about OPatch documentation, including any known issues, see My Oracle Support Note 293369.1 OPatch documentation list.
2.1.2 OCM Configuration
The OPatch utility will prompt for your OCM (Oracle Configuration Manager) response file when it is run. Enter the complete path of the OCM response file if you have already created one in your environment.
If you do not have an OCM response file (ocm.rsp) and you wish to use one during the patch application, run the following command to create it.
As the Grid home owner execute:
%/OPatch/ocm/bin/emocmrsp
You can also invoke opatch auto with the -ocmrf option to run opatch auto in silent mode.
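For example, assuming a Grid home of /u01/app/11.2.0/grid and a patch unzipped under /u01/oracle/patches (illustrative paths only), you could generate the response file in a working directory and later pass it to opatch auto:
% cd /tmp
% /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp
# opatch auto /u01/oracle/patches -ocmrf /tmp/ocm.rsp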
2.1.3 Validation of Oracle Inventory
Before beginning patch application, check the consistency of inventory information for GI home and each database home to be patched. Run the following command as respective Oracle home owner to check the consistency.
%/OPatch/opatch lsinventory -detail -oh
If this command succeeds, it lists the Oracle components that are installed in the home. The command will fail if the Oracle Inventory is not set up properly. If this happens, contact Oracle Support Services for assistance.
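For example, for a database home of /u01/app/oracle/product/11.2.0/db_1 (an illustrative path), the home owner would run:
% /u01/app/oracle/product/11.2.0/db_1/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/11.2.0/db_1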
2.1.4 Downloading OPatch
If you have not already done so, download OPatch 11.2.0.1.5 or later, as explained in Section 2.1.1, "OPatch Utility Information".
2.1.5 Unzipping the GI PSU 11.2.0.2.3 Patch
Applying the patch requires you to explicitly run the 'opatch auto' command on each node of the Oracle clusterware, so it is recommended that you download and unzip the GI PSU 11.2.0.2.3 patch in a shared location that can be accessed from any node in the cluster, and then execute the unzip command as the Grid home owner.
Note:
Do not unzip the GI PSU 11.2.0.2.3 patch in the top level /tmp directory.
The unzipped patch location should have read permission for ORA_INSTALL group in order to patch Oracle homes owned by different owners. The ORA_INSTALL group is the primary group of the user who owns the GI home or the group owner of the Oracle central inventory.
(In this readme, the downloaded patch location directory is referred to as the unzipped patch location.)
%cd
Unzip the GI PSU 11.2.0.2.3 patch as grid home owner in a shared location. As the Grid home owner execute:
%unzip p12419353_112020_Linux.zip
For example, if the shared patch location in your environment is /u01/oracle/patches, enter the following commands as the Grid home owner:
%cd /u01/oracle/patches
%unzip p12419353_112020_Linux.zip
2.2 OPatch Automation for GI
The OPatch utility automates patch application for the Oracle Grid Infrastructure (GI) home and the Oracle RAC database homes. It operates by querying existing configurations and automating the steps required to patch each Oracle RAC database home of the same version and the GI home.
The utility must be executed by an operating system (OS) user with root privileges (usually the user root), and it must be executed on each node in the cluster if the GI home or Oracle RAC database home is on non-shared storage. The utility should not be run in parallel on the cluster nodes.
Depending on command line options specified, one invocation of Opatch can patch the GI home, one or more Oracle RAC database homes, or both GI and Oracle RAC database homes of the same Oracle release version. You can also roll back the patch with the same selectivity.
Add the directory containing the opatch to the $PATH environment variable. For example:
export PATH=$PATH:/OPatch
To patch GI home and all Oracle RAC database homes of the same version:
#opatch auto
To patch only the GI home:
#opatch auto -oh
To patch one or more Oracle RAC database homes:
#opatch auto -oh ,
To roll back the patch from the GI home and each Oracle RAC database home:
#opatch auto -rollback
To roll back the patch from the GI home:
#opatch auto -oh -rollback
To roll back the patch from the Oracle RAC database home:
#opatch auto -oh -rollback
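As a sketch only, assuming the patch was unzipped to /u01/oracle/patches, the GI home is /u01/app/11.2.0/grid, and a database home is /u01/app/oracle/product/11.2.0/db_1 (all paths illustrative), typical invocations as root would look like:
# opatch auto /u01/oracle/patches -oh /u01/app/11.2.0/grid
# opatch auto /u01/oracle/patches -oh /u01/app/oracle/product/11.2.0/db_1
# opatch auto /u01/oracle/patches -oh /u01/app/oracle/product/11.2.0/db_1 -rollback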
For more information about opatch auto, see My Oracle Support Note 293369.1 OPatch documentation list.
For detailed patch installation instructions, see Section 2.4, "Patch Installation".
2.3 One-off Patch Conflict Detection and Resolution
For an introduction to the PSU one-off patch concepts, see "Patch Set Updates Patch Conflict Resolution" in My Oracle Support Note 854428.1 Patch Set Updates for Oracle Products.
The fastest and easiest way to determine whether you have one-off patches in the Oracle home that conflict with the PSU, and to get the necessary conflict resolution patches, is to use the Patch Recommendations and Patch Plans features on the Patches & Updates tab in My Oracle Support. These features work in conjunction with the My Oracle Support Configuration Manager. Recorded training sessions on these features can be found in Note 603505.1.
However, if you are not using My Oracle Support Patch Plans, follow these steps:
Determine whether any currently installed one-off patches conflict with the PSU patch as follows:
In the unzipped patch directory described in Section 2.1.5, run the following command (a worked example follows these steps).
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./12419331
The report will indicate the patches that conflict with PSU 12419331 and the patches for which PSU 12419331 is a superset.
Note that Oracle proactively provides PSU 11.2.0.2.3 one-off patches for common conflicts.
Use My Oracle Support Note 1061295.1 Patch Set Updates - One-off Patch Conflict Resolution to determine, for each conflicting patch, whether a conflict resolution patch is already available, and if you need to request a new conflict resolution patch or if the conflict may be ignored.
When all the one-off patches that you have requested are available at My Oracle Support, proceed with Section 2.4, "Patch Installation".
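For example, if the patch was unzipped to /u01/oracle/patches and the home being checked is /u01/app/oracle/product/11.2.0/db_1 (illustrative paths), the conflict check could be run as the home owner like this:
% cd /u01/oracle/patches
% export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
% $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./12419331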
2.4 Patch Installation
This section will guide you through the steps required to apply this GI PSU 11.2.0.2.3 patch to RAC database homes, the Grid home, or all relevant homes on the cluster.
Note:
When patching the GI Home, a shared location on ACFS only needs to be unmounted on the node where the GI Home is being patched.
The patch instructions will differ based on the configuration of the Grid infrastructure and the Oracle RAC database homes.
The patch installations will also differ based on following aspects of your existing configuration:
GI home is shared or non-shared
The Oracle RAC database home is shared or non-shared
The Oracle RAC database home software is on ACFS or non-ACFS file systems.
Patch all the Oracle RAC database and the GI homes together, or patch each home individually
You must choose the most appropriate case that is suitable based on the existing configurations and your patch intention.
Note:
You must stop the EM agent processes running from the database home, prior to patching the Oracle RAC database or GI Home. Execute the following command on the node to be patched.
As the Oracle RAC database home owner execute:
%/bin/emctl stop dbconsole
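For example, with a database home of /u01/app/oracle/product/11.2.0/db_1 (an illustrative path), the command would be:
% /u01/app/oracle/product/11.2.0/db_1/bin/emctl stop dbconsole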
Case 1: Patching Oracle RAC Database Homes and the GI Home Together
Case 2: Patching Oracle RAC Database Homes
Case 3: Patching GI Home Alone
Case 4: Patching Oracle Restart Home
Case 5: Patching a Software Only GI Home Installation
Case 6: Patching a Software Only Oracle RAC Home Installation
Case 1: Patching Oracle RAC Database Homes and the GI Home Together
Follow the instructions in this section if you would like to patch all the Oracle RAC database homes of release version 11.2.0.2 and the 11.2.0.2 GI home.
Case 1.1: GI Home Is Shared
Follow these instructions in this section if the GI home is shared.
Note:
Patching a shared GI home requires shutdown of Oracle GI stack on all the remote nodes in the cluster. This also means you need to stop all Oracle RAC databases which depend on GI stack, ASM for data files, or an ACFS file system.
Make sure to stop the Oracle databases running from the Oracle RAC database homes.
As Oracle database home owner:
/bin/srvctl stop database -d
ORACLE_HOME: Complete path of the Oracle database home.
Make sure the ACFS file systems are unmounted on all the nodes. Use instructions in Section 2.8 for unmounting ACFS file systems.
As root user, execute the following on all the remote nodes to stop the CRS stack:
/bin/crsctl stop crs
Patch the GI home.
On local node, as root user, execute the following command:
#opatch auto -oh
Start the Oracle GI stack on all the remote nodes.
As root user execute:
#/bin/crsctl start crs
Mount ACFS file systems. See Section 2.9.
For each Oracle RAC database home, execute the following command on each node if the database home software is not shared.
For each database home execute the following as root user:
#opatch auto -oh
ORACLE_HOME: Complete path of Oracle database home.
Note:
The previous command should be executed only once on any one node if the database home is shared.
Restart the Oracle databases that you have previously stopped in step 1.
As the database home owner execute:
/bin/srvctl start database -d
Case 1.2: GI Home Is Not Shared
Case 1.2.1: ACFS File System Is Not Configured and Database Homes Are Not Shared
Follow these instructions in this section if the GI home is not shared and none of the Oracle database homes is shared.
As root user execute the following command on each node of the cluster:
#opatch auto
Case 1.2.2A: Patching the GI Home and Database Home Together, the GI Home Is Not Shared, the Database Home Is Shared on ACFS
From the Oracle database home, make sure to stop the Oracle RAC databases running on all nodes.
As the database home owner execute:
/bin/srvctl stop database -d
On the 1st node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.
On the 1st node, apply the patch to the GI Home using the opatch auto command.
As root user, execute the following command:
opatch auto -oh
On the 1st node, remount ACFS file systems. See Section 2.9 for instructions.
On the 1st node, apply the patch to the Database home using the opatch auto command. This operation will patch the Database home across the cluster given that it is a shared ACFS home.
As root user, execute the following command:
opatch auto -oh
On the 1st node only, restart the Oracle database which you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d -n
On the 2nd (next) node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.
On the 2nd node, apply the patch to GI Home using the opatch auto command.
As root user, execute the following command:
opatch auto -oh
On the 2nd node, running the opatch auto command in Step 8 will restart the stack.
On the 2nd node, remount ACFS file systems. See Section 2.9 for instructions.
On the 2nd node only, restart the Oracle database which you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d -n
Repeat Steps 7 through 10 for all remaining nodes of the cluster.
Case 1.2.2B: Patching the GI Home and the Database Home Together, the GI Home Is Not Shared, the Database Home Is Not Shared
For each node, perform the following steps:
On the local node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.
On the local node, apply the patch to the GI home and to the Database home.
As root user, execute the following command:
opatch auto
This operation will patch both the CRS home and the Database home.
The opatch auto command will restart the stack and the database on the local node.
Repeat Steps 1 through 3 for all remaining nodes of the cluster.
Case 2: Patching Oracle RAC Database Homes
You should use the following instructions if you prefer to patch Oracle RAC databases alone with this GI PSU 11.2.0.2.3 patch.
Case 2.1: Non-Shared Oracle RAC Database Homes
Execute the following command on each node of the cluster.
As root user execute:
#opatch auto -oh
Case 2.2: Shared Oracle RAC Database Homes
Make sure to stop the databases running from the Oracle RAC database homes that you would like to patch. Execute the following command to stop each database.
As Oracle database home owner execute:
/bin/srvctl stop database -d
As root user execute only on the local node.
#opatch auto -oh
Restart the Oracle databases that were previously stopped in Step 1. Execute the following command for each database.
As Oracle database home owner execute:
/bin/srvctl start database -d
Case 3: Patching GI Home Alone
You should use the following instructions if you prefer to patch Oracle GI (Grid Infrastructure) home alone with this GI PSU 11.2.0.2.3 patch.
Case 3.1: Shared GI Home
Follow these instructions in this section if the GI home is shared.
Note:
Patching a shared GI home requires shutdown of Oracle GI stack on all the remote nodes in the cluster. This also means you need to stop all Oracle RAC databases that depend on the GI stack, ASM for data file, or ACFS file system for database software.
Make sure to stop the Oracle databases running from the Oracle RAC database homes.
As Oracle database home owner:
/bin/srvctl stop database -d
Make sure the ACFS file systems are unmounted on all the nodes. Use instructions in Section 2.8 for unmounting ACFS file systems.
As root user, execute the following on all the remote nodes to stop the CRS stack:
/bin/crsctl stop crs
Execute the following command on the local node
As root user execute:
#opatch auto -oh
Start the Oracle GI stack on all the remote nodes.
As root user execute:
#/bin/crsctl start crs
Mount ACFS file systems. See Section 2.9.
Restart the Oracle databases that you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d
Case 3.2: Non-Shared GI Home
If the GI home is not shared then use the following instructions to patch the home.
Case 3.2.1: ACFS File System Is Not Configured
Follow these instructions in this section if the GI home is not shared and none of the Oracle database homes use ACFS file system for its software files.
Execute the following on each node of the cluster.
As root user execute:
#opatch auto -oh
Case 3.2.2: ACFS File System Is Configured
Repeat Steps 1 through 5 for each node in the cluster:
From the Oracle database home, stop the Oracle RAC database running on that node.
As the database home owner execute:
/bin/srvctl stop instance -d -n
Unmount all ACFS filesystems on this node using instructions in Section 2.8.
Apply the patch to the GI home on that node using the opatch auto command.
Execute the following command on that node in the cluster.
As root user execute:
#opatch auto -oh
Remount ACFS file systems on that node. See Section 2.9 for instructions.
Restart the Oracle database on that node that you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d
Case 4: Patching Oracle Restart Home
You must keep the Oracle Restart stack up and running when you are patching. Use the following instructions to patch Oracle Restart home.
As root user execute:
#opatch auto -oh
Case 5: Patching a Software Only GI Home Installation
Apply the CRS patch using the following commands.
As the GI home owner execute:
$/OPatch/opatch napply -oh -local /12419353
As the GI home owner execute:
$/OPatch/opatch napply -oh -local /12419331
Case 6: Patching a Software Only Oracle RAC Home Installation
Run the pre script for DB component of the patch.
As the database home owner execute:
$/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome
Apply the DB patch.
As the database home owner execute:
$/OPatch/opatch napply -oh -local /12419353/custom/server/12419353
$/OPatch/opatch napply -oh -local /12419331
Run the post script for DB component of the patch.
As the database home owner execute:
$/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome
2.5 Patch Post-Installation Instructions
After installing the patch, perform the following actions:
Apply conflict resolution patches as explained in Section 2.5.1.
Load modified SQL files into the database, as explained in Section 2.5.2.
Upgrade Oracle Recovery Manager catalog, as explained in Section 2.5.3.
2.5.1 Applying Conflict Resolution Patches
Apply the patch conflict resolution one-off patches that were determined to be needed when you performed the steps in Section 2.3, "One-off Patch Conflict Detection and Resolution".
2.5.2 Loading Modified SQL Files into the Database
The following steps load modified SQL files into the database. For a RAC environment, perform these steps on only one node.
For each database instance running on the Oracle home being patched, connect to the database using SQL*Plus. Connect as SYSDBA and run the catbundle.sql script as follows:
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT
The catbundle.sql execution is reflected in the dba_registry_history view by a row associated with bundle series PSU.
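As a quick sanity check (not part of the official steps), you can query that view, for example:
SQL> select action_time, action, version, bundle_series, comments from dba_registry_history order by action_time;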
For information about the catbundle.sql script, see My Oracle Support Note 605795.1 Introduction to Oracle Database catbundle.sql.
Check the following log files in $ORACLE_BASE/cfgtoollogs/catbundle for any errors:
catbundle_PSU__APPLY_.log
catbundle_PSU__GENERATE_.log
where TIMESTAMP is of the form YYYYMMMDD_HH_MM_SS. If there are errors, refer to Section 3, "Known Issues".
2.5.3 Upgrade Oracle Recovery Manager Catalog
If you are using the Oracle Recovery Manager, the catalog needs to be upgraded. Enter the following command to upgrade it:
$ rman catalog username/password@alias
RMAN> UPGRADE CATALOG;
2.6 Patch Post-Installation Instructions for Databases Created or Upgraded after Installation of PSU 11.2.0.2.3 in the Oracle Home
These instructions are for a database that is created or upgraded after the installation of PSU 11.2.0.2.3.
You must execute the steps in Section 2.5.2, "Loading Modified SQL Files into the Database" for any new database only if it was created by any of the following methods:
Using DBCA (Database Configuration Assistant) to select a sample database (General, Data Warehouse, Transaction Processing)
Using a script that was created by DBCA that creates a database from a sample database
There are no actions required for databases that have been upgraded.
2.7 Patch Deinstallation
You can use the following steps to roll back the GI and Database PSU 11.2.0.2.3 patches. Choose the instructions that apply to your needs.
Note:
You must stop the EM agent processes running from the database home, prior to rolling back the patch from Oracle RAC database or GI Home. Execute the following command on the node to be patched.
As the Oracle RAC database home owner execute:
%/bin/emctl stop dbconsole
Case 1: Rolling Back the Oracle RAC Database Homes and GI Homes Together
Case 2: Rolling Back from the Oracle RAC Database Homes
Case 3: Rolling Back from the GI Home Alone
Case 4: Rolling Back the Patch from Oracle Restart Home
Case 5: Rolling Back the Patch from a Software Only GI Home Installation
Case 6: Rolling Back the Patch from a Software Only Oracle RAC Home Installation
Case 1: Rolling Back the Oracle RAC Database Homes and GI Homes Together
Follow the instructions in this section if you would like to roll back the patch from all the Oracle RAC database homes of release version 11.2.0.2 and the 11.2.0.2 GI home.
Case 1.1 GI Home Is Shared
Follow these instructions in this section if the GI home is shared.
Note:
An operation on a shared GI home requires shutdown of the Oracle GI stack on all the remote nodes in the cluster. This also means you need to stop all Oracle RAC databases that depend on the GI stack, ASM for data file, or ACFS file system.
Make sure to stop the Oracle databases running from the Oracle RAC database homes.
As Oracle database home owner:
/bin/srvctl stop database -d
ORACLE_HOME: Complete path of the Oracle database home.
Make sure the ACFS file systems are unmounted on all the nodes. Use instructions in Section 2.8 for un-mounting ACFS file systems.
As root user, execute the following on all the remote nodes to stop the CRS stack:
/bin/crsctl stop crs
Rollback the patch from the GI home.
On local node, as root user, execute the following command:
#opatch auto -oh -rollback
Start the Oracle GI stack on all the remote nodes.
As root user execute:
#/bin/crsctl start crs
Mount ACFS file systems. See Section 2.9.
For each Oracle RAC database home, execute the following command on each node if the database home software is not shared.
For each database home, execute the following as root user:
#opatch auto -oh -rollback
ORACLE_HOME: Complete path of Oracle database home.
Note:
The previous command should be executed only once on any one node if the database home is shared.
Restart the Oracle databases that you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d
Case 1.2: GI Home Is Not Shared
Case 1.2.1: ACFS File System Is Not Configured and Database Homes Are Not Shared
Follow these instructions in this section if the GI home is not shared and none of the Oracle database homes is shared.
As root user, execute the following command on each node of the cluster.
#opatch auto -rollback
Case 1.2.2A: Rolling Back the Patch from the GI Home and Database Home Together, the GI Home Is Not Shared, the Database Home Is Shared on ACFS
From the Oracle database home, make sure to stop the Oracle RAC databases running on all nodes.
As the database home owner execute:
/bin/srvctl stop database -d
On the 1st node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.
On the 1st node, roll back the patch from the GI Home using the opatch auto command.
As root user, execute the following command:
opatch auto -oh -rollback
On the 1st node, remount ACFS file systems. See Section 2.9 for instructions.
On the 1st node, roll back the patch from the Database home using the opatch auto command. This operation will roll back the patch from the Database home across the cluster given that it is a shared ACFS home.
As root user, execute the following command:
opatch auto -oh -rollback
On the 1st node only, restart the Oracle database which you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d -n
On the 2nd (next) node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.
On the 2nd node, roll back the patch from the GI Home using the opatch auto command.
As root user, execute the following command:
opatch auto -oh -rollback
On the 2nd node, running the opatch auto command in Step 8 will restart the stack.
On the 2nd node, remount ACFS file systems. See Section 2.9 for instructions.
On the 2nd node only, restart the Oracle database which you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d -n
Repeat Steps 7 through 10 for all remaining nodes of the cluster.
Case 1.2.2B: Rolling Back the Patch from the GI Home and the Database Home Together, the GI Home Is Not Shared, the Database Home Is Not Shared
For each node, perform the following steps:
On the local node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.
On the local node, roll back the patch from the GI home and from the Database home.
As root user, execute the following command:
opatch auto -rollback
This operation will roll back the patch from both the CRS home and the Database home.
The opatch auto command will restart the stack and the database on the local node.
Repeat Steps 1 through 3 for all remaining nodes of the cluster.
Case 2: Rolling Back from the Oracle RAC Database Homes
You should use the following instructions if you prefer to roll back the patch from the Oracle RAC database homes alone.
Case 2.1: Non-Shared Oracle RAC Database Homes
Execute the following command on each node of the cluster.
As root user execute:
#opatch auto -oh -rollback
Case 2.2: Shared Oracle RAC Database Homes
Make sure to stop the databases running from the Oracle RAC database homes from which you would like to roll back the patch. Execute the following command to stop each database.
As Oracle database home owner execute:
/bin/srvctl stop database -d
As root user execute only on the local node.
#opatch auto -oh -rollback
Restart the Oracle databases that were previously stopped in Step 1. Execute the following command for each database.
As Oracle database home owner execute:
/bin/srvctl start database -d
Case 3: Rolling Back from the GI Home Alone
You should use the following instructions if you prefer to roll back the patch from the Oracle GI (Grid Infrastructure) home alone.
Case 3.1 Shared GI Home
Follow these instructions in this section if the GI home is shared.
Note:
An operation in a shared GI home requires shutdown of Oracle GI stack on all the remote nodes in the cluster. This also means you need to stop all Oracle RAC databases that depend on the GI stack, ASM for data file, or ACFS file system for database software.
Make sure to stop the Oracle databases running from the Oracle RAC database homes.
As Oracle database home owner:
/bin/srvctl stop database -d
Make sure the ACFS file systems are unmounted on all the nodes. Use instructions in Section 2.8 for unmounting ACFS file systems.
As root user, execute the following on all the remote nodes to stop the CRS stack:
/bin/crsctl stop crs
Execute the following command on the local node.
As root user execute:
#opatch auto -oh -rollback
Start the Oracle GI stack on all the remote nodes.
As root user execute:
#/bin/crsctl start crs
Mount ACFS file systems. See Section 2.9.
Restart the Oracle databases that you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d
Case 3.2: Non-Shared GI Home
If the GI home is not shared, then use the following instructions to roll back the patch from the GI home.
Case 3.2.1: ACFS File System Is Not Configured
Follow these instructions in this section if the GI home is not shared and none of the Oracle database homes is shared.
Execute the following on each node of the cluster.
As root user execute:
#opatch auto -oh -rollback
Case 3.2.2: ACFS File System Is Configured
Repeat Steps 1 through 5 for each node in the cluster:
From the Oracle database home, stop the Oracle RAC database running on that node.
As the database home owner execute:
/bin/srvctl stop instance -d -n
Make sure the ACFS file systems are unmounted on that node. Use instructions in Section 2.8 for unmounting ACFS file systems.
Roll back the patch from the GI home on that node using the opatch auto command.
Execute the following command on that node in the cluster.
As root user execute:
#opatch auto -oh -rollback
Remount ACFS file systems on that node. See Section 2.9 for instructions.
Restart the Oracle database on that node that you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start instance -d -n
Case 4: Rolling Back the Patch from Oracle Restart Home
You must keep the Oracle Restart stack up and running when you are rolling back the patch from the Oracle Restart home. Use the following instructions to roll back the patch from the Oracle Restart home.
As root user execute:
#opatch auto -oh -rollback
Case 5: Rolling Back the Patch from a Software Only GI Home Installation
Roll back the CRS patch.
As the GI home owner execute:
$/OPatch/opatch rollback -local -id 12419353 -oh
$/OPatch/opatch rollback -local -id 12419331 -oh
Case 6: Rolling Back the Patch from a Software Only Oracle RAC Home Installation
Run the pre script for DB component of the patch.
As the database home owner execute:
$/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome
Roll back the DB patch from the database home.
As the database home owner execute:
$/OPatch/opatch rollback -local -id 12419353 -oh
$/OPatch/opatch rollback -local -id 12419331 -oh
Run the post script for DB component of the patch.
As the database home owner execute:
$/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome -n
2.8 Unmounting ACFS File Systems
If the ACFS file system is not used for Oracle Database software and is registered in the ACFS registry, perform the following steps.
Execute the following command to find all ACFS file system mount points.
As the root user execute:
#/sbin/acfsutil registry
Unmount ACFS file systems found in Step 1.
As the root user execute:
# /bin/umount
Note:
On Solaris operating system use: /sbin/umount.
On AIX operating system, use: /etc/umount.
Verify that the ACFS file systems are unmounted. Execute the following command to verify.
As the root user execute:
#/sbin/acfsutil info fs
The previous command should return the following message if there are no ACFS file systems mounted.
"acfsutil info fs: ACFS-03036: no mounted ACFS file systems"
2.9 Mounting ACFS File Systems
If the ACFS file system is used by Oracle database software, then perform Steps 1 and 2.
Execute the following command to find the names of the CRS managed ACFS file system resource.
As root user execute:
# crsctl stat res -w "TYPE = ora.acfs.type"
Execute the following command to start and mount the CRS managed ACFS file system resource with the resource name found from Step 1.
As root user execute:
#crsctl start res -n
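As an illustration (the resource and node names below are hypothetical), if Step 1 reports a resource named ora.data.datavol.acfs, it could be started on node racnode1 with:
# crsctl start resource ora.data.datavol.acfs -n racnode1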
If the ACFS file system is not used for Oracle Database software and is registered in the ACFS registry, these file systems should get automatically mounted when the CRS stack comes up. Perform Steps 1 and 2 if it is not already mounted.
Execute the following command to find all ACFS file system mount points.
As the root user execute:
#/sbin/acfsutil registry
Mount ACFS file systems found in Step 1.
As the root user execute:
# /bin/mount
Note:
On Solaris operating system use: /sbin/mount.
On AIX operating system, use: /etc/mount.
2.10 Patch Post-Deinstallation Instructions for a RAC Environment
Follow these steps only on the node for which the steps in Section 2.5.2, "Loading Modified SQL Files into the Database" were executed during the patch application:
Start all database instances running from the Oracle home. (For more information, see Oracle Database Administrator's Guide.)
For each database instance running out of the ORACLE_HOME, connect to the database using SQL*Plus as SYSDBA and run the rollback script as follows:
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle_PSU__ROLLBACK.sql
SQL> QUIT
In a RAC environment, the name of the rollback script will have the format catbundle_PSU__ROLLBACK.sql.
Check the log file for any errors. The log file is found in $ORACLE_BASE/cfgtoollogs/catbundle and is named catbundle_PSU__ROLLBACK_.log where TIMESTAMP is of the form YYYYMMMDD_HH_MM_SS. If there are errors, refer to Section 3, "Known Issues".
All other instances can be started and accessed as usual while you are executing the deinstallation steps.
3 Known Issues
For information about OPatch issues, see My Oracle Support Note 293369.1 OPatch documentation list.
For issues documented after the release of this PSU, see My Oracle Support Note 1272288.1 11.2.0.2.X Grid Infrastructure Bundle/PSU Known Issues.
Other known issues are as follows.
Issue 1
Known Issues for Opatch Auto
Bug 10339274 - 'OPATCH AUTO' FAILED TO APPLY 11202 PATCH ON EXADATA RAC CLUSTER WITH 11201 RAC
Bug 10339251 - MIN OPATCH ISSUE FOR DB HOME SETUP IN EXADATA RAC CLUSTER USING 'OPATCH AUTO'
These two issues are observed in an environment where lower version database homes coexist with 11202 clusterware and database homes and opatch auto is used to apply the 11202 GIBundle.
Workaround:
Apply the 11202 GIBundle to the 11202 GI Home and Oracle RAC database home as follows:
#opatch auto -oh
#opatch auto -oh <11202 ORACLE_HOME1_PATH>,<11202 ORACLE_HOME2_PATH>
Issue 2
Bug 10226210 (11,11.2.0.2GIBTWO) 11202_GI_OPATCH_AUTO: OPATCH TAKES MORE STORAGE SPACE AFTER ROLLBACK SUCCEED
Workaround:
Execute the following command as Oracle home owner after a successful rollback to recover the storage used by the backup operation:
opatch util cleanup -silent
Issue 3
Bug 11799240 - 11202_GIBTWO:STEP 3 FAILED,CAN'T ACCESS THE NEXT OUI PAGE DURING SETTING UP GI
After applying GI PSU 11.2.0.2.3, the Grid Infrastructure Configuration Wizard fails with error INS-42017 when choosing the nodes of the cluster.
Workaround:
Apply the one off patch for the bug 10055663.
Issue 4
Bug 11856928 - 11202_GIBTWO_HPI:PATCH SUC CRS START FAIL FOR PERM DENY TO MKDIR $EXTERNAL_ORACL
This issue is seen only on the HPI platform when the opatch auto command is invoked from a directory to which the root user does not have write permission.
Workaround:
Execute opatch auto from a directory to which the root user has write permission.
Issue 5
The following ignorable errors may be encountered while running the catbundle.sql script or its rollback script:
ORA-29809: cannot drop an operator with dependent objects
ORA-29931: specified association does not exist
ORA-29830: operator does not exist
ORA-00942: table or view does not exist
ORA-00955: name is already used by an existing object
ORA-01430: column being added already exists in table
ORA-01432: public synonym to be dropped does not exist
ORA-01434: private synonym to be dropped does not exist
ORA-01435: user does not exist
ORA-01917: user or role 'XDB' does not exist
ORA-01920: user name '' conflicts with another user or role name
ORA-01921: role name '' conflicts with another user or role name
ORA-01952: system privileges not granted to 'WKSYS'
ORA-02303: cannot drop or replace a type with type or table dependents
ORA-02443: Cannot drop constraint - nonexistent constraint
ORA-04043: object does not exist
ORA-29832: cannot drop or replace an indextype with dependent indexes
ORA-29844: duplicate operator name specified
ORA-14452: attempt to create, alter or drop an index on temporary table already in use
ORA-06512: at line . If this error follows any of the above errors, then it can be safely ignored.
ORA-01927: cannot REVOKE privileges you did not grant
Issue 6
Bug 12619571 - 11202_GIBTHREE: PATCH FAILED IN MULTI-BYTES LANG ENV ISSUE SHOULD BE DOCUMENTED
This issue is seen when trying to run opatch auto to apply the GI PSU patch in the Japanese environment. The cause of the problem is that opatch auto currently only supports the English language environment.
Workaround:
Always keep the environment as the English language environment when running opatch auto to apply the GI PSU.
4 References
The following documents are references for this patch.
Note 293369.1 OPatch documentation list
Note 360870.1 Impact of Java Security Vulnerabilities on Oracle Products
Note 468959.1 Enterprise Manager Grid Control Known Issues
5 Bugs Fixed by This Patch
This patch includes the following bug fixes:
Section 5.1, "CPU Molecules"
Section 5.2, "Bugs Fixed in GI PSU 11.2.0.2.3"
Section 5.3, "Bugs Fixed in GI PSU 11.2.0.2.2"
5.1 CPU Molecules
CPU molecules in GI PSU 11.2.0.2.3:
GI PSU 11.2.0.2.3 contains the following new CPU 11.2.0.2 molecules:
12586486 - DB-11.2.0.2-MOLECULE-004-CPUJUL2011
12586487 - DB-11.2.0.2-MOLECULE-005-CPUJUL2011
12586488 - DB-11.2.0.2-MOLECULE-006-CPUJUL2011
12586489 - DB-11.2.0.2-MOLECULE-007-CPUJUL2011
12586490 - DB-11.2.0.2-MOLECULE-008-CPUJUL2011
12586491 - DB-11.2.0.2-MOLECULE-009-CPUJUL2011
12586492 - DB-11.2.0.2-MOLECULE-010-CPUJUL2011
12586493 - DB-11.2.0.2-MOLECULE-011-CPUJUL2011
12586494 - DB-11.2.0.2-MOLECULE-012-CPUJUL2011
12586495 - DB-11.2.0.2-MOLECULE-013-CPUJUL2011
12586496 - DB-11.2.0.2-MOLECULE-014-CPUJUL2011
5.2 Bugs Fixed in GI PSU 11.2.0.2.3
GI PSU 11.2.0.2.3 contains all fixes previously released in GI PSU 11.2.0.2.2 (see Section 5.3 for a list of these bug fixes) and the following new fixes:
Note:
ACFS is not supported on HP and therefore the bug fixes for ACFS do not apply to the HP GI PSU 3.
Automatic Storage Management
6892311 - PROVIDE REASON FOR MOUNT FORCE FAILURE WITHOUT REQUIRING PST DUMP
9078442 - ORA-19762 FROM ASMCMD CP COPYING FILE WITH DIFFERENT BYTE ORDER FROM FILESYSTEM
9572787 - LONG WAITS FOR ENQ: AM CONTENTION FOLLOWING CELL CRASH CAUSED CLUSTERWIDE OUTAGE
9953542 - TB_SOL_SP: HIT 7445 [KFKLCLOSE()+20] ERROR WHEN DG OFFLINE
10040921 - HUNG DATABASE WORKLOAD AND BACKGROUNDS AFTER INDUCING WRITE ERRORS ON AVD VOLUME
10155605 - 11201-OCE:DISABLE FC IN ONE NODE, ASM DISKGOUP FORCE DISMOUNTED IN OTHER NODES.
10278372 - TB:X:CONSISTENTLY PRINT "WARNING: ELAPSED TIME DID NOT ADVANCE" IN ASM ALERT LOG
10310299 - TB:X:LOST WRITES DUE TO RESYNC MISSING EXTENTS WHEN DISK GO OFFLINE DURING REBAL
10324294 - DBMV2: DBFS INSTANCE WAITS MUCH FOR "ASM METADATA FILE OPERATION"
10356782 - DBMV2+: ASM INSTANCE CRASH WITH ORA-600 : [KFCGET0_04], [25],
10367188 - TB:X:REBOOT 2 CELL NODES,ASM FOREGROUND PROCESS HIT ORA-600[KFNSMASTERWAIT01]
10621169 - FORCE DISMOUNT IN ASM RECOVERY MAY DROP REDO'S AND CAUSE METADATA CORRUPTIONS
11065646 - ASM MAY PICK INCORRECT PST WHEN MULTIPLE COPIES EXTANT
11664719 - 11203_ASM_X64:ARB0 STUCK IN DG REBALANCE
11695285 - ORA-15081 I/O WRITE ERROR OCCURED AFTER CELL NODE FAILURE TEST
11707302 - FOUND CORRUPTED ASM FILES AFTER CELL NODES FAILURE TESTING.
11707699 - DATABASE CANNOT MOUNT DUE TO ORA-00214: CONTROL FILE INCONSISTENCY
11800170 - ASM IN KSV WAIT AFTER APPLICATION OF 11.2.0.2 GRID PSU
11800854 - BUG TO TRACK LRG 5135625
12620422 - FAILED TO ONLINE DISKS BECAUSE OF A POSSIBLE RACING RESYNC
Buffer Cache Management
11674485 - LOST DISK WRITE INCORRECTLY SIGNALLED IN STANDBY DATABASE WHEN APPLYING REDO
Generic
9748749 - ORA-7445 [KOXSS2GPAGE]
10082277 - EXCESSIVE ALLOCATION IN PCUR OF "KKSCSADDCHILDNO" CAUSES ORA-4031 ERRORS
10126094 - ORA-600 [KGLLOCKOWNERSLISTDELETE] OR [KGLLOCKOWNERSLISTAPPEND-OVF]
10142788 - APPS 11I PL/SQL NCOMP:ORA-04030: OUT OF PROCESS MEMORY
10258337 - UNUSABLE INDEX SEGMENT NOT REMOVED FOR "ALTER TABLE MOVE"
10378005 - EXPDP RAISES ORA-00600[KOLRARFC: INVALID LOB TYPE], EXP IS SUCCESSFUL
10636231 - HIGH VERSION COUNT FOR INSERT STATEMENTS WITH REASON INST_DRTLD_MISMATCH
12431716 - UNEXPECTED CHANGE IN MUTEX WAIT BEHAVIOUR IN 11.2.0.2.2 PSU (HIGHER CPU POSSIBLE
High Availability
9869401 - REDO TRANSPORT COMPRESSION (RTC) MESSAGES APPEARING IN ALERT LOG
10157249 - CATALOG UPGRADE TO 11.2.0.2 FAILS WITH ORA-1
10193846 - RMAN DUPLICATE FAILS WITH ORA-19755 WHEN BCT FILE OF PRIMARY IS NOT ACCESSIBLE
10648873 - SR11.2.0.3TXN_REGRESS - TRC - KCRFW_REDO_WRITE
11664046 - STBH: WRONG SEQUENCE NUMBER GENERATED AFTER DB SWITCHOVER FROM STBY TO PRIMARY
Oracle Portable ClusterWare
8906163 - PE: NETWORK AND VIP RESOURCES FAIL TO START IN SOLARIS CONTAINERS
9593552 - GIPCCONNECT() IS NOT ASYNC 11.2.0.2GIBTWO
9897335 - TB-ASM: UNNECCESSARY OCR OPERATION LOG MESSAGES IN ASM ALERT LOG WITH ASM OCR
9902536 - LNX64-11202-MESSAGE: EXCESSIVE GNS LOGGING IN CRS ALERT FILE WHEN SELFCHECK FAIL
9916145 - LX64: INTERNAL ERROR IN CRSD.LOG, MISROUTED REQUEST, ASSERT IN CLSM2M.CPP
9916435 - ROOTCRS.PL FAILS TO CREATE NODEAPPS DURING ADD NODE OPERATION
9939306 - SERVICES NOT COMING UP AFTER SWITCHOVER USING SRVCTL START DATABASE
10012319 - ORA-600 [KFDVF_CSS], [19], [542] ON STARTUP OF ASM DURING ADDNODE
10019726 - MEMORY LEAK 1.2MB/HR IN CRSD.BIN ON NON-N NODE
10056713 - LNX64-11202-CSS: SPLIT BRAIN WHEN START CRS STACK IN PARALLEL WITH PRIV NIC DOWN
10103954 - INTERMITTENT "CANNOT COMMUNICATE WITH CRSD DAEMON" ERRORS
10104377 - GIPC ENSURE INITIAL MESSAGE IS NOT LOST DURING ESTABLISH PHASE
10115514 - SOL-X64-11202: CLIENT REGISTER IN GLOBAL GROUP MASTER#DISKMON#GROUP#MX NOT EXIT
10190153 - HPI-SG-11202 ORA.CTSSD AND ORA.CRSD GOES OFFLINE AFTER KILL GIPC ON CRS MASTER
10231906 - 11202-OCE-SYMANTEC:DOWN ONE OF PRIVAE LINKS ON NODE 3,OCSSD CRASHED ON NODE 3
10233811 - AFTER PATCHING GRID HOME, UNABLE TO START RESOURCES DBFS AND GOLDEN
10253630 - TB:X:HANG DETECTED,"WAITING FOR INSTANCE RECOVERY OF GROUP 2" FOR 45 MINUTES
10272615 - TB:X:SHUTDOWN SERVICE CELLD ON 2 CELL NODES,CSSD ABORT IN CLSSNMRCFGMGRTHREAD
10280665 - TB:X:STOP CELLD ON 2 CELL NODES,CSSD ABORT IN CLSSNMVVERIFYPENDINGCONFIGVFS
10299006 - AFTER 11.2.0.2 UPGRADE, ORAAGENT.BIN CONNECTS TO DATABASE WITH TOO MANY SESSIONS
10322157 - 11202_GIBONE: PERM OF FILES UNDER $CH/CRS/SBS CHANGED AFTER PATCHED
10324594 - STATIC ENDPOINT IN THE LEASE BLOCKS OVERWRITTEN DURING UPGRADE
10331452 - SOL-11202-UD: 10205->11202 NETWORK RES USR_ORA_IF VALUE MISSED AFTER UPGRADE
10357258 - SOL-11202-UD: 10205->11202 [IPMP] HUNDREDS OF DUP IP AFTER INTRA-NODE FAILOVER
10361177 - LNX64-11203-GNS: MANY GNS SELF CHECK FAILURE ALERT MESSAGES
10385838 - TB:X:CSS CORE DUMP AT GIPCHAINTERNALSEND
10397652 - AIX-11202-GIPC:DISABLE SWITCH PORT FOR ONE PRIVATE NIC,HAIP DID NOT FAILOVER
10398810 - DOUBLE FREE IN SETUPWORK DUE TO TIMING
10419987 - PEER LISTENER IS ACCESSING A GROCK THAT IS ALREADY DELETED
10621175 - TB_RAC_X64:X: CLSSSCEXIT: CSSD SIGNAL 11 IN THREAD GMDEATHCHECK
10622973 - LOSS OF LEGACY FEATURES IN 11.2
10631693 - TB:X:CLSSNMHANDLEVFDISCOVERACK: NO PENDINGCONFIGURATION TO COMPLETE. CSS ABORT
10637483 - TB:X:REBOOT ONE CELL NODE, CSS ABORT AT CLSSNMVDDISCTHREAD
10637741 - HARD STOP DEPENDENCY CAN CAUSE WRONG FAIL-OVER ORDER
10638381 - 11202-OCE-SYMANTEC: HAIP FAIL TO START WHEN PRIVATE IP IS PLUMBED ON VIRTUAL NIC
11069614 - RDBMS INSTANCE CRASH DUE TO SLOW REAP OF GIPC MESSAGES ON CMT SYSTEMS
11071429 - PORT 11GR2 CRS TO EL6
11654726 - SCAN LISTENER STARTUP FAILS IF /VAR/OPT/ORACLE/LISTENER.ORA EXISTS.
11663339 - DBMV2:SHARED PROCESS SPINNING CAUSES DELAY IN PRIMARY MEMBER CLEANUP
11682409 - RE-USING OCI MEMORY ACROSS CONNECTIONS CAUSES A MEMORY CORRUPTION
11698552 - SRVCTL REPORT WRONG STATUS FOR DATABASE INSTANCE.
11741224 - INCORRECT ACTIVE VERSION CHECK WHILE ENABLING THE BATCH FUNCTIONALITY
11744313 - LNX64-11203-RACG: UNEXPECTED CRSD RESTART DURING PARALLEL STACK START
11775080 - ORA-29701/29702 OCCURS WHEN WORKLOAD TEST RUNNING FOR A LONG TIME AND IS RESTART
11781515 - EVMD/CRSD FAIL TO START AFTER REBOOT, EVEN AFTER CRSCTL START CLUSTERWARE
11807012 - LNX64-11203-RACG: DB SERVICE RUNS INTO "UNKNOWN" STATE AFTER STACK START
11812615 - LNX64-11203-DIT: INCONSISTENT PERMISSION BEFORE/AFTER ROOTCRS.PL -UNLOCK/-PATCH
11828633 - DATABASE SERVICE DID NOT FAIL OVER AND COULD NOT BE STARTED AFTER NODE FAILURE
11840629 - KERNEL CRASH DUMP AND REBOOT FAIL INSIDE SOLARIS CONTAINER
11866171 - ENABLE CRASHDUMP WHEN REBOOTING THE MACHINE (LINUX)
11877079 - HUNDREDS OF ORAAGENT.BIN@HOSTNAME SESSSIONS IN 11.2.0.2 DATABASE
11899801 - 11202_GIBTWO_HPI:AFTER KILL ASM PMON, POLICY AND ADMIN DB RUNNING ON SAME SERVER
11904778 - LNX64-OEL6-11202: CRS STACK CAN'T BE START AFTER RESTART
11933693 - 11.1.0.7 DATABASE INSTANCE TERMINATED BY 11.2.0.2 CRS AGENT
11936945 - CVU NOT RECOGNIZING THE OEL6 ON LINUX
12332919 - ORAAGENT KEEPS EXITING
12340501 - SRVCTL SHOWS INSTANCE AS DOWN AFTER RELOCATION
12340700 - EVMD CONF FILES CAN HAVE WRONG PERMISSIONS AFTER INSTALL
12349848 - LNX64-11203: VIPS FELL OFFLINE WHEN BRING DOWN 3/4 PUBLIC NICS ONE BY ONE
12378938 - THE LISTENER STOPS WHEN THE ORA.NET1.NETWORK'S STATE IS CHANGED TO UNKNOWN
12380213 - 11203_110415:ERROR EXCEPTION WHILE INSTALLATION 11202 DB WITH DATAFILES ON 11203
12399977 - TYPO IN SUB PERFORM_START_SERVICE RETURNS ZERO (SUCCESS) EVEN WHEN FAILED
12677816 - SCAN LISTENER FAILD TO STARTUP IF /VAR/OPT/ORACLE/LISTENER.ORA EXIST
Oracle Space Management
8223165 - ORA-00600 [KTSXTFFS2] AFTER DATABASE STARTUP
9443361 - WRONG RESULTS (ROWDATA) FOR SELECT IN SERIAL FROM COMPRESSED TABLE
10061015 - LNX64-11202:HIT MANY ORA-600 ARGUMENTS: [KTFBHGET:CLSVIOL_KCBGCUR_9] DURING DBCA
10132870 - INDEX BLOCK CORRUPTION - ORA-600 [KCBZPBUF_2], [6401] ON RECOVER
10324526 - ORA-600 [KDDUMMY_BLKCHK] [6106] WHEN UPDATE SUBPARTITION OF TABLE IN TTS
Oracle Transaction Management
10053725 - TS11.2.0.3V3 - TRC - K2GUPDATEGLOBALPREPARECOUNT
10233732 - ORA-600 [K2GUGPC: PTCNT >= TCNT] OCCURS IN A DATABASE LINK TRANSACTION
Oracle Universal Storage Management
9867867 - SUSE10-LNX64-11202:NODE REBOOT HANG WHILE ORACLE_HOME LOCATED ON ACFS
9936659 - LNX64-11202-CRS: ORACLE HOME PUT ON ACFS, DB INST FAILS TO RESTART AFTER CRASH
9942881 - TIGHTEN UP KILL SEMANTICS FOR 'CLEAN' ACTION.
10113899 - AIX KSPRINTTOBUFFER TIMESTAMPS NEEDS TIME SINCE BOOT AND WALL_CLOCK TIMES
10266447 - ROOTUPGRADE.SH FAILS: 'FATAL: MODULE ORACLEOKS NOT FOUND' , ACFS-9121, ACFS-9310
11789566 - ACFS RECOVERY PHASE 2
11804097 - GBM LOCK TAKEN WHEN DETERMINING WHETHER THE FILE SYSTEM IS MOUNTED AND ONLINE
11846686 - ACFSROOT FAILS ON ORACLELINUX-RELEASE-5-6.0.1 RUNNUNG A 2.6.18 KERNEL
12318560 - ALLOW IOS TO RESTART WHEN WRITE ERROR MESG RETURNS SUCCESS
12326246 - ASM TO RETURN DIFF VALUES WHEN OFFLINE MESG UNSUCCESSFUL
12378675 - AIX-11203-HA-ACFS: HIT INVALID ASM BLOCK HEADER WHEN CONFIGURE DG USING AIX LVS
12398567 - ACFS FILE SYSTEM NOT ACCESSIBLE
12545060 - CHOWN OR RM CMD TO LOST+FOUND DIR IN ACFS FAILS ON LINUX
Oracle Utilities
9735282 - GETTING ORA-31693, ORA-2354, ORA-1426 WHEN IMPORTING PARTITIONED TABLE
Oracle Virtual Operating System Services
10317487 - RMAN CONTROLFILE BACKUP FAILS WITH ODM ERROR ORA-17500 AND ORA-245
11651810 - STBH: HIGH HARD PARSING DUE TO FILEOPENBLOCK EATING UP SHARED POOL MEMORY
XML Database
10368698 - PERF ISSUE WITH UPDATE RESOURCE_VIEW DURING AND AFTER UPGRADING TO 11.2.0.2
5.3 Bugs Fixed in GI PSU 11.2.0.2.2
This section describes bugs fixed in the GI PSU 11.2.0.2.2 release.
ACFS
10015603 - KERNEL PANIC IN OKS DRIVER WHEN SHUTDOWING CRS STACK
10178670 - ACFS VOLUMES ARE NOT MOUNTING ONCE RESTARTED THE SERVER
10019796 - FAIL TO GET ENCRYPTION STATUS OF FILES UNTIL DOING ENCR OP FIRST
10029794 - THE DIR CAN'T READ EVEN IF THE DIR IS NOT IN ANY REALM
10056808 - MOUNT ACFS FS FAILED WHEN FS IS FULL
10061534 - DB INSTANCE TERMINATED DUE TO ORA-445 WHEN START INSTANCE ON ALL NODES
10069698 - THE EXISTING FILE COULD CORRUPT IF INPUT INCORRECT PKCS PASSOWRD
10070563 - MULTIPLE WRITES TO THE SAME BLOCK WITH REPLICATION ON CAN GO OUT OF ORDER
10087118 - UNMOUNT PANICS IF ANOTHER USER IS SITTING IN A SNAPSHOT ROOT DIRECTORY
10216878 - REPLI-RELATED RESOURCE FAILED TO FAILOVER WHEN DG DISMOUNTED
10228079 - MOUTING DG ORA-15196 [KFC.C:25316] [ENDIAN_KFBH] AFTER NODE REBOOT
10241696 - FAILED TO MOUNT ACFS FS TO DIRECTORY CREATED ON ANOTHER ACFS FS
10252497 - ADVM/ACFS FAILS TO INSTALL ON SLES10
9861790 - LX64: ADVM DRIVERS HANGING OS DURING ACFS START ATTEMPTS
9906432 - KERNEL PANIC WHILE DISMOUNT ACFS DG FORCE
9975343 - FAIL TO PREPARE SECURITY IF SET ENCRYPTION FIRST ON THE OTHER NODE
10283549 - FIX AIX PANIC AND REMOVE -DAIX_PERF
10283596 - ACFS:KERNEL PANIC DURING USM LABEL PATCHING - ON AIX
10326548 - WRITE-PROTETED ACFS FILES SHOULD NOT BE DELETED BY NON-ROOT USER
ADVM
10045316 - RAC DB INSTALL ON SHARED ACFS HANGS AT LINKING PHASE
10283167 - ASM INSTANCE CANNOT STARTUP DUE TO EXISTENCE OF VMBX PROCESS
10268642 - NODE PANIC FOR BAD TRAP IN "ORACLEADVM" FOR NULL POINTER
10150020 - LINUX HANGS IN ADVM MIRROR RECOVERY, AFTER ASM EVICTIONS
Automatic Storage Management
9788588 - STALENESS REGISTRY MAY GET CLEARED PREMATURELY
10022980 - DISK NOT EXPELLED WHEN COMPACT DISABLED
10040531 - ORA-600 [KFRHTADD01] TRYING TO MOUNT RECO DISKGROUP
10209232 - STBH: DB STUCK WITH A STALE EXTENT MAP AND RESULTS IN DATA CORRUPTIONS
10073683 - ORA-600 [KFCBINITSLOT40] ON ASM ON DBMV2 WITH BP5
9715581 - DBMV2: EXADATA AUTO MANAGEMENT FAILED TO BRING UP DISKS ONLINE
10019218 - ASM DROPPED DISKS BEFORE DISK_REPAIR_TIME EXPIRED
10084145 - DBMV2: ORA-600 [1427] MOUNTING DISKGROUP AFTER ALL CELLS RESTARTED
11067567 - KEPT GENERATING "ELAPSED TIME DID NOT ADVANCE " IN ASM ALERT LOG
10356513 - DISK OFFLINED WITH NON ZERO TIMEOUT EXPELLED IMMEDIATELY
10332589 - TB:X:MOUNT NORMAL REDUNDANCY DG, FAILED WITH ORA-00600:[KFCINITRQ20]
10329146 - MARKING DIFFERENT SR BITS FROM MULTIPLE DBWS CAN CAUSE A LOST WRITE
10299224 - TB:X:PIVOTING AN EXTENT ON AN OFFLINE DISK CAN CAUSE STALE XMAPS IN RDBMS
10245086 - ORA-01210 DURING CREATE TABLESPACE
10230571 - TB:X:REBOOT ONE CELL NODE, RBAL HIT ORA-600[17183]
10228151 - ASM DISKGROUPS NOT GETTING MOUNTED
10227288 - DG FORCIBLY DISMOUNTED AFTER ONE FG LOST DUE TO "COULD NOT READ PST FOR GRP 4"
10222719 - ASM INSTANCE HANGS WITH RBAL PROCESS WAITS ON "NO FREE BUFFER"
10102506 - DISK RESYNC TAKES A LONG TIME EVEN WITH NO STALE EXTENTS
10094201 - DISK OFFLINE IS SLOW
10190642 - ORA-00600: [1433] FOLLOWED BY INSTANCE CRASH WITH ASM ON EXADATA
Buffer Cache Management
9651350 - ora-00308 and ora-27037 when ora-8103 without event 10736 been set
10110863 - trace files is still generated after applying patch:9651350
10205230 - tb_x64: hit ora-00600: [kclwcrs_6]
10332111 - sql running long in active standby
CRS Group
CLEANUP
9949676 - GNSD.BIN CORE DUMP AFTER KILL ASM PMON ON ALL NODES AT SAME TIME
9975837 - GNS INCORRECTLY PROCESSES IPV6 LOOKUP REQUESTS
10007185 - GNS DUMPS CORE IN CLSKGOPANIC AT CLSKPDVA 717
10028343 - GNS CAN NOT BE RELOCATED AFTER PUBLIC RESTARTED
CRS
9876201 - OHASD AGENT CORE DUMP AT EONSHTTP.C:162
10011084 - 11202 STEP3 MODIFY BINARY AFTER INSTALLATION CANNOT EXCUTE SUCCESSFULLY
10028235 - 'CLSNVIPAGENT.CPP', LINE 1522: ERROR: FORMAL ARGUMENT TYPE OF ...
10045436 - 'ORA.LISTENER.LSNR' FAILED TO BE FENCED OFF DURING CRSD CLEANUP
10062301 - VALUE FOR FIELD 'CLUSTER_NAME' IS MISSING IN CRSCONFIG_PARAMS
10110969 - PORTABILITY ISSUES IN FUNCTION TOLOWER_HOST
10175855 - FAILED TO UGPRADE 11.2.0.1 + ARU 12900951 -> 11.2.0.2
9891341 - CRSD CORE DUMP IN PROATH_MASTER_EXIT_HELPER AT PROATH.C:1834
11655840 - RAC1 DB' STATE_DETAILS IS WRONG AFTER KILL GIPCD
10634513 - OHASD DUMPS CORE WHEN PLUG IN UNPLUGGED PRIVATE NETWORK NIC
10236074 - ASM INSTANCES CRASH SEVERAL TIMES DURING PARALLEL CRS STARTUP
10052529 - DB INST OFFLINE AFTER STOP/START CRS STACK ON ALL NODES IN PARALLEL
10065216 - VIRTUAL MEMORY USAGE OF ORAROOTAGENT IS BIG(1321MB) AND NOT DECREASING
10168006 - ORAAGENT PROCESS MEMORY GROWTH PERIODICALLY.
CSS
9907089 - CSS CORE DUMP DURING EXADATA ROLLING UPGRADE
9926027 - NODE REBOOTED AFTER CRS CLEAN-UP SUCCEEDED 11202 GI + 10205 RAC DB
10014392 - CRSCTL DELETE NODE FAILS WITH CRS-4662 & CRS-4000
10015460 - REMOVAL OF WRONG INCARNATION OF A NODE DUE TO MANUAL SHUTDOWN STATE
10040109 - PMON KILL LEAD TO OS REBOOT
10048027 - ASM UPGRADE FAILS
10052721 - 11201- 11202 NON-ROLLING,CRSCTL.BIN CORE AT CLSSNSQANUM, SIGNAL 11
10083789 - A NODE DOESNT INITIATE A RECONFIG DUE TO INCORRECT RECONFIG STATE
9944978 - FALSE CSS EVICTION AFTER PRIVATE NIC RESUME
9978195 - STOP DB ACTION TIMED OUT AND AGENT EXITS DUE TO FAILURE TO STOP EVENT BRIDGE
10248739 - AFTER APPLY THE PATCH, THE NODE EVICTED DURING START CRS STACK
CVU
9679401 - OUI PREREQ CHECKS FAILED FOR WRONG OWNSHIP OF RESOLV.CONF_`HOST`
9959110 - GNS INTEGRITY PREREQUISITE FAILED WITH PRVF-5213
9979706 - COMP OCR CHECK FAILS TO VERIFY SIZE OF OCR LOCATION
10029900 - CVU PRE NODEADD CHECK VD ERROR
10033106 - ADDNODE.SH SHOULD INDICATE WHAT HAPPENS WHEN ERROR OCCURRING
10075643 - UNABLE TO CONTINUE CONFIG.SH FOR CRS UPGRAD
10083009 - GIPCD FAILS TO RETRIEVE INFORMATION FROM PEERS DUE TO INVALID ENDPOINT
GIPC
9812956 - STATUS OF CRSD AND EVMD GOES INTERMEDIATE FOR EVER WHEN KILL GIPC
9915329 - ORA-600 [603] IN DB AND ORA-603 IN ASM AFTER DOWN INTER-CONNECT NIC
9944948 - START RESOUCE HAIP FAILED WHEN RUN ROOT.SH
9971646 - ORAROOTAGENT CORE DUMPED AT NETWORKHAMAINTHREAD::READROUTEDATA
9974223 - GRID INFRASTRUCTURE NEEDS MULTICAST COMMUNICATION ON 230.0.1.0 ADDRESSES WORKING
10053985 - ERROR IN NETWORK ADDRESS ON SOLARIS 11
10057680 - OHASD ORAROOTAGENT.BIN SPIN CPU AFTER SIMULATE ASM DISK ERROR
10078086 - ROOTUPGRADE.SH FAIL FOR 'CRSCTL STARTUPGRADE' FAIL,10205-> 11202
10260251 - GRID INSTALLATION FAILS TO START HAIP DUE TO CHANGE IN NETWORK INTERFACE NAME
10111010 - CRSD HANGS FOR THE HANAME OF PEER CRSD
11782423 - OHASD.BIN TAKES CPU ABOUT 95% ~ 100%
11077756 - STARTUP FAILURE OF HAIP CAUSES INSTALLATION FAILURE
10375649 - DISABLE HAIP ON PRIMECLUSTER
10284828 - INTERFACE UPDATES GET LOST DURING BOUNCE OF CRSD PROCESS
10284693 - AIX EPIPE FAILURE
10233159 - NEED 20 MINS TO STARTUP CRS WHEN 1/2 GIPC NICS DOWN
10128191 - LRGSRG9 AND LRGSRGE FAILURE
GNS
9864003 - NODE REBOOT DUE TO 'ORA.GNS' FAILED TO BE FENCED OFF DURING CRSD
GPNP
9336825 - GPNPD FLUSH PROFILE PUSH ERROR MESSAGES IN CRS ALERT LOG
10314123 - GPNPD MAY NOT UPDATE PROFILE TO LATEST ON START
10105195 - PROC-32 ACCESSING OCR; CRS DOES NOT COME UP ON NODE
10205290 - DBCA FAILED WITH ERROR ORA-00132
10376847 - [ORA.CRF] [START] ERROR = ERROR 9 ENCOUNTERED WHEN CONNECTING TO MOND
IPD-OS
9812970 - IPD DO NOT MARK TYPE OF DISKS USED FOR VOTING DISK CORRECTLY
10057296 - IPD SPLIT BRAIN AFTER CHANGE BDB LOCATION
10069541 - IPD SPLIT BRAIN AFTER STOPPING ORA.CRF ON MASTER NODE
10071992 - UNREASONABLE VALUES FOR DISK STATISTICS
10072474 - A NODE IS NOT MONITORED AFTER STOP AND START THE ORA.CRF ON IT
10073075 - INVALID DATA RECEIVED FROM THE CLUSTER LOGGER SERVI
10107380 - IPD NOT STARTED DUE TO SCRFOSM_GET_IDS FAILED
OCR
9978765 - ROOTUPGRADE.SH HANG AND CRSD CRASHED ON OTHER NODES,10205-> 11202
10016083 - 'OCRCONFIG -ADD' NEEDS HELPFUL MESSAGE FOR ERROR ORA-15221
OPSM
9918485 - EMCONFIG FAIL WITH NULLPOINTEREXCEPTION AT RACTRANSFERCORE.JAVA
10018215 - RACONE DOES NOT SHUTDOWN INSTANCE DURING RELOCATION
10042143 - ORECORE11 LWSFDSEV CAUSED SEGV IN SRVM NATIVE METHODS
OTHERS
9963327 - CHMOD.PL GETS CALLED INSTEAD OF CHMOD.EXE
10008467 - FAILS DUE TO WRONG VERSION OF PERL USED:
10015210 - OCTSSD LEAK MEMORY 1.7M HR ON PE MASTER DURING 23 HOURS RUNNI
10027079 - CRS_SHUTDOWN_SYNCH EVENT NOT SENT IN SIHA
10028637 - SCLS.C COMPILE ERRORS ON AIX UNDECLARED IDENTIFIERS
10029119 - 11201-11202 CRS UPGRADE OUI ASKS TO RUN ROOTUPGRADE.SH
10036834 - PATCHES NOT FOUND ERROR WHILE UPGRADING GRID FROM 11201 TO 11202
10038791 - HAS SRG SRV GETTING MANY DIFS FOR AIX ON LABEL 100810 AND LATER
10040647 - LNX64-112022-UD; AQ AND RLB DO NOT WORK AFTER UPGRADING FROM 11201
10044622 - EVMD FAILED TO START AFTER KILL OHASD.BIN
10048487 - DIAGCOLLECTION CANNOT RETRIEVE IPD REPORTS
10073372 - DEINSTALL FAILED TO DELETE CRS_HOME ON REMOTE NODE IF OCR VD ON NFS
10089120 - WRONG PROMPT MESSAGE BY DEINSTALL COMMAND WHILE DELETING CRS HOME
10124517 - CRS STACK DOES NOT START AUTOMATICALLY AFTER NODE REBOOT
10157622 - 11.2.0.2 GI BUNDLE 1 HAS-CRS TRACKING BUG
RACG
10036193 - STANDBY NIC DOESN'T WORK IF DOWN PUBLIC NIC
10146768 - NETWORK RESOURCE FAILS TO START WITH IPMP ON SOLARIS 11
USM Miscellaneous
10146744 - ORA.REGISTRY.ACFS BECOME UNKOWN AND ACFS FS DISMOUNT
10283058 - RESOURCES ACFS NEEDS AN OPTION TO DISALLOW THE MOUNTING OF FILE SYSTEMS ON RESOURCE START
10193581 - ROOT.SH CRS-2674: START OF 'ORA.REGISTRY.ACFS' FAIL
10244210 - FAIL TO INSTALL ADVM/ACFS ON SOLARIS CONTAINER
10311856 - APPLY ASSERTION FAILURE:PBOARDENTRY>USRGBOARDRECENTRY_RECORD
Generic
9591812 - incorrect wait events in 11.2 ("cursor: mutex s" instead of "cursor: mutex x")
9905049 - ebr: ora-00600: internal error code, arguments: [kqlhdlod-bad-base-objn]
10052141 - exadata database crash with ora-7445 [_wordcopy_bwd_dest_aligned] and ora-600 [2
10052956 - ora-7445 [kjtdq()+176]
10157402 - lob segment has null data after long to lob conversion in parallel mode
10187168 - obsolete parent cursors if version count exceeds a threshold
10217802 - alter user rename raises ora-4030
10229719 - qrmp:12.2:ora-07445 while performing complete database import on solaris sparc
10264680 - incorrect version_number reported after patch for 10187168 applied
10411618 - add different wait schemes for mutex waits
11069199 - ora-600 [kksobsoletecursor:invalid stub] quering pq when pq is disabled
11818335 - additional changes when wait schemes for mutex waits is disabled
High Availability
10018789 - dbmv2-bigbh:spin in kgllock caused db hung and high library cache lock
10129643 - appsst gsi11g m9000: ksim generic wait event
10170431 - ctwr consuming lots of cpu cycles
Oracle Space Management
6523037 - et11.1dl: ora-600 [kddummy_blkchk] [6110] on update
9724970 - pdml fails with ora-600 [4511]. ora-600 [kdblkcheckerror] by block check
10218814 - dbmv2: ora-00600:[3020] data block corruption on standby
10219576 - ora-600 [ktsl_allocate_disp-fragment]
Oracle Transaction Management
10358019 - invalid results from flashback_transaction_query after applying patch:10322043
Oracle Utilities
10373381 - ora-600 [kkpo_rcinfo_defstg:objnotfound] after rerunning catupgrd.sql
Oracle Virtual Operating System Services
10127360 - dg4msql size increasing to 1.5gb after procedure executed 250 times
Server Manageability
11699057 - ora-00001: unique constraint (sys.wri$_sqlset_plans_tocap_pk) violated
6 Appendix A: Manual Steps for Apply/Rollback Patch
Steps for Applying the Patch
Note:
You must stop the EM agent processes running from the database home, prior to patching the Oracle RAC database or GI Home. Execute the following command on the node to be patched.
As the Oracle RAC database home owner execute:
%/bin/emctl stop dbconsole
Execute the following steps on each node of the cluster in a non-shared CRS and DB home environment to apply the patch.
Stop the CRS managed resources running from DB homes.
If this is a GI Home environment, as the database home owner execute:
$/bin/srvctl stop home -o -s -n
If this is an Oracle Restart Home environment, as the database home owner execute:
$/bin/srvctl stop home -o -s
Note:
You need to make sure that the Oracle ACFS file systems are unmounted (see Section 2.8) and all other Oracle processes are shutdown before you proceed.
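For illustration only, a filled-in form of the stop command might look like the following; the database home path, state file, and node name shown here are assumptions, not values from this readme:
$ /u01/app/oracle/product/11.2.0/dbhome_1/bin/srvctl stop home -o /u01/app/oracle/product/11.2.0/dbhome_1 -s /tmp/psu_state_node1.txt -n node1
The file named by -s records which resources were running so that the matching 'srvctl start home' command can restart exactly those resources later.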
Run the pre root script.
If this is a GI Home, as the root user execute:
#/crs/install/rootcrs.pl -unlock
If this is an Oracle Restart Home, as the root user execute:
#/crs/install/roothas.pl -unlock
Apply the CRS patch.
As the GI home owner execute:
$/OPatch/opatch napply -oh -local /12419353
As the GI home owner execute:
$/OPatch/opatch napply -oh -local /12419331
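For illustration only, with a hypothetical GI home of /u01/app/11.2.0/grid and the patch unzipped under /u01/stage/psu, the two commands might look like:
$ /u01/app/11.2.0/grid/OPatch/opatch napply -oh /u01/app/11.2.0/grid -local /u01/stage/psu/12419353
$ /u01/app/11.2.0/grid/OPatch/opatch napply -oh /u01/app/11.2.0/grid -local /u01/stage/psu/12419331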
Run the pre script for DB component of the patch.
As the database home owner execute:
$/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome
Apply the DB patch.
As the database home owner execute:
$/OPatch/opatch napply -oh -local /12419353/custom/server/12419353
$/OPatch/opatch napply -oh -local /12419331
Run the post script for DB component of the patch.
As the database home owner execute:
$/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome
Run the post script.
As the root user execute:
#/rdbms/install/rootadd_rdbms.sh
If this is a GI Home, as the root user execute:
#/crs/install/rootcrs.pl -patch
If this is an Oracle Restart Home, as the root user execute:
#/crs/install/roothas.pl -patch
Start the CRS managed resources that were earlier running from DB homes.
If this is a GI Home environment, as the database home owner execute:
$/bin/srvctl start home -o -s -n
If this is an Oracle Restart Home environment, as the database home owner execute:
$/bin/srvctl start home -o -s
Steps for Rolling Back the Patch
Execute the following steps on each node of the cluster in a non-shared CRS and DB home environment to roll back the patch.
Stop the CRS managed resources running from DB homes.
If this is a GI Home environment, as the database home owner execute:
$/bin/srvctl stop home -o -s -n
If this is an Oracle Restart Home environment, as the database home owner execute:
$/bin/srvctl stop home -o -s
Note:
You need to make sure that the Oracle ACFS file systems are unmounted (see Section 2.8) and all other Oracle processes are shut down before you proceed.
Run the pre root script.
If this is a GI Home, as the root user execute:
#/crs/install/rootcrs.pl -unlock
If this is an Oracle Restart Home, as the root user execute:
#/crs/install/roothas.pl -unlock
Roll back the CRS patch.
As the GI home owner execute:
$/OPatch/opatch rollback -local -id 12419353 -oh
$/OPatch/opatch rollback -local -id 12419331 -oh
Run the pre script for DB component of the patch.
As the database home owner execute:
$/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome
Roll back the DB patch from the database home.
As the database home owner execute:
$/OPatch/opatch rollback -local -id 12419353 -oh
$/OPatch/opatch rollback -local -id 12419331 -oh
Run the post script for DB component of the patch.
As the database home owner execute:
$/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome
Run the post script.
As the root user execute:
$/rdbms/install/rootadd_rdbms.sh
If this is a GI Home, as the root user execute:
#/crs/install/rootcrs.pl -patch
If this is an Oracle Restart Home, as the root user execute:
#/crs/install/roothas.pl -patch
Start the CRS managed resources that were earlier running from DB homes.
If this is a GI Home environment, as the database home owner execute:
$/bin/srvctl start home -o -s -n
If this is an Oracle Restart Home environment, as the database home owner execute:
$/bin/srvctl start home -o -s
GI Patch Set Update (PSU) patches are cumulative. That is, the content of all previous PSUs (if any) is included in the latest GI PSU 11.2.0.2.3 patch.
Table 1 describes installation types and security content. For each installation type, it indicates the most recent PSU that includes new security fixes pertinent to that installation type. If there are no security fixes to be applied to an installation type, then "None" is indicated. If a specific PSU is listed, then apply that PSU or any later PSU patch to be current with security fixes.
Table 1 Installation Types and Security Content
Installation Type: Latest PSU with Security Fixes
Server homes: 11.2.0.2.3 GI PSU
Client-Only Installations: None
Instant Client Installations: None
(The Instant Client installation is not the same as the client-only Installation. For additional information about Instant Client installations, see Oracle Database Concepts.)
2 Patch Installation and Deinstallation
This section includes the following sections:
Section 2.1, "Patch Installation Prerequisites"
Section 2.2, "OPatch Automation for GI"
Section 2.3, "One-off Patch Conflict Detection and Resolution"
Section 2.4, "Patch Installation"
Section 2.5, "Patch Post-Installation Instructions"
Section 2.6, "Patch Post-Installation Instructions for Databases Created or Upgraded after Installation of PSU 11.2.0.2.3 in the Oracle Home"
Section 2.7, "Patch Deinstallation"
Section 2.8, "Unmounting ACFS File Systems"
Section 2.9, "Mounting ACFS File Systems"
Section 2.10, "Patch Post-Deinstallation Instructions for a RAC Environment"
2.1 Patch Installation Prerequisites
You must satisfy the conditions in the following sections before applying the patch:
OPatch Utility Information
OCM Configuration
Validation of Oracle Inventory
Downloading OPatch
Unzipping the GI PSU 11.2.0.2.3 Patch
2.1.1 OPatch Utility Information
You must use the OPatch utility version 11.2.0.1.5 or later to apply this patch. Oracle recommends that you use the latest released OPatch for 11.2 releases, which is available for download from My Oracle Support patch 6880880 by selecting the ARU link for the 11.2.0.0.0 release. It is recommended that you download the OPatch utility and the GI PSU 11.2.0.2.3 patch to a shared location so that they can be accessed from any node in the cluster when applying the patch on each node.
Note:
When patching the GI Home, a shared location on ACFS only needs to be unmounted on the node where the GI Home is being patched.
The new OPatch utility should be installed in all the Oracle RAC database homes and the GI home that are being patched. To update OPatch, use the following instructions.
Download the OPatch utility to a temporary directory.
For each Oracle RAC database home and the GI home being patched, run the following commands as the home owner to extract the OPatch utility.
unzip -d
/OPatch/opatch version
The version output of the previous command should be 11.2.0.1.5 or later.
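For illustration only, updating OPatch in a hypothetical Grid home might look like the following; the zip file name and home path are assumptions:
% unzip p6880880_112000_Linux-x86-64.zip -d /u01/app/11.2.0/grid
% /u01/app/11.2.0/grid/OPatch/opatch version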
For information about OPatch documentation, including any known issues, see My Oracle Support Note 293369.1 OPatch documentation list.
2.1.2 OCM Configuration
The OPatch utility prompts for your OCM (Oracle Configuration Manager) response file when it is run. Enter the complete path of the OCM response file if you have already created one in your environment.
If you do not have the OCM response file (ocm.rsp) and you wish to use one during the patch application, then you should run the following command to create it.
As the Grid home owner execute:
%/OPatch/ocm/bin/emocmrsp
You can also invoke opatch auto with the -ocmrf option to run opatch auto in silent mode.
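For illustration only, creating a response file and then using it for a silent patch application might look like the following; all paths are assumptions, not values from this readme:
% /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp -output /u01/stage/ocm.rsp
# opatch auto /u01/stage/psu -ocmrf /u01/stage/ocm.rsp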
2.1.3 Validation of Oracle Inventory
Before beginning patch application, check the consistency of inventory information for GI home and each database home to be patched. Run the following command as respective Oracle home owner to check the consistency.
%/OPatch/opatch lsinventory -detail -oh
If this command succeeds, it lists the Oracle components that are installed in the home. The command will fail if the Oracle Inventory is not set up properly. If this happens, contact Oracle Support Services for assistance.
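For illustration only, the consistency check for a hypothetical database home might look like:
% /u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/11.2.0/dbhome_1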
2.1.4 Downloading OPatch
If you have not already done so, download OPatch 11.2.0.1.5 or later, as explained in Section 2.1.1, "OPatch Utility Information".
2.1.5 Unzipping the GI PSU 11.2.0.2.3 Patch
Applying the patch requires you to explicitly run the 'opatch auto' command on each node of the Oracle clusterware. It is therefore recommended that you download and unzip the GI PSU 11.2.0.2.3 patch in a shared location so that it can be accessed from any node in the cluster, and then execute the unzip command as the Grid home owner.
Note:
Do not unzip the GI PSU 11.2.0.2.3 patch in the top level /tmp directory.
The unzipped patch location should have read permission for the ORA_INSTALL group in order to patch Oracle homes owned by different owners. The ORA_INSTALL group is the primary group of the user who owns the GI home, or the group owner of the Oracle central inventory.
(In this readme, the downloaded patch location directory is referred to as .)
%cd
Unzip the GI PSU 11.2.0.2.3 patch as grid home owner in a shared location. As the Grid home owner execute:
%unzip p12419353_112020_Linux.zip
For example, if the unzipped patch location in your environment is /u01/oracle/patches, enter the following commands:
%cd /u01/oracle/patches
%unzip p12419353_112020_Linux.zip
2.2 OPatch Automation for GI
The OPatch utility automates patch application for the Oracle Grid Infrastructure (GI) home and the Oracle RAC database homes. It operates by querying the existing configuration and automating the steps required to patch each Oracle RAC database home of the same version and the GI home.
The utility must be executed by an operating system (OS) user with root privileges (usually the user root), and it must be executed on each node in the cluster if the GI home or Oracle RAC database home is on non-shared storage. The utility should not be run in parallel on the cluster nodes.
Depending on the command-line options specified, one invocation of OPatch can patch the GI home, one or more Oracle RAC database homes, or both the GI and Oracle RAC database homes of the same Oracle release version. You can also roll back the patch with the same selectivity.
Add the directory containing the opatch to the $PATH environment variable. For example:
export PATH=$PATH:/OPatch
To patch GI home and all Oracle RAC database homes of the same version:
#opatch auto
To patch only the GI home:
#opatch auto -oh
To patch one or more Oracle RAC database homes:
#opatch auto -oh ,
To roll back the patch from the GI home and each Oracle RAC database home:
#opatch auto -rollback
To roll back the patch from the GI home:
#opatch auto -oh -rollback
To roll back the patch from the Oracle RAC database home:
#opatch auto -oh -rollback
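For illustration only, a complete invocation that patches the GI home and one database home in a single pass might look like the following; every path shown is an assumption, not a value from this readme:
#opatch auto /u01/stage/psu -oh /u01/app/11.2.0/grid,/u01/app/oracle/product/11.2.0/dbhome_1 -ocmrf /u01/stage/ocm.rsp
The first argument is the unzipped patch location, -oh lists the homes to patch, and -ocmrf supplies the OCM response file so that no interactive prompt is needed.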
For more information about opatch auto, see My Oracle Support Note 293369.1 OPatch documentation list.
For detailed patch installation instructions, see Section 2.4, "Patch Installation".
2.3 One-off Patch Conflict Detection and Resolution
For an introduction to the PSU one-off patch concepts, see "Patch Set Updates Patch Conflict Resolution" in My Oracle Support Note 854428.1 Patch Set Updates for Oracle Products.
The fastest and easiest way to determine whether you have one-off patches in the Oracle home that conflict with the PSU, and to get the necessary conflict resolution patches, is to use the Patch Recommendations and Patch Plans features on the Patches & Updates tab in My Oracle Support. These features work in conjunction with the My Oracle Support Configuration Manager. Recorded training sessions on these features can be found in Note 603505.1.
However, if you are not using My Oracle Support Patch Plans, follow these steps:
Determine whether any currently installed one-off patches conflict with the PSU patch as follows:
Run the following command from the unzipped patch location directory (see Section 2.1.5):
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./12419331
The report will indicate the patches that conflict with PSU 12419331 and the patches for which PSU 12419331 is a superset.
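For illustration only, with the Grid home and unzipped patch location below being assumptions, the conflict check might be run as:
%cd /u01/oracle/patches
%/u01/app/11.2.0/grid/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./12419331 -oh /u01/app/11.2.0/grid
If desired, the same check can be repeated with -phBaseDir ./12419353 for the GI component of the patch.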
Note that Oracle proactively provides PSU 11.2.0.2.3 one-off patches for common conflicts.
Use My Oracle Support Note 1061295.1 Patch Set Updates - One-off Patch Conflict Resolution to determine, for each conflicting patch, whether a conflict resolution patch is already available, and if you need to request a new conflict resolution patch or if the conflict may be ignored.
When all the one-off patches that you have requested are available at My Oracle Support, proceed with Section 2.4, "Patch Installation".
2.4 Patch Installation
This section will guide you through the steps required to apply this GI PSU 11.2.0.2.3 patch to RAC database homes, the Grid home, or all relevant homes on the cluster.
Note:
When patching the GI Home, a shared location on ACFS only needs to be unmounted on the node where the GI Home is being patched.
The patch instructions will differ based on the configuration of the Grid infrastructure and the Oracle RAC database homes.
The patch installations will also differ based on following aspects of your existing configuration:
GI home is shared or non-shared
The Oracle RAC database home is shared or non-shared
The Oracle RAC database home software is on ACFS or non-ACFS file systems.
Patch all the Oracle RAC database and the GI homes together, or patch each home individually
You must choose the most appropriate case that is suitable based on the existing configurations and your patch intention.
Note:
You must stop the EM agent processes running from the database home, prior to patching the Oracle RAC database or GI Home. Execute the following command on the node to be patched.
As the Oracle RAC database home owner execute:
%/bin/emctl stop dbconsole
Case 1: Patching Oracle RAC Database Homes and the GI Home Together
Case 2: Patching Oracle RAC Database Homes
Case 3: Patching GI Home Alone
Case 4: Patching Oracle Restart Home
Case 5: Patching a Software Only GI Home Installation
Case 6: Patching a Software Only Oracle RAC Home Installation
Case 1: Patching Oracle RAC Database Homes and the GI Home Together
Follow the instructions in this section if you would like to patch all the Oracle RAC database homes of release version 11.2.0.2 and the 11.2.0.2 GI home.
Case 1.1: GI Home Is Shared
Follow these instructions in this section if the GI home is shared.
Note:
Patching a shared GI home requires shutting down the Oracle GI stack on all the remote nodes in the cluster. This also means you need to stop all Oracle RAC databases that depend on the GI stack, on ASM for data files, or on an ACFS file system.
Make sure to stop the Oracle databases running from the Oracle RAC database homes.
As Oracle database home owner:
/bin/srvctl stop database -d
ORACLE_HOME: Complete path of the Oracle database home.
Make sure the ACFS file systems are unmounted on all the nodes. Use instructions in Section 2.8 for unmounting ACFS file systems.
As root user, execute the following on all the remote nodes to stop the CRS stack:
/bin/crsctl stop crs
Patch the GI home.
On local node, as root user, execute the following command:
#opatch auto -oh
Start the Oracle GI stack on all the remote nodes.
As root user execute:
#/bin/crsctl start crs
Mount ACFS file systems. See Section 2.9.
For each Oracle RAC database home, execute the following command on each node if the database home software is not shared.
For each database home execute the following as root user:
#opatch auto -oh
ORACLE_HOME: Complete path of Oracle database home.
Note:
The previous command should be executed only once on any one node if the database home is shared.
Restart the Oracle databases that you have previously stopped in step 1.
As the database home owner execute:
/bin/srvctl start database -d
Case 1.2: GI Home Is Not Shared
Case 1.2.1: ACFS File System Is Not Configured and Database Homes Are Not Shared
Follow these instructions in this section if the GI home is not shared and none of the Oracle database homes is shared.
As root user execute the following command on each node of the cluster:
#opatch auto
Case 1.2.2A: Patching the GI Home and Database Home Together, the GI Home Is Not Shared, the Database Home Is Shared on ACFS
From the Oracle database home, make sure to stop the Oracle RAC databases running on all nodes.
As the database home owner execute:
/bin/srvctl stop database -d
On the 1st node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.
On the 1st node, apply the patch to the GI Home using the opatch auto command.
As root user, execute the following command:
opatch auto -oh
On the 1st node, remount ACFS file systems. See Section 2.9 for instructions.
On the 1st node, apply the patch to the Database home using the opatch auto command. This operation will patch the Database home across the cluster given that it is a shared ACFS home.
As root user, execute the following command:
opatch auto -oh
On the 1st node only, restart the Oracle database which you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d -n
On the 2nd (next) node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.
On the 2nd node, apply the patch to GI Home using the opatch auto command.
As root user, execute the following command:
opatch auto -oh
On the 2nd node, running the opatch auto command in Step 8 will restart the stack.
On the 2nd node, remount ACFS file systems. See Section 2.9 for instructions.
On the 2nd node only, restart the Oracle database which you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d -n
Repeat Steps 7 through 10 for all remaining nodes of the cluster.
Case 1.2.2B: Patching the GI Home and the Database Home Together, the GI Home Is Not Shared, the Database Home Is Not Shared
For each node, perform the following steps:
On the local node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.
On the local node, apply the patch to the GI home and to the Database home.
As root user, execute the following command:
opatch auto
This operation will patch both the CRS home and the Database home.
The opatch auto command restarts the stack and the database on the local node.
Repeat Steps 1 through 3 for all remaining nodes of the cluster.
Case 2: Patching Oracle RAC Database Homes
You should use the following instructions if you prefer to patch Oracle RAC databases alone with this GI PSU 11.2.0.2.3 patch.
Case 2.1: Non-Shared Oracle RAC Database Homes
Execute the following command on each node of the cluster.
As root user execute:
#opatch auto -oh
Case 2.2: Shared Oracle RAC Database Homes
Make sure to stop the databases running from the Oracle RAC database homes that you would like to patch. Execute the following command to stop each database.
As Oracle database home owner execute:
/bin/srvctl stop database -d
As root user execute only on the local node.
#opatch auto -oh
Restart the Oracle databases that were previously stopped in Step 1. Execute the following command for each database.
As Oracle database home owner execute:
/bin/srvctl start database -d
Case 3: Patching GI Home Alone
You should use the following instructions if you prefer to patch Oracle GI (Grid Infrastructure) home alone with this GI PSU 11.2.0.2.3 patch.
Case 3.1: Shared GI Home
Follow these instructions in this section if the GI home is shared.
Note:
Patching a shared GI home requires shutdown of Oracle GI stack on all the remote nodes in the cluster. This also means you need to stop all Oracle RAC databases that depend on the GI stack, ASM for data file, or ACFS file system for database software.
Make sure to stop the Oracle databases running from the Oracle RAC database homes.
As Oracle database home owner:
/bin/srvctl stop database -d
Make sure the ACFS file systems are unmounted on all the nodes. Use instructions in Section 2.8 for unmounting ACFS file systems.
As root user, execute the following on all the remote nodes to stop the CRS stack:
/bin/crsctl stop crs
Execute the following command on the local node
As root user execute:
#opatch auto -oh
Start the Oracle GI stack on all the remote nodes.
As root user execute:
#/bin/crsctl start crs
Mount ACFS file systems. See Section 2.9.
Restart the Oracle databases that you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d
Case 3.2: Non-Shared GI Home
If the GI home is not shared then use the following instructions to patch the home.
Case 3.2.1: ACFS File System Is Not Configured
Follow these instructions in this section if the GI home is not shared and none of the Oracle database homes use ACFS file system for its software files.
Execute the following on each node of the cluster.
As root user execute:
#opatch auto -oh
Case 3.2.2: ACFS File System Is Configured
Repeat Steps 1 through 5 for each node in the cluster:
From the Oracle database home, stop the Oracle RAC database running on that node.
As the database home owner execute:
/bin/srvctl stop instance -d -n
Unmount all ACFS filesystems on this node using instructions in Section 2.8.
Apply the patch to the GI home on that node using the opatch auto command.
Execute the following command on that node in the cluster.
As root user execute:
#opatch auto -oh
Remount ACFS file systems on that node. See Section 2.9 for instructions.
Restart the Oracle database on that node that you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d
Case 4: Patching Oracle Restart Home
You must keep the Oracle Restart stack up and running when you are patching. Use the following instructions to patch Oracle Restart home.
As root user execute:
#opatch auto -oh
Case 5: Patching a Software Only GI Home Installation
Apply the CRS patch.
As the GI home owner execute:
$/OPatch/opatch napply -oh -local /12419353
As the GI home owner execute:
$/OPatch/opatch napply -oh -local /12419331
Case 6: Patching a Software Only Oracle RAC Home Installation
Run the pre script for DB component of the patch.
As the database home owner execute:
$/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome
Apply the DB patch.
As the database home owner execute:
$/OPatch/opatch napply -oh -local /12419353/custom/server/12419353
$/OPatch/opatch napply -oh -local /12419331
Run the post script for DB component of the patch.
As the database home owner execute:
$/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome
2.5 Patch Post-Installation Instructions
After installing the patch, perform the following actions:
Apply conflict resolution patches as explained in Section 2.5.1.
Load modified SQL files into the database, as explained in Section 2.5.2.
Upgrade Oracle Recovery Manager catalog, as explained in Section 2.5.3.
2.5.1 Applying Conflict Resolution Patches
Apply the patch conflict resolution one-off patches that were determined to be needed when you performed the steps in Section 2.3, "One-off Patch Conflict Detection and Resolution".
2.5.2 Loading Modified SQL Files into the Database
The following steps load modified SQL files into the database. For a RAC environment, perform these steps on only one node.
For each database instance running on the Oracle home being patched, connect to the database using SQL*Plus. Connect as SYSDBA and run the catbundle.sql script as follows:
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT
The catbundle.sql execution is reflected in the dba_registry_history view by a row associated with bundle series PSU.
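For illustration only, the entry can be checked with a query such as the following (this query is an example, not part of the patch procedure):
SQL> SELECT action_time, action, version, bundle_series, comments FROM dba_registry_history ORDER BY action_time;
A row associated with bundle series PSU should appear after a successful apply.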
For information about the catbundle.sql script, see My Oracle Support Note 605795.1 Introduction to Oracle Database catbundle.sql.
Check the following log files in $ORACLE_BASE/cfgtoollogs/catbundle for any errors:
catbundle_PSU__APPLY_.log
catbundle_PSU__GENERATE_.log
where TIMESTAMP is of the form YYYYMMMDD_HH_MM_SS. If there are errors, refer to Section 3, "Known Issues".
2.5.3 Upgrade Oracle Recovery Manager Catalog
If you are using the Oracle Recovery Manager, the catalog needs to be upgraded. Enter the following command to upgrade it:
$ rman catalog username/password@alias
RMAN> UPGRADE CATALOG;
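For illustration only, with a hypothetical catalog owner rco and connect alias catdb, an upgrade session might look like:
$ rman catalog rco/rco_password@catdb
RMAN> UPGRADE CATALOG;
RMAN> UPGRADE CATALOG;
RMAN asks for the UPGRADE CATALOG command to be entered a second time to confirm the upgrade.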
2.6 Patch Post-Installation Instructions for Databases Created or Upgraded after Installation of PSU 11.2.0.2.3 in the Oracle Home
These instructions are for a database that is created or upgraded after the installation of PSU 11.2.0.2.3.
You must execute the steps in Section 2.5.2, "Loading Modified SQL Files into the Database" for any new database only if it was created by any of the following methods:
Using DBCA (Database Configuration Assistant) to select a sample database (General, Data Warehouse, Transaction Processing)
Using a script that was created by DBCA that creates a database from a sample database
There are no actions required for databases that have been upgraded.
2.7 Patch Deinstallation
You can use the following steps to roll back GI and GI PSU 11.2.0.2.3 patches. Choose the instructions that apply to your needs.
Note:
You must stop the EM agent processes running from the database home, prior to rolling back the patch from Oracle RAC database or GI Home. Execute the following command on the node to be patched.
As the Oracle RAC database home owner execute:
%/bin/emctl stop dbconsole
Case 1: Rolling Back the Oracle RAC Database Homes and GI Homes Together
Case 2: Rolling Back from the Oracle RAC Database Homes
Case 3: Rolling Back from the GI Home Alone
Case 4: Rolling Back the Patch from Oracle Restart Home
Case 5: Rolling Back the Patch from a Software Only GI Home Installation
Case 6: Rolling Back the Patch from a Software Only Oracle RAC Home Installation
Case 1: Rolling Back the Oracle RAC Database Homes and GI Homes Together
Follow the instructions in this section if you would like to roll back the patch from all the Oracle RAC database homes of release version 11.2.0.2 and the 11.2.0.2 GI home.
Case 1.1 GI Home Is Shared
Follow these instructions in this section if the GI home is shared.
Note:
An operation on a shared GI home requires shutdown of the Oracle GI stack on all the remote nodes in the cluster. This also means you need to stop all Oracle RAC databases that depend on the GI stack, ASM for data file, or ACFS file system.
Make sure to stop the Oracle databases running from the Oracle RAC database homes.
As Oracle database home owner:
/bin/srvctl stop database -d
ORACLE_HOME: Complete path of the Oracle database home.
Make sure the ACFS file systems are unmounted on all the nodes. Use instructions in Section 2.8 for un-mounting ACFS file systems.
As root user, execute the following on all the remote nodes to stop the CRS stack:
/bin/crsctl stop crs
Rollback the patch from the GI home.
On local node, as root user, execute the following command:
#opatch auto -oh -rollback
Start the Oracle GI stack on all the remote nodes.
As root user execute:
#/bin/crsctl start crs
Mount ACFS file systems. See Section 2.9.
For each Oracle RAC database home, execute the following command on each node if the database home software is not shared.
For each database home, execute the following as root user:
#opatch auto -oh -rollback
ORACLE_HOME: Complete path of Oracle database home.
Note:
The previous command should be executed only once on any one node if the database home is shared.
Restart the Oracle databases that you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d
Case 1.2: GI Home Is Not Shared
Case 1.2.1: ACFS File System Is Not Configured and Database Homes Are Not Shared
Follow these instructions in this section if the GI home is not shared and none of the Oracle database homes is shared.
As root user, execute the following command on each node of the cluster.
#opatch auto -rollback
Case 1.2.2A: Rolling Back the GI Home and Database Home Together, the GI Home Is Not Shared, the Database Home Is Shared on ACFS
From the Oracle database home, make sure to stop the Oracle RAC databases running on all nodes.
As the database home owner execute:
/bin/srvctl stop database -d
On the 1st node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.
On the 1st node, roll back the patch from the GI Home using the opatch auto command.
As root user, execute the following command:
opatch auto -oh -rollback
On the 1st node, remount ACFS file systems. See Section 2.9 for instructions.
On the 1st node, roll back the patch from the Database home using the opatch auto command. This operation will roll back the patch from the Database home across the cluster given that it is a shared ACFS home.
As root user, execute the following command:
opatch auto -oh -rollback
On the 1st node only, restart the Oracle database which you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d -n
On the 2nd (next) node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.
On the 2nd node, roll back the patch from the GI Home using the opatch auto command.
As root user, execute the following command:
opatch auto -oh -rollback
On the 2nd node, running the opatch auto command in Step 8 will restart the stack.
On the 2nd node, remount ACFS file systems. See Section 2.9 for instructions.
On the 2nd node only, restart the Oracle database which you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d -n
Repeat Steps 7 through 10 for all remaining nodes of the cluster.
Case 1.2.2B: Rolling Back the GI Home and the Database Home Together, the GI Home Is Not Shared, the Database Home Is Not Shared
For each node, perform the following steps:
On the local node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.
On the local node, roll back the patch from the GI home and the Database home.
As root user, execute the following command:
opatch auto -rollback
This operation will roll back the patch from both the CRS home and the Database home.
The opatch auto command restarts the stack and the database on the local node.
Repeat Steps 1 through 3 for all remaining nodes of the cluster.
Case 2: Rolling Back from the Oracle RAC Database Homes
You should use the following instructions if you prefer to roll back the patch from the Oracle RAC database homes alone.
Case 2.1: Non-Shared Oracle RAC Database Homes
Execute the following command on each node of the cluster.
As root user execute:
#opatch auto -oh -rollback
Case 2.2: Shared Oracle RAC Database Homes
Make sure to stop the databases running from the Oracle RAC database homes from which you would like to roll back the patch. Execute the following command to stop each database.
As Oracle database home owner execute:
/bin/srvctl stop database -d
As root user execute only on the local node.
#opatch auto -oh -rollback
Restart the Oracle databases that were previously stopped in Step 1. Execute the following command for each database.
As Oracle database home owner execute:
/bin/srvctl start database -d
Case 3: Rolling Back from the GI Home Alone
You should use the following instructions if you prefer to roll back the patch from the Oracle GI (Grid Infrastructure) home alone.
Case 3.1 Shared GI Home
Follow these instructions in this section if the GI home is shared.
Note:
An operation in a shared GI home requires shutdown of Oracle GI stack on all the remote nodes in the cluster. This also means you need to stop all Oracle RAC databases that depend on the GI stack, ASM for data file, or ACFS file system for database software.
Make sure to stop the Oracle databases running from the Oracle RAC database homes.
As Oracle database home owner:
/bin/srvctl stop database -d
Make sure the ACFS file systems are unmounted on all the nodes. Use instructions in Section 2.8 for unmounting ACFS file systems.
As root user, execute the following on all the remote nodes to stop the CRS stack:
/bin/crsctl stop crs
Execute the following command on the local node.
As root user execute:
#opatch auto -oh -rollback
Start the Oracle GI stack on all the remote nodes.
As root user execute:
#/bin/crsctl start crs
Mount ACFS file systems. See Section 2.9.
Restart the Oracle databases that you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start database -d
Case 3.2: Non-Shared GI Home
If the GI home is not shared, then use the following instructions to roll back the patch from the GI home.
Case 3.2.1: ACFS File System Is Not Configured
Follow these instructions in this section if the GI home is not shared and none of the Oracle database homes is shared.
Execute the following on each node of the cluster.
As root user execute:
#opatch auto -oh -rollback
Case 3.2.2: ACFS File System Is Configured
Repeat Steps 1 through 5 for each node in the cluster:
From the Oracle database home, stop the Oracle RAC database running on that node.
As the database home owner execute:
/bin/srvctl stop instance -d -n
Make sure the ACFS file systems are unmounted on that node. Use instructions in Section 2.8 for unmounting ACFS file systems.
Roll back the patch from the GI home on that node using the opatch auto command.
Execute the following command on that node.
As root user execute:
#opatch auto -oh -rollback
Remount ACFS file systems on that node. See Section 2.9 for instructions.
Restart the Oracle database on that node that you have previously stopped in Step 1.
As the database home owner execute:
/bin/srvctl start instance -d -n
Case 4: Rolling Back the Patch from Oracle Restart Home
You must keep the Oracle Restart stack up and running when you are rolling back the patch from the Oracle Restart home. Use the following instructions to roll back the patch from the Oracle Restart home.
As root user execute:
#opatch auto -oh -rollback
Case 5: Rolling Back the Patch from a Software Only GI Home Installation
Roll back the CRS patch.
As the GI home owner execute:
$/OPatch/opatch rollback -local -id 12419353 -oh
$/OPatch/opatch rollback -local -id 12419331 -oh
Case 6: Rolling Back the Patch from a Software Only Oracle RAC Home Installation
Run the pre script for DB component of the patch.
As the database home owner execute:
$/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome
Roll back the DB patch from the database home.
As the database home owner execute:
$/OPatch/opatch rollback -local -id 12419353 -oh
$/OPatch/opatch rollback -local -id 12419331 -oh
Run the post script for DB component of the patch.
As the database home owner execute:
$/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome
2.8 Unmounting ACFS File Systems
If the ACFS file system is not used for Oracle Database software and is registered in the ACFS registry, perform the following steps.
Execute the following command to find all ACFS file system mount points.
As the root user execute:
#/sbin/acfsutil registry
Unmount ACFS file systems found in Step 1.
As the root user execute:
# /bin/umount
Note:
On Solaris operating system use: /sbin/umount.
On AIX operating system, use: /etc/umount.
Verify that the ACFS file systems are unmounted. Execute the following command to verify.
As the root user execute:
#/sbin/acfsutil info fs
The previous command should return the following message if there are no ACFS file systems mounted.
"acfsutil info fs: ACFS-03036: no mounted ACFS file systems"
2.9 Mounting ACFS File Systems
If the ACFS file system is used by Oracle database software, then perform Steps 1 and 2.
Execute the following command to find the names of the CRS managed ACFS file system resource.
As root user execute:
# crsctl stat res -w "TYPE = ora.acfs.type"
Execute the following command to start and mount the CRS managed ACFS file system resource with the resource name found from Step 1.
As root user execute:
#crsctl start res -n
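For illustration only, with a hypothetical resource name and node name, the two commands might look like:
# crsctl stat res -w "TYPE = ora.acfs.type"
# crsctl start res ora.data.acfsvol1.acfs -n racnode1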
If the ACFS file system is not used for Oracle Database software and is registered in the ACFS registry, these file systems should be mounted automatically when the CRS stack comes up. Perform Steps 1 and 2 below if they are not already mounted.
Execute the following command to find all ACFS file system mount points.
As the root user execute:
#/sbin/acfsutil registry
Mount ACFS file systems found in Step 1.
As the root user execute:
# /bin/mount
Note:
On Solaris operating system use: /sbin/mount.
On AIX operating system, use: /etc/mount.
2.10 Patch Post-Deinstallation Instructions for a RAC Environment
Follow these steps only on the node for which the steps in Section 2.5.2, "Loading Modified SQL Files into the Database" were executed during the patch application:
Start all database instances running from the Oracle home. (For more information, see Oracle Database Administrator's Guide.)
For each database instance running out of the ORACLE_HOME, connect to the database using SQL*Plus as SYSDBA and run the rollback script as follows:
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle_PSU__ROLLBACK.sql
SQL> QUIT
In a RAC environment, the name of the rollback script will have the format catbundle_PSU__ROLLBACK.sql.
Check the log file for any errors. The log file is found in $ORACLE_BASE/cfgtoollogs/catbundle and is named catbundle_PSU__ROLLBACK_.log where TIMESTAMP is of the form YYYYMMMDD_HH_MM_SS. If there are errors, refer to Section 3, "Known Issues".
All other instances can be started and accessed as usual while you are executing the deinstallation steps.
3 Known Issues
For information about OPatch issues, see My Oracle Support Note 293369.1 OPatch documentation list.
For issues documented after the release of this PSU, see My Oracle Support Note 1272288.1 11.2.0.2.X Grid Infrastructure Bundle/PSU Known Issues.
Other known issues are as follows.
Issue 1
Known Issues for Opatch Auto
Bug 10339274 - 'OPATCH AUTO' FAILED TO APPLY 11202 PATCH ON EXADATA RAC CLUSTER WITH 11201 RAC
Bug 10339251 - MIN OPATCH ISSUE FOR DB HOME SETUP IN EXADATA RAC CLUSTER USING 'OPATCH AUTO'
These two issues are observed in environments where lower-version database homes coexist with 11202 clusterware and database homes, and opatch auto is used to apply the 11202 GI bundle.
Workaround:
Apply the 11202 GIBundle to the 11202 GI Home and Oracle RAC database home as follows:
#opatch auto -oh
#opatch auto -oh <11202 ORACLE_HOME1_PATH>,<11202 ORACLE_HOME2_PATH>
Issue 2
Bug 10226210 (11,11.2.0.2GIBTWO) 11202_GI_OPATCH_AUTO: OPATCH TAKES MORE STORAGE SPACE AFTER ROLLBACK SUCCEED
Workaround:
Execute the following command as Oracle home owner after a successful rollback to recover the storage used by the backup operation:
opatch util cleanup -silent
Issue 3
Bug 11799240 - 11202_GIBTWO:STEP 3 FAILED,CAN'T ACCESS THE NEXT OUI PAGE DURING SETTING UP GI
After applying GI PSU 11.2.0.2.3, the Grid Infrastructure Configuration Wizard fails with error INS-42017 when choosing the nodes of the cluster.
Workaround:
Apply the one-off patch for bug 10055663.
Issue 4
Bug 11856928 - 11202_GIBTWO_HPI:PATCH SUC CRS START FAIL FOR PERM DENY TO MKDIR $EXTERNAL_ORACL
This issue is seen only on the HPI platform when the opatch auto command is invoked from a directory where the root user does not have write permission.
Workaround:
Execute opatch auto from a directory where the root user has write permission.
Issue 5
The following ignorable errors may be encountered while running the catbundle.sql script or its rollback script:
ORA-29809: cannot drop an operator with dependent objects
ORA-29931: specified association does not exist
ORA-29830: operator does not exist
ORA-00942: table or view does not exist
ORA-00955: name is already used by an existing object
ORA-01430: column being added already exists in table
ORA-01432: public synonym to be dropped does not exist
ORA-01434: private synonym to be dropped does not exist
ORA-01435: user does not exist
ORA-01917: user or role 'XDB' does not exist
ORA-01920: user name '' conflicts with another user or role name
ORA-01921: role name '' conflicts with another user or role name
ORA-01952: system privileges not granted to 'WKSYS'
ORA-02303: cannot drop or replace a type with type or table dependents
ORA-02443: Cannot drop constraint - nonexistent constraint
ORA-04043: object does not exist
ORA-29832: cannot drop or replace an indextype with dependent indexes
ORA-29844: duplicate operator name specified
ORA-14452: attempt to create, alter or drop an index on temporary table already in use
ORA-06512: at line . If this error follows any of the above errors, it can be safely ignored.
ORA-01927: cannot REVOKE privileges you did not grant
Issue 6
Bug 12619571 - 11202_GIBTHREE: PATCH FAILED IN MULTI-BYTES LANG ENV ISSUE SHOULD BE DOCUMENTED
This issue is seen when trying to run opatch auto to apply the GI PSU patch in the Japanese environment. The cause of the problem is that opatch auto currently only supports the English language environment.
Workaround:
Always keep the environment as the English language environment when running opatch auto to apply the GI PSU.
4 References
The following documents are references for this patch.
Note 293369.1 OPatch documentation list
Note 360870.1 Impact of Java Security Vulnerabilities on Oracle Products
Note 468959.1 Enterprise Manager Grid Control Known Issues
5 Bugs Fixed by This Patch
This patch includes the following bug fixes:
Section 5.1, "CPU Molecules"
Section 5.2, "Bugs Fixed in GI PSU 11.2.0.2.3"
Section 5.3, "Bugs Fixed in GI PSU 11.2.0.2.2"
5.1 CPU Molecules
CPU molecules in GI PSU 11.2.0.2.3:
GI PSU 11.2.0.2.3 contains the following new CPU 11.2.0.2 molecules:
12586486 - DB-11.2.0.2-MOLECULE-004-CPUJUL2011
12586487 - DB-11.2.0.2-MOLECULE-005-CPUJUL2011
12586488 - DB-11.2.0.2-MOLECULE-006-CPUJUL2011
12586489 - DB-11.2.0.2-MOLECULE-007-CPUJUL2011
12586490 - DB-11.2.0.2-MOLECULE-008-CPUJUL2011
12586491 - DB-11.2.0.2-MOLECULE-009-CPUJUL2011
12586492 - DB-11.2.0.2-MOLECULE-010-CPUJUL2011
12586493 - DB-11.2.0.2-MOLECULE-011-CPUJUL2011
12586494 - DB-11.2.0.2-MOLECULE-012-CPUJUL2011
12586495 - DB-11.2.0.2-MOLECULE-013-CPUJUL2011
12586496 - DB-11.2.0.2-MOLECULE-014-CPUJUL2011
5.2 Bugs Fixed in GI PSU 11.2.0.2.3
GI PSU 11.2.0.2.3 contains all fixes previously released in GI PSU 11.2.0.2.2 (see Section 5.3 for a list of these bug fixes) and the following new fixes:
Note:
ACFS is not supported on HP and therefore the bug fixes for ACFS do not apply to the HP GI PSU 3.
Automatic Storage Management
6892311 - PROVIDE REASON FOR MOUNT FORCE FAILURE WITHOUT REQUIRING PST DUMP
9078442 - ORA-19762 FROM ASMCMD CP COPYING FILE WITH DIFFERENT BYTE ORDER FROM FILESYSTEM
9572787 - LONG WAITS FOR ENQ: AM CONTENTION FOLLOWING CELL CRASH CAUSED CLUSTERWIDE OUTAGE
9953542 - TB_SOL_SP: HIT 7445 [KFKLCLOSE()+20] ERROR WHEN DG OFFLINE
10040921 - HUNG DATABASE WORKLOAD AND BACKGROUNDS AFTER INDUCING WRITE ERRORS ON AVD VOLUME
10155605 - 11201-OCE:DISABLE FC IN ONE NODE, ASM DISKGOUP FORCE DISMOUNTED IN OTHER NODES.
10278372 - TB:X:CONSISTENTLY PRINT "WARNING: ELAPSED TIME DID NOT ADVANCE" IN ASM ALERT LOG
10310299 - TB:X:LOST WRITES DUE TO RESYNC MISSING EXTENTS WHEN DISK GO OFFLINE DURING REBAL
10324294 - DBMV2: DBFS INSTANCE WAITS MUCH FOR "ASM METADATA FILE OPERATION"
10356782 - DBMV2+: ASM INSTANCE CRASH WITH ORA-600 : [KFCGET0_04], [25],
10367188 - TB:X:REBOOT 2 CELL NODES,ASM FOREGROUND PROCESS HIT ORA-600[KFNSMASTERWAIT01]
10621169 - FORCE DISMOUNT IN ASM RECOVERY MAY DROP REDO'S AND CAUSE METADATA CORRUPTIONS
11065646 - ASM MAY PICK INCORRECT PST WHEN MULTIPLE COPIES EXTANT
11664719 - 11203_ASM_X64:ARB0 STUCK IN DG REBALANCE
11695285 - ORA-15081 I/O WRITE ERROR OCCURED AFTER CELL NODE FAILURE TEST
11707302 - FOUND CORRUPTED ASM FILES AFTER CELL NODES FAILURE TESTING.
11707699 - DATABASE CANNOT MOUNT DUE TO ORA-00214: CONTROL FILE INCONSISTENCY
11800170 - ASM IN KSV WAIT AFTER APPLICATION OF 11.2.0.2 GRID PSU
11800854 - BUG TO TRACK LRG 5135625
12620422 - FAILED TO ONLINE DISKS BECAUSE OF A POSSIBLE RACING RESYNC
Buffer Cache Management
11674485 - LOST DISK WRITE INCORRECTLY SIGNALLED IN STANDBY DATABASE WHEN APPLYING REDO
Generic
9748749 - ORA-7445 [KOXSS2GPAGE]
10082277 - EXCESSIVE ALLOCATION IN PCUR OF "KKSCSADDCHILDNO" CAUSES ORA-4031 ERRORS
10126094 - ORA-600 [KGLLOCKOWNERSLISTDELETE] OR [KGLLOCKOWNERSLISTAPPEND-OVF]
10142788 - APPS 11I PL/SQL NCOMP:ORA-04030: OUT OF PROCESS MEMORY
10258337 - UNUSABLE INDEX SEGMENT NOT REMOVED FOR "ALTER TABLE MOVE"
10378005 - EXPDP RAISES ORA-00600[KOLRARFC: INVALID LOB TYPE], EXP IS SUCCESSFUL
10636231 - HIGH VERSION COUNT FOR INSERT STATEMENTS WITH REASON INST_DRTLD_MISMATCH
12431716 - UNEXPECTED CHANGE IN MUTEX WAIT BEHAVIOUR IN 11.2.0.2.2 PSU (HIGHER CPU POSSIBLE
High Availability
9869401 - REDO TRANSPORT COMPRESSION (RTC) MESSAGES APPEARING IN ALERT LOG
10157249 - CATALOG UPGRADE TO 11.2.0.2 FAILS WITH ORA-1
10193846 - RMAN DUPLICATE FAILS WITH ORA-19755 WHEN BCT FILE OF PRIMARY IS NOT ACCESSIBLE
10648873 - SR11.2.0.3TXN_REGRESS - TRC - KCRFW_REDO_WRITE
11664046 - STBH: WRONG SEQUENCE NUMBER GENERATED AFTER DB SWITCHOVER FROM STBY TO PRIMARY
Oracle Portable ClusterWare
8906163 - PE: NETWORK AND VIP RESOURCES FAIL TO START IN SOLARIS CONTAINERS
9593552 - GIPCCONNECT() IS NOT ASYNC 11.2.0.2GIBTWO
9897335 - TB-ASM: UNNECCESSARY OCR OPERATION LOG MESSAGES IN ASM ALERT LOG WITH ASM OCR
9902536 - LNX64-11202-MESSAGE: EXCESSIVE GNS LOGGING IN CRS ALERT FILE WHEN SELFCHECK FAIL
9916145 - LX64: INTERNAL ERROR IN CRSD.LOG, MISROUTED REQUEST, ASSERT IN CLSM2M.CPP
9916435 - ROOTCRS.PL FAILS TO CREATE NODEAPPS DURING ADD NODE OPERATION
9939306 - SERVICES NOT COMING UP AFTER SWITCHOVER USING SRVCTL START DATABASE
10012319 - ORA-600 [KFDVF_CSS], [19], [542] ON STARTUP OF ASM DURING ADDNODE
10019726 - MEMORY LEAK 1.2MB/HR IN CRSD.BIN ON NON-N NODE
10056713 - LNX64-11202-CSS: SPLIT BRAIN WHEN START CRS STACK IN PARALLEL WITH PRIV NIC DOWN
10103954 - INTERMITTENT "CANNOT COMMUNICATE WITH CRSD DAEMON" ERRORS
10104377 - GIPC ENSURE INITIAL MESSAGE IS NOT LOST DURING ESTABLISH PHASE
10115514 - SOL-X64-11202: CLIENT REGISTER IN GLOBAL GROUP MASTER#DISKMON#GROUP#MX NOT EXIT
10190153 - HPI-SG-11202 ORA.CTSSD AND ORA.CRSD GOES OFFLINE AFTER KILL GIPC ON CRS MASTER
10231906 - 11202-OCE-SYMANTEC:DOWN ONE OF PRIVAE LINKS ON NODE 3,OCSSD CRASHED ON NODE 3
10233811 - AFTER PATCHING GRID HOME, UNABLE TO START RESOURCES DBFS AND GOLDEN
10253630 - TB:X:HANG DETECTED,"WAITING FOR INSTANCE RECOVERY OF GROUP 2" FOR 45 MINUTES
10272615 - TB:X:SHUTDOWN SERVICE CELLD ON 2 CELL NODES,CSSD ABORT IN CLSSNMRCFGMGRTHREAD
10280665 - TB:X:STOP CELLD ON 2 CELL NODES,CSSD ABORT IN CLSSNMVVERIFYPENDINGCONFIGVFS
10299006 - AFTER 11.2.0.2 UPGRADE, ORAAGENT.BIN CONNECTS TO DATABASE WITH TOO MANY SESSIONS
10322157 - 11202_GIBONE: PERM OF FILES UNDER $CH/CRS/SBS CHANGED AFTER PATCHED
10324594 - STATIC ENDPOINT IN THE LEASE BLOCKS OVERWRITTEN DURING UPGRADE
10331452 - SOL-11202-UD: 10205->11202 NETWORK RES USR_ORA_IF VALUE MISSED AFTER UPGRADE
10357258 - SOL-11202-UD: 10205->11202 [IPMP] HUNDREDS OF DUP IP AFTER INTRA-NODE FAILOVER
10361177 - LNX64-11203-GNS: MANY GNS SELF CHECK FAILURE ALERT MESSAGES
10385838 - TB:X:CSS CORE DUMP AT GIPCHAINTERNALSEND
10397652 - AIX-11202-GIPC:DISABLE SWITCH PORT FOR ONE PRIVATE NIC,HAIP DID NOT FAILOVER
10398810 - DOUBLE FREE IN SETUPWORK DUE TO TIMING
10419987 - PEER LISTENER IS ACCESSING A GROCK THAT IS ALREADY DELETED
10621175 - TB_RAC_X64:X: CLSSSCEXIT: CSSD SIGNAL 11 IN THREAD GMDEATHCHECK
10622973 - LOSS OF LEGACY FEATURES IN 11.2
10631693 - TB:X:CLSSNMHANDLEVFDISCOVERACK: NO PENDINGCONFIGURATION TO COMPLETE. CSS ABORT
10637483 - TB:X:REBOOT ONE CELL NODE, CSS ABORT AT CLSSNMVDDISCTHREAD
10637741 - HARD STOP DEPENDENCY CAN CAUSE WRONG FAIL-OVER ORDER
10638381 - 11202-OCE-SYMANTEC: HAIP FAIL TO START WHEN PRIVATE IP IS PLUMBED ON VIRTUAL NIC
11069614 - RDBMS INSTANCE CRASH DUE TO SLOW REAP OF GIPC MESSAGES ON CMT SYSTEMS
11071429 - PORT 11GR2 CRS TO EL6
11654726 - SCAN LISTENER STARTUP FAILS IF /VAR/OPT/ORACLE/LISTENER.ORA EXISTS.
11663339 - DBMV2:SHARED PROCESS SPINNING CAUSES DELAY IN PRIMARY MEMBER CLEANUP
11682409 - RE-USING OCI MEMORY ACROSS CONNECTIONS CAUSES A MEMORY CORRUPTION
11698552 - SRVCTL REPORT WRONG STATUS FOR DATABASE INSTANCE.
11741224 - INCORRECT ACTIVE VERSION CHECK WHILE ENABLING THE BATCH FUNCTIONALITY
11744313 - LNX64-11203-RACG: UNEXPECTED CRSD RESTART DURING PARALLEL STACK START
11775080 - ORA-29701/29702 OCCURS WHEN WORKLOAD TEST RUNNING FOR A LONG TIME AND IS RESTART
11781515 - EVMD/CRSD FAIL TO START AFTER REBOOT, EVEN AFTER CRSCTL START CLUSTERWARE
11807012 - LNX64-11203-RACG: DB SERVICE RUNS INTO "UNKNOWN" STATE AFTER STACK START
11812615 - LNX64-11203-DIT: INCONSISTENT PERMISSION BEFORE/AFTER ROOTCRS.PL -UNLOCK/-PATCH
11828633 - DATABASE SERVICE DID NOT FAIL OVER AND COULD NOT BE STARTED AFTER NODE FAILURE
11840629 - KERNEL CRASH DUMP AND REBOOT FAIL INSIDE SOLARIS CONTAINER
11866171 - ENABLE CRASHDUMP WHEN REBOOTING THE MACHINE (LINUX)
11877079 - HUNDREDS OF ORAAGENT.BIN@HOSTNAME SESSSIONS IN 11.2.0.2 DATABASE
11899801 - 11202_GIBTWO_HPI:AFTER KILL ASM PMON, POLICY AND ADMIN DB RUNNING ON SAME SERVER
11904778 - LNX64-OEL6-11202: CRS STACK CAN'T BE START AFTER RESTART
11933693 - 11.1.0.7 DATABASE INSTANCE TERMINATED BY 11.2.0.2 CRS AGENT
11936945 - CVU NOT RECOGNIZING THE OEL6 ON LINUX
12332919 - ORAAGENT KEEPS EXITING
12340501 - SRVCTL SHOWS INSTANCE AS DOWN AFTER RELOCATION
12340700 - EVMD CONF FILES CAN HAVE WRONG PERMISSIONS AFTER INSTALL
12349848 - LNX64-11203: VIPS FELL OFFLINE WHEN BRING DOWN 3/4 PUBLIC NICS ONE BY ONE
12378938 - THE LISTENER STOPS WHEN THE ORA.NET1.NETWORK'S STATE IS CHANGED TO UNKNOWN
12380213 - 11203_110415:ERROR EXCEPTION WHILE INSTALLATION 11202 DB WITH DATAFILES ON 11203
12399977 - TYPO IN SUB PERFORM_START_SERVICE RETURNS ZERO (SUCCESS) EVEN WHEN FAILED
12677816 - SCAN LISTENER FAILD TO STARTUP IF /VAR/OPT/ORACLE/LISTENER.ORA EXIST
Oracle Space Management
8223165 - ORA-00600 [KTSXTFFS2] AFTER DATABASE STARTUP
9443361 - WRONG RESULTS (ROWDATA) FOR SELECT IN SERIAL FROM COMPRESSED TABLE
10061015 - LNX64-11202:HIT MANY ORA-600 ARGUMENTS: [KTFBHGET:CLSVIOL_KCBGCUR_9] DURING DBCA
10132870 - INDEX BLOCK CORRUPTION - ORA-600 [KCBZPBUF_2], [6401] ON RECOVER
10324526 - ORA-600 [KDDUMMY_BLKCHK] [6106] WHEN UPDATE SUBPARTITION OF TABLE IN TTS
Oracle Transaction Management
10053725 - TS11.2.0.3V3 - TRC - K2GUPDATEGLOBALPREPARECOUNT
10233732 - ORA-600 [K2GUGPC: PTCNT >= TCNT] OCCURS IN A DATABASE LINK TRANSACTION
Oracle Universal Storage Management
9867867 - SUSE10-LNX64-11202:NODE REBOOT HANG WHILE ORACLE_HOME LOCATED ON ACFS
9936659 - LNX64-11202-CRS: ORACLE HOME PUT ON ACFS, DB INST FAILS TO RESTART AFTER CRASH
9942881 - TIGHTEN UP KILL SEMANTICS FOR 'CLEAN' ACTION.
10113899 - AIX KSPRINTTOBUFFER TIMESTAMPS NEEDS TIME SINCE BOOT AND WALL_CLOCK TIMES
10266447 - ROOTUPGRADE.SH FAILS: 'FATAL: MODULE ORACLEOKS NOT FOUND' , ACFS-9121, ACFS-9310
11789566 - ACFS RECOVERY PHASE 2
11804097 - GBM LOCK TAKEN WHEN DETERMINING WHETHER THE FILE SYSTEM IS MOUNTED AND ONLINE
11846686 - ACFSROOT FAILS ON ORACLELINUX-RELEASE-5-6.0.1 RUNNUNG A 2.6.18 KERNEL
12318560 - ALLOW IOS TO RESTART WHEN WRITE ERROR MESG RETURNS SUCCESS
12326246 - ASM TO RETURN DIFF VALUES WHEN OFFLINE MESG UNSUCCESSFUL
12378675 - AIX-11203-HA-ACFS: HIT INVALID ASM BLOCK HEADER WHEN CONFIGURE DG USING AIX LVS
12398567 - ACFS FILE SYSTEM NOT ACCESSIBLE
12545060 - CHOWN OR RM CMD TO LOST+FOUND DIR IN ACFS FAILS ON LINUX
Oracle Utilities
9735282 - GETTING ORA-31693, ORA-2354, ORA-1426 WHEN IMPORTING PARTITIONED TABLE
Oracle Virtual Operating System Services
10317487 - RMAN CONTROLFILE BACKUP FAILS WITH ODM ERROR ORA-17500 AND ORA-245
11651810 - STBH: HIGH HARD PARSING DUE TO FILEOPENBLOCK EATING UP SHARED POOL MEMORY
XML Database
10368698 - PERF ISSUE WITH UPDATE RESOURCE_VIEW DURING AND AFTER UPGRADING TO 11.2.0.2
5.3 Bugs Fixed in GI PSU 11.2.0.2.2
This section describes bugs fixed in the GI PSU 11.2.0.2.2 release.
ACFS
10015603 - KERNEL PANIC IN OKS DRIVER WHEN SHUTDOWING CRS STACK
10178670 - ACFS VOLUMES ARE NOT MOUNTING ONCE RESTARTED THE SERVER
10019796 - FAIL TO GET ENCRYPTION STATUS OF FILES UNTIL DOING ENCR OP FIRST
10029794 - THE DIR CAN'T READ EVEN IF THE DIR IS NOT IN ANY REALM
10056808 - MOUNT ACFS FS FAILED WHEN FS IS FULL
10061534 - DB INSTANCE TERMINATED DUE TO ORA-445 WHEN START INSTANCE ON ALL NODES
10069698 - THE EXISTING FILE COULD CORRUPT IF INPUT INCORRECT PKCS PASSOWRD
10070563 - MULTIPLE WRITES TO THE SAME BLOCK WITH REPLICATION ON CAN GO OUT OF ORDER
10087118 - UNMOUNT PANICS IF ANOTHER USER IS SITTING IN A SNAPSHOT ROOT DIRECTORY
10216878 - REPLI-RELATED RESOURCE FAILED TO FAILOVER WHEN DG DISMOUNTED
10228079 - MOUTING DG ORA-15196 [KFC.C:25316] [ENDIAN_KFBH] AFTER NODE REBOOT
10241696 - FAILED TO MOUNT ACFS FS TO DIRECTORY CREATED ON ANOTHER ACFS FS
10252497 - ADVM/ACFS FAILS TO INSTALL ON SLES10
9861790 - LX64: ADVM DRIVERS HANGING OS DURING ACFS START ATTEMPTS
9906432 - KERNEL PANIC WHILE DISMOUNT ACFS DG FORCE
9975343 - FAIL TO PREPARE SECURITY IF SET ENCRYPTION FIRST ON THE OTHER NODE
10283549 - FIX AIX PANIC AND REMOVE -DAIX_PERF
10283596 - ACFS:KERNEL PANIC DURING USM LABEL PATCHING - ON AIX
10326548 - WRITE-PROTETED ACFS FILES SHOULD NOT BE DELETED BY NON-ROOT USER
ADVM
10045316 - RAC DB INSTALL ON SHARED ACFS HANGS AT LINKING PHASE
10283167 - ASM INSTANCE CANNOT STARTUP DUE TO EXISTENCE OF VMBX PROCESS
10268642 - NODE PANIC FOR BAD TRAP IN "ORACLEADVM" FOR NULL POINTER
10150020 - LINUX HANGS IN ADVM MIRROR RECOVERY, AFTER ASM EVICTIONS
Automatic Storage Management
9788588 - STALENESS REGISTRY MAY GET CLEARED PREMATURELY
10022980 - DISK NOT EXPELLED WHEN COMPACT DISABLED
10040531 - ORA-600 [KFRHTADD01] TRYING TO MOUNT RECO DISKGROUP
10209232 - STBH: DB STUCK WITH A STALE EXTENT MAP AND RESULTS IN DATA CORRUPTIONS
10073683 - ORA-600 [KFCBINITSLOT40] ON ASM ON DBMV2 WITH BP5
9715581 - DBMV2: EXADATA AUTO MANAGEMENT FAILED TO BRING UP DISKS ONLINE
10019218 - ASM DROPPED DISKS BEFORE DISK_REPAIR_TIME EXPIRED
10084145 - DBMV2: ORA-600 [1427] MOUNTING DISKGROUP AFTER ALL CELLS RESTARTED
11067567 - KEPT GENERATING "ELAPSED TIME DID NOT ADVANCE " IN ASM ALERT LOG
10356513 - DISK OFFLINED WITH NON ZERO TIMEOUT EXPELLED IMMEDIATELY
10332589 - TB:X:MOUNT NORMAL REDUNDANCY DG, FAILED WITH ORA-00600:[KFCINITRQ20]
10329146 - MARKING DIFFERENT SR BITS FROM MULTIPLE DBWS CAN CAUSE A LOST WRITE
10299224 - TB:X:PIVOTING AN EXTENT ON AN OFFLINE DISK CAN CAUSE STALE XMAPS IN RDBMS
10245086 - ORA-01210 DURING CREATE TABLESPACE
10230571 - TB:X:REBOOT ONE CELL NODE, RBAL HIT ORA-600[17183]
10228151 - ASM DISKGROUPS NOT GETTING MOUNTED
10227288 - DG FORCIBLY DISMOUNTED AFTER ONE FG LOST DUE TO "COULD NOT READ PST FOR GRP 4"
10222719 - ASM INSTANCE HANGS WITH RBAL PROCESS WAITS ON "NO FREE BUFFER"
10102506 - DISK RESYNC TAKES A LONG TIME EVEN WITH NO STALE EXTENTS
10094201 - DISK OFFLINE IS SLOW
10190642 - ORA-00600: [1433] FOLLOWED BY INSTANCE CRASH WITH ASM ON EXADATA
Buffer Cache Management
9651350 - ora-00308 and ora-27037 when ora-8103 without event 10736 been set
10110863 - trace files is still generated after applying patch:9651350
10205230 - tb_x64: hit ora-00600: [kclwcrs_6]
10332111 - sql running long in active standby
CRS Group
CLEANUP
9949676 - GNSD.BIN CORE DUMP AFTER KILL ASM PMON ON ALL NODES AT SAME TIME
9975837 - GNS INCORRECTLY PROCESSES IPV6 LOOKUP REQUESTS
10007185 - GNS DUMPS CORE IN CLSKGOPANIC AT CLSKPDVA 717
10028343 - GNS CAN NOT BE RELOCATED AFTER PUBLIC RESTARTED
CRS
9876201 - OHASD AGENT CORE DUMP AT EONSHTTP.C:162
10011084 - 11202 STEP3 MODIFY BINARY AFTER INSTALLATION CANNOT EXCUTE SUCCESSFULLY
10028235 - 'CLSNVIPAGENT.CPP', LINE 1522: ERROR: FORMAL ARGUMENT TYPE OF ...
10045436 - 'ORA.LISTENER.LSNR' FAILED TO BE FENCED OFF DURING CRSD CLEANUP
10062301 - VALUE FOR FIELD 'CLUSTER_NAME' IS MISSING IN CRSCONFIG_PARAMS
10110969 - PORTABILITY ISSUES IN FUNCTION TOLOWER_HOST
10175855 - FAILED TO UGPRADE 11.2.0.1 + ARU 12900951 -> 11.2.0.2
9891341 - CRSD CORE DUMP IN PROATH_MASTER_EXIT_HELPER AT PROATH.C:1834
11655840 - RAC1 DB' STATE_DETAILS IS WRONG AFTER KILL GIPCD
10634513 - OHASD DUMPS CORE WHEN PLUG IN UNPLUGGED PRIVATE NETWORK NIC
10236074 - ASM INSTANCES CRASH SEVERAL TIMES DURING PARALLEL CRS STARTUP
10052529 - DB INST OFFLINE AFTER STOP/START CRS STACK ON ALL NODES IN PARALLEL
10065216 - VIRTUAL MEMORY USAGE OF ORAROOTAGENT IS BIG(1321MB) AND NOT DECREASING
10168006 - ORAAGENT PROCESS MEMORY GROWTH PERIODICALLY.
CSS
9907089 - CSS CORE DUMP DURING EXADATA ROLLING UPGRADE
9926027 - NODE REBOOTED AFTER CRS CLEAN-UP SUCCEEDED 11202 GI + 10205 RAC DB
10014392 - CRSCTL DELETE NODE FAILS WITH CRS-4662 & CRS-4000
10015460 - REMOVAL OF WRONG INCARNATION OF A NODE DUE TO MANUAL SHUTDOWN STATE
10040109 - PMON KILL LEAD TO OS REBOOT
10048027 - ASM UPGRADE FAILS
10052721 - 11201- 11202 NON-ROLLING,CRSCTL.BIN CORE AT CLSSNSQANUM, SIGNAL 11
10083789 - A NODE DOESNT INITIATE A RECONFIG DUE TO INCORRECT RECONFIG STATE
9944978 - FALSE CSS EVICTION AFTER PRIVATE NIC RESUME
9978195 - STOP DB ACTION TIMED OUT AND AGENT EXITS DUE TO FAILURE TO STOP EVENT BRIDGE
10248739 - AFTER APPLY THE PATCH, THE NODE EVICTED DURING START CRS STACK
CVU
9679401 - OUI PREREQ CHECKS FAILED FOR WRONG OWNSHIP OF RESOLV.CONF_`HOST`
9959110 - GNS INTEGRITY PREREQUISITE FAILED WITH PRVF-5213
9979706 - COMP OCR CHECK FAILS TO VERIFY SIZE OF OCR LOCATION
10029900 - CVU PRE NODEADD CHECK VD ERROR
10033106 - ADDNODE.SH SHOULD INDICATE WHAT HAPPENS WHEN ERROR OCCURRING
10075643 - UNABLE TO CONTINUE CONFIG.SH FOR CRS UPGRAD
10083009 - GIPCD FAILS TO RETRIEVE INFORMATION FROM PEERS DUE TO INVALID ENDPOINT
GIPC
9812956 - STATUS OF CRSD AND EVMD GOES INTERMEDIATE FOR EVER WHEN KILL GIPC
9915329 - ORA-600 [603] IN DB AND ORA-603 IN ASM AFTER DOWN INTER-CONNECT NIC
9944948 - START RESOUCE HAIP FAILED WHEN RUN ROOT.SH
9971646 - ORAROOTAGENT CORE DUMPED AT NETWORKHAMAINTHREAD::READROUTEDATA
9974223 - GRID INFRASTRUCTURE NEEDS MULTICAST COMMUNICATION ON 230.0.1.0 ADDRESSES WORKING
10053985 - ERROR IN NETWORK ADDRESS ON SOLARIS 11
10057680 - OHASD ORAROOTAGENT.BIN SPIN CPU AFTER SIMULATE ASM DISK ERROR
10078086 - ROOTUPGRADE.SH FAIL FOR 'CRSCTL STARTUPGRADE' FAIL,10205-> 11202
10260251 - GRID INSTALLATION FAILS TO START HAIP DUE TO CHANGE IN NETWORK INTERFACE NAME
10111010 - CRSD HANGS FOR THE HANAME OF PEER CRSD
11782423 - OHASD.BIN TAKES CPU ABOUT 95% ~ 100%
11077756 - STARTUP FAILURE OF HAIP CAUSES INSTALLATION FAILURE
10375649 - DISABLE HAIP ON PRIMECLUSTER
10284828 - INTERFACE UPDATES GET LOST DURING BOUNCE OF CRSD PROCESS
10284693 - AIX EPIPE FAILURE
10233159 - NEED 20 MINS TO STARTUP CRS WHEN 1/2 GIPC NICS DOWN
10128191 - LRGSRG9 AND LRGSRGE FAILURE
GNS
9864003 - NODE REBOOT DUE TO 'ORA.GNS' FAILED TO BE FENCED OFF DURING CRSD
GPNP
9336825 - GPNPD FLUSH PROFILE PUSH ERROR MESSAGES IN CRS ALERT LOG
10314123 - GPNPD MAY NOT UPDATE PROFILE TO LATEST ON START
10105195 - PROC-32 ACCESSING OCR; CRS DOES NOT COME UP ON NODE
10205290 - DBCA FAILED WITH ERROR ORA-00132
10376847 - [ORA.CRF] [START] ERROR = ERROR 9 ENCOUNTERED WHEN CONNECTING TO MOND
IPD-OS
9812970 - IPD DO NOT MARK TYPE OF DISKS USED FOR VOTING DISK CORRECTLY
10057296 - IPD SPLIT BRAIN AFTER CHANGE BDB LOCATION
10069541 - IPD SPLIT BRAIN AFTER STOPPING ORA.CRF ON MASTER NODE
10071992 - UNREASONABLE VALUES FOR DISK STATISTICS
10072474 - A NODE IS NOT MONITORED AFTER STOP AND START THE ORA.CRF ON IT
10073075 - INVALID DATA RECEIVED FROM THE CLUSTER LOGGER SERVI
10107380 - IPD NOT STARTED DUE TO SCRFOSM_GET_IDS FAILED
OCR
9978765 - ROOTUPGRADE.SH HANG AND CRSD CRASHED ON OTHER NODES,10205-> 11202
10016083 - 'OCRCONFIG -ADD' NEEDS HELPFUL MESSAGE FOR ERROR ORA-15221
OPSM
9918485 - EMCONFIG FAIL WITH NULLPOINTEREXCEPTION AT RACTRANSFERCORE.JAVA
10018215 - RACONE DOES NOT SHUTDOWN INSTANCE DURING RELOCATION
10042143 - ORECORE11 LWSFDSEV CAUSED SEGV IN SRVM NATIVE METHODS
OTHERS
9963327 - CHMOD.PL GETS CALLED INSTEAD OF CHMOD.EXE
10008467 - FAILS DUE TO WRONG VERSION OF PERL USED:
10015210 - OCTSSD LEAK MEMORY 1.7M HR ON PE MASTER DURING 23 HOURS RUNNI
10027079 - CRS_SHUTDOWN_SYNCH EVENT NOT SENT IN SIHA
10028637 - SCLS.C COMPILE ERRORS ON AIX UNDECLARED IDENTIFIERS
10029119 - 11201-11202 CRS UPGRADE OUI ASKS TO RUN ROOTUPGRADE.SH
10036834 - PATCHES NOT FOUND ERROR WHILE UPGRADING GRID FROM 11201 TO 11202
10038791 - HAS SRG SRV GETTING MANY DIFS FOR AIX ON LABEL 100810 AND LATER
10040647 - LNX64-112022-UD; AQ AND RLB DO NOT WORK AFTER UPGRADING FROM 11201
10044622 - EVMD FAILED TO START AFTER KILL OHASD.BIN
10048487 - DIAGCOLLECTION CANNOT RETRIEVE IPD REPORTS
10073372 - DEINSTALL FAILED TO DELETE CRS_HOME ON REMOTE NODE IF OCR VD ON NFS
10089120 - WRONG PROMPT MESSAGE BY DEINSTALL COMMAND WHILE DELETING CRS HOME
10124517 - CRS STACK DOES NOT START AUTOMATICALLY AFTER NODE REBOOT
10157622 - 11.2.0.2 GI BUNDLE 1 HAS-CRS TRACKING BUG
RACG
10036193 - STANDBY NIC DOESN'T WORK IF DOWN PUBLIC NIC
10146768 - NETWORK RESOURCE FAILS TO START WITH IPMP ON SOLARIS 11
USM Miscellaneous
10146744 - ORA.REGISTRY.ACFS BECOME UNKOWN AND ACFS FS DISMOUNT
10283058 - RESOURCES ACFS NEEDS AN OPTION TO DISALLOW THE MOUNTING OF FILE SYSTEMS ON RESOURCE START
10193581 - ROOT.SH CRS-2674: START OF 'ORA.REGISTRY.ACFS' FAIL
10244210 - FAIL TO INSTALL ADVM/ACFS ON SOLARIS CONTAINER
10311856 - APPLY ASSERTION FAILURE:PBOARDENTRY>USRGBOARDRECENTRY_RECORD
Generic
9591812 - incorrect wait events in 11.2 ("cursor: mutex s" instead of "cursor: mutex x")
9905049 - ebr: ora-00600: internal error code, arguments: [kqlhdlod-bad-base-objn]
10052141 - exadata database crash with ora-7445 [_wordcopy_bwd_dest_aligned] and ora-600 [2
10052956 - ora-7445 [kjtdq()+176]
10157402 - lob segment has null data after long to lob conversion in parallel mode
10187168 - obsolete parent cursors if version count exceeds a threshold
10217802 - alter user rename raises ora-4030
10229719 - qrmp:12.2:ora-07445 while performing complete database import on solaris sparc
10264680 - incorrect version_number reported after patch for 10187168 applied
10411618 - add different wait schemes for mutex waits
11069199 - ora-600 [kksobsoletecursor:invalid stub] quering pq when pq is disabled
11818335 - additional changes when wait schemes for mutex waits is disabled
High Availability
10018789 - dbmv2-bigbh:spin in kgllock caused db hung and high library cache lock
10129643 - appsst gsi11g m9000: ksim generic wait event
10170431 - ctwr consuming lots of cpu cycles
Oracle Space Management
6523037 - et11.1dl: ora-600 [kddummy_blkchk] [6110] on update
9724970 - pdml fails with ora-600 [4511]. ora-600 [kdblkcheckerror] by block check
10218814 - dbmv2: ora-00600:[3020] data block corruption on standby
10219576 - ora-600 [ktsl_allocate_disp-fragment]
Oracle Transaction Management
10358019 - invalid results from flashback_transaction_query after applying patch:10322043
Oracle Utilities
10373381 - ora-600 [kkpo_rcinfo_defstg:objnotfound] after rerunning catupgrd.sql
Oracle Virtual Operating System Services
10127360 - dg4msql size increasing to 1.5gb after procedure executed 250 times
Server Manageability
11699057 - ora-00001: unique constraint (sys.wri$_sqlset_plans_tocap_pk) violated
6 Appendix A: Manual Steps for Apply/Rollback Patch
Steps for Applying the Patch
Note:
You must stop the EM agent processes running from the database home prior to patching the Oracle RAC database or GI home. Execute the following command on the node to be patched.
As the Oracle RAC database home owner execute:
% <ORACLE_HOME>/bin/emctl stop dbconsole
Execute the following steps on each node of the cluster in a non-shared CRS and DB home environment to apply the patch.
Stop the CRS managed resources running from DB homes.
If this is a GI Home environment, as the database home owner execute:
$ <ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file> -n <node name>
If this is an Oracle Restart Home environment, as the database home owner execute:
$ <ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file>
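For example, if the database home is /u01/app/oracle/product/11.2.0/dbhome_1 and you are on a node called racnode1 (the home, status file and node name here are just illustrations, use your own values):
$ /u01/app/oracle/product/11.2.0/dbhome_1/bin/srvctl stop home -o /u01/app/oracle/product/11.2.0/dbhome_1 -s /tmp/dbhome1_state.txt -n racnode1
The status file is written by srvctl stop home and read again by the matching srvctl start home command later on, so keep it somewhere that will still be there at the end of the patching session.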
Note:
You need to make sure that the Oracle ACFS file systems are unmounted (see Section 2.8) and all other Oracle processes are shut down before you proceed.
Run the pre root script.
If this is a GI Home, as the root user execute:
# <GI_HOME>/crs/install/rootcrs.pl -unlock
If this is an Oracle Restart Home, as the root user execute:
# <GI_HOME>/crs/install/roothas.pl -unlock
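For example, with a Grid home of /u01/app/11.2.0/grid (again an assumed path, adjust for your install):
# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -unlock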
Apply the CRS patch.
As the GI home owner execute:
$ <GI_HOME>/OPatch/opatch napply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/12419353
As the GI home owner execute:
$ <GI_HOME>/OPatch/opatch napply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/12419331
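For example, assuming the Grid home is /u01/app/11.2.0/grid and the patch zips were unpacked under /u01/stage (both assumptions, substitute your own locations):
$ /u01/app/11.2.0/grid/OPatch/opatch napply -oh /u01/app/11.2.0/grid -local /u01/stage/12419353
$ /u01/app/11.2.0/grid/OPatch/opatch napply -oh /u01/app/11.2.0/grid -local /u01/stage/12419331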
Run the pre script for the DB component of the patch.
As the database home owner execute:
$ <UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome <ORACLE_HOME>
Apply the DB patch.
As the database home owner execute:
$ <ORACLE_HOME>/OPatch/opatch napply -oh <ORACLE_HOME> -local <UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353
$ <ORACLE_HOME>/OPatch/opatch napply -oh <ORACLE_HOME> -local <UNZIPPED_PATCH_LOCATION>/12419331
Run the post script for the DB component of the patch.
As the database home owner execute:
$ <UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome <ORACLE_HOME>
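To put the database home steps together, assuming the same /u01/stage staging area and a database home of /u01/app/oracle/product/11.2.0/dbhome_1 (illustrative paths only), the sequence would look like this:
$ /u01/stage/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/11.2.0/dbhome_1
$ /u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch napply -oh /u01/app/oracle/product/11.2.0/dbhome_1 -local /u01/stage/12419353/custom/server/12419353
$ /u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch napply -oh /u01/app/oracle/product/11.2.0/dbhome_1 -local /u01/stage/12419331
$ /u01/stage/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/11.2.0/dbhome_1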
Run the post script.
As the root user execute:
# <ORACLE_HOME>/rdbms/install/rootadd_rdbms.sh
If this is a GI Home, as the root user execute:
# <GI_HOME>/crs/install/rootcrs.pl -patch
If this is an Oracle Restart Home, as the root user execute:
# <GI_HOME>/crs/install/roothas.pl -patch
Start the CRS managed resources that were earlier running from DB homes.
If this is a GI Home environment, as the database home owner execute:
$ <ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file> -n <node name>
If this is an Oracle Restart Home environment, as the database home owner execute:
$ <ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file>
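Once the resources are back up you can sanity check what got applied by listing the inventory from each home, for example (same assumed homes as above):
$ /u01/app/11.2.0/grid/OPatch/opatch lsinventory
$ /u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch lsinventory
Both homes should list 12419353 and 12419331 among the installed interim patches.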
Steps for Rolling Back the Patch
Execute the following steps on each node of the cluster in a non-shared CRS and DB home environment to roll back the patch.
Stop the CRS managed resources running from DB homes.
If this is a GI Home environment, as the database home owner execute:
$ <ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file> -n <node name>
If this is an Oracle Restart Home environment, as the database home owner execute:
$ <ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file>
Note:
You need to make sure that the Oracle ACFS file systems are unmounted (see Section 2.8) and all other Oracle processes are shut down before you proceed.
Run the pre root script.
If this is a GI Home, as the root user execute:
# <GI_HOME>/crs/install/rootcrs.pl -unlock
If this is an Oracle Restart Home, as the root user execute:
# <GI_HOME>/crs/install/roothas.pl -unlock
Roll back the CRS patch.
As the GI home owner execute:
$ <GI_HOME>/OPatch/opatch rollback -local -id 12419353 -oh <GI_HOME>
$ <GI_HOME>/OPatch/opatch rollback -local -id 12419331 -oh <GI_HOME>
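For example, with the same assumed Grid home of /u01/app/11.2.0/grid:
$ /u01/app/11.2.0/grid/OPatch/opatch rollback -local -id 12419353 -oh /u01/app/11.2.0/grid
$ /u01/app/11.2.0/grid/OPatch/opatch rollback -local -id 12419331 -oh /u01/app/11.2.0/grid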
Run the pre script for the DB component of the patch.
As the database home owner execute:
$ <UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome <ORACLE_HOME>
Roll back the DB patch from the database home.
As the database home owner execute:
$ <ORACLE_HOME>/OPatch/opatch rollback -local -id 12419353 -oh <ORACLE_HOME>
$ <ORACLE_HOME>/OPatch/opatch rollback -local -id 12419331 -oh <ORACLE_HOME>
Run the post script for the DB component of the patch.
As the database home owner execute:
$ <UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome <ORACLE_HOME>
Run the post script.
As the root user execute:
# <ORACLE_HOME>/rdbms/install/rootadd_rdbms.sh
If this is a GI Home, as the root user execute:
# <GI_HOME>/crs/install/rootcrs.pl -patch
If this is an Oracle Restart Home, as the root user execute:
# <GI_HOME>/crs/install/roothas.pl -patch
Start the CRS managed resources that were earlier running from DB homes.
If this is a GI Home environment, as the database home owner execute:
$ <ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file> -n <node name>
If this is an Oracle Restart Home environment, as the database home owner execute:
$ <ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file>