ORA-19815: WARNING: db_recovery_file_dest_size of 42949672960 bytes is 100.00% used, and has 0 remaining bytes available.
************************************************************************
You have following choices to free up space from flash recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN
BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to
reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating
system command was used to delete files, then use RMAN CROSSCHECK and
DELETE EXPIRED commands.
************************************************************************
Errors in file /u00/app/oracle/diag/rdbms/sid/SID/trace/SID_ora_20214.trc:
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 1063256064 bytes disk space from 42949672960 limit
ARCH: Error 19809 Creating archive log file to '+FRA'
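If options 3 or 4 above are the route you take, the commands are one-liners. A hedged sketch (the 60G size and the two-day archivelog window are example values, not recommendations):
SQL> alter system set db_recovery_file_dest_size=60G scope=both;
RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;
RMAN> delete noprompt archivelog all completed before 'sysdate-2';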
Monday, 26 December 2011
Finding the 10 largest objects in an Oracle database
col owner format a15
col segment_name format a30
col segment_type format a15
col mb format 999,999,999
select owner
, segment_name
, segment_type
, mb
from (
select owner
, segment_name
, segment_type
, bytes / 1024 / 1024 "MB"
from dba_segments
order by bytes desc
)
where rownum < 11
/
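If you only care about a single schema, the same pattern works with an owner filter in the inner query. A small variation (the AVCO owner is just an example):
col segment_name format a30
col segment_type format a15
col mb format 999,999,999
select segment_name
, segment_type
, mb
from (
select segment_name
, segment_type
, bytes / 1024 / 1024 "MB"
from dba_segments
where owner = 'AVCO'
order by bytes desc
)
where rownum < 11
/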
Backing up your CRONTAB
If you are running a lot of scripts through CRON, you may want a current backup. Since your CRON is simply a text schedule, I prefer a daily email backup. It's simple and does what I need. I add this line to my CRON:
00 0 * * * crontab -l > crontablist.txt; mail -s "DW CRON - Quality Control" nitish@domain.com < crontablist.txt
Every day at midnight I get an email with the contents of my CRON.
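If you would rather keep a history on disk instead of, or in addition to, the email, a dated copy works too. A sketch, assuming a ~/cron_backups directory already exists (note the escaped % signs: cron treats an unescaped % as a newline):
00 0 * * * crontab -l > ~/cron_backups/crontab_$(date +\%Y\%m\%d).txt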
Dynamically Delete Old Partitions
Are you looking to delete older partitions in an Oracle database? Here is a basic SQL query that will give you what you need.
select 'ALTER TABLE '||table_owner||'.'||table_name
||' DROP PARTITION '
||partition_name||' UPDATE INDEXES;'
from dba_tab_partitions
where table_owner = 'AVCO'
and table_name = 'LOAN_DETAIL'
order by PARTITION_POSITION;
Modify to your specific needs.
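To actually run the generated statements, the usual SQL*Plus pattern is to spool them into a script and execute it. A sketch (the file name is arbitrary; review the spooled file first, since DROP PARTITION is not reversible):
set heading off feedback off pagesize 0
spool drop_partitions.sql
-- run the SELECT above here
spool off
@drop_partitions.sql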
Change a table’s default tablespace
Do you need to change a table’s default tablespace? It’s pretty simple:
ALTER TABLE {OWNER}.{TABLE NAME}
MODIFY DEFAULT ATTRIBUTES TABLESPACE {NEW TABLESPACE NAME};
This does not move the table; it only changes the default tablespace attribute. This is helpful if you are adding partitions without specifying the tablespace.
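For instance, with a hypothetical date-partitioned AVCO.LOAN_DETAIL table, the change and a subsequent partition that inherits the new default would look like this (all names here are made up):
ALTER TABLE AVCO.LOAN_DETAIL
MODIFY DEFAULT ATTRIBUTES TABLESPACE LOAN_DATA2;
ALTER TABLE AVCO.LOAN_DETAIL
ADD PARTITION P2012_01 VALUES LESS THAN (TO_DATE('01-02-2012','DD-MM-YYYY'));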
Remove non-ASCII characters from a column
Do you need to remove special (non-ASCII) characters from a VarChar2 column in Oracle? Create this function:
CREATE OR REPLACE FUNCTION O1DW.RECTIFY_NON_ASCII(INPUT_STR IN VARCHAR2)
RETURN VARCHAR2
IS
str VARCHAR2(2000);
cnt NUMBER := 0;
askey NUMBER := 0;
OUTPUT_STR VARCHAR2(2000);
begin
-- guard the string with '^' markers so leading/trailing characters survive the trims below
str := '^'||TO_CHAR(INPUT_STR)||'^';
cnt := length(str);
for i in 1 .. cnt loop
askey := 0;
-- ASCII code of the i-th character (NULL once the string has shrunk past position i)
select ascii(substr(str, i, 1)) into askey
from dual;
if askey < 32 or askey >= 127 then
-- strip every occurrence of this non-printable/non-ASCII character
str := '^'||REPLACE(str, CHR(askey), '');
end if;
end loop;
OUTPUT_STR := trim(ltrim(rtrim(trim(str), '^'), '^'));
RETURN (OUTPUT_STR);
end;
/
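A quick sanity check of the function (the input is just an example; CHR(9) is a tab character, which falls below ASCII 32 and is stripped):
SQL> select O1DW.RECTIFY_NON_ASCII('AB'||CHR(9)||'CD') from dual;
ABCD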
Move a partitioned index to a new tablespace
To dynamically move a partitioned index to a new tablespace, run the following SQL:
select 'Alter index '||INDEX_OWNER||'.'||INDEX_NAME||
' rebuild partition '||PARTITION_NAME||' tablespace ABC_DETAIL_INDEX1;'
from dba_ind_partitions where TABLESPACE_NAME = 'DW_ABC_DETAIL';
Wednesday, 21 December 2011
How to resolve “Unable to create Patch Object” error?
FTU % opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./12419278
Invoking OPatch 11.1.0.6.0
Oracle Interim Patch Installer version 11.1.0.6.0 Copyright (c) 2007, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home : /u02/app/oracle/product/11.2.0.1/FTU
Central Inventory : /u01/app/oracle/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.1.0.6.0
OUI version : 11.2.0.1.0
OUI location : /u02/app/oracle/product/11.2.0.1/FTU/oui
Log file location : /u02/app/oracle/product/11.2.0.1/FTU/cfgtoollogs/opatch/opatch2011-11-05_01-50-34AM.log
Invoking prereq "checkconflictagainstohwithdetail"
The location "./12419278/README.html" is not a directory or a valid patch zip file.
Prereq “checkConflictAgainstOHWithDetail” not executed
PrereqSession failed: Unable to create Patch Object.
Exception occured : Patch ID is null.
OPatch failed with error code 73
There are two solutions to this problem:
SOLUTION # 1
——————-
Move text files such as readme.html and readme.txt out of the patch directory to some other location, then run the conflict check again; it will succeed.
The problem is that this is not a permanent solution, just a workaround.
SOLUTION # 2
——————–
This is the permanent fix. The cause of this error is an outdated OPatch version: as you can see in the output above, the database version is 11.2.0.1 but the OPatch version is 11.1.0.6. Download the latest OPatch version from My Oracle Support (patch # 6880880) and install it.
Steps to install the new OPatch:
1. Copy the downloaded zip file for patch # 6880880 to $ORACLE_HOME.
2. Change to ORACLE_HOME and rename the old OPatch directory:
example : FTU% cd $ORACLE_HOME
FTU% mv OPatch OPatch.old
3. Unzip the zip file, which will create a new OPatch directory.
4. Run the command below to check that the version has changed:
FTU% $ORACLE_HOME/OPatch/opatch version
The above steps are also available in the README file of the patch.
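Put together, the whole fix is a short shell session. A sketch, where the zip file name depends on your platform and version (the one below is only an example):
FTU% cd $ORACLE_HOME
FTU% mv OPatch OPatch.old
FTU% unzip /tmp/p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME
FTU% $ORACLE_HOME/OPatch/opatch version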
How to define the RMAN retention policy time?
Many times I have heard DBAs complain about problems during restore or recovery that trace back to the RMAN retention policy.
For example, a friend posted that his retention policy was set to a recovery window of 90 days, but when he tried to restore the database to a date 30 days back, RMAN said recovery to that point in time was not possible. He wondered why this could not be done when he had 90 days of retention.
Here is the answer.
We all know that RMAN stores backup metadata in the control file's reusable section when a recovery catalog is not configured. By default this information is kept for 7 days, which can be changed by modifying the CONTROL_FILE_RECORD_KEEP_TIME parameter.
Whenever we define the RMAN retention policy, whether as a recovery window or as redundancy of some value X, we need to remember that RMAN is still bound by the value of CONTROL_FILE_RECORD_KEEP_TIME. That means that if the recovery window is set to 30 days but this parameter is left at its default of 7 days, you cannot recover beyond 7 days, which is very unfortunate.
Right! So let's see what can be done about this.
We need to set CONTROL_FILE_RECORD_KEEP_TIME to a value higher than the retention policy. As a formula:
CONTROL_FILE_RECORD_KEEP_TIME = retention policy value + level 0 backup interval + 1
Let me give an example for easy understanding.
If your retention policy is set to a recovery window of 90 days and a weekly full level 0 backup is scheduled, then CONTROL_FILE_RECORD_KEEP_TIME = 90 (retention policy) + 7 (level 0 backup interval) + 1 = 98.
Note: In the worst case, set it at least equal to the retention policy value.
What happens if I don't set it to a higher value?
In that case RMAN will overwrite backup metadata before the corresponding backups are marked obsolete, and even though the backup pieces physically exist, you cannot use them.
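For reference, both settings are one-liners. A sketch for the 90-day example above:
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 90 DAYS;
SQL> alter system set control_file_record_keep_time=98 scope=both;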
Tuesday, 20 December 2011
Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2
Applies to:
Oracle Server - Enterprise Edition - Version: 11.2.0.2 and later [Release: 11.2 and later ]
Oracle Server - Standard Edition - Version: 11.2.0.2 and later [Release: 11.2 and later]
Oracle Server - Personal Edition - Version: 11.2.0.2 and later [Release: 11.2 and later]
Information in this document applies to any platform.
What is being announced?
Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2
What has changed?
Packaging: Starting with the first patch set for Oracle Database 11g Release 2 (11.2.0.2), Oracle Database patch sets are full installations of the Oracle Database software. In past releases, Oracle Database patch sets consisted of a set of files that replaced files in an existing Oracle home. Beginning with Oracle Database 11g Release 2, patch sets are full installations that replace existing installations.
Because Oracle Database 11g Release 2 and later patch sets are a full installation, Oracle patch sets are now released as several installation bundles, just as are the base releases:
Oracle Database Server
Oracle Grid Infrastructure
Oracle Database Client
Companion/examples
Oracle Gateways
Simplified new installation: Because the release 11.2.0.2 patch set is a full installation package, if you are installing Oracle Database on a system without an existing Oracle Database installation, then you can simply install the release 11.2.0.2 patch set. You are no longer required to install the base release, and then apply the patch set.
Upgrade process: Beginning with the release 11.2.0.2 patch set, you have two ways to apply a patch set:
Out-of-place upgrade – This is Oracle's recommended way to apply a patch set. You install the patch set into a new, separate Oracle home location. After you install the patch upgrade, you then migrate the Oracle Database from the older Oracle home. The patch set upgrade is now the same process to upgrade from one version to another. Oracle recommends that you perform an out-of-place patch set upgrade, because this patch set application option requires much less downtime, and is safer because it does not require patching an ORACLE_HOME that is already being used in production. However, you must ensure that you have sufficient free disk space to accommodate two Oracle home directories at the same time.
In-place upgrade – You install the patch set into an existing Oracle home location. Oracle recommends that you select this option only if you do not have sufficient free disk space to perform an out-of-place upgrade, as the upgrade removes the existing Oracle installation. This patch option requires less disk space, but requires more time, and is riskier, because if you encounter an installation failure, then you must recover the entire existing Oracle home from a backup. If you choose this more risky option, then before you begin the patch installation, complete the following tasks:
Make a complete backup of your existing Oracle home
Read through the entire Upgrade Guide section dealing with in-place upgrades
Note: All Oracle Grid Infrastructure patch set upgrades must be out-of-place upgrades, in which you install the patch set into a new Grid home. In-place patch set upgrades are not supported.
Direct upgrade from previous releases: You can upgrade from a previous Oracle Database release directly to the latest patch set, without having to install the base release. For example, if you want to upgrade from Oracle Database 10g Release 2, or Oracle Database 11g Release 1, then you can upgrade directly to Oracle Database 11g Release 2, patch set 2 (11.2.0.2) using an out-of-place upgrade. Oracle now supports direct upgrades to the release 11.2.0.2 patch set from any release where direct upgrade to the base release (11.2.0.1) is supported, as well as direct upgrades from 11.2.0.1 to 11.2.0.2.
New Features Available in Patch Sets
In addition to the usual set of fixes, the release 11.2.0.2 patch set contains a small number of features that complete the Database 11g Release 2 feature set. The new features fall into the following five categories:
General improvements
Oracle Automatic Storage Management Cluster File System (ACFS) improvements
Quality of Service Management
Database Replay
Management
How to Obtain the Patch Set
As with all previous patch sets, the release 11.2.0.2 patch set is available for download from the Patches tab on My Oracle Support.
Important Changes to Note:
Disk space requirement: Because the out-of-place upgrade requires the installation of a new Oracle home, you must have enough free disk space to accommodate the new Oracle home, as well as disk space needed for temporary files during installation.
ORACLE_HOME location change: When using the out-of-place upgrade path, the location of your Oracle home is different after you complete the upgrade. You must change any direct references to the old location of your Oracle home (such as in /etc/oratab or in tnsnames.ora) to the new location, and update user environment variables for the Oracle home.
In-place upgrade process differs from previous patch set installation processes: Even though it seems similar, the in-place upgrade process for the release 11.2.0.2 patch set differs from previous patch set installation procedures. It is very important to read the entire in-place upgrade process before you begin to install the release 11.2.0.2 patch set, because the preinstallation steps are different.
Frequently Asked Questions
Q. Is this really a patch set?
A. Yes – This patch set still consists primarily of incremental bug fixes to the base release (11.2.0.1) but with different packaging. While new features are included in 11.2.0.2, the number of new features is very small, and is included to complete the 11.2 feature set.
Q. Why did Oracle make this change?
A. In response to customer feedback, Oracle is making this change for several reasons:
Out-of-place upgrade has been a best practice of many customers for patch set application for years. Out-of-place upgrades reduce downtime, because you can complete the initial installation and configuration of the upgrade before you take down the existing installation. Out-of-place patch upgrades are easier, safer, and provide faster fallback if needed: If you encounter issues with the new installation, then simply point clients back to the original Oracle Database installation Oracle home, and restart the database.
Out-of-place upgrades allow customers running multiple instances out of one Oracle home to upgrade each instance individually rather than having to upgrade all instances sharing an Oracle home at the same time (and incur downtime on them all simultaneously).
Full installations provide a better customer experience during the upgrade. By creating a full installation package that uses the same installation process as a base release (as opposed to a differential install of the changed objects), Oracle is able to use the same tools to build and test the patch set installation as Oracle uses with the base release. Improved testing enables a smoother, more trouble-free upgrade process that uses significantly less downtime.
Full installations provide an easier upgrade for customers moving to release 11.2.0.2 from releases earlier than Oracle Database 11g release 2. Instead of having to install the base release, and then applying a patch release, you can now upgrade to the latest release in one step.
Q. Is this new packaging unique to the 11.2.0.2 patch set, or will Oracle follow this model for future patch sets?
A. This will be the new model going forward. Oracle will release patch set release 11.2.0.3 and later patch sets as full installation packages. Direct out-of-place upgrade is the Oracle best practices recommendation for patch set upgrades.
Q. There are many new features in 11.2.0.2. Are patch sets not just for bug fixes?
A. Oracle intends to maintain the focus of patch sets as a fix delivery vehicle. Oracle has included a very small number of new features in past patch sets. In this case, after the release of Oracle Database 11g release 2 (11.2.0.1), Oracle completed development of a small number of features that are closely related to the 11.2 release, and Oracle is including these features so that you can use them now, rather than wait for their inclusion in the next major release.
Reference Note: Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2 [ID 1189783.1]
Step By Step Manual Database Upgrade to 11.2.0.2
Compatibility Matrix
Minimum Version of the database that can be directly upgraded to Oracle 11g Release 2
Source Database          Target Database
9.2.0.8  or higher       11.2.x
10.1.0.5 or higher       11.2.x
10.2.0.2 or higher       11.2.x
11.1.0.6 or higher       11.2.x
The following database versions require an indirect upgrade path:
Source Database ---> Upgrade Path ---> Target Database
7.3.3 (or lower)   ---> 7.3.4   ---> 9.2.0.8  ---> 11.2.x
8.0.5 (or lower)   ---> 8.0.6   ---> 9.2.0.8  ---> 11.2.x
8.1.7 (or lower)   ---> 8.1.7.4 ---> 10.2.0.4 ---> 11.2.x
9.0.1.3 (or lower) ---> 9.0.1.4 ---> 10.2.0.4 ---> 11.2.x
9.2.0.7 (or lower) ---> 9.2.0.8 ---> 11.2.x
Here I am upgrading my Oracle 10.2.0.4 database to Oracle 11.2.0.2
1. Copy the Pre-Upgrade Information gathering script:
Log in to the system as the owner of the Oracle Database 11g Release 2 (11.2) Oracle home directory.
Copy the Pre-Upgrade Information Tool (utlu112i.sql) and utltzuv2.sql from the Oracle Database 11g Release 2 (11.2) ORACLE_HOME/rdbms/admin directory to a directory outside of the Oracle home, such as the temporary directory on your system.
$ORACLE_HOME/rdbms/admin/utlu112i.sql
2. Execute the Pre-Upgrade Script:
Change to the directory where utlu112i.sql was copied in the previous step.
Start SQL*Plus, connect to the database instance as a user with SYSDBA privileges, then run and spool the utlu112i.sql file. Note that the database should be started using the source Oracle home.
$ sqlplus '/ as sysdba'
SQL> spool upgrade_info.log
SQL> @utlu112i.sql
SQL> spool off
SQL>
Check the spool file and examine the output of the upgrade information tool.
3. Check the integrity of the source database:
Check the integrity of the source database prior to starting the upgrade by downloading and running the dbupgdiag.sql script from the My Oracle Support article below:
Note 556610.1 Script to Collect DB Upgrade/Migrate Diagnostic Information (dbupgdiag.sql) (skip this step if you don't have support access)
If the dbupgdiag.sql script reports any invalid objects, run $ORACLE_HOME/rdbms/admin/utlrp.sql (multiple times) to validate the invalid objects in the database, until there is no change in the number of invalid objects.
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus "/ as sysdba"
SQL> @utlrp.sql
After validating the invalid objects, re-run dbupgdiag.sql in the database once again and make sure that everything is fine.
4. Check the National Character Set:
Check that the National Character Set (NLS_NCHAR_CHARACTERSET) is UTF8 or AL16UTF16:
select value from NLS_DATABASE_PARAMETERS where parameter = 'NLS_NCHAR_CHARACTERSET';
If it is UTF8 or AL16UTF16, no action is needed.
If it is not, refer to the following article:
Note 276914.1 The National Character Set in Oracle 9i and 10g.
5. Optimizer Statistics:
When upgrading to Oracle Database 11g Release 2 (11.2), optimizer statistics are collected for dictionary tables that lack statistics. This statistics collection can be time consuming for databases with a large number of dictionary tables, but statistics gathering only occurs for those tables that lack statistics or are significantly changed during the upgrade
$ sqlplus "/as sysdba"
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
6. Ensure that no files need media recovery:
SQL> SELECT * FROM v$recover_file;
7. Stop the listener for the database, and create a pfile from the spfile (it will be edited in step 10):
SQL> create pfile from spfile;
$ lsnrctl stop
8. Shutdown the Database:
Shutdown the database.
$ sqlplus "/as sysdba"
SQL> shutdown immediate;
9. Back Up the Database:
1- Perform Cold Backup
(or)
2- Take a backup using RMAN
Connect to RMAN:
rman "target / nocatalog"
RUN
{
ALLOCATE CHANNEL chan_name TYPE DISK;
BACKUP DATABASE FORMAT '%U' TAG before_upgrade;
BACKUP CURRENT CONTROLFILE TO '';
}
10 Backup and change the pfile:
Make a backup of the init.ora file.
Comment out obsolete parameters:
* The DIAGNOSTIC_DEST initialization parameter replaces USER_DUMP_DEST and BACKGROUND_DUMP_DEST.
Note: Once the parameter file is modified as per your requirements, copy it to $ORACLE_HOME/dbs (the 11g Oracle home).
11 Set Environment Variables:
If your operating system is UNIX then complete this step, else skip to next Step.
1. Make sure the following environment variables point to the Oracle 11g Release directories:
- ORACLE_BASE
- ORACLE_HOME
- PATH
$ export ORACLE_HOME=
$ export PATH=$ORACLE_HOME/bin:$PATH
$ export ORACLE_BASE=
Note: If ORACLE_BASE is not known, after setting PATH towards the 11g Oracle home, execute 'orabase', which will print the location of the base.
$ orabase
/u01/app/oracle
2. Update the oratab entry to set the new ORACLE_HOME for the instance and disable automatic startup:
Sample /etc/oratab
#orcl:/opt/oracle/product/10.2/db_1:N
orcl:/opt/oracle/product/11.2/db_1:N
Note: After /etc/oratab is updated with the SID and the 11.2 Oracle home, you can execute oraenv (/usr/local/bin/oraenv) to set the environment. The input has to be the SID entered in /etc/oratab against the 11g home.
For instance,
[oracle@localhost ~]$ . oraenv
ORACLE_SID = [orcl] ? orcl
The Oracle base for ORACLE_HOME=/opt/oracle/product/11.2/db_1 is /u01/app/oracle
[oracle@localhost ~]$
12 Upgrade Database:
At the operating system prompt, change to the $ORACLE_HOME/rdbms/admin directory of 11gR2 Oracle Home.
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus "/ as sysdba"
SQL> startup UPGRADE
Set the system to spool results to a log file for later verification after the upgrade is completed and start the upgrade script.
SQL> set echo on
SQL> SPOOL upgrade.log
SQL> @catupgrd.sql
SQL> spool off
These measures are an important final step to ensure the integrity and consistency of the newly upgraded Oracle Database software. Also, if you encountered a message listing obsolete initialization parameters when you started the database for upgrade, then remove the obsolete initialization parameters from the parameter file before restarting. If necessary, convert the SPFILE to a PFILE so you can edit the file to delete parameters.
Post Upgrade Steps
13 Post Upgrade:
Start the database and run the Post-Upgrade Status Tool, $ORACLE_HOME/rdbms/admin/utlu112s.sql, which provides a summary of the upgrade at the end of the spool log. It displays the status of the database components in the upgraded database and the time required to complete each component upgrade. Any errors that occur during the upgrade are listed with each component and must be addressed.
$ sqlplus "/as sysdba"
SQL> STARTUP
SQL> @utlu112s.sql
14 Recompile Invalid Objects:
utlu112s.sql can be run concurrently with utlrp.sql. Run utlrp.sql in another session to recompile any remaining stored PL/SQL and Java code:
SQL> @utlrp.sql
15 Check the integrity of the upgraded database:
Check the integrity of the upgraded database by running the dbupgdiag.sql script from the Metalink article below:
Note 556610.1 Script to Collect DB Upgrade/Migrate Diagnostic Information (dbupgdiag.sql)
If the dbupgdiag.sql script reports any invalid objects, run $ORACLE_HOME/rdbms/admin/utlrp.sql (multiple times) to validate the invalid objects in the database, until there is no change in the number of invalid objects.
After validating the invalid objects, re-run dbupgdiag.sql in the upgraded database once again and make sure that everything is fine.
16 Configure & Start the Listener:
Modify the listener.ora file:
For the upgraded instance(s), modify the ORACLE_HOME parameter to point to the new ORACLE_HOME, then start the listener:
lsnrctl start
17 Crosscheck Environment Variables:
Set Environment Variables
1. Make sure the following environment variables point to the Oracle 11g Release directories:
- ORACLE_BASE
- ORACLE_HOME
- PATH
Also check that your oratab file and any client scripts that set the value of ORACLE_HOME point to the Oracle Database 11g Release 2 (11.2) home.
Note : If you are upgrading a cluster database, then perform these checks on all nodes in which this cluster database has instances configured.
2. Modify the /etc/oratab entry to enable automatic startup:
SID:ORACLE_HOME:Y
For Instance,
orcl:/opt/oracle/product/11.2/db_1:Y
18 Spfile from Pfile:
Edit init.ora:
- If you changed the CLUSTER_DATABASE parameter prior to the upgrade, set it back to TRUE.
- Migrate your initialization parameter file to a server parameter file.
Create a server parameter file from the initialization parameter file:
SQL> create spfile from pfile;
This creates an spfile as a copy of the init.ora file located in $ORACLE_HOME/dbs (UNIX) or %ORACLE_HOME%\database (Windows).
19 Start the Database with spfile:
Shutdown the database:
SQL> shutdown immediate
Now start the database; it will use the spfile:
SQL> startup
Check the alert log file for any errors.
The database is ready to use now.
How to see free space on a Unix system
[oracle@itc-799 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 100G 72G 24G 76% /
/dev/sda2 341G 205G 119G 64% /data
/dev/sda1 92M 12M 76M 14% /boot
tmpfs 2.0G 0 2.0G 0% /dev/shm
df -h is the command to check space usage on a Unix system; type it at the prompt and the output will look like the above. In this example we have two usable mount points, "/" and "/data". In a local environment there are usually one or two mount points.
Let's talk about these mount points one by one.
/dev/sda3 100G 72G 24G 76% /
This mount point is known as the root mount point and has 100G of space in total, of which 72G is used and 24G is available.
/dev/sda2 341G 205G 119G 64% /data
This mount point is known as the data mount point, as named, and has 341G of space, of which 205G is used and 119G is free; usage is at 64%.
Some systems may have more mount points like
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 100G 72G 24G 76% /
/dev/sda2 341G 205G 119G 64% /data
/dev/sda4 341G 205G 119G 64% /data1
/dev/sda1 92M 12M 76M 14% /boot
tmpfs 2.0G 0 2.0G 0% /dev/shm
This system has three mount points: "/", "/data" and "/data1".
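Once df shows a mount point filling up, du narrows down where the space went. A sketch (the path is an example; sort -h needs GNU coreutils):
du -sh /data/* | sort -h | tail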
Moving between two Oracle Homes
I have two Oracle versions installed on my system: Oracle 11.2.0.1 and Oracle 11.2.0.2.
Here are the steps to switch between them.
Make Oracle 11.2.0.1 as Oracle Home
oracle@suse32:~> cd /opt/oracle/product/11.2/db_1/bin/
oracle@suse32:/opt/oracle/product/11.2/db_1/bin> export ORACLE_HOME=/opt/oracle/product/11.2/db_1
oracle@suse32:/opt/oracle/product/11.2/db_1/bin> ./sqlplus /nolog
SQL*Plus: Release 11.2.0.1.0 Production on Mon Nov 7 23:47:37 2011
Copyright (c) 1982, 2009, Oracle. All rights reserved.
SQL> exit;
oracle@suse32:/opt/oracle/product> ls
11.2 11.2.0.2
Make Oracle 11.2.0.2 as Oracle Home
oracle@suse32:/opt/oracle/product> cd 11.2.0.2/
oracle@suse32:/opt/oracle/product/11.2.0.2> cd db_1/bin/
oracle@suse32:/opt/oracle/product/11.2.0.2/db_1/bin> export ORACLE_HOME=/opt/oracle/product/11.2.0.2/db_1
oracle@suse32:/opt/oracle/product/11.2.0.2/db_1/bin> ./sqlplus /nolog
SQL*Plus: Release 11.2.0.2.0 Production on Mon Nov 7 23:58:40 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
SQL>
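To avoid typing the exports every time, a small shell function can do the switch. A sketch (the paths match my layout above; adjust to yours). Note that PATH has to be updated along with ORACLE_HOME, otherwise sqlplus still resolves to the old home:
sethome () {
export ORACLE_HOME=$1
export PATH=$ORACLE_HOME/bin:$PATH
}
# usage:
# sethome /opt/oracle/product/11.2/db_1
# sethome /opt/oracle/product/11.2.0.2/db_1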
Saturday, 26 November 2011
10046 trace file content
The 10046 trace file contents differ greatly between different versions of Oracle.
Indeed, format of trace files changes sometimes between Oracle versions but definitely not greatly.
Somewhere around 9.2.0.2, STAT lines changed a bit (row source execution statistics were added).
Instead of just:
STAT #2 id=1 cnt=1 pid=0 pos=1 obj=17 op='TABLE ACCESS FULL FILE$'
they started to look like this:
STAT #2 id=1 cnt=1 pid=0 pos=1 obj=17 op='TABLE ACCESS FULL FILE$ (cr=4 pr=0 pw=0 time=91 us)'
And in 10.2, the format of WAIT lines and BINDS sections changed significantly.
BINDS sections cannot be processed by tkprof in any version of Oracle, so I just skip them.
As for WAIT lines, here's the pre-10.2 WAIT line:
WAIT #4: nam='SQL*Net message from client' ela= 2794957 p1=1111838976 p2=1 p3=0
Compare it with 10.2:
WAIT #4: nam='SQL*Net message from client' ela= 2794957 driver id=1111838976 #bytes=1 p3=0 obj#=62 tim=42601199281
Wait event parameters (p1, p2, p3) got meaningful names (as a side note, sometimes not meaningful values: #bytes in 'SQL*Net message' waits still always equals 1), and every WAIT line now has a tim-stamp.
The funny thing is that tkprof from Oracle 9.2 correctly processes trace files from 10.2! It looks like tkprof parses a WAIT line by splitting it on the '=' character and just takes the first three values.
In summary, I believe that when tkprof from the old version can correctly process trace files from the new one, it is more natural to say that 'trace file contents do not differ greatly between different versions of Oracle'.
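Since the post-10.2 lines carry named parameters, plain text tools go a long way with these traces. A sketch that totals ela (microseconds) per wait event for one trace file (the file name is an example):
grep '^WAIT' SID_ora_1234.trc |
sed -n "s/^WAIT #[0-9]*: nam='\([^']*\)' ela= *\([0-9]*\).*/\1|\2/p" |
awk -F'|' '{ sum[$1] += $2 } END { for (e in sum) print sum[e], e }' |
sort -rn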
Thursday, 24 November 2011
How to Configure OEM 10g Database Control after Clone
After cloning an EBS environment, I tried to start OEM 10g Database Control (DB Console) and got this message:
[oratest@testsrv1 ~]$ emctl start dbconsole
TZ set to Israel
OC4J Configuration issue.
/oracle/test/db/testdb/10.2/oc4j/j2ee/OC4J_DBConsole_testsrv1.domain_TEST not found.
However, this was an acceptable message, since DB Console had not been configured for this instance yet.
Using emca, I tried to recreate the DB Control configuration files as follows:
[oratest@testsrv1 ~]$ emca -config dbcontrol db
...
INFO: Database Control started successfully
...
INFO: >>>>>>>>>>> The Database Control URL is http://testsrv1.domain:5500/em <<<<<<<<<<<
Enterprise Manager configuration completed successfully
Well, I thought it looked fine, but when I opened DB Console I saw this message at the top of the page:
error:
java.lang.Exception: Exception in sending Request :: null
After some research, log digging and Metalink searches, I found that the Agent component has to be relinked manually.
But first there are two issues in the env_sysman.mk file that should be fixed before executing the relink command.
I will show my actions step by step:
Stop DB Console:
emctl stop dbconsole
cd $ORACLE_HOME/sysman/lib
Edit env_sysman.mk and look for "JRE_LIB_DIR".
In my instance, JRE_LIB_DIR was populated with the old environment's path.
I changed it from:
JRE_LIB_DIR=/oracle/prod/db/proddb/10.2/jdk/jre/lib/i386
To:
JRE_LIB_DIR=/oracle/test/db/testdb/10.2/jdk/jre/lib/i386
One more issue in env_sysman.mk: look for "$(LIBNMEM) $(LIBNMER)"
And change from:
$(LIBNMEM) $(LIBNMER) \
To:
$(LIBNMEM) $(LIBNMER) $(LIBNMO) \
Now relink the Agent component by issuing this command:
make -f ins_sysman.mk agent
Recreate the DB Control configuration files (again, even if you did it before):
emca -config dbcontrol db
Following this procedure leaves DB Console configured and working for the newly cloned instance.
ADPATCH with "options=prereq" - what really happens there?
With the adpatch utility we can ask for a prerequisite check prior to running the patch driver files.
In fact, some patches must be applied with a prerequisite check.
The adpatch command should look like: adpatch options=prereq
This flag tells adpatch to check prerequisites before applying the patch.
adpatch checks the prerequisites based on information from the patch files and the current snapshot of the APPL_TOP.
When running adpatch with the "prereq" flag, we might get an error message like:
Analyzing prerequisite patch information...
AutoPatch error: This patch has some prerequisites specified, but a "snapshot" of this APPL-TOP's file-system has never been taken, thereby rendering it impossible to check for the prerequisites.
Please take a "snapshot" of this APPL-TOP using "AD Administration" first.
This error message shows up if a snapshot of the current APPL_TOP doesn't exist.
To create such a snapshot of the APPL_TOP:
1) run adadmin
2) Select "Maintain Applications Files menu"
3) Select "Update current view snapshot"
4) Rerun adpatch
Note: it might take a couple of hours, depending on your hardware and APPL_TOP size.
So how does adpatch check the prerequisites?
1) It checks whether a snapshot of the current APPL_TOP exists, using the SQL script adbkflsn.sql (if not, it terminates with the error message above).
2) adpatch uploads an .ldt file into the system with the FNDLOAD utility (the bug fixes).
The ldt file name is b[PATCH_NUMBER].ldt and comes from the patch root directory.
3) It executes the UMS analysis engine against the snapshot and bug fixes to check whether all prerequisites exist.
How to check conflicts while applying CPU patch?
Use the command below to check for conflicts against the ORACLE_HOME and avoid running into problems.
Step 1: unzip your patch zip file.
Step 2: run the command below:
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <patch_base_dir>
Example:
$ unzip p9655017_10204_linux.zip
$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir 9655017
The other day, while patching a RAC database, the conflict check above returned the following error:
Following patches have conflicts. Please contact Oracle Support and get the merged patch of the patches :
6600051, 8836683
Whenever you get this type of error message, please contact Oracle Support by raising a service request (SR).
In my case, Oracle Support suggested applying merge patch 9347333 before applying the July 2011 CPU. Once the merge patch was applied, I applied the CPU patch without any further issues.
Sometimes, apart from the message above, you may also see warning messages like the following, which you can ignore:
Summary of Conflict Analysis:
Patches that can be applied now without any conflicts are :
10013975, 10014009, 10014012, 10014015, 10325878, 10325885, 11787762, 11787763, 11787765, 11787766, 12419249, 12566121, 12566124, 12566126, 12566129, 12566131, 12566134, 12566136, 12566137, 12566139, 12566141, 12566142, 12566143, 7155248, 7155249, 7155250, 7155251, 7155252, 7155253, 7155254, 7197583, 7375611, 7375613, 7375617, 7609057, 8309592, 8309632, 8568395, 8568397, 8568398, 8568402, 8568404, 8836667, 8836671, 8836675, 8836677, 8836678, 8836683, 8836684, 8836686, 9173244, 9173253, 9442328, 9442331, 9442339, 9678690, 9678695, 9678697
Following patches are not required, as they are subset of the patches in Oracle Home or subset of the patches in the given list :
10249540, 8836681, 8568405
Following patches will be rolled back from Oracle Home on application of the patches in the given list :
10013975, 10014009, 10014012, 10014015, 10325878, 10249540, 8836681, 8568405, 7155248, 7155249, 7155250, 7155251, 7155252, 7197583, 7375611, 7375613, 7375617, 7609057, 8309592, 8309632, 8568395, 8568397, 8568398, 8568402, 8568404, 8836667, 8836671, 8836675, 8836677, 8836678, 8836683, 8836684, 8836686, 9173244, 9173253, 9442328, 9442331, 9442339, 9678690, 9678695
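As a side note, once a CPU has been applied and its post-install SQL run, you can usually confirm the database-side registration from SQL. A quick check against dba_registry_history (available in 10gR2 and later; the exact column list varies slightly by release):
SELECT action_time, action, version, id, comments
FROM   dba_registry_history
ORDER  BY action_time;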
How to resolve ORA-00490 error?
Root Cause:
This error occurs when there is no free space available in the system's swap area for spawning a new Oracle process. As a result, the process spawner PSP0 terminates (with Oracle error code ORA-00490) because it is unable to create or manage Oracle processes, and PMON then crashes the instance with errorstack 490 (which points to ORA-00490). A general lack of system resources can cause the same situation.
Solution:
There are two solutions to this problem:
1. Check your swap space and increase the swap area on the system. With insufficient swap space, Oracle cannot create new processes and the PSP0 process spawner cannot manage Oracle processes.
2. Check the "ulimit" settings for the Oracle user. "ulimit" controls the user's shell limits; if the maximum limit is reached, the PSP0 process likewise becomes unable to manage other Oracle processes. Increase the "ulimit" settings for the Oracle user.
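Swap itself has to be checked at the OS level, but from inside the database you can at least see how close the instance is to its process and session limits; a quick diagnostic query:
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');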
Purge old files on Linux/Unix using “find” command
I've noticed that one of our interface directories has a lot of old files, some of them more than a year old. I checked with our implementers, and it turns out that we can delete all files older than 60 days.
I decided to write a (tiny) shell script to purge all files older than 60 days and schedule it with crontab, so that I won't have to deal with it manually. I wrote a find command to identify and delete those files, starting with the following:
find /interfaces/inbound -mtime +60 -type f -maxdepth 1 -exec rm {} \;
It finds and deletes all files in directory /interface/inbound that are older than 60 days.
"-maxdepth 1" -> find files in current directory only. Don't look for files in sub directories.
After packing it into a shell script, I got a request to delete "csv" files only. No problem... I added "-name" to the find command:
find /interfaces/inbound -name "*.csv" -mtime +60 -type f -maxdepth 1 -exec rm {} \;
All csv files in /interface/inbound that are older than 60 days will be deleted.
But then the request changed, and I was asked to delete "*.xls" files in addition to "*.csv" files.
I tried several things, like adding another "-name" to the find command:
find /interfaces/inbound -name "*.csv" -name "*.xls" -mtime +60 -type f -maxdepth 1 -exec rm {} \;
But no file was deleted. A few moments later I understood that I was trying to find csv files that are also xls files... (logically impossible, of course).
After struggling a little with the find command, I managed to make it work:
find /interfaces/inbound \( -name "*.csv" -o -name "*.xls" \) -mtime +60 -type f -maxdepth 1 -exec rm {} \;
ORA-00821 Specified value of sga_target is too small....
Have you resized your SGA_TARGET too small and found you can't now start your database?
If you are using a PFILE, just edit it and set SGA_TARGET to a larger value. But what if you're using an SPFILE? One possibility is to create a pfile from the spfile, edit the pfile, and then either start the database using that pfile, or remove the spfile and start the database as normal so that the new pfile is picked up.
The problem arises when the spfile is stored in ASM; creating a pfile from it can be a problem. One solution is to create a pfile that calls the spfile in ASM, but after the call to the spfile add an extra line that overrides SGA_TARGET, as follows:
SPFILE='+DATA1/PROD1/spfilePROD1.ora'
SGA_TARGET=1024M
This pfile can be placed in $ORACLE_HOME/dbs, so the next time you start the database it will be picked up. Alternatively, you could explicitly use the 'pfile=' parameter when starting the database:
startup pfile=$ORACLE_HOME/dbs/initPROD1.ora
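For the simple non-ASM case, the whole fix can be done from SQL*Plus; a minimal sketch, with an illustrative path:
SQL> CREATE PFILE='/tmp/initPROD1.ora' FROM SPFILE;
-- edit /tmp/initPROD1.ora and raise SGA_TARGET, then:
SQL> STARTUP PFILE='/tmp/initPROD1.ora';
-- optionally write the corrected settings back to the spfile:
SQL> CREATE SPFILE FROM PFILE='/tmp/initPROD1.ora';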
Wednesday, 23 November 2011
Oracle Applications Schema Password Change Utility - (FNDCPASS)
Changing passwords periodically helps ensure database security. Oracle Applications provides a command line utility, FNDCPASS, to set Oracle Applications schema passwords. In addition to changing the schema password in the database, this utility changes the password registered in Oracle Applications tables (FND Tables). This utility can also change Applications End User passwords.
FNDCPASS changes
Oracle Applications Database System Users (APPS, APPLSYS)
Oracle Applications Product Schema Passwords (GL, AR, AP, etc,)
Oracle Applications End User Account Passwords (SYSADMIN, OPERATIONS etc)
Note: FNDCPASS cannot be used to change the database SYSTEM and SYS passwords. Only users registered in the FND metadata tables should be changed using FNDCPASS. Normally the APPS database user password and the APPLSYS password need to be the same; when you change the APPLSYS password using FNDCPASS, the APPS password is changed as well.
Syntax of FNDCPASS command:
FNDCPASS logon 0 Y system/password mode username new_password
where logon is username/password[@connect]
system/password is the password of the SYSTEM account of that database
mode is SYSTEM / USER / ORACLE
username is the user whose password you want to change
new_password is the new password, in unencrypted format
Example:
$ FNDCPASS apps/apps 0 Y system/manager SYSTEM APPLSYS WELCOME
$ FNDCPASS apps/apps 0 Y system/manager ORACLE GL GL1
$ FNDCPASS apps/apps 0 Y system/manager USER VISION WELCOME
Note: FNDCPASS has a new mode, "ALLORACLE", in which all Oracle Applications schema passwords can be changed in one call. If this option is not yet available in your Apps, apply Patch 4745998 to get it.
Syntax:
FNDCPASS logon 0 Y system/password ALLORACLE new_password
Example:
$ FNDCPASS apps/apps 0 Y system/manager ALLORACLE WELCOME
To change the APPS/APPLSYS password, use mode SYSTEM.
To change product schema passwords (GL, AP, AR, etc.), use mode ORACLE.
To change end-user passwords (SYSADMIN, OPERATIONS, etc.), use mode USER.
Note: up to 11.5.9 there is a bug in FNDCPASS that allows it to change the APPS and APPLSYS passwords independently. Doing so will corrupt the data in the FND metadata tables and render the application unusable. Because of that, it is recommended to take a backup of the FND_USER and FND_ORACLE_USERID tables before changing passwords, as in the sketch below.
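A minimal sketch of that backup (the _BAK table names are illustrative; the FND tables live in the APPLSYS schema):
SQL> CREATE TABLE applsys.fnd_user_bak          AS SELECT * FROM applsys.fnd_user;
SQL> CREATE TABLE applsys.fnd_oracle_userid_bak AS SELECT * FROM applsys.fnd_oracle_userid;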
After changing the APPS/APPLSYS or APPLSYSPUB password, the following extra manual steps need to be done.
If you changed the APPS (and APPLSYS) password, update the password in these files:
iAS_TOP/Apache/modplsql/cfg/wdbsvr.app
ORACLE_HOME/reports60/server/CGIcmd.dat
If you changed the APPLSYSPUB password, update the password in these files:
FND_TOP/resource/appsweb.cfg
OA_HTML/bin/appsweb.cfg
FND_TOP/secure/HOSTNAME_DBNAME.dbc
Note: I suggest you first try changing the passwords with FNDCPASS on a test Apps instance; once that completes without any errors or issues, you can do the same in production. Also search Metalink for more information about the FNDCPASS utility and its usage.
DB Link to Oracle 11g
As you know, in Oracle 11g passwords are case sensitive by default. This applies to connecting via SQL*Plus or other client tools, and it also applies to database links between databases. So when you link from Oracle 10g to Oracle 11g, create the database link like this:
CREATE DATABASE LINK my_link
CONNECT TO remote_user IDENTIFIED BY "CaSe_SeNsItIvE_PaSsWoRd"
USING 'TNS_ID_11G';
Do not forget to enclose the password in double quotes!
If you do not set the password this way, you will get:
ORA-01017: invalid username/password; logon denied.
Hope this small tip will be useful for someone.
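Alternatively, if you control the 11g target and are willing to give up case-sensitive passwords altogether, 11g exposes this as an instance parameter (shown as an option, not a recommendation, since it weakens password security):
ALTER SYSTEM SET sec_case_sensitive_logon = FALSE;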
Tuesday, 15 November 2011
ORA-28576: lost RPC connection to external procedure agent
Action: First check the 3GL code you are invoking; the most likely cause of this error is abnormal termination of the invoked "C" routine. If this is not the case, check for network problems. Correct the problem if you find it. If all components appear to be normal but the problem persists, the problem could be an internal logic error in the RPC transfer code. Contact your customer support representative.
ORA-04031: unable to allocate string bytes of shared memory ("string","string","string","string")
Cause: More shared memory is needed than was allocated in the shared pool.
Action: If the shared pool is out of memory, either use the dbms_shared_pool package to pin large packages, reduce your use of shared memory, or increase the amount of available shared memory by increasing the value of the INIT.ORA parameters "shared_pool_reserved_size" and "shared_pool_size". If the large pool is out of memory, increase the INIT.ORA parameter "large_pool_size".
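Before pinning packages or resizing, it is worth checking how much free memory each pool actually has; a quick look at v$sgastat:
SELECT pool, ROUND(bytes / 1024 / 1024) AS free_mb
FROM   v$sgastat
WHERE  name = 'free memory';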
Monday, 14 November 2011
How to know which columns are in the overflow segment?
Remember that the INCLUDE_COLUMN column of the USER_INDEXES view gives you the last column included in the index, and therefore in the top segment. By joining this view with USER_TAB_COLUMNS you can separate the columns that are in the top segment from those in the overflow segment, as with the following statement:
SQL> SELECT c.table_name, c.column_name,
2 CASE WHEN c.column_id <= i.include_column THEN 'TOP' ELSE 'OVERFLOW' END segment
3 FROM user_tab_columns c, user_indexes i
4 WHERE i.table_name = c.table_name
5 ORDER by table_name, column_id;
TABLE_NAME COLUMN_NAME SEGMENT
------------------------------ ------------------------------ --------
MY_IOT ID TOP
MY_IOT VALUE TOP
MY_IOT COMMENTS OVERFLOW
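For context, an IOT like the MY_IOT in the listing could have been created as follows; the INCLUDING clause is what decides which columns stay in the top segment (illustrative DDL, not the original table's definition):
CREATE TABLE my_iot (
  id        NUMBER PRIMARY KEY,
  value     VARCHAR2(50),
  comments  VARCHAR2(500)
)
ORGANIZATION INDEX
INCLUDING value   -- ID and VALUE stay in the top (index) segment
OVERFLOW;         -- COMMENTS spills to the overflow segment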
How to move an IOT into another tablespace?
SQL> SELECT segment_name, segment_type, tablespace_name FROM user_segments ORDER BY 1;
SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME
------------------------------ ------------------ ------------------------------
SYS_IOT_OVER_71142 TABLE TS_D01
SYS_IOT_TOP_71142 INDEX TS_D01
Let's assume you want to move all the segments from their current tablespace to another tablespace. You'll quickly discover that you cannot use the standard commands:
SQL> ALTER TABLE sys_iot_over_71142 MOVE TABLESPACE ts_i01;
ALTER TABLE sys_iot_over_71142 MOVE TABLESPACE ts_i01
*
ERROR at line 1:
ORA-25191: cannot reference overflow table of an index-organized table
SQL> ALTER INDEX sys_iot_top_71142 REBUILD TABLESPACE ts_i01;
ALTER INDEX sys_iot_top_71142 REBUILD TABLESPACE ts_i01
*
ERROR at line 1:
ORA-28650: Primary index on an IOT cannot be rebuilt
You have to first find the name of the associated IOT table:
SQL> SELECT iot_name FROM user_tables WHERE table_name = 'SYS_IOT_OVER_71142';
IOT_NAME
------------------------------
MY_IOT
SQL> SELECT table_name FROM user_indexes WHERE index_name = 'SYS_IOT_TOP_71142';
TABLE_NAME
------------------------------
MY_IOT
Then you can move the segments:
SQL> ALTER TABLE my_iot MOVE TABLESPACE ts_i01;
Table altered.
SQL> ALTER TABLE my_iot MOVE OVERFLOW TABLESPACE ts_i01;
Table altered.
SQL> SELECT segment_name, segment_type, tablespace_name FROM user_segments ORDER BY 1;
SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME
------------------------------ ------------------ ------------------------------
SYS_IOT_OVER_71142 TABLE TS_I01
SYS_IOT_TOP_71142 INDEX TS_I01
The first statement moves the top segment and the second one the overflow segment. This can be done in one statement using:
SQL> ALTER TABLE my_iot MOVE TABLESPACE ts_d01 OVERFLOW TABLESPACE ts_d01;
Table altered.
SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME
------------------------------ ------------------ ------------------------------
SYS_IOT_OVER_71142 TABLE TS_D01
SYS_IOT_TOP_71142 INDEX TS_D01
What is Oracle Exadata ???
Oracle recently announced a wide array of products and solutions spanning database, middleware, applications and hardware in the Indian market. Among all these products and solutions, Exadata drew our attention.
According to the company, Oracle "Exadata Database Machine" provides extreme performance for both data warehousing and online transaction processing (OLTP) applications, making it the ideal platform for consolidating onto grids or private clouds. A complete package of servers, storage, networking, and software, the product is scalable, secure, and redundant.
To date, more than 1,000 Exadata machines have been installed in 67 countries, across 23 industries. With more than 60 per cent market share, the company aims to leverage its existing customer base of more than 7,000 Indian organizations to drive sales for Exadata.
Currently, Sun Pharmaceuticals, Tech Mahindra, Religare, Reliance Consumer Finance and Stock Holding Corporation of India have adopted Exadata for their data warehousing, database and OLTP requirements.
Placed among the top five vendors worldwide, Oracle believes that Exadata made a strong contribution to Oracle's growth in Q4 FY10. IT PRO India explores the product with Sheshagiri Anegondi, Vice President – Technology, Oracle India.
Bringing in technical innovation
"Exadata brings forward key technical innovations like Intelligent Storage, Smart Flash Cache and Hybrid Columnar Compression. With these innovations, Exadata provides extreme performance while ensuring maximum ROI," says Sheshagiri Anegondi, Vice President – Technology, Oracle India.
He further adds: "the smart storage software in Exadata offloads data-intensive query processing from Oracle Database 11g servers to Exadata's storage layer for parallel data processing. Because there's less data moving through the higher-bandwidth connections, you get dramatically improved performance as well as concurrency for simple and complex data warehousing queries."
Exadata packs in some impressive features. It performs 1.5 million I/O operations per second, speeding database performance by 10x. It also reduces electricity consumption by 87.5 per cent and floor space by 75 per cent, thereby shrinking the datacenter from an entire building to a single room.
Playing pivotal role in security issues
Exadata will also play a pivotal role in data security. "As organizations consolidate their data, more and more sensitive information ranging from email addresses to credit card numbers now resides in a single database, giving organizations the ability to secure and monitor that data more efficiently than ever before," according to Anegondi.
As a matter of fact, Exadata can utilize Oracle's industry-leading database security solutions to block threats and detect unauthorized activities such as misuse of privileged user credentials, insider threats, and SQL injection.
For further security, Oracle is offering Advanced Security for Exadata. This includes Oracle Database Vault, which protects against misuse of stolen login credentials, application bypass, and unauthorized changes to applications, including attempts to make copies of application tables; Oracle Audit Vault, which gives auditors and internal security personnel an efficient means of reviewing activity inside the Oracle Exadata Database Machine; and Oracle Database Firewall, which monitors inbound SQL traffic over Oracle SQL*Net and the TCP/IP protocol.
Oracle RAC Background processes
RAC Background Processes:
1. Lock Monitor Processes ( LMON)
2. Lock Monitor Services (LMS)
3. Lock Monitor Daemon Process ( LMD)
4. LCKn ( Lock Process)
5. DIAG (Diagnostic Daemon)
1. Lock Monitor Processes ( LMON)
It maintains GCS memory structures.
It handles the abnormal termination of processes and instances.
Reconfiguration of locks and resources when an instance joins or leaves the cluster is handled by LMON (during reconfiguration LMON generates trace files).
It is responsible for executing dynamic lock remastering every 10 minutes (only in 10g R2 and later versions).
LMON manages the global locks and resources.
It monitors all instances in the cluster, primarily for dictionary cache locks, library cache locks and deadlocks on deadlock-sensitive enqueues and resources.
LMON also provides cluster group services.
Also called the Global Enqueue Service monitor.
2. Lock Monitor Services (LMS)
LMS is the most active background process.
It consumes a significant amount of CPU time (in 10g R2, ensure that the LMS process does not encounter CPU starvation).
Its primary job is to transport blocks across the nodes for cache-fusion requests.
If there is a consistent-read request, the LMS process rolls back the block, makes a consistent-read image of the block and then ships this block across the HSI (High Speed Interconnect) to the requesting process on a remote node.
LMS must also constantly check with the LMD background process (the GES process) to pick up the lock requests placed by LMD.
Each node has 2 or more LMS processes.
GCS_SERVER_PROCESSES --> the init.ora parameter that specifies the number of LMS processes.
The parameter's value is derived from the number of CPUs (MIN(CPU_COUNT/2, 2)).
In 10gR2, on a single-CPU instance, only one LMS process is started.
Increase the parameter value if global cache activity is very high.
Also called the GCS (Global Cache Services) processes.
Internal View: X$KJMSDP
3. Lock Monitor Daemon Process ( LMDn)
The LMD process performs global lock deadlock detection.
It also monitors for lock conversion timeouts.
It is sometimes referred to as the GES (Global Enqueue Service) daemon, since its job is to manage global enqueue and global resource access.
The LMD process also handles deadlock detection and remote enqueue requests.
Remote resource requests are requests originating from another instance.
Internal View: X$KJMDDP
4. LCKn ( Lock Process)
Manages instance resource requests and cross-instance calls for shared resources.
During instance recovery, it builds a list of invalid lock elements and validates lock elements.
5. DIAG (Diagnostic Daemon)
New in Oracle 10g, as part of the enhanced diagnosability framework.
It regularly monitors the health of the instance.
It also checks for instance hangs and deadlocks.
It captures vital diagnostic data for instance and process failures.
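To see which of these background processes are actually alive on an instance, you can query v$bgprocess; rows with a non-null paddr correspond to running processes (names such as LMS0, LMD0 and LCK0 are the numbered instances of the processes above):
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> HEXTORAW('00')
AND    (name LIKE 'LM%' OR name IN ('LCK0', 'DIAG'));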
What's blocking my lock?
Create a blocking lock
To begin, create a situation where one user is actively blocking another. Open two sessions. Issue the following commands in Session 1 to build the test table:
SQL> create table tstlock (foo varchar2(1), bar varchar2(1));
Table created.
SQL> insert into tstlock values (1,'a');
1 row created.
SQL> insert into tstlock values (2, 'b');
1 row created.
SQL> select * from tstlock ;
FOO BAR
--- ---
1 a
2 b
2 rows selected.
SQL> commit ;
Commit complete.
Now grab a lock on the whole table, still in Session 1:
SQL> select * from tstlock for update ;
And in Session 2, try to update a row:
SQL> update tstlock set bar=
2 'a' where bar='a' ;
This statement will hang, blocked by the lock that Session 1 is holding on the entire table.
Identify the blocking session
Oracle provides a view, DBA_BLOCKERS, which lists the SIDs of all blocking sessions. But this view is often, in my experience, a good bit slower than simply querying V$LOCK, and it doesn't offer any information beyond the SIDs of any sessions that are blocking other sessions. The V$LOCK view is faster to query, makes it easy to identify the blocking session, and has a lot more information.
SQL> select * from v$lock ;
ADDR KADDR SID TY ID1 ID2 LMODE REQUEST CTIME BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
AF9E2C4C AF9E2C60 479 TX 131078 16739 0 6 685 0
ADDF7EC8 ADDF7EE0 422 TM 88519 0 3 0 697 0
ADDF7F74 ADDF7F8C 479 TM 88519 0 3 0 685 0
ADEBEA20 ADEBEB3C 422 TX 131078 16739 6 0 697 1
.... .... ... ... .... .... .... .... .... ....
Note the BLOCK column. If a session holds a lock that's blocking another session, BLOCK=1. Further, you can tell which session is being blocked by comparing the values in ID1 and ID2. The blocked session will have the same values in ID1 and ID2 as the blocking session, and, since it is requesting a lock it's unable to get, it will have REQUEST > 0.
In the query above, we can see that SID 422 is blocking SID 479. SID 422 corresponds to Session 1 in our example, and SID 479 is our blocked Session 2.
To avoid having to stare at the table and cross-compare ID1's and ID2's, put this in a query:
SQL> select l1.sid, ' IS BLOCKING ', l2.sid
2 from v$lock l1, v$lock l2
3 where l1.block =1 and l2.request > 0
4 and l1.id1=l2.id1
5 and l1.id2=l2.id2
SQL> /
SID 'ISBLOCKING' SID
---------- ------------- ----------
422 IS BLOCKING 479
1 row selected.
Even better, if we throw a little v$session into the mix, the results are highly readable:
SQL> select s1.username || '@' || s1.machine
2 || ' ( SID=' || s1.sid || ' ) is blocking '
3 || s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
4 from v$lock l1, v$session s1, v$lock l2, v$session s2
5 where s1.sid=l1.sid and s2.sid=l2.sid
6 and l1.BLOCK=1 and l2.request > 0
7 and l1.id1 = l2.id1
8 and l1.id2 = l2.id2 ;
BLOCKING_STATUS
----------------------------------------------------------------------------------------------------
BULKLOAD@yttrium ( SID=422 ) is blocking BULKLOAD@yttrium ( SID=479 )
1 row selected.
There's still more information in the v$lock table, but in order to read that information, we need to understand a bit more about lock types and the cryptically-named ID1 and ID2 columns.
Lock type and the ID1 / ID2 columns
In this case, we already know that the blocking lock is an exclusive DML lock, since we're the ones who issued the locking statement. But most of the time, you won't be so lucky. Fortunately, you can read this information from the v$lock table with little effort.
The first place to look is the TYPE column. There are dozens of lock types, but the vast majority are system types. System locks are normally only held for a very brief amount of time, and it's not generally helpful to try to tune your library cache, undo logs, etc. by looking in v$lock! (See the V$LOCK chapter in the Oracle Database Reference for a list of system lock types.)
There are only three types of user locks, TX, TM and UL. UL is a user-defined lock -- a lock defined with the DBMS_LOCK package. The TX lock is a row transaction lock; it's acquired once for every transaction that changes data, no matter how many objects you change in that transaction. The ID1 and ID2 columns point to the rollback segment and transaction table entries for that transaction.
The TM lock is a DML lock. It's acquired once for each object that's being changed. The ID1 column identifies the object being modified.
Lock Modes
You can see more information on TM and TX locks just by looking at the lock modes. The LMODE and REQUEST columns both use the same numbering for lock modes, in order of increasing exclusivity: from 0 for no lock, to 6 for exclusive lock. A session must obtain an exclusive TX lock in order to change data; LMODE will be 6. If it can't obtain an exclusive lock because some of the rows it wants to change are locked by another session, then it will request a TX in exclusive mode; LMODE will be 0 since it does not have the lock, and REQUEST will be 6. You can see this interaction in the rows we selected earlier from v$lock:
ADDR KADDR SID TY ID1 ID2 LMODE REQUEST CTIME BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
AF9E2C4C AF9E2C60 479 TX 131078 16739 0 6 685 0
ADEBEA20 ADEBEB3C 422 TX 131078 16739 6 0 697 1
Note that ID1 and ID2 in Session 2, which is requesting the TX lock (LMODE=0, REQUEST=6), point back to the rollback and transaction entries for Session 1. That's what lets us determine the blocking session for Session 2.
You may also see TX locks in mode 4, Shared mode. If a block containing rows to be changed doesn't have any interested transaction list (ITL) entries left, then the session acquires a TX lock in mode 4 while waiting for an ITL entry. If you see contention for TX-4 locks on an object, you probably need to increase INITRANS for the object.
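If you do hit TX-4 waits from ITL shortage, raising INITRANS is straightforward; note that it only affects newly formatted blocks, so existing blocks need a MOVE or rebuild to pick it up (a sketch against our test table):
SQL> ALTER TABLE tstlock INITRANS 4;
SQL> ALTER TABLE tstlock MOVE;   -- reformats existing blocks with the new INITRANS
-- remember to rebuild any indexes after a table MOVE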
TM locks are generally requested and acquired in modes 3, aka Row Exclusive, and 6. DDL requires a TM Exclusive lock. (Note that CREATE TABLE doesn't require a TM lock -- it doesn't need to lock any objects, because the object in question doesn't exist yet!) DML requires a Row Exclusive lock. So, in the rows we selected earlier from v$lock, you can see from the TM locking levels that these are DML locks:
ADDR KADDR SID TY ID1 ID2 LMODE REQUEST CTIME BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
ADDF7EC8 ADDF7EE0 422 TM 88519 0 3 0 697 0
ADDF7F74 ADDF7F8C 479 TM 88519 0 3 0 685 0
Identifying the locked object
Now that we know that each TM row points to a locked object, we can use ID1 to identify the object.
SQL> select object_name from dba_objects where object_id=88519 ;
OBJECT_NAME
--------------
TSTLOCK
Sometimes just knowing the object is enough information; but we can dig even deeper. We can identify not just the object, but the block and even the row in the block that Session 2 is waiting on.
Identifying the locked row
We can get this information from v$session by looking at the v$session entry for the blocked session:
SQL> select row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
2* from v$session where sid=479 ;
ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW#
------------- -------------- --------------- -------------
88519 16 171309 0
This gives us the object ID, the relative file number, the block in the datafile, and the row in the block that the session is waiting on. If that list of data sounds familiar, it's because those are the four components of an extended ROWID. We can build the row's actual extended ROWID from these components using the DBMS_ROWID package. The ROWID_CREATE function takes these arguments and returns the ROWID:
SQL> select do.object_name,
2 row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#,
3 dbms_rowid.rowid_create ( 1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW# )
4 from v$session s, dba_objects do
5 where sid=479
6 and s.ROW_WAIT_OBJ# = do.OBJECT_ID ;
OBJECT_NAME ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW# DBMS_ROWID.ROWID_C
--------------- ------------- -------------- --------------- ------------- ------------------
TSTLOCK 88519 16 171309 0 AAAVnHAAQAAAp0tAAA
And, of course, this lets us inspect the row directly.
SQL> select * from tstlock where rowid='AAAVnHAAQAAAp0tAAA' ;
FOO BAR
--- ---
1 a
Conclusion
We've seen how to identify a blocking session, and how to inspect the very row that the waiting session is waiting for. And, I hope, learned a bit about v$lock in the process.
To begin, create a situation where one user is actively blocking another. Open two sessions. Issue the following commands in Session 1 to build the test table:
SQL> create table tstlock (foo varchar2(1), bar varchar2(1));
Table created.
SQL> insert into tstlock values (1,'a');
1 row created.
SQL> insert into tstlock values (2, 'b');
1 row created.
SQL> select * from tstlock ;
FOO BAR
--- ---
1 a
2 b
2 rows selected.
SQL> commit ;
Commit complete.
Now grab a lock on the whole table, still in Session 1:
SQL> select * from tstlock for update ;
And in Session 2, try to update a row:
SQL> update tstlock set bar=
2 'a' where bar='a' ;
This statement will hang, blocked by the lock that Session 1 is holding on the entire table.
Identify the blocking session
Oracle provides a view, DBA_BLOCKERS, which lists the SIDs of all blocking sessions. But this view is often, in my experience, a good bit slower than simply querying V$LOCK, and it doesn't offer any information beyond the SIDs of any sessions that are blocking other sessions. The V$LOCK view is faster to query, makes it easy to identify the blocking session, and has a lot more information.
SQL> select * from v$lock ;
ADDR KADDR SID TY ID1 ID2 LMODE REQUEST CTIME BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
AF9E2C4C AF9E2C60 479 TX 131078 16739 0 6 685 0
ADDF7EC8 ADDF7EE0 422 TM 88519 0 3 0 697 0
ADDF7F74 ADDF7F8C 479 TM 88519 0 3 0 685 0
ADEBEA20 ADEBEB3C 422 TX 131078 16739 6 0 697 1
.... .... ... ... .... .... .... .... .... ....
Note the BLOCK column. If a session holds a lock that's blocking another session, BLOCK=1. Further, you can tell which session is being blocked by comparing the values in ID1 and ID2. The blocked session will have the same values in ID1 and ID2 as the blocking session, and, since it is requesting a lock it's unable to get, it will have REQUEST > 0.
In the query above, we can see that SID 422 is blocking SID 479. SID 422 corresponds to Session 1 in our example, and SID 479 is our blocked Session 2.
To avoid having to stare at the table and cross-compare ID1's and ID2's, put this in a query:
SQL> select l1.sid, ' IS BLOCKING ', l2.sid
2 from v$lock l1, v$lock l2
3 where l1.block =1 and l2.request > 0
4 and l1.id1=l2.id1
5 and l1.id2=l2.id2
SQL> /
SID 'ISBLOCKING' SID
---------- ------------- ----------
422 IS BLOCKING 479
1 row selected.
Even better, if we throw a little v$session into the mix, the results are highly readable:
SQL> select s1.username || '@' || s1.machine
2 || ' ( SID=' || s1.sid || ' ) is blocking '
3 || s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
4 from v$lock l1, v$session s1, v$lock l2, v$session s2
5 where s1.sid=l1.sid and s2.sid=l2.sid
6 and l1.BLOCK=1 and l2.request > 0
7 and l1.id1 = l2.id1
8 and l2.id2 = l2.id2 ;
BLOCKING_STATUS
----------------------------------------------------------------------------------------------------
BULKLOAD@yttrium ( SID=422 ) is blocking BULKLOAD@yttrium ( SID=479 )
1 row selected.
There's still more information in the v$lock table, but in order to read that information, we need to understand a bit more about lock types and the cryptically-named ID1 and ID2 columns.
Lock type and the ID1 / ID2 columns
In this case, we already know that the blocking lock is an exclusive DML lock, since we're the ones who issued the locking statement. But most of the time, you won't be so lucky. Fortunately, you can read this information from the v$lock table with little effort.
The first place to look is the TYPE column. There are dozens of lock types, but the vast majority are system types. System locks are normally only held for a very brief amount of time, and it's not generally helpful to try to tune your library cache, undo logs, etc. by looking in v$lock! (See the V$LOCK chapter in the Oracle Database Reference for a list of system lock types.)
There are only three types of user locks, TX, TM and UL. UL is a user-defined lock -- a lock defined with the DBMS_LOCK package. The TX lock is a row transaction lock; it's acquired once for every transaction that changes data, no matter how many objects you change in that transaction. The ID1 and ID2 columns point to the rollback segment and transaction table entries for that transaction.
The TM lock is a DML lock. It's acquired once for each object that's being changed. The ID1 column identifies the object being modified.
Lock Modes
You can see more information on TM and TX locks just by looking at the lock modes. The LMODE and REQUEST columns both use the same numbering for lock modes, in order of increasing exclusivity: from 0 for no lock, to 6 for exclusive lock. A session must obtain an exclusive TX lock in order to change data; LMODE will be 6. If it can't obtain an exclusive lock because some of the rows it wants to change are locked by another session, then it will request a TX in exclusive mode; LMODE will be 0 since it does not have the lock, and REQUEST will be 6. You can see this interaction in the rows we selected earlier from v$lock:
ADDR KADDR SID TY ID1 ID2 LMODE REQUEST CTIME BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
AF9E2C4C AF9E2C60 479 TX 131078 16739 0 6 685 0
ADEBEA20 ADEBEB3C 422 TX 131078 16739 6 0 697 1
Note that ID1 and ID2 in Session 2, which is requesting the TX lock (LMODE=0, REQUEST=6), point back to the rollback and transaction entries for Session 1. That's what lets us determine the blocking session for Session 2.
You may also see TX locks in mode 4, Shared mode. If a block containing rows to be changed doesn't have any interested transaction list (ITL) entries left, then the session acquires a TX lock in mode 4 while waiting for an ITL entry. If you see contention for TX-4 locks on an object, you probably need to increase INITRANS for the object.
TM locks are generally requested and acquired in modes 3, aka Shared-Row Exclusive, and 6. DDL requires a TM Exclusive lock. (Note that CREATE TABLE doesn't require a TM lock -- it doesn't need to lock any objects, because the object in question doesn't exist yet!) DML requires a Shared-Row Exclusive lock. So, in the rows we selected earlier from v$lock, you can see from the TM locking levels that these are DML locks:
ADDR KADDR SID TY ID1 ID2 LMODE REQUEST CTIME BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
ADDF7EC8 ADDF7EE0 422 TM 88519 0 3 0 697 0
ADDF7F74 ADDF7F8C 479 TM 88519 0 3 0 685 0
Identifying the locked object
Now that we know that each TM row points to a locked object, we can use ID1 to identify the object.
SQL> select object_name from dba_objects where object_id=88519 ;
OBJECT_NAME
--------------
TSTLOCK
Sometimes just knowing the object is enough information; but we can dig even deeper. We can identify not just the object, but the block and even the row in the block that Session 2 is waiting on.
Identifying the locked row
We can get this information from v$session by looking at the v$session entry for the blocked session:
SQL> select row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
2* from v$session where sid=479 ;
ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW#
------------- -------------- --------------- -------------
88519 16 171309 0
This gives us the object ID, the relative file number, the block in the datafile, and the row in the block that the session is waiting on. If that list of data sounds familiar, it's because those are the four components of an extended ROWID. We can build the row's actual extended ROWID from these components using the DBMS_ROWID package. The ROWID_CREATE function takes these arguments and returns the ROWID:
SQL> select do.object_name,
2 row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#,
3 dbms_rowid.rowid_create ( 1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW# )
4 from v$session s, dba_objects do
5 where sid=479
6 and s.ROW_WAIT_OBJ# = do.OBJECT_ID ;
OBJECT_NAME ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW# DBMS_ROWID.ROWID_C
--------------- ------------- -------------- --------------- ------------- ------------------
TSTLOCK 88519 16 171309 0 AAAVnHAAQAAAp0tAAA
And, of course, this lets us inspect the row directly.
SQL> select * from tstlock where rowid='AAAVnHAAQAAAp0tAAA' ;
FOO BAR
--- ---
1 a
Conclusion
We've seen how to identify a blocking session, and how to inspect the very row that the waiting session is waiting for. And, I hope, learned a bit about v$lock in the process.
Monday, 10 October 2011
How to Export and Import Statistics?
A critical application suddenly seems to hang; wait events show long table scans running on the OLTP environment.
It turns out that the DBA in charge of this system gathered statistics on the tables of the user that owns the application.
The statistics gathering got stuck and the process was killed. From that moment the application started to perform extremely slowly.
The production database has several clones; we decide to export statistics from one of these clones back to the production database.
This document describes the steps to export and import statistics.
1. Create the stat table
SQL> exec DBMS_STATS.CREATE_STAT_TABLE('SCOTT','OLD_STATS');
PL/SQL procedure successfully completed.
SQL> GRANT SELECT,INSERT,UPDATE,DELETE ON SCOTT.OLD_STATS TO PUBLIC;
Grant succeeded.
SQL>
SQL> select 'exec dbms_stats.export_table_stats('||chr(39)||owner||chr(39)||','||chr(39)||table_name||chr(39)
  2  ||',null,'||chr(39)||'OLD_STATS'||chr(39)||',null,true,'||chr(39)||'SCOTT'||chr(39)||')'
  3  from dba_tables where owner ='&usrname'
  4
SQL> /
Enter value for usrname: SCOTT
old   3: from dba_tables where owner ='&usrname'
new   3: from dba_tables where owner ='SCOTT'
exec dbms_stats.export_table_stats('SCOTT','DEPT',null,'OLD_STATS',null,true,'SCOTT')
exec dbms_stats.export_table_stats('SCOTT','EMP',null,'OLD_STATS',null,true,'SCOTT')
exec dbms_stats.export_table_stats('SCOTT','BONUS',null,'OLD_STATS',null,true,'SCOTT')
exec dbms_stats.export_table_stats('SCOTT','SALGRADE',null,'OLD_STATS',null,true,'SCOTT')
exec dbms_stats.export_table_stats('SCOTT','OLD_STATS',null,'OLD_STATS',null,true,'SCOTT')
SQL>
SQL> select num_rows,last_analyzed from dba_tables where owner='SCOTT';
NUM_ROWS LAST_ANAL
---------- ---------
4 25-DEC-10
14 25-DEC-10
0 25-DEC-10
5 25-DEC-10
SQL> exec DBMS_STATS.EXPORT_SCHEMA_STATS('&username','&stattablename');
Enter value for username: SCOTT
Enter value for stattablename: OLD_STATS
PL/SQL procedure successfully completed.
SQL> COMMIT;
Commit complete.
SQL>
SQL> host
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.
E:\>exp system/sys file=scott_stat.dmp tables=scott.old_stats
Export: Release 10.2.0.1.0 - Production on Sat Dec 25 14:58:20 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Product
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path ...
Current user changed to SCOTT
. . exporting table OLD_STATS 24 rows exported
Export terminated successfully without warnings.
E:\>
Next, I delete the existing schema statistics:
SQL> exec DBMS_STATS.delete_SCHEMA_STATS('&username');
Enter value for username: SCOTT
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL>
SQL> select stale_stats, num_rows from dba_tab_statistics where owner='SCOTT' and stale_stats is not null;
no rows selected
Here, instead of importing the saved statistics from the export dump, I regather fresh statistics:
SQL> exec dbms_stats.gather_schema_stats -
> (ownname=> 'SCOTT', -
> cascade=>TRUE, -
> degree => 4, -
> estimate_percent=>dbms_stats.auto_sample_size,-
> options =>'GATHER');
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL> select stale_stats, num_rows from dba_tab_statistics where owner='SCOTT' and stale_stats is not null;
NO 4
NO 14
NO 0
NO 5
NO 24
SQL>
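For completeness, the import half of the title: to restore the saved statistics instead of regathering, the reverse path would look like this (a sketch reusing the dump file and stat table from above; exp/imp options may vary by version):
E:\>imp system/sys file=scott_stat.dmp fromuser=scott touser=scott
SQL> exec DBMS_STATS.IMPORT_SCHEMA_STATS('SCOTT','OLD_STATS');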
How to Disable RecycleBin in Oracle 10g R1 & R2
On 10gR1, if we want to disable the recycling behavior, there is an underscore parameter "_recyclebin", which defaults to TRUE. We can disable the recycle bin by setting it to FALSE.
From SYSDBA user:
SQL> SELECT a.ksppinm, b.ksppstvl, b.ksppstdf FROM x$ksppi a, x$ksppcv b
WHERE a.indx = b.indx AND a.ksppinm like '%recycle%'
ORDER BY a.ksppinm
/
Parameter Value Default?
---------------------------- ---------------------------------------- --------
_recyclebin TRUE TRUE
From BH user:
SQL> CREATE TABLE t1(a NUMBER);
Table created.
SQL> DROP TABLE t1;
Table dropped.
SQL> SELECT original_name FROM user_recyclebin;
ORIGINAL_NAME
--------------
T1
From SYSDBA user:
SQL> ALTER SYSTEM SET "_recyclebin"=FALSE SCOPE = BOTH;
System altered.
SQL> SELECT a.ksppinm, b.ksppstvl, b.ksppstdf FROM x$ksppi a, x$ksppcv b
WHERE a.indx = b.indx AND a.ksppinm like '%recycle%'
ORDER BY a.ksppinm
/
Parameter Value Default?
---------------------------- ---------------------------------------- --------
_recyclebin FALSE TRUE
From BH user:
SQL> CREATE TABLE t1(a NUMBER);
Table created.
SQL> DROP TABLE t1;
Table dropped.
SQL> SELECT original_name
FROM user_recyclebin;
no rows selected
There is no need to PURGE.
PS: Avoid using Oracle hidden parameters where possible, and discuss with Oracle Support before setting one.
In oracle 10g R2
On 10gR2, recyclebin is an initialization parameter and by default it is ON. We can disable the recycle bin using the following commands:
SQL> ALTER SESSION SET recyclebin = OFF;
SQL> ALTER SYSTEM SET recyclebin = OFF;
The dropped objects, when recyclebin was ON will remain in the recyclebin even if we set the recyclebin parameter to OFF.
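You can verify the current setting, for example (the value shown is illustrative):
SQL> show parameter recyclebin
NAME         TYPE    VALUE
------------ ------- -----
recyclebin   string  OFF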
Posted by NITISH at Saturday, June 14, 2008 0 comments
Labels: Flashback Technology
Wednesday, July 11, 2007
Flashback Tech
In Oracle 10g, the flashback functionality has been greatly extended.
Flashback Database
Flashback Database is faster than traditional point-in-time recovery. Traditional recovery uses redo log files and backups. Flashback Database is implemented using a new type of log file called Flashback Database logs. The Oracle database server periodically logs before-images of data blocks in the Flashback Database logs. The data block images are used to quickly back out changes to the database during Flashback Database.
Flashback Database reduces the time required to recover the database to a previous point. The time to restore a database is proportional to the number of changes that need to be backed out, not the size of the database.
RVWR Background Process
When Flashback Database is enabled, a new RVWR background process is started. This process is similar to the LGWR (log writer) process. The new process writes Flashback Database data to the Flashback Database logs.
RVWR Background Process and Flashback Database Logs
The listing below shows the RVWR background process running for the 'grid' instance.
$ ps -ef | grep grid
oracle 25169 1 0 16:32:22 ? 0:00 ora_rvwr_grid
Enabling Flashback Database
You can enable Flashback Database using the following steps:
1. Make sure the database is in archivelog mode.
2. Configure the recovery area by setting the two parameters:
- db_recovery_file_dest
- db_recovery_file_dest_size
3. Open the database in MOUNT EXCLUSIVE mode and turn on the flashback feature:
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> ALTER DATABASE FLASHBACK ON;
4. Set the Flashback Database retention target:
- db_flashback_retention_target
5. Determine if Flashback Database is enabled:
SQL> select flashback_on from v$database;
FLASHBACK_ON
------------
YES
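Steps 2 and 4 above name the parameters without showing the commands; a minimal sketch (the destination and sizes are illustrative values, not from this system):
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE=BOTH;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra' SCOPE=BOTH;
SQL> ALTER SYSTEM SET db_flashback_retention_target = 1440 SCOPE=BOTH;  -- minutes, i.e. one day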
Flashback Drop
Prior to Oracle 10g, a DROP command permanently removed objects from the database. In Oracle 10g, a DROP command places the object in the recycle bin. The extents allocated to the segment are not reallocated until you purge the object. You can restore the object from the recycle bin at any time.
This feature eliminates the need to perform a point-in-time recovery operation and therefore has minimal impact on other database users.
Recycle Bin
The recycle bin retains dropped database objects until:
- You permanently remove them with the PURGE command.
- You recover them with FLASHBACK TABLE ... TO BEFORE DROP.
- There is no room in the tablespace for new rows or updates to existing rows.
- The tablespace needs to be extended.
You can view the dropped objects in the recycle bin from two dictionary views:
- user_recyclebin - lists all dropped user objects
- dba_recyclebin - lists all dropped system-wide objects
Example :
SQL> show user
USER is "SCOTT"
SQL>
SQL> create table tst (a int);
Table created.
SQL> select object_name,original_name,operation,type from user_recyclebin;
no rows selected
SQL> drop table tst;
Table dropped.
SQL> select object_name,original_name,operation,type from user_recyclebin;
OBJECT_NAME ORIGINAL_NAME OPERATION TYPE
------------------------------ -------------------------------- --------- -------------------------
BIN$NPgvUqA322DgQKjAAwIuMw==$0 TST DROP TABLE
SQL> create table tst (a int);
Table created.
SQL> drop table tst;
Table dropped.
SQL> select object_name,original_name,operation,type from user_recyclebin
SQL> /
OBJECT_NAME ORIGINAL_NAME OPERATION TYPE
------------------------------ -------------------------------- --------- -------------------------
BIN$NPgvUqA322DgQKjAAwIuMw==$0 TST DROP TABLE
BIN$NPgvUqA422DgQKjAAwIuMw==$0 TST DROP TABLE
Restoring a Dropped Object
SQL> flashback table TST to before drop;
Flashback complete.
SQL> select * from tab;
TNAME TABTYPE CLUSTERID
------------------------------ ------- ----------
DEPT TABLE
EMP TABLE
BONUS TABLE
SALGRADE TABLE
BIN$NPgvUqA322DgQKjAAwIuMw==$0 TABLE
TST TABLE
6 rows selected.
SQL> select object_name,type from user_recyclebin;
OBJECT_NAME TYPE
------------------------------ -------------------------
BIN$NPgvUqA322DgQKjAAwIuMw==$0 TABLE
Dropping a Table Permanently
SQL> DROP TABLE TST PURGE;
Table dropped.
SQL> purge table tst;
Table purged.
SQL> purge recyclebin;
Recyclebin purged.
SQL> purge dba_recyclebin;
purge dba_recyclebin
*
ERROR at line 1:
ORA-01031: insufficient privileges
The following statement purges all recycle bin objects that belong to the USERS tablespace:
SQL> purge tablespace users;
Tablespace purged.
SQL>
Flashback Table
Flashback Table allows you to recover a table or tables to a specific point in time without restoring a backup. When you use the Flashback Table feature to restore a table to a specific point in time, all associated objects, such as indexes, constraints, and triggers will be restored.
Flashback Table operations are not valid for the following object types:
- Tables that are part of a cluster
- Materialized views
- Advanced Queuing tables
- Static data dictionary tables
- System tables
- Partitions of a table
- Remote tables (via database link)
Flashback Table is extremely useful when a user accidentally inserts, deletes, or updates the wrong rows in a table. It provides a way for users to easily and quickly recover a table to a previous point in time.
However, if the following DDL commands are issued, the flashback table command does not work:
- ALTER TABLE ... DROP COLUMN
- ALTER TABLE ... DROP PARTITION
- CREATE CLUSTER
- TRUNCATE TABLE
- ALTER TABLE ... MOVE
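Where it applies, basic usage is short (the table name and time window here are hypothetical); note that flashing a table back to a point in time requires row movement to be enabled on the table:
SQL> ALTER TABLE emp ENABLE ROW MOVEMENT;
SQL> FLASHBACK TABLE emp TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);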
Flashback versions
Flashback Query was first introduced in Oracle9i to provide a way to view historical data. In Oracle 10g, this feature has been extended. You can now retrieve all versions of the rows that exist or ever existed between the time the query was issued and a point back in time. This type of query is called Flashback Row History.
You can use the VERSIONS BETWEEN clauses to retrieve all historical data related to a row.
Example:
SQL> CREATE TABLE NITISH (S INT);
Table created.
SQL> INSERT INTO NITISH VALUES (123);
1 row created.
SQL> COMMIT;
Commit complete.
SQL> SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
GET_SYSTEM_CHANGE_NUMBER
------------------------
575002
SQL> UPDATE NITISH SET S=321;
1 row updated.
SQL> SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
GET_SYSTEM_CHANGE_NUMBER
------------------------
575090
SQL> DELETE FROM NITISH WHERE S=321;
1 row deleted.
SQL> L
1* DELETE FROM NITISH WHERE S=321
SQL> SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
GET_SYSTEM_CHANGE_NUMBER
------------------------
575105
SQL>
SQL> INSERT INTO NITISH VALUES (258);
1 row created.
SQL>
SQL> SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
GET_SYSTEM_CHANGE_NUMBER
------------------------
575112
SQL> UPDATE NITISH SET S=456;
1 row updated.
SQL> COMMIT;
Commit complete.
SQL> SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
GET_SYSTEM_CHANGE_NUMBER
------------------------
575120
SQL> select versions_startscn as startscn, versions_endscn as endscn, versions_xid as xid,
2* versions_operation as oper, s from NITISH versions between scn 575002 and 575120
SQL> /
STARTSCN ENDSCN XID O S
---------- ---------- ---------------- - ----------
575118 09000E003A010000 I 456
575118 09000E003A010000 D 123
575118 123
Flashback Transaction
The Flashback Transaction History feature provides a way to view changes made to the database at the transaction level. It allows you to diagnose problems in your database and to perform analysis and auditing of transactions. You can use this feature in conjunction with the Flashback Row History feature to roll back the changes made by a transaction. You can also use this feature to audit user and application transactions. Flashback Transaction History provides a faster way to undo a transaction than LogMiner.
You can retrieve the transaction history from the FLASHBACK_TRANSACTION_QUERY view:
Example :
SQL> select start_scn,commit_scn,logon_user,operation,table_name,undo_sql
2 from flashback_transaction_query
3* where xid= '09000E003A010000'
SQL> /
START_SCN  COMMIT_SCN LOGON_USER OPERATION TABLE_NAME
---------- ---------- ---------- --------- ----------
UNDO_SQL
--------------------------------------------------------------------------------
574990 575118 SCOTT UPDATE NITISH
update "SCOTT"."NITISH" set "S" = '258' where ROWID = 'AAAM0ZAAEAAAAGEAAB';
574990 575118 SCOTT INSERT NITISH
delete from "SCOTT"."NITISH" where ROWID = 'AAAM0ZAAEAAAAGEAAB';
574990 575118 SCOTT DELETE NITISH
insert into "SCOTT"."NITISH"("S") values ('321');
What is Oracle Streams?
Oracle Streams enables information sharing. Using Oracle Streams, each unit of shared information is called a message, and you can share these messages in a stream. The stream can propagate information within a database or from one database to another. The stream routes specified information to specified destinations. The result is a feature that provides greater functionality and flexibility than traditional solutions for capturing and managing messages, and sharing the messages with other databases and applications. Streams provides the capabilities needed to build and operate distributed enterprises and applications, data warehouses, and high availability solutions. You can use all of the capabilities of Oracle Streams at the same time. If your needs change, then you can implement a new capability of Streams without sacrificing existing capabilities.
Streams Information Flow
What Can Streams Do?
The following sections provide an overview of what Streams can do.
· Capture Messages at a Database
· Stage Messages in a Queue
· Propagate Messages from One Queue to Another
· Consume Messages
· Other Capabilities of Streams
A capture process can capture database events, such as changes made to tables, schemas, or an entire database. Such changes are recorded in the redo log for a database, and a capture process captures changes from the redo log and formats each captured change into a message called a logical change record (LCR). The rules used by a capture process determine which changes it captures, and these captured changes are called captured messages.
Etc…
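As a flavour of the administration API, here is a minimal, hypothetical sketch of setting up a capture process for one table with DBMS_STREAMS_ADM (the queue, capture and schema names are made up; a real setup also needs a configured Streams administrator):
BEGIN
  -- Create the queue that will stage the captured LCRs
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue');
  -- Create a capture process with a rule to capture DML on SCOTT.EMP
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'scott.emp',
    streams_type => 'capture',
    streams_name => 'capture_emp',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE,
    include_ddl  => FALSE);
END;
/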
What Are the Uses of Streams?
The following sections briefly describe some of the reasons for using Streams. In some cases, Streams components provide infrastructure for various features of Oracle.
• Message Queuing
• Data Replication
• Event Management and Notification
• Data Warehouse Loading
• Data Protection
• Database Availability During Upgrade and Maintenance Operations
Oracle ASM Commands
Running ASMCMD
You can run the ASMCMD utility in either interactive or noninteractive mode. Before running ASMCMD, you must ensure that you are logged in properly and that your environment is properly configured.
Preparing to Run ASMCMD
To prepare to run ASMCMD:
•Ensure that the ASM instance is started and the ASM disk groups are mounted. (You cannot use ASMCMD to mount disk groups.)
•Log in to the host that is running the ASM instance that you want to work with. You must log in as a user that has SYSDBA privileges through operating system authentication.
•Set the ORACLE_HOME and ORACLE_SID environment variables to select the ASM instance. Depending on your operating system, you may have to set other environment variables to properly connect to the ASM instance.
• The default value of the ASM SID for a single-instance database is +ASM. On a Real Application Clusters system, the default value of the ASM SID on any node is +ASMnode#.
•Ensure that the bin subdirectory of your Oracle home is in your PATH environment variable.
Running ASMCMD in Interactive Mode
The interactive mode of the ASMCMD utility provides a shell-like environment where you are prompted to enter ASMCMD commands.
To run ASMCMD in interactive mode:
1.At the operating system command prompt, enter:
asmcmd
An ASMCMD command prompt is displayed.
ASMCMD>
2.Enter an ASMCMD command and press the Enter key.
The command runs and displays its output, if any, and then ASMCMD prompts for the next command.
3.Continue entering ASMCMD commands. Enter the command exit to exit ASMCMD.
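For example, a short interactive session might look like this (the disk group and directory names are illustrative):
% asmcmd
ASMCMD> lsdg
ASMCMD> ls +DGROUP1
ASMCMD> du +DGROUP1/mydir
ASMCMD> exit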
Including the Current Directory in the ASMCMD Prompt
You can specify the -p option with the asmcmd command to include the current directory in the ASMCMD prompt, as shown in the following example:
% asmcmd -p
ASMCMD [+] > cd dgroup1/mydir
ASMCMD [+DGROUP1/MYDIR] >
Running ASMCMD in Noninteractive Mode
In noninteractive mode, you run a single ASMCMD command by including the command and command arguments on the command line that invokes ASMCMD. ASMCMD runs the command, generates output if any, and then exits. The noninteractive mode is especially useful for running scripts.
To run ASMCMD in noninteractive mode:
•At the command prompt, enter:
asmcmd command arguments
where:
command is any valid ASMCMD command. arguments is a list of command flags and arguments.
The following example demonstrates running ASMCMD in the noninteractive mode.
% asmcmd ls -l
State Type Rebal Unbal Name
MOUNTED NORMAL N N DGROUP1/
MOUNTED NORMAL N N DGROUP2/
%
Getting Help
Type help at the ASMCMD prompt or as a command in noninteractive mode to view general ASMCMD help and a list of available commands.
ASMCMD Command Reference
Thursday, 29 September 2011
HOWTO: Enable Archivelog Mode in a RAC database
1. Change the cluster database parameter so the database can be mounted in exclusive mode, which is required to enable archive logging
alter system set cluster_database=false scope=spfile;
2. Shutdown the database using srvctl
srvctl stop database -d ORA
3. Startup one of the instances up to the mount state
sqlplus / as sysdba
startup mount
4. Enable archivelog mode
alter database archivelog;
5. Change the cluster_database parameter back to true in the spfile
alter system set cluster_database=true scope=spfile;
6. Shutdown the instance
shutdown immediate
7. Startup the database using srvctl
srvctl start database -d ORA
8. Once the database is back up you can verify the change by connecting to one of the database instances
sqlplus / as sysdba
archive log list
For example:
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /archlogs/ORA/
Oldest online log sequence 1
Next log sequence to archive 3
Current log sequence 3
Simple as that.
Notes:
You don’t need to set log_archive_dest_1 as it defaults to the flash recovery area (i.e. USE_DB_RECOVERY_FILE_DEST), although you’ll need to make sure it is large enough for your needs.
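You can also confirm the mode from the data dictionary:
SQL> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG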
Wednesday, 28 September 2011
Table Partitioning in Oracle
Nowadays enterprises run databases hundreds of gigabytes in size. These databases are known as Very Large Databases (VLDB). From Oracle 8.0 onwards, Oracle has provided the feature of table partitioning, i.e. you can partition a table according to some criteria. For example, suppose you have a SALES table.
Suppose this table contains millions of records, but all the records belong to four years only, i.e. 1991, 1992, 1993 and 1994, and most of the time you are concerned with only one year, i.e. you issue queries like the following:
select sum(amt) from sales where year=1991;
select product,sum(amt) from sales where year=1992 Group by product;
Whenever you issue queries like these, Oracle searches the whole table. If you partition this table by year, performance improves, since Oracle scans only a single partition instead of the whole table.
CREATING PARTITION TABLES
To create a partition table give the following statement
create table sales (year number(4),
product varchar2(10),
amt number(10,2))
partition by range (year)
(partition p1 values less than (1992) tablespace u1,
partition p2 values less than (1993) tablespace u2,
partition p3 values less than (1994) tablespace u3,
partition p4 values less than (1995) tablespace u4,
partition p5 values less than (MAXVALUE) tablespace u5;
In the above example sales table is created with 5 partitions. Partition p1 will contain rows of year 1991 and it will be stored in tablespace u1. Partition p2 will contain rows of year 1992 and it will be stored in tablespace u2. Similarly p3 and p4.
In the above example, if you don’t specify partition p5 with VALUES LESS THAN (MAXVALUE), then you will not be able to insert any row with year above 1994.
Although not required, you can place partitions in different tablespaces. If you do, you can isolate problems due to failures, as only a particular partition becomes unavailable while the rest of the
partitions remain available.
In the above example the table is partitioned by range.
In Oracle you can partition a table by
Range Partitioning
Hash Partitioning
List Partitioning
Composite Partitioning
Range Partitioning
This type of partitioning is useful when dealing with data that has logical ranges into which it can be distributed;
for example, the value of year. Performance is best when the data distributes evenly across the range.
Hash partitioning
Use hash partitioning if your data does not easily lend itself to range partitioning, but you would like to partition for
performance and manageability reasons. Hash partitioning provides a method of evenly distributing data across a
specified number of partitions. Rows are mapped into partitions based on a hash value of the partitioning key.
The following example creates a hash-partitioned table. The partitioning column is partno; four partitions are created
and assigned system-generated names, and they are placed in four named tablespaces (tab1, tab2, ...).
CREATE TABLE products
(partno NUMBER,
description VARCHAR2 (60))
PARTITION BY HASH (partno)
PARTITIONS 4
STORE IN (tab1, tab2, tab3, tab4);
List Partitioning
Use list partitioning when you require explicit control over how rows map to partitions. You can specify a list of discrete
values for the partitioning column in the description for each partition. This is different from range partitioning, where a
range of values is associated with a partition, and from hash partitioning, where the user has no control of the row to
partition mapping.
List partitioning allows unordered and unrelated sets of data to be grouped and organized together very naturally.
The following example creates a table with list partitioning:
Create table customers (custcode number(5),
name varchar2(20),
addr varchar2(40),
city varchar2(20),
bal number(10,2))
partition by list (city)
(partition north_india values ('DELHI','CHANDIGARH'),
partition east_india values ('KOLKOTA','PATNA'),
partition south_india values ('HYDERABAD','BANGALORE','CHENNAI'),
partition west_india values ('BOMBAY','GOA'));
If a row is inserted into the above table, Oracle maps the value of the city column to the partition whose value list contains it, and the row
is stored in that partition.
COMPOSITE PARTITIONING
Composite partitioning partitions data using the range method, and within each partition, subpartitions it using
the hash method. Composite partitions are ideal for both historical data and striping, and provide improved
manageability of range partitioning and data placement, as well as the parallelism advantages of hash partitioning.
When creating composite partitions, you specify the following:
Partitioning method: range
Partitioning column(s)
Partition descriptions identifying partition bounds
Subpartitioning method: hash
Subpartitioning column(s)
Number of subpartitions for each partition or descriptions of subpartitions
The following statement creates a composite-partitioned table. In this example, three range partitions are created, each
containing eight subpartitions. Because the subpartitions are not named, system generated names are assigned, but the
STORE IN clause distributes them across the 4 specified tablespaces (tab1, ...,tab4).
CREATE TABLE PRODUCTS (partno NUMBER,
description VARCHAR(32),
costprice NUMBER)
PARTITION BY RANGE (partno)
SUBPARTITION BY HASH(description)
SUBPARTITIONS 8 STORE IN (tab1, tab2, tab3, tab4)
(PARTITION p1 VALUES LESS THAN (100),
PARTITION p2 VALUES LESS THAN (200),
PARTITION p3 VALUES LESS THAN (MAXVALUE));
ALTERING PARTITION TABLES
To add a partition
You can add a new partition at the "high" end (the point after the last existing partition). To add a partition
at the beginning or in the middle of a table, use the SPLIT PARTITION clause.
For example to add a partition to sales table give the following command.
alter table sales add partition p6 values less than (1996);
To add a partition to a Hash Partition table give the following command.
Alter table products add partition;
Then Oracle adds a new partition whose name is system generated and it is created in the default tablespace.
To add a partition with a user-defined name and in a tablespace of your choice, give the following command.
Alter table products add partition p5 tablespace u5;
To add a partition to a List partition table give the following command.
alter table customers add partition central_India
values ('BHOPAL','NAGPUR');
Any value in the set of literal values that describe the partition(s) being added must not exist in any of the other partitions
of the table.
Coalescing Partitions
Coalescing partitions is a way of reducing the number of partitions in a hash-partitioned table, or the number of subpartitions in a composite-partitioned table. When a hash partition is coalesced, its contents are redistributed into one or more remaining partitions determined by the hash function. The specific partition that is coalesced is selected by Oracle, and is dropped after its contents have been redistributed.
To coalesce a hash partition give the following statement.
Alter table products coalesce partition;
This reduces by one the number of partitions in the table products.
DROPPING PARTITIONS
To drop a partition from a range-, list- or composite-partitioned table, give the following command.
Alter table sales drop partition p5;
Once you have dropped the partition, if a global index exists on the table, you have to rebuild the global index by giving the following statement.
Alter index sales_ind rebuild;
To avoid rebuilding indexes after dropping a partition, you can also first delete all the records and then drop
the partition like this:
Delete from sales where year=1994;
Alter table sales drop partition p4;
This method is most appropriate for small tables, or for large tables when the partition being dropped contains a small percentage of the total data in the table.
Another method of dropping a partition is to give the following statement.
ALTER TABLE sales DROP PARTITION p5 UPDATE GLOBAL INDEXES;
This causes the global index to be updated at the time the partition is dropped.
Exchanging a Range, Hash, or List Partition
To exchange a partition of a range, hash, or list-partitioned table with a nonpartitioned table, or the reverse, use the ALTER TABLE ... EXCHANGE PARTITION statement. An example of converting a partition into a nonpartitioned table follows. In this example, table stocks can be range, hash, or list partitioned.
ALTER TABLE stocks
EXCHANGE PARTITION p3 WITH stock_table_3;
Merging Partitions
Use the ALTER TABLE ... MERGE PARTITIONS statement to merge the contents of two partitions into one partition. The two original partitions are dropped, as are any corresponding local indexes.
You cannot use this statement for a hash-partitioned table or for hash subpartitions of a composite-partitioned table.
You can only merge two adjacent partitions; you cannot merge non-adjacent partitions.
For example, to merge partitions p2 and p3 into one partition p23, give the following statement.
Alter table sales merge partitions p2, p3 into
partition p23;
Modifying Partitions: Adding Values
Use the MODIFY PARTITION ... ADD VALUES clause of the ALTER TABLE statement to extend the value list of an existing partition. Literal values being added must not have been included in any other partition's value list. The partition value list for any corresponding local index partition is correspondingly extended, and any global index, or global or local index partitions, remain usable.
The following statement adds a new set of cities ('KOCHI', 'MANGALORE') to an existing partition list.
ALTER TABLE customers
MODIFY PARTITION south_india
ADD VALUES ('KOCHI', 'MANGALORE');
Modifying Partitions: Dropping Values
Use the MODIFY PARTITION ... DROP VALUES clause of the ALTER TABLE statement to remove literal values from the value list of an existing partition. The statement is always executed with validation, meaning that it checks to see if any rows exist in the partition that correspond to the set of values being dropped. If any such rows are found then Oracle returns an error message and the operation fails. When necessary, use a DELETE statement to delete corresponding rows from the table before attempting to drop values.
You cannot drop all literal values from the value list describing the partition. You must use the ALTER TABLE ... DROP PARTITION statement instead.
The partition value list for any corresponding local index partition reflects the new value list, and any global index, or global or local index partitions, remain usable.
The statement below drops a set of cities ('KOCHI' and 'MANGALORE') from an existing partition value list.
ALTER TABLE customers
MODIFY PARTITION south_india
DROP VALUES ('KOCHI','MANGALORE');
SPLITTING PARTITIONS
You can split a single partition into two partitions. For example, to split partition p5 of the sales table at year 1996, give the following command.
Alter table sales split partition p5 at (1996) into
(partition p6, partition p7);
TRUNCATING PARTITIONS
Truncating a partition deletes all rows from the partition.
To truncate a partition, give the following statement:
Alter table sales truncate partition p5;
LISTING INFORMATION ABOUT PARTITIONED TABLES
To see the partitioned tables in your schema, give the following statement:
Select * from user_part_tables;
To see partition-level information:
Select * from user_tab_partitions;
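And to see, for instance, which column(s) each table is partitioned on:
Select * from user_part_key_columns;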
Suppose this table contains millions of records, but all the records belong to four years only i.e. 1991, 1992, 1993 and 1994. And most of the time you are concerned about only one year i.e. you give queries like the following
select sum(amt) from sales where year=1991;
select product,sum(amt) from sales where year=1992 Group by product;
Now whenever you give queries like this Oracle will search the whole table. If you partition this table according to year, then the performance is improve since oracle will scan only a single partition instead of whole table.
CREATING PARTITION TABLES
To create a partition table give the following statement
create table sales (year number(4),
product varchar2(10),
amt number(10,2))
partition by range (year)
partition p1 values less than (1992) tablespace u1,
partition p2 values less than (1993) tablespace u2,
partition p3 values less than (1994) tablespace u3,
partition p4 values less than (1995) tablespace u4,
partition p5 values less than (MAXVALUE) tablespace u5;
In the above example sales table is created with 5 partitions. Partition p1 will contain rows of year 1991 and it will be stored in tablespace u1. Partition p2 will contain rows of year 1992 and it will be stored in tablespace u2. Similarly p3 and p4.
In the above example if you don’t specify the partition p4 with values less than MAVALUE, then you will not be able to insert any row with year above 1994.
Although not required, you can place partitions in different tablespaces. If you place partitions in different tablespaces then you can isolate problems due to failures as only a particular partition will not be available and rest of the
partitions will still be available.
The above example the table is partition by range.
In Oracle you can partition a table by
Range Partitioning
Hash Partitioning
List Partitioning
Composite Partitioning
Range Partitioning
This type of partitioning is useful when dealing with data that has logical ranges into which it can be distributed;
for example, value of year. Performance is best when the data evenly distributes across the range
Hash partitioning
Use hash partitioning if your data does not easily lend itself to range partitioning, but you would like to partition for
performance and manageability reasons. Hash partitioning provides a method of evenly distributing data across a
specified number of partitions. Rows are mapped into partitions based on a hash value of the partitioning key
The following example shows how to create a hash partition table.
The following example creates a hash-partitioned table. The partitioning column is partno, four partitions are created
and assigned system generated names, and they are placed in four named tablespaces (tab1,tab2, ...).
CREATE TABLE products
(partno NUMBER,
description VARCHAR2 (60))
PARTITION BY HASH (partno)
PARTITIONS 4
STORE IN (tab1, tab2, tab3, tab4);
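Because the partition names are system generated (typically of the form SYS_Pnnn), you may want to look them up in the data dictionary. A minimal sketch using the standard USER_TAB_PARTITIONS view:
select partition_name, tablespace_name
from user_tab_partitions
where table_name = 'PRODUCTS'
order by partition_position;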
List Partitioning
Use list partitioning when you require explicit control over how rows map to partitions. You can specify a list of discrete
values for the partitioning column in the description for each partition. This is different from range partitioning, where a
range of values is associated with a partition, and from hash partitioning, where the user has no control over the
row-to-partition mapping.
List partitioning allows unordered and unrelated sets of data to be grouped and organized together very naturally.
The following example creates a table with list partitioning:
Create table customers (custcode number(5),
Name varchar2(20),
Addr varchar2(40),
City varchar2(20),
Bal number(10,2))
Partition by list (city)
(Partition north_India values ('DELHI','CHANDIGARH'),
Partition east_India values ('KOLKATA','PATNA'),
Partition south_India values ('HYDERABAD','BANGALORE','CHENNAI'),
Partition west_India values ('BOMBAY','GOA'));
When a row is inserted into the above table, Oracle compares the value of the city column with each partition's value list and stores the row in the partition whose list matches.
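For example, the first insert below lands in the north_India partition because 'DELHI' appears in its value list, while the second fails because no partition lists 'JAIPUR'. A sketch against the customers table above, with made-up sample values:
-- stored in partition north_India:
insert into customers values (1, 'Ravi', '12 MG Road', 'DELHI', 5000);
-- fails with ORA-14400: inserted partition key does not map to any partition
insert into customers values (2, 'Meena', '4 Park Street', 'JAIPUR', 2500);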
COMPOSITE PARTITIONING
Composite partitioning partitions data using the range method, and within each partition, subpartitions it using
the hash method. Composite partitions are ideal for both historical data and striping, and provide improved
manageability of range partitioning and data placement, as well as the parallelism advantages of hash partitioning.
When creating composite partitions, you specify the following:
Partitioning method: range
Partitioning column(s)
Partition descriptions identifying partition bounds
Subpartitioning method: hash
Subpartitioning column(s)
Number of subpartitions for each partition or descriptions of subpartitions
The following statement creates a composite-partitioned table. In this example, three range partitions are created, each containing eight subpartitions. Because the subpartitions are not named, system-generated names are assigned, but the STORE IN clause distributes them across the 4 specified tablespaces (tab1, ..., tab4).
CREATE TABLE PRODUCTS (partno NUMBER,
description VARCHAR(32),
costprice NUMBER)
PARTITION BY RANGE (partno)
SUBPARTITION BY HASH(description)
SUBPARTITIONS 8 STORE IN (tab1, tab2, tab3, tab4)
(PARTITION p1 VALUES LESS THAN (100),
PARTITION p2 VALUES LESS THAN (200),
PARTITION p3 VALUES LESS THAN (MAXVALUE));
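This table therefore has 24 subpartitions (3 x 8). To see how they were named and spread across the tablespaces, you can query the standard USER_TAB_SUBPARTITIONS view; a minimal sketch:
select partition_name, subpartition_name, tablespace_name
from user_tab_subpartitions
where table_name = 'PRODUCTS'
order by partition_name, subpartition_position;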
ALTERING PARTITION TABLES
To add a partition
You can add a new partition at the "high" end (the point after the last existing partition). To add a partition
at the beginning or in the middle of a table, use the SPLIT PARTITION clause.
For example, to add a partition to the sales table, give the following command.
alter table sales add partition p6 values less than (1996);
To add a partition to a hash-partitioned table, give the following command.
Alter table products add partition;
Oracle then adds a new partition whose name is system generated, created in the default tablespace.
To add a partition with a user-defined name and in a tablespace of your choice, give the following command.
Alter table products add partition p5 tablespace u5;
To add a partition to a list-partitioned table, give the following command.
alter table customers add partition central_India
values ('BHOPAL','NAGPUR');
Any value in the set of literal values that describe the partition(s) being added must not exist in any of the other partitions
of the table.
Coalescing Partitions
Coalescing partitions is a way of reducing the number of partitions in a hash-partitioned table, or the number of subpartitions in a composite-partitioned table. When a hash partition is coalesced, its contents are redistributed into one or more remaining partitions determined by the hash function. The specific partition that is coalesced is selected by Oracle, and is dropped after its contents have been redistributed.
To coalesce a hash partition give the following statement.
Alter table products coalesce partition;
This reduces by one the number of partitions in the table products.
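You can confirm the effect by checking the partition count in the USER_PART_TABLES view before and after the coalesce; a minimal sketch:
select table_name, partition_count
from user_part_tables
where table_name = 'PRODUCTS';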
DROPPING PARTITIONS
To drop a partition from Range Partition table, List Partition or Composite Partition table give the following command.
Alter table sales drop partition p5;
Once you have dropped the partition, any global index you have created on the table becomes unusable. You then have to rebuild the global index by giving the following statement.
Alter index sales_ind rebuild;
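You can check whether the global index actually needs rebuilding by looking at its status in the standard USER_INDEXES view (it shows UNUSABLE after a drop that removed rows); a minimal sketch:
select index_name, status
from user_indexes
where table_name = 'SALES';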
To avoid rebuilding indexes after dropping a partition, you can first delete all the records in the partition and then drop
it, like this:
Delete from sales where year=1994;
Alter table sales drop partition p4;
This method is most appropriate for small tables, or for large tables when the partition being dropped contains a small percentage of the total data in the table.
Another method of dropping a partition is to give the following statement.
ALTER TABLE sales DROP PARTITION p5 UPDATE GLOBAL INDEXES;
This causes the global index to be updated at the time the partition is dropped.
Exchanging a Range, Hash, or List Partition
To exchange a partition of a range, hash, or list-partitioned table with a nonpartitioned table, or the reverse, use the ALTER TABLE ... EXCHANGE PARTITION statement. An example of converting a partition into a nonpartitioned table follows. In this example, table stocks can be range, hash, or list partitioned.
ALTER TABLE stocks
EXCHANGE PARTITION p3 WITH TABLE stock_table_3;
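The nonpartitioned table must have the same column structure as the partitioned table. One way to prepare an empty table of the right shape, sketched here with the names used above:
-- create an empty table with the same structure as stocks:
create table stock_table_3 as
select * from stocks where 1 = 0;
-- swap the contents of partition p3 with the empty table:
alter table stocks
exchange partition p3 with table stock_table_3;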
Merging Partitions
Use the ALTER TABLE ... MERGE PARTITIONS statement to merge the contents of two partitions into one partition. The two original partitions are dropped, as are any corresponding local indexes.
You cannot use this statement for a hash-partitioned table or for hash subpartitions of a composite-partitioned table.
You can only merge two adjacent partitions; you cannot merge non-adjacent partitions.
For example, to merge the partitions p2 and p3 into one partition p23, give the following statement.
Alter table sales merge partitions p2, p3 into
partition p23;
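As with dropping a partition, the merge leaves the affected indexes unusable unless you ask Oracle to maintain them. A variant that keeps the indexes usable:
alter table sales merge partitions p2, p3 into partition p23
update indexes;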
Modifying Partitions: Adding Values
Use the MODIFY PARTITION ... ADD VALUES clause of the ALTER TABLE statement to extend the value list of an existing partition. Literal values being added must not have been included in any other partition's value list. The partition value list for any corresponding local index partition is correspondingly extended, and any global index, or global or local index partitions, remain usable.
The following statement adds a new set of cities ('KOCHI', 'MANGALORE') to an existing partition list.
ALTER TABLE customers
MODIFY PARTITION south_india
ADD VALUES ('KOCHI', 'MANGALORE');
Modifying Partitions: Dropping Values
Use the MODIFY PARTITION ... DROP VALUES clause of the ALTER TABLE statement to remove literal values from the value list of an existing partition. The statement is always executed with validation, meaning that it checks to see if any rows exist in the partition that correspond to the set of values being dropped. If any such rows are found then Oracle returns an error message and the operation fails. When necessary, use a DELETE statement to delete corresponding rows from the table before attempting to drop values.
You cannot drop all literal values from the value list describing the partition. You must use the ALTER TABLE ... DROP PARTITION statement instead.
The partition value list for any corresponding local index partition reflects the new value list, and any global index, or global or local index partitions, remain usable.
The statement below drops a set of cities ('KOCHI' and 'MANGALORE') from an existing partition value list.
ALTER TABLE customers
MODIFY PARTITION south_india
DROP VALUES ('KOCHI','MANGALORE');
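If rows with those city values still exist, the statement above fails, so delete them first; a minimal sketch of the full workflow:
-- remove the rows that map to the values being dropped:
delete from customers where city in ('KOCHI', 'MANGALORE');
commit;
-- now the values can be dropped from the partition's value list:
alter table customers
modify partition south_india
drop values ('KOCHI', 'MANGALORE');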
SPLITTING PARTITIONS
You can split a single partition into two partitions. For example, to split the partition p5 of the sales table at year 1996, give the following command.
Alter table sales split partition p5 at (1996)
into (partition p6, partition p7);
TRUNCATING PARTITION
Truncating a partition deletes all rows from the partition.
To truncate a partition, give the following statement.
Alter table sales truncate partition p5;
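By default the freed extents are deallocated. You can make this explicit, or instead keep the space allocated for future inserts, with the storage clauses:
-- release the partition's space back to the tablespace (the default):
alter table sales truncate partition p5 drop storage;
-- or keep the allocated extents for future inserts:
alter table sales truncate partition p5 reuse storage;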
LISTING INFORMATION ABOUT PARTITION TABLES
To see which partitioned tables exist in your schema, give the following statement.
Select * from user_part_tables;
To see partition-level information, give the following statement.
Select * from user_tab_partitions;
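Since SELECT * returns many columns, a more focused query over the same view is often handier; for example:
select table_name, partition_name, high_value, tablespace_name
from user_tab_partitions
order by table_name, partition_position;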