
Saturday, 26 November 2011

10046 trace file content

The 10046 trace file contents differ greatly between different versions of Oracle.


Indeed, the format of trace files sometimes changes between Oracle versions, but definitely not greatly.

Somewhere around 9.2.0.2, STAT lines changed a bit (row source execution statistics were added).
Instead of just:
STAT #2 id=1 cnt=1 pid=0 pos=1 obj=17 op='TABLE ACCESS FULL FILE$'
they started to look like this:
STAT #2 id=1 cnt=1 pid=0 pos=1 obj=17 op='TABLE ACCESS FULL FILE$ (cr=4 pr=0 pw=0 time=91 us)'

And in 10.2, the format of WAIT lines and BINDS sections changed significantly.

BINDS sections cannot be processed by tkprof in any version of Oracle, so I just skip them.

As for WAIT lines, here's the pre-10.2 WAIT line:
WAIT #4: nam='SQL*Net message from client' ela= 2794957 p1=1111838976 p2=1 p3=0
Compare it with 10.2:
WAIT #4: nam='SQL*Net message from client' ela= 2794957 driver id=1111838976 #bytes=1 p3=0 obj#=62 tim=42601199281

Wait event parameters (p1, p2, p3) got meaningful names (as a side note: though sometimes not a meaningful value -- #bytes in 'SQL*Net message' waits is still always equal to 1), and every WAIT line now carries a tim-stamp.

The funny thing is that tkprof from Oracle 9.2 correctly processes trace files from 10.2! It looks like tkprof parses a WAIT line by splitting it on the '=' character and just takes the first three values.

In summary, I believe that when tkprof from an old version can correctly process trace files from a new one, it's more natural to say that 'trace file contents do not differ greatly between different versions of Oracle'.
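That split-on-'=' behaviour is easy to sketch. The snippet below is my toy illustration, not tkprof's actual code: it pulls the event name and the first three values following '=' signs out of a 10.2-style WAIT line, which is exactly what would let an old tkprof still read ela and the first two parameters correctly.

```shell
# A 10.2-style WAIT line (from the example above).
line="WAIT #4: nam='SQL*Net message from client' ela= 2794957 driver id=1111838976 #bytes=1 p3=0 obj#=62 tim=42601199281"

# Extract the event name between the single quotes.
nam=$(printf '%s\n' "$line" | sed "s/.*nam='\([^']*\)'.*/\1/")

# Drop everything up to the closing quote, split on '=', and keep the
# leading number after each of the first three '=' signs.
vals=$(printf '%s\n' "$line" | sed "s/.*'//" | awk -F'=' '{
    for (i = 2; i <= NF && n < 3; i++) {
        v = $i
        sub(/^ */, "", v)          # trim leading spaces after "="
        sub(/[^0-9].*$/, "", v)    # keep only the leading number
        if (v != "") { printf "%s%s", (n++ ? " " : ""), v }
    }
}')

echo "$nam: $vals"
```

Run against the 10.2 line, the three values recovered are the elapsed time and the first two parameters, i.e. the same positions a pre-10.2 p1/p2 parser would expect.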

Thursday, 24 November 2011

How to Configure OEM 10g Database Control after Clone

After cloning an EBS environment, I tried to start OEM 10g Database Control (DBConsole) and got this message:

[oratest@testsrv1 ~]$ emctl start dbconsole
TZ set to Israel
OC4J Configuration issue.
/oracle/test/db/testdb/10.2/oc4j/j2ee/OC4J_DBConsole_testsrv1.domain_TEST not found.

However, this message was expected, since DB Console hadn't been configured for this instance yet.

Using emca, I tried to recreate the DB Control configuration files as follows:

[oratest@testsrv1 ~]$ emca -config dbcontrol db

...
INFO: Database Control started successfully
...
INFO: >>>>>>>>>>> The Database Control URL is http://testsrv1.domain:5500/em <<<<<<<<<<<
Enterprise Manager configuration completed successfully

Well, I thought it looked fine, but when I opened the DB Console I saw this message at the top of the page:

error:
java.lang.Exception: Exception in sending Request :: null



After some research, log digging and Metalink searches, I found that the Agent component had to be relinked manually.
But first, there are two issues in the env_sysman.mk file that must be fixed before executing the relink command.

I will show my actions step by step:

Stop DB Console:
emctl stop dbconsole

cd $ORACLE_HOME/sysman/lib

Edit env_sysman.mk and look for "JRE_LIB_DIR".
In my instance, JRE_LIB_DIR was populated with the old environment's path.
I changed it from:
JRE_LIB_DIR=/oracle/prod/db/proddb/10.2/jdk/jre/lib/i386
To:
JRE_LIB_DIR=/oracle/test/db/testdb/10.2/jdk/jre/lib/i386

There is one more issue in env_sysman.mk: look for "$(LIBNMEM) $(LIBNMER)"
And change from:
$(LIBNMEM) $(LIBNMER) \
To:
$(LIBNMEM) $(LIBNMER) $(LIBNMO) \

Now relink the Agent component by issuing this command:
make -f ins_sysman.mk agent

Recreate the DB Control configuration files (again, even if you did it before):
emca -config dbcontrol db
After following this procedure, DB Console is configured and working for the newly cloned instance.
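For repeated clones, the two env_sysman.mk edits can be scripted with sed. This is only a sketch under my environment's paths (OLD_HOME/NEW_HOME are the values from this post; adjust them to yours), and it is demonstrated on a throwaway copy of the two relevant lines rather than on the real env_sysman.mk:

```shell
OLD_HOME=/oracle/prod/db/proddb/10.2
NEW_HOME=/oracle/test/db/testdb/10.2

# Mock file standing in for $ORACLE_HOME/sysman/lib/env_sysman.mk,
# containing the two problem lines described above.
mk=$(mktemp)
cat > "$mk" <<'EOF'
JRE_LIB_DIR=/oracle/prod/db/proddb/10.2/jdk/jre/lib/i386
    $(LIBNMEM) $(LIBNMER) \
EOF

cp "$mk" "$mk.bak"                      # always keep a backup first

# Fix 1: replace the old home path with the new one.
sed -i "s|$OLD_HOME|$NEW_HOME|g" "$mk"

# Fix 2: add $(LIBNMO) to the library list.
sed -i 's|\$(LIBNMEM) \$(LIBNMER) \\|$(LIBNMEM) $(LIBNMER) $(LIBNMO) \\|' "$mk"

cat "$mk"
```

Eyeball the output before applying the same two sed commands to the real file, then run the relink as described above.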

ADPATCH with "options=prereq" - what really happens there?

The adpatch utility gives us the option to run a prerequisite check prior to running the patch driver files.
In fact, some patches must have their prerequisites checked before they are applied.

The adpatch command should look like: adpatch options=prereq
This flag tells adpatch to check prerequisites before applying the patch.

adpatch checks the prerequisites based on information from the patch files and the current snapshot of the APPL_TOP.

When running adpatch with the "prereq" flag, we might get an error message like:
Analyzing prerequisite patch information...
AutoPatch error: This patch has some prerequisites specified, but a "snapshot" of this APPL-TOP's file-system has never been taken, thereby rendering it impossible to check for the prerequisites.
Please take a "snapshot" of this APPL-TOP using "AD Administration" first.

This error message shows up if no snapshot exists for the current APPL_TOP.

To create such a snapshot of the APPL_TOP:
1) Run adadmin
2) Select the "Maintain Applications Files" menu
3) Select "Update current view snapshot"
4) Rerun adpatch

**It might take a couple of hours, depending on your hardware and the size of the APPL_TOP.


So how does adpatch check the prerequisites?

1) It checks whether a snapshot of the current APPL_TOP exists, using the SQL script adbkflsn.sql (if not, it terminates with the error message above).

2) adpatch uploads an ldt file (the bug-fix list) into the system with the FNDLOAD utility.
The ldt file is named b[PATCH_NUMBER].ldt and comes from the patch root directory.

3) It executes the UMS analysis engine against the snapshot and the bug-fix list to check whether all prerequisites are met.

How to check for conflicts while applying a CPU patch?

Use the command below to check for conflicts against the ORACLE_HOME and avoid running into problems:

Step 1: unzip your patch zip file
Step 2: run the command below
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <patch_directory>

Example:

$ unzip p9655017_10204_linux.zip
$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir 9655017
The other day, while patching a RAC database, I got the error below after executing the conflict-check command above:

Following patches have conflicts. Please contact Oracle Support and get the merged patch of the patches :
6600051, 8836683
Whenever you get this type of error message, please contact Oracle Support by raising a service request (SR).

In my case, Oracle Support suggested applying merge patch 9347333 before applying the July 2011 CPU. Once the merge patch was applied, I applied the CPU patch successfully without any further issues.

Sometimes, apart from the message above, you may see messages like the ones below, which you can ignore:

Summary of Conflict Analysis:

Patches that can be applied now without any conflicts are :
10013975, 10014009, 10014012, 10014015, 10325878, 10325885, 11787762, 11787763, 11787765, 11787766, 12419249, 12566121, 12566124, 12566126, 12566129, 12566131, 12566134, 12566136, 12566137, 12566139, 12566141, 12566142, 12566143, 7155248, 7155249, 7155250, 7155251, 7155252, 7155253, 7155254, 7197583, 7375611, 7375613, 7375617, 7609057, 8309592, 8309632, 8568395, 8568397, 8568398, 8568402, 8568404, 8836667, 8836671, 8836675, 8836677, 8836678, 8836683, 8836684, 8836686, 9173244, 9173253, 9442328, 9442331, 9442339, 9678690, 9678695, 9678697

Following patches are not required, as they are subset of the patches in Oracle Home or subset of the patches in the given list :
10249540, 8836681, 8568405

Following patches will be rolled back from Oracle Home on application of the patches in the given list :
10013975, 10014009, 10014012, 10014015, 10325878, 10249540, 8836681, 8568405, 7155248, 7155249, 7155250, 7155251, 7155252, 7197583, 7375611, 7375613, 7375617, 7609057, 8309592, 8309632, 8568395, 8568397, 8568398, 8568402, 8568404, 8836667, 8836671, 8836675, 8836677, 8836678, 8836683, 8836684, 8836686, 9173244, 9173253, 9442328, 9442331, 9442339, 9678690, 9678695

How to resolve the ORA-00490 error?

Root Cause:

This error occurs when there is no free space left in the system's swap area for spawning a new Oracle process. As a result, the Process SPawner (PSP0) terminates with ORA-00490 because it is unable to create or manage Oracle processes, and the instance is then crashed by PMON with errorstack 490 (which points to ORA-00490). The same situation can also occur under a general lack of system resources.

Solution :

There are two solutions to this problem:

1. Check your swap space and increase the swap area. When swap space runs out, Oracle is unable to create new processes and the PSP0 Process SPawner cannot manage the existing ones.

2. Check the "ulimit" settings for the Oracle user. "ulimit" controls per-user shell limits; if the maximum is reached, PSP0 likewise becomes unable to manage other Oracle processes. Increase the "ulimit" settings for the Oracle user.
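A quick Linux sanity check for both causes can be scripted. This is a sketch of mine; the 512 MB threshold is an arbitrary example, not an Oracle recommendation, so tune it for your system:

```shell
# Read free swap (in kB) from /proc/meminfo and the per-user process limit.
swap_free_kb=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)
max_procs=$(ulimit -u)

echo "SwapFree: ${swap_free_kb} kB, ulimit -u: ${max_procs}"

# Flag a potentially dangerous state (threshold is an assumption).
if [ "${swap_free_kb:-0}" -lt 524288 ]; then
    echo "WARNING: less than 512 MB of free swap - PSP0 may fail to spawn processes"
else
    echo "swap OK"
fi
```

Run this as the oracle OS user so that `ulimit -u` reflects the limits the instance actually runs under.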

Purge old files on Linux/Unix using “find” command

I've noticed that one of our interface directories has a lot of old files, some of them were more than a year old. I checked it with our implementers and it turns out that we can delete all files that are older than 60 days.

I decided to write a (tiny) shell script to purge all files older than 60 days and schedule it with crontab, this way I won't deal with it manually. I wrote a find command to identify and delete those files. I started with the following command:

find /interfaces/inbound -mtime +60 -type f -maxdepth 1 -exec rm {} \;

It finds and deletes all files in the directory /interfaces/inbound that are older than 60 days.
"-maxdepth 1" -> look for files in the current directory only; don't descend into subdirectories.

After packing it into a shell script, I got a request to delete "csv" files only. No problem... I added "-name" to the find command:

find /interfaces/inbound -name "*.csv" -mtime +60 -type f -maxdepth 1 -exec rm {} \;

All csv files in /interfaces/inbound that are older than 60 days will be deleted.

But then the request changed, and I was asked to delete "*.xls" files in addition to "*.csv" files.

I tried several things, like adding another "-name" to the find command:

find /interfaces/inbound -name "*.csv" -name "*.xls" -mtime +60 -type f -maxdepth 1 -exec rm {} \;

But no file was deleted. A few moments later I understood that I was trying to find files that are both csv and xls... (logically incorrect, of course).

After struggling a little with the find command, I managed to make it work:

find /interfaces/inbound \( -name "*.csv" -o -name "*.xls" \) -mtime +60 -type f -maxdepth 1 -exec rm {} \;
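Here is how the final command can be wrapped into a small function for the cron-driven script (the directory and age defaults are the ones from this post; the function name and the cron line are illustrative). Note that I moved "-maxdepth 1" right after the path, which avoids the warning GNU find prints when it appears after other tests:

```shell
# purge_old DIR AGE - delete *.csv and *.xls files older than AGE days,
# in DIR only (no subdirectories).
purge_old() {
    dir=${1:-/interfaces/inbound}
    age=${2:-60}
    # Escaped parentheses group the two -name tests so that -o applies
    # only to them, not to the -mtime/-type/-exec part.
    find "$dir" -maxdepth 1 \( -name "*.csv" -o -name "*.xls" \) \
         -mtime +"$age" -type f -exec rm -f {} \;
}
```

A crontab entry such as `0 2 * * * /path/to/purge_interfaces.sh` (the script path is illustrative) would then run it nightly.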

ORA-00821 Specified value of sga_target is too small....

Resized your SGA_TARGET too small and found you can't now start your database?

If you are using a PFILE, just edit it and set SGA_TARGET to a larger value. But what if you're using an SPFILE? One possibility is to create a pfile from the spfile, edit the pfile, and then either start the database using the pfile, or remove the spfile and start the database as normal so that the new pfile is picked up.

The problem arises when the spfile is stored in ASM, since creating a pfile from it can be a problem. One solution is to create a pfile that points to the spfile in ASM and, after the SPFILE line, add an extra line that overrides SGA_TARGET, as follows:

SPFILE='+DATA1/PROD1/spfilePROD1.ora'
SGA_TARGET=1024M

This pfile can be placed in $OH/dbs, so the next time you start the database this pfile will be used. Alternatively, you could explicitly pass the 'pfile=' parameter when starting the database:

Startup pfile=$OH/dbs/initPROD1.ora
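Writing the bridging pfile is a one-liner from the shell. In this sketch I write it to a temporary file so it can be inspected safely; in practice it would go to $ORACLE_HOME/dbs/initPROD1.ora (the ASM path and SGA size are the example values from this post):

```shell
# Create the two-line bridging pfile described above.
pfile=$(mktemp)     # in practice: $ORACLE_HOME/dbs/initPROD1.ora
cat > "$pfile" <<'EOF'
SPFILE='+DATA1/PROD1/spfilePROD1.ora'
SGA_TARGET=1024M
EOF

cat "$pfile"
```

Because parameters later in a pfile override earlier ones, the SGA_TARGET line wins over the value stored in the spfile, which is what lets the instance start.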

Wednesday, 23 November 2011

Oracle Applications Schema Password Change Utility - (FNDCPASS)

Changing passwords periodically helps ensure database security. Oracle Applications provides a command line utility, FNDCPASS, to set Oracle Applications schema passwords. In addition to changing the schema password in the database, this utility changes the password registered in Oracle Applications tables (FND Tables). This utility can also change Applications End User passwords.


FNDCPASS changes

Oracle Applications Database System Users (APPS, APPLSYS)
Oracle Applications Product Schema Passwords (GL, AR, AP, etc.)
Oracle Applications End User Account Passwords (SYSADMIN, OPERATIONS, etc.)

Note: the FNDCPASS utility cannot be used to change the database SYSTEM and SYS passwords. Only users that are registered in the FND metadata tables should be changed using FNDCPASS. Normally, the APPS database user password and the APPLSYS password need to be the same; when you change the APPLSYS password using FNDCPASS, the APPS password is changed as well.

Syntax of FNDCPASS command:

FNDCPASS logon 0 Y system/password mode username new_password

Where logon is username/password[@connect]

system/password is the password of the SYSTEM account of that database
mode is SYSTEM/USER/ORACLE
username is the schema whose password you want to change
new_password is the new password in unencrypted format

Example:

$ FNDCPASS apps/apps 0 Y system/manager SYSTEM APPLSYS WELCOME

$ FNDCPASS apps/apps 0 Y system/manager ORACLE GL GL1

$ FNDCPASS apps/apps 0 Y system/manager USER VISION WELCOME

Note: FNDCPASS has a new mode, "ALLORACLE", in which all Oracle Applications schema passwords can be changed in one call. Apply Patch# 4745998 to get this option if it is not currently available in your Apps instance.

Syntax:

FNDCPASS logon 0 Y system/password ALLORACLE new_password

Example:

$ FNDCPASS apps/apps 0 Y system/manager ALLORACLE WELCOME

To change the APPS/APPLSYS password, we need to use mode SYSTEM.
To change product schema passwords (GL, AP, AR, etc.), we need to use mode ORACLE.
To change end user passwords (SYSADMIN, OPERATIONS, etc.), we need to use mode USER.

Note: Up to 11.5.9 there is a bug in FNDCPASS that can leave the APPS and APPLSYS passwords out of sync. This corrupts the data in the FND metadata tables and renders the application unusable. Because of that, it is recommended to take a backup of the FND_USER and FND_ORACLE_USERID tables before changing the passwords.

After changing the APPS/APPLSYS or APPLSYSPUB password, the following extra manual steps need to be done.

If you changed the APPS (and APPLSYS) password, update the password in these files:

iAS_TOP/Apache/modplsql/cfg/wdbsvr.app
ORACLE_HOME/reports60/server/CGIcmd.dat
If you changed the APPLSYSPUB password, update the password in these files:

FND_TOP/resource/appsweb.cfg
OA_HTML/bin/appsweb.cfg
FND_TOP/secure/HOSTNAME_DBNAME.dbc

Note: I would suggest first trying the password change with FNDCPASS on a test Apps instance; once you are done without any errors or issues, you can move the change to production. I would also suggest searching Metalink for more information about the FNDCPASS utility and its usage.

DB Link to Oracle 11g

As you know, in Oracle 11g passwords are case sensitive by default. This applies to connecting via SQL*Plus or other client tools, and it also applies to database links between databases. So when you link from Oracle 10g to Oracle 11g, create the database link like this:

CREATE DATABASE LINK my_link
CONNECT TO remote_user IDENTIFIED BY "CaSe_SeNsItIvE_PaSsWoRd"
USING 'TNS_ID_11G';


Do not forget to enclose the password in double quotes!

If you do not set the password this way, you will get:

ORA-01017: invalid username/password; logon denied.

I hope this small tip will be useful to someone.

Tuesday, 15 November 2011

ORA-28576: lost RPC connection to external procedure agent

Cause: A fatal error occurred in either an RPC network connection, the extproc agent, or the invoked 3GL after communication had been established successfully.
Action: First check the 3GL code you are invoking; the most likely cause of this error is abnormal termination of the invoked "C" routine. If this is not the case, check for network problems. Correct the problem if you find it. If all components appear to be normal but the problem persists, the problem could be an internal logic error in the RPC transfer code. Contact your customer support representative.

ORA-04031: unable to allocate string bytes of shared memory ("string","string","string","string")

Cause: More shared memory is needed than was allocated in the shared pool.
Action: If the shared pool is out of memory, either use the dbms_shared_pool package to pin large packages, reduce your use of shared memory, or increase the amount of available shared memory by increasing the value of the INIT.ORA parameters "shared_pool_reserved_size" and "shared_pool_size". If the large pool is out of memory, increase the INIT.ORA parameter "large_pool_size".

Monday, 14 November 2011

How to know which columns are in the overflow segment?

Remember that the INCLUDE_COLUMN column of the USER_INDEXES view gives you the last column included in the index, and therefore in the top segment. So by joining this view with USER_TAB_COLUMNS you can separate the columns that are in the top segment from those that are in the overflow segment, as with the following statement:
SQL> SELECT c.table_name, c.column_name,
2 CASE WHEN c.column_id <= i.include_column THEN 'TOP' ELSE 'OVERFLOW' END segment
3 FROM user_tab_columns c, user_indexes i
4 WHERE i.table_name = c.table_name
5 ORDER by table_name, column_id;

TABLE_NAME COLUMN_NAME SEGMENT
------------------------------ ------------------------------ --------
MY_IOT ID TOP
MY_IOT VALUE TOP
MY_IOT COMMENTS OVERFLOW

How to move an IOT into another tablespace?

SQL> SELECT segment_name, segment_type, tablespace_name FROM user_segments ORDER BY 1;

SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME
------------------------------ ------------------ ------------------------------
SYS_IOT_OVER_71142 TABLE TS_D01
SYS_IOT_TOP_71142 INDEX TS_D01
Let's assume you want to move all the segments from their current tablespace to another tablespace. You'll quickly discover that you cannot use the standard commands:
SQL> ALTER TABLE sys_iot_over_71142 MOVE TABLESPACE ts_i01;

ALTER TABLE sys_iot_over_71142 MOVE TABLESPACE ts_i01
*
ERROR at line 1:
ORA-25191: cannot reference overflow table of an index-organized table

SQL> ALTER INDEX sys_iot_top_71142 REBUILD TABLESPACE ts_i01;

ALTER INDEX sys_iot_top_71142 REBUILD TABLESPACE ts_i01
*
ERROR at line 1:
ORA-28650: Primary index on an IOT cannot be rebuilt
You have to first find the name of the associated IOT table:
SQL> SELECT iot_name FROM user_tables WHERE table_name = 'SYS_IOT_OVER_71142';

IOT_NAME
------------------------------
MY_IOT

SQL> SELECT table_name FROM user_indexes WHERE index_name = 'SYS_IOT_TOP_71142';

TABLE_NAME
------------------------------
MY_IOT

Then you can move the segments:
SQL> ALTER TABLE my_iot MOVE TABLESPACE ts_i01;

Table altered.

SQL> ALTER TABLE my_iot MOVE OVERFLOW TABLESPACE ts_i01;

Table altered.

SQL> SELECT segment_name, segment_type, tablespace_name FROM user_segments ORDER BY 1;

SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME
------------------------------ ------------------ ------------------------------
SYS_IOT_OVER_71142 TABLE TS_I01
SYS_IOT_TOP_71142 INDEX TS_I01
The first statement moves the top segment and the second one the overflow segment. This can be done in one statement using:
SQL> ALTER TABLE my_iot MOVE TABLESPACE ts_d01 OVERFLOW TABLESPACE ts_d01;

Table altered.

What is Oracle Exadata?

Oracle recently announced a wide array of products and solutions spanning database, middleware, applications and hardware in the Indian market. Among all these products and solutions, Exadata drew our attention.

According to the company, Oracle "Exadata Database Machine" provides extreme performance for both data warehousing and online transaction processing (OLTP) applications, making it the ideal platform for consolidating onto grids or private clouds. A complete package of servers, storage, networking, and software, the product is scalable, secure, and redundant.

So far, more than 1,000 Exadata machines have been installed in 67 countries and across 23 industries. With more than 60 per cent market share, the company aims to leverage its existing customer base of more than 7,000 Indian organizations to drive sales for Exadata.

Currently, Sun Pharmaceuticals, Tech Mahindra, Religare, Reliance Consumer Finance and Stock Holding Corporation of India have adopted Exadata for their data warehousing, database and OLTP requirements.

Placed among the top five vendors worldwide, Oracle believes that Exadata made a strong contribution to Oracle's growth in Q4 FY10. IT PRO India explored the product with Sheshagiri Anegondi, Vice President – Technology, Oracle India.

Bringing in technical innovation

"Exadata brings forward key technical innovations like Intelligent Storage, Smart Flash Cache and Hybrid Columnar Compression. With these innovations, Exadata provides extreme performance while ensuring maximum ROI," says Sheshagiri Anegondi, Vice President – Technology, Oracle India.

He further adds, "the smart storage software in Exadata offloads data-intensive query processing from Oracle Database 11g servers to Exadata’s storage layer for parallel data processing. Because there’s less data moving through the higher-bandwidth connections, you get dramatically improved performance as well as concurrency for simple and complex data warehousing queries.”

Exadata packs in some impressive features. It performs 1.5 million I/O operations per second, speeding up database performance by 10x. It also reduces electricity consumption by 87.5 percent and floor space by 75 percent, thereby shrinking the datacenter from an entire building to a single room.

Playing pivotal role in security issues

Exadata will also play a pivotal role in data security. "As organizations consolidate their data, more and more sensitive information ranging from email addresses to credit card numbers now resides in a single database, giving organizations the ability to secure and monitor that data more efficiently than ever before," according to Anegondi.

As a matter of fact, Exadata can utilize Oracle's industry-leading database security solutions to block threats and detect unauthorized activities such as misuse of privileged user credentials, insider threats, and SQL injection.

For further security, Oracle is offering Advanced Security for Exadata. This includes Oracle Database Vault, which protects against misuse of stolen login credentials, application bypass, and unauthorized changes to applications (including attempts to make copies of application tables); Oracle Audit Vault, which provides auditors and internal security personnel an efficient means of reviewing activity inside the Oracle Exadata Database Machine; and Oracle Database Firewall, which monitors inbound SQL traffic over Oracle SQL*Net and the TCP/IP protocol.

Oracle RAC Background processes

RAC Background Processes:

1. Lock Monitor Process (LMON)
2. Lock Monitor Services (LMS)
3. Lock Monitor Daemon Process (LMD)
4. LCKn (Lock Process)
5. DIAG (Diagnostic Daemon)

1. Lock Monitor Process (LMON)

It maintains the GCS memory structures.
It handles the abnormal termination of processes and instances.
Reconfiguration of locks & resources when an instance joins or leaves the cluster is handled by LMON (during reconfiguration, LMON generates trace files).
It is responsible for executing dynamic lock remastering every 10 minutes (only in 10g R2 & later versions).
LMON manages the global locks & resources.
It monitors all instances in the cluster, primarily for dictionary cache locks, library cache locks & deadlocks on deadlock-sensitive enqueues & resources.
LMON also provides cluster group services.
It is also called the Global Enqueue Service monitor.

2. Lock Monitor Services (LMS)

LMS is the most active background process.
It consumes a significant amount of CPU time (in 10g R2, ensure that the LMS process does not encounter CPU starvation).
Its primary job is to transport blocks across the nodes for cache-fusion requests.
If there is a consistent-read request, the LMS process rolls back the block, makes a consistent-read image of it, and then ships this block across the HSI (High Speed Interconnect) to the requesting process on the remote node.
LMS must also check constantly with the LMD background process (the GES process) to get the lock requests placed by the LMD process.
Each node has 2 or more LMS processes.

GCS_SERVER_PROCESSES --> the number of LMS processes, specified as an init.ora parameter.
The parameter value is set based on the number of CPUs (MIN(CPU_COUNT/2, 2)).
In 10gR2, on a single-CPU instance, only one LMS process is started.
Increase the parameter value if global cache activity is very high.
LMS processes are also called the GCS (Global Cache Services) processes.

Internal View: X$KJMSDP

3. Lock Monitor Daemon Process (LMDn)

The LMD process performs global deadlock detection.
It also monitors for lock conversion timeouts.
It is sometimes referred to as the GES (Global Enqueue Service) daemon, since its job is to manage global enqueue and global resource access.
The LMD process also handles deadlock detection and remote enqueue requests.
Remote resource requests are requests originating from another instance.
Internal View: X$KJMDDP
4. LCKn (Lock Process)

It manages instance resource requests & cross-instance calls for shared resources.
During instance recovery, it builds a list of invalid lock elements and validates lock elements.

5. DIAG (Diagnostic Daemon)

This background process is new in Oracle 10g, part of the enhanced diagnosability framework.
It regularly monitors the health of the instance.
It also checks for instance hangs & deadlocks.
It captures vital diagnostic data for instance & process failures.

What's blocking my lock?

Create a blocking lock

To begin, create a situation where one user is actively blocking another. Open two sessions. Issue the following commands in Session 1 to build the test table:

SQL> create table tstlock (foo varchar2(1), bar varchar2(1));

Table created.

SQL> insert into tstlock values (1,'a');

1 row created.

SQL> insert into tstlock values (2, 'b');

1 row created.

SQL> select * from tstlock ;

FOO BAR
--- ---
1 a
2 b

2 rows selected.

SQL> commit ;

Commit complete.
Now grab a lock on the whole table, still in Session 1:

SQL> select * from tstlock for update ;

And in Session 2, try to update a row:

SQL> update tstlock set bar=
2 'a' where bar='a' ;
This statement will hang, blocked by the lock that Session 1 is holding on the entire table.

Identify the blocking session

Oracle provides a view, DBA_BLOCKERS, which lists the SIDs of all blocking sessions. But this view is often, in my experience, a good bit slower than simply querying V$LOCK, and it doesn't offer any information beyond the SIDs of any sessions that are blocking other sessions. The V$LOCK view is faster to query, makes it easy to identify the blocking session, and has a lot more information.

SQL> select * from v$lock ;

ADDR KADDR SID TY ID1 ID2 LMODE REQUEST CTIME BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
AF9E2C4C AF9E2C60 479 TX 131078 16739 0 6 685 0
ADDF7EC8 ADDF7EE0 422 TM 88519 0 3 0 697 0
ADDF7F74 ADDF7F8C 479 TM 88519 0 3 0 685 0
ADEBEA20 ADEBEB3C 422 TX 131078 16739 6 0 697 1
.... .... ... ... .... .... .... .... .... ....
Note the BLOCK column. If a session holds a lock that's blocking another session, BLOCK=1. Further, you can tell which session is being blocked by comparing the values in ID1 and ID2. The blocked session will have the same values in ID1 and ID2 as the blocking session, and, since it is requesting a lock it's unable to get, it will have REQUEST > 0.

In the query above, we can see that SID 422 is blocking SID 479. SID 422 corresponds to Session 1 in our example, and SID 479 is our blocked Session 2.

To avoid having to stare at the table and cross-compare ID1's and ID2's, put this in a query:

SQL> select l1.sid, ' IS BLOCKING ', l2.sid
2 from v$lock l1, v$lock l2
3 where l1.block =1 and l2.request > 0
4 and l1.id1=l2.id1
5 and l1.id2=l2.id2
SQL> /

SID 'ISBLOCKING' SID
---------- ------------- ----------
422 IS BLOCKING 479

1 row selected.
Even better, if we throw a little v$session into the mix, the results are highly readable:

SQL> select s1.username || '@' || s1.machine
2 || ' ( SID=' || s1.sid || ' ) is blocking '
3 || s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
4 from v$lock l1, v$session s1, v$lock l2, v$session s2
5 where s1.sid=l1.sid and s2.sid=l2.sid
6 and l1.BLOCK=1 and l2.request > 0
7 and l1.id1 = l2.id1
8 and l1.id2 = l2.id2 ;


BLOCKING_STATUS
----------------------------------------------------------------------------------------------------
BULKLOAD@yttrium ( SID=422 ) is blocking BULKLOAD@yttrium ( SID=479 )

1 row selected.
There's still more information in the v$lock table, but in order to read that information, we need to understand a bit more about lock types and the cryptically-named ID1 and ID2 columns.

Lock type and the ID1 / ID2 columns

In this case, we already know that the blocking lock is an exclusive DML lock, since we're the ones who issued the locking statement. But most of the time, you won't be so lucky. Fortunately, you can read this information from the v$lock table with little effort.

The first place to look is the TYPE column. There are dozens of lock types, but the vast majority are system types. System locks are normally only held for a very brief amount of time, and it's not generally helpful to try to tune your library cache, undo logs, etc. by looking in v$lock! (See the V$LOCK chapter in the Oracle Database Reference for a list of system lock types.)

There are only three types of user locks, TX, TM and UL. UL is a user-defined lock -- a lock defined with the DBMS_LOCK package. The TX lock is a row transaction lock; it's acquired once for every transaction that changes data, no matter how many objects you change in that transaction. The ID1 and ID2 columns point to the rollback segment and transaction table entries for that transaction.

The TM lock is a DML lock. It's acquired once for each object that's being changed. The ID1 column identifies the object being modified.

Lock Modes

You can see more information on TM and TX locks just by looking at the lock modes. The LMODE and REQUEST columns both use the same numbering for lock modes, in order of increasing exclusivity: from 0 for no lock, to 6 for exclusive lock. A session must obtain an exclusive TX lock in order to change data; LMODE will be 6. If it can't obtain an exclusive lock because some of the rows it wants to change are locked by another session, then it will request a TX in exclusive mode; LMODE will be 0 since it does not have the lock, and REQUEST will be 6. You can see this interaction in the rows we selected earlier from v$lock:

ADDR KADDR SID TY ID1 ID2 LMODE REQUEST CTIME BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
AF9E2C4C AF9E2C60 479 TX 131078 16739 0 6 685 0
ADEBEA20 ADEBEB3C 422 TX 131078 16739 6 0 697 1
Note that ID1 and ID2 in Session 2, which is requesting the TX lock (LMODE=0, REQUEST=6), point back to the rollback and transaction entries for Session 1. That's what lets us determine the blocking session for Session 2.

You may also see TX locks in mode 4, Shared mode. If a block containing rows to be changed doesn't have any interested transaction list (ITL) entries left, then the session acquires a TX lock in mode 4 while waiting for an ITL entry. If you see contention for TX-4 locks on an object, you probably need to increase INITRANS for the object.

TM locks are generally requested and acquired in modes 3, aka Row Exclusive, and 6. DDL requires a TM Exclusive lock. (Note that CREATE TABLE doesn't require a TM lock -- it doesn't need to lock any objects, because the object in question doesn't exist yet!) DML requires a Row Exclusive lock. So, in the rows we selected earlier from v$lock, you can see from the TM locking levels that these are DML locks:

ADDR KADDR SID TY ID1 ID2 LMODE REQUEST CTIME BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
ADDF7EC8 ADDF7EE0 422 TM 88519 0 3 0 697 0
ADDF7F74 ADDF7F8C 479 TM 88519 0 3 0 685 0
Identifying the locked object
Now that we know that each TM row points to a locked object, we can use ID1 to identify the object.

SQL> select object_name from dba_objects where object_id=88519 ;

OBJECT_NAME
--------------
TSTLOCK
Sometimes just knowing the object is enough information; but we can dig even deeper. We can identify not just the object, but the block and even the row in the block that Session 2 is waiting on.

Identifying the locked row

We can get this information from v$session by looking at the v$session entry for the blocked session:

SQL> select row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
2* from v$session where sid=479 ;

ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW#
------------- -------------- --------------- -------------
88519 16 171309 0
This gives us the object ID, the relative file number, the block in the datafile, and the row in the block that the session is waiting on. If that list of data sounds familiar, it's because those are the four components of an extended ROWID. We can build the row's actual extended ROWID from these components using the DBMS_ROWID package. The ROWID_CREATE function takes these arguments and returns the ROWID:

SQL> select do.object_name,
2 row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#,
3 dbms_rowid.rowid_create ( 1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW# )
4 from v$session s, dba_objects do
5 where sid=479
6 and s.ROW_WAIT_OBJ# = do.OBJECT_ID ;

OBJECT_NAME ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW# DBMS_ROWID.ROWID_C
--------------- ------------- -------------- --------------- ------------- ------------------
TSTLOCK 88519 16 171309 0 AAAVnHAAQAAAp0tAAA
And, of course, this lets us inspect the row directly.

SQL> select * from tstlock where rowid='AAAVnHAAQAAAp0tAAA' ;

FOO BAR
--- ---
1 a

Conclusion

We've seen how to identify a blocking session, and how to inspect the very row that the waiting session is waiting for. And, I hope, we've learned a bit about v$lock in the process.