Oracle Experiences and information – Sandeep Redkar

New Blog (2015-10-13)<div dir="ltr" style="text-align: left;" trbidi="on">
Friends,<br />
<br />
Thank you very much for your support. Please note that I have restarted blogging, and new posts can be found at <a href="https://sandeeparedkar.wordpress.com/">Sandeep Redkar - Wordpress.</a> For reference, I have migrated a few old posts to the new site.<br />
<br />
I look forward to your continued support for my new blog site as well.<br />
<br />
Thanks...</div>
ORA-00060 – DEADLOCK (2012-07-22)<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
We all know a deadlock is a situation where the DBA cannot do much and the blame can be put squarely on the application: it is an application issue. Oracle says as much in the trace file it writes for a deadlock. <br />
<br />
The following deadlock is not an ORACLE error. It is a deadlock due to user error in the design of an application or from issuing incorrect ad-hoc SQL. <br />
<br />
I agree with the above, but we DBAs can help application developers determine the cause of a deadlock. Here are some sample cases showing how to identify the cause. <br />
<br />
<div style="text-align: left;">
It is very important to analyze the deadlock graph, especially the lock modes the blockers hold and the waiters request. In the normal situation, when it is purely an application coding issue, the waits are for row exclusive ("X") locks; in other cases the modes differ, as the following examples show.</div>
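Besides reading the trace file after the fact, you can watch blocking chains as they form. On 11g and later the V$WAIT_CHAINS view shows who blocks whom; the sketch below assumes that view is available (on older releases, DBA_WAITERS/DBA_BLOCKERS give similar information):

```sql
-- Who blocks whom right now; chain_is_cycle = 'TRUE' marks a chain that
-- loops back on itself, i.e. a deadlock in the making.
select chain_id, chain_is_cycle, sid, blocker_sid, wait_event_text
from   v$wait_chains
where  blocker_sid is not null;
```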
<br />
<strong>Traditional Deadlock</strong> <br />
<br />
<br />
create table u (id number, name varchar2(20));<br />
insert into u values (1, 'OPERATION1');<br />
insert into u values (2, 'OPERATION2');<br />
commit;<br />
<br />
<table border="1"><tbody>
<tr><th>SR</th><th>SESSION 1</th><th>SESSION 2 </th></tr>
<tr><td>1</td><td>update u set name='OPERATION_1' where id=1;</td><td></td></tr>
<tr><td>2</td><td></td><td>update u set name='OPERATION_2' where id=2;</td></tr>
<tr><td>3</td><td>update u set name='OPERATION_2' where id=2;</td><td></td></tr>
<tr><td>4</td><td></td><td>update u set name='OPERATION_1' where id=1;</td></tr>
<tr><td>5</td><td>DEADLOCK ERROR</td><td></td></tr>
</tbody></table>
<br />
<span style="font-family: "Tahoma", "sans-serif"; font-size: 10pt; mso-ansi-language: EN-US; mso-bidi-language: AR-SA; mso-fareast-font-family: "Times New Roman"; mso-fareast-language: EN-US;"><span style="font-family: "Courier New", Courier, monospace;">Deadlock graph:</span></span><br />
<span style="font-family: "Courier New", Courier, monospace;"></span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;"> ---------Blocker(s)-------- ---------Waiter(s)---------<br />Resource Name process session holds waits process session holds waits<br />TX-00080009-00002c6b 21 133 X 24 14 X<br />TX-00020005-00002c70 24 14 X 21 133 X<br /><br />session 133: DID 0001-0015-00000015 session 14: DID 0001-0018-00000018 <br />session 14: DID 0001-0018-00000018 session 133: DID 0001-0015-00000015</span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">Rows waited on:</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> Session 133: obj - rowid = 000036A3 - AAADajAAEAAAAC0AAB</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> (dictionary objn - 13987, file - 4, block - 180, slot - 1)</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> Session 14: obj - rowid = 000036A3 - AAADajAAEAAAAC0AAA</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> (dictionary objn - 13987, file - 4, block - 180, slot - 0)</span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">----- Information for the OTHER waiting sessions -----</span><br />
<span style="font-family: "Courier New", Courier, monospace;"><br /></span><br />
<span style="font-family: "Courier New", Courier, monospace;">Session 14:</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> sid: 14 ser: 35 audsid: 150059 user: 34/SANDY flags: 0x45</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> pid: 24 O/S info: user: SYSTEM, term: LVNTW03526D3BS, ospid: 4268</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> image: ORACLE.EXE (SHAD)</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> client details:</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> O/S info: user: ASIAPAC\N228937, term: LVNTW03526D3BS, ospid: 2700:1872</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> machine: ASIAPAC\LVNTW03526D3BS program: sqlplus.exe</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> application name: SQL*Plus, hash value=3669949024</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> current SQL:</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> update u set name='OPERATION_1' where id=1</span><br />
<span style="font-family: "Courier New", Courier, monospace;"><br /></span><br />
<span style="font-family: "Courier New", Courier, monospace;">----- End of information for the OTHER waiting sessions -----</span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">Information for THIS session:</span><br />
<span style="font-family: "Courier New", Courier, monospace;"><br /></span><br />
<span style="font-family: "Courier New", Courier, monospace;">----- Current SQL Statement for this session (sql_id=4r2c68v92xnn2) -----</span><br />
<span style="font-family: "Courier New", Courier, monospace;">update u set name='OPERATION_2' where id=2</span><br />
<br />
<br />
<span style="font-family: "Tahoma", "sans-serif"; font-size: 10pt; mso-ansi-language: EN-US; mso-bidi-language: AR-SA; mso-fareast-font-family: "Times New Roman"; mso-fareast-language: EN-US;">Deadlock trace file also gives you all waiters and holder sessions’ information. In above trace file section, the first highlighted query is of session that had caused the deadlock. The next highlighted query is of the session which was terminated by oracle and got “DEADLOCK DETECTED” error.</span><br />
<br />
<span style="font-family: "Tahoma", "sans-serif"; font-size: 10pt; mso-ansi-language: EN-US; mso-bidi-language: AR-SA; mso-fareast-font-family: "Times New Roman"; mso-fareast-language: EN-US;"><strong>Case 1 – Insert in Primary key column</strong></span><br />
<br />
create table primary_t (id number primary key, name varchar2(20)); -- PRIMARY is an SQL keyword, hence the _T suffix</div>
<table border="1"><tbody>
<tr><th>SR</th><th>SESSION 1</th><th>SESSION 2 </th></tr>
<tr><td>1</td><td>insert into primary_t values (1, 'Sandeep');</td><td></td></tr>
<tr><td>2</td><td></td><td>insert into primary_t values (2, 'Redkar');</td></tr>
<tr><td>3</td><td>insert into primary_t values (2, 'Redkar');</td><td></td></tr>
<tr><td>4</td><td></td><td>insert into primary_t values (1, 'Sandeep');</td></tr>
<tr><td>5</td><td>DEADLOCK ERROR</td><td></td></tr>
</tbody></table>
<br />
<span style="font-family: "Courier New", Courier, monospace;"><span style="font-family: "Courier New", Courier, monospace;">Deadlock graph:</span></span><br />
<span style="font-family: "Courier New", Courier, monospace;"> ---------Blocker(s)-------- ---------Waiter(s)---------</span><br />
<span style="font-family: "Courier New", Courier, monospace;">Resource Name process session holds waits process session holds waits</span><br />
<span style="font-family: "Courier New", Courier, monospace;">TX-00030007-00002c81 24 14 X 21 133 S</span><br />
<span style="font-family: "Courier New", Courier, monospace;">TX-00060005-00002c9e 21 133 X 24 14 S</span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">session 14: DID 0001-0018-00000013 session 133: DID 0001-0015-00000010 </span><br />
<span style="font-family: "Courier New", Courier, monospace;">session 133: DID 0001-0015-00000010 session 14: DID 0001-0018-00000013 </span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">Rows waited on:</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> Session 14: no row</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> Session 133: no row</span><br />
<br />
<strong>Note – </strong>The same deadlock can also occur when a unique index (or unique constraint) is used instead of a primary key.<br />
<br />
<br />
<strong>Solution – </strong>To avoid this deadlock, generate the key values from a sequence instead of assigning them in the application. <br />
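For example, with a hypothetical table tab(id number primary key, name varchar2(20)), each session takes its key from a sequence, so two sessions can never collide on the same key value:

```sql
create sequence seq_tab_id;

-- Each session inserts with its own generated key, so no session
-- ever waits on another session's as-yet-uncommitted key value.
insert into tab values (seq_tab_id.nextval, 'Sandeep');
commit;
```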
<br />
<strong>Case 2 – Insert with a primary key/foreign key relationship</strong><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">create table foreign (id number, addr varchar2(30), phone number);</span> <br />
<span style="font-family: "Courier New", Courier, monospace;"></span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">alter table foreign add constraint fk_id foreign key (id) references primary(id);</span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">insert into primary </span><br />
<span style="font-family: "Courier New", Courier, monospace;">select level, dbms_random.string('A',10)</span><br />
<span style="font-family: "Courier New", Courier, monospace;">from dual connect by level <= 100;</span><br />
<span style="font-family: "Courier New", Courier, monospace;">commit;</span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">insert into foreign </span><br />
<span style="font-family: "Courier New", Courier, monospace;">select round(dbms_random.value(1,100)), </span><br />
<span style="font-family: "Courier New", Courier, monospace;">dbms_random.string('A',10), </span><br />
<span style="font-family: "Courier New", Courier, monospace;">round(dbms_random.value(1000000,9999999)) </span><br />
<span style="font-family: "Courier New", Courier, monospace;">from dual connect by level<=1000;</span><br />
<span style="font-family: "Courier New", Courier, monospace;"><br /></span><br />
<span style="font-family: "Courier New", Courier, monospace;">commit;</span></div>
<table border="1"><tbody>
<tr><th>SR</th><th>SESSION 1</th><th>SESSION 2 </th></tr>
<tr><td>1</td><td>update foreign_t set phone=9999999 where id=20;</td><td></td></tr>
<tr><td>2</td><td></td><td>update foreign_t set phone=9999999 where id=10;</td></tr>
<tr><td>3</td><td>update primary_t set id=50, name='Sandeep' where id=50;</td><td></td></tr>
<tr><td>4</td><td></td><td>update primary_t set id=60, name='Sandy' where id=60;</td></tr>
<tr><td>5</td><td>DEADLOCK ERROR</td><td></td></tr>
</tbody></table>
<br />
<span style="font-family: "Courier New", Courier, monospace;">Deadlock graph:</span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;"> ---------Blocker(s)-------- ---------Waiter(s)---------</span><br />
<span style="font-family: "Courier New", Courier, monospace;">Resource Name process session holds waits process session holds waits</span><br />
<span style="font-family: "Courier New", Courier, monospace;">TM-0000369e-00000000 21 133 SX SSX 24 14 SX SSX</span><br />
<span style="font-family: "Courier New", Courier, monospace;">TM-0000369e-00000000 24 14 SX SSX 21 133 SX SSX</span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">session 133: DID 0001-0015-00000011 session 14: DID 0001-0018-00000014 </span><br />
<span style="font-family: "Courier New", Courier, monospace;">session 14: DID 0001-0018-00000014 session 133: DID 0001-0015-00000011 </span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">Rows waited on:</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> Session 133: no row</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> Session 14: no row</span><br />
<br />
<strong>Solution –</strong> In this situation, when the parent key is updated (or deleted) and the foreign key is unindexed, Oracle requests a restrictive TM (table-level) lock on the child table. Both sessions already hold row-exclusive (SX) TM locks on the child table from their earlier updates, so each session's SSX request waits on the other, as the TM rows in the graph show. In earlier releases (8i and prior), even an insert into the parent table required such a lock on the child table; this was improved from 9i onwards, but the unindexed foreign key issue is still there. <br />
<br />
<br />
To avoid this situation there are two solutions – <br />
1. Create an index on the foreign key column. (Oracle creates an index automatically when you define a primary key constraint, but it never creates one when you define a foreign key constraint.)<br />
2. Do not update the primary key column, even when setting it to the same value.<br />
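To find the foreign keys that still need an index, the dictionary views can be queried. The sketch below is simplified to single-column foreign keys; each reported column is a candidate for a b-tree index:

```sql
-- Foreign keys in the current schema whose referencing column is not
-- the leading column of any index (single-column FKs only).
select c.table_name, c.constraint_name, cc.column_name
from   user_constraints  c
join   user_cons_columns cc
  on   cc.constraint_name = c.constraint_name
where  c.constraint_type = 'R'               -- 'R' = referential (foreign key)
and    not exists (
         select null
         from   user_ind_columns ic
         where  ic.table_name      = cc.table_name
         and    ic.column_name     = cc.column_name
         and    ic.column_position = 1       -- indexed as a leading column
       );
```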
<br />
<strong>Case 3 – Bitmap index – Part I</strong><br />
<br />
create table t (id number, gender varchar2(1));<br />
create bitmap index bit_t_ind on t(gender);</div>
<table border="1"><tbody>
<tr><th>SR</th><th>SESSION 1</th><th>SESSION 2 </th></tr>
<tr><td>1</td><td>insert into t values (1,'M');</td><td></td></tr>
<tr><td>2</td><td></td><td>insert into t values (2,'F');</td></tr>
<tr><td>3</td><td>insert into t values (4,'F');</td><td></td></tr>
<tr><td>4</td><td></td><td>insert into t values (3,'M');</td></tr>
<tr><td>5</td><td>DEADLOCK ERROR</td><td></td></tr>
</tbody></table>
<br />
<span style="font-family: "Courier New", Courier, monospace;">Deadlock graph:</span><br />
<span style="font-family: "Courier New", Courier, monospace;"></span><br />
<span style="font-family: "Courier New", Courier, monospace;"> ---------Blocker(s)-------- ---------Waiter(s)---------</span><br />
<span style="font-family: "Courier New", Courier, monospace;">Resource Name process session holds waits process session holds waits</span><br />
<span style="font-family: "Courier New", Courier, monospace;">TX-00060021-00002c9f 24 14 X 21 133 S</span><br />
<span style="font-family: "Courier New", Courier, monospace;">TX-00050005-00002c7c 21 133 X 24 14 S</span><br />
<span style="font-family: "Courier New", Courier, monospace;"><br /></span><br />
<span style="font-family: "Courier New", Courier, monospace;">session 14: DID 0001-0018-00000015 session 133: DID 0001-0015-00000012 </span><br />
<span style="font-family: "Courier New", Courier, monospace;">session 133: DID 0001-0015-00000012 session 14: DID 0001-0018-00000015</span><br />
<span style="font-family: "Courier New", Courier, monospace;"><br /></span><br />
<span style="font-family: "Courier New", Courier, monospace;">Rows waited on:</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> Session 14: no row</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> Session 133: no row</span><br />
<br />
<strong>Solution – </strong>In an OLTP system we should avoid bitmap indexes on tables that the application subjects to heavy DML. This deadlock can be avoided by creating a b-tree (normal) index instead, subject to the column's selectivity. In the deadlock trace file, note the "no row" entries in the "Rows waited on" section; in most cases this is what you see when a bitmap index is involved.<br />
<br />
If you remember, we also got "no row" information when inserting into primary key columns, so from the graph alone it is difficult to tell whether the bitmap index or the primary key caused the deadlock. The best approach is to check which types of index exist on the table. <br />
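For example, checking which indexes exist on table T from this test case (the output comment is illustrative):

```sql
select index_name, index_type
from   user_indexes
where  table_name = 'T';

-- INDEX_NAME  INDEX_TYPE
-- ----------  ----------
-- BIT_T_IND   BITMAP
```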
<br />
<strong>Case 4 – Bitmap index – Part II</strong> <br />
<br />
insert into t<br />
select level, decode(mod(level,2),0,'M','F')<br />
from dual connect by level<=100;<br />
<br />
commit;<br />
<br />
<br />
<table border="1"><tbody>
<tr><th>SR</th><th>SESSION 1</th><th>SESSION 2 </th></tr>
<tr><td>1</td><td>Insert into t values (1,'M');</td><td></td></tr>
<tr><td>2</td><td></td><td>insert into t values (2,'F');</td></tr>
<tr><td>3</td><td>insert into t values (4,'F');</td><td></td></tr>
<tr><td>4</td><td></td><td>insert into t values (3,'M');</td></tr>
<tr><td>5</td><td>DEADLOCK ERROR</td><td></td></tr>
</tbody></table>
<br />
<span style="font-family: "Courier New", Courier, monospace;">Deadlock graph:</span><br />
<span style="font-family: "Courier New", Courier, monospace;"></span><br />
<span style="font-family: "Courier New", Courier, monospace;"> ---------Blocker(s)-------- ---------Waiter(s)---------</span><br />
<span style="font-family: "Courier New", Courier, monospace;">Resource Name process session holds waits process session holds waits</span><br />
<span style="font-family: "Courier New", Courier, monospace;">TX-0006001b-00002ca0 24 14 X 21 133 S</span><br />
<span style="font-family: "Courier New", Courier, monospace;">TX-00090021-00002c6c 21 133 X 24 14 S</span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">session 14: DID 0001-0018-00000016 session 133: DID 0001-0015-00000013 </span><br />
<span style="font-family: "Courier New", Courier, monospace;">session 133: DID 0001-0015-00000013 session 14: DID 0001-0018-00000016 </span><br />
<br />
<div class="MsoNormal" style="margin: 0in 0in 0pt; text-align: justify;">
<b style="mso-bidi-font-weight: normal;"><span style="font-family: "Tahoma", "sans-serif"; font-size: 10pt;">Note – </span></b><span style="font-family: "Tahoma", "sans-serif"; font-size: 10pt;">This is very similar situation, as previous case (Part I). If you observe, the trace file, in previous example, we got “no rows” and here we got the rows information even test case is same. Why is this difference? The difference between these two test cases is, in previous case the table was empty and not in the above case. Now you may think, how to distinguish whether this has caused because of bitmap issue? In Rows waited on section, below rowid information, there is object number given. If you check the object number you can find this is of bitmap index.</span></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt; text-align: justify;">
<br /></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt; text-align: justify;">
<span style="font-family: "Tahoma", "sans-serif"; font-size: 10pt;"><span style="font-family: "Courier New", Courier, monospace;">SQL> select owner, object_name, object_type from dba_objects where object_id=13984;</span></span> <br />
</div>
<span style="font-family: "Courier New", Courier, monospace;"></span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">OWNER OBJECT_NAME OBJECT_TYPE</span><br />
<span style="font-family: "Courier New", Courier, monospace;">------ -------------------- -------------------</span><br />
<span style="font-family: "Courier New", Courier, monospace;">SANDY BIT_T_IND INDEX</span><br />
<br />
<br />
<strong>Case 5 – Autonomous Transaction</strong><br />
<br />
create table d (v varchar2(20));<br />
insert into d values ('S');<br />
commit;<br />
<br />
CREATE OR REPLACE PROCEDURE proc_del<br />
AS<br />
PRAGMA AUTONOMOUS_TRANSACTION;<br />
BEGIN<br />
DELETE FROM D;<br />
COMMIT;<br />
END;<br />
/ </div>
<table border="1"><tbody>
<tr><th>SR</th><th>SESSION 1</th></tr>
<tr><td>1</td><td>delete from d;</td></tr>
<tr><td>2</td><td>exec proc_del</td></tr>
<tr><td>3</td><td>DEADLOCK ERROR</td></tr>
</tbody></table>
<br />
<span style="font-family: "Courier New", Courier, monospace;">Deadlock graph: </span><br />
<span style="font-family: "Courier New", Courier, monospace;"> ---------Blocker(s)-------- ---------Waiter(s)---------</span><br />
<span style="font-family: "Courier New", Courier, monospace;">Resource Name process session holds waits process session holds waits</span><br />
<span style="font-family: "Courier New", Courier, monospace;">TX-00060005-00002c9f 21 133 X 21 133 X</span><br />
<span style="font-family: "Courier New", Courier, monospace;"><br /></span><br />
<span style="font-family: "Courier New", Courier, monospace;">session 133: DID 0001-0015-00000014 session 133: DID 0001-0015-00000014 </span><br />
<span style="font-family: "Courier New", Courier, monospace;"><br /></span><br />
<span style="font-family: "Courier New", Courier, monospace;">Rows waited on:</span><br />
<span style="font-family: "Courier New", Courier, monospace;">Session 133: obj - rowid = 000036A1 - AAADahAAEAAAACsAAB</span><br />
<span style="font-family: "Courier New", Courier, monospace;">(dictionary objn - 13985, file - 4, block - 172, slot - 1)</span><br />
<br />
<div class="MsoNormal" style="margin: 0in 0in 0pt; text-align: justify;">
<b style="mso-bidi-font-weight: normal;"><span style="font-family: "Tahoma", "sans-serif"; font-size: 10pt;">Solution – </span></b><span style="font-family: "Tahoma", "sans-serif"; font-size: 10pt;">This is example of autonomous transaction. Triggers are also form of autonomous transactions. This issue may also seen with update operation or combination of update and delete operation.</span></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt; text-align: justify;">
<br /></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt; text-align: justify;">
<br /></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt; text-align: justify;">
<span style="font-family: "Tahoma", "sans-serif"; font-size: 10pt;">The solutions above given are just sample situations wherein Deadlock can happen. There are many other situations wherein Deadlock can occur. The deadlock graph gives the hint to identify the deadlock cause. Application desing needs to be changed for Transaction Level Deadlocks but other than this, there could be solutions to Deadlock. </span></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt; text-align: justify;">
<br /></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt; text-align: justify;">
<span style="font-family: "Tahoma", "sans-serif"; font-size: 10pt;">To sum up, though the deadlock is an issue of application, we can identify the cause and solution can provided to get rid of deadlock issue.</span></div>
</div>

Index column ordering and performance (2009-12-19)

When we talk about why Oracle is not picking up an index, one of the well-known reasons is cardinality. When data is uniformly distributed, an index is selective only if the column's cardinality is not very low. When data is skewed, histograms can help, but we will leave that feature aside here.<br /><br />So let's assume we want to create a composite index on two columns that are both part of the filter condition. One column (ID) has high cardinality and the other (STATUS) has very low cardinality. What should the column order of that index be? The obvious answer is that, for better performance, the leading column should be the one with high cardinality.<br /><br />But in reality, for this query either column order gives the same performance. To demonstrate, let's build the following test case: we create a table, pick the two columns OBJECT_ID and STATUS, leave OBJECT_ID distinct, and update STATUS so that it holds only two values in equal numbers.
<br /><br /><pre><br />SQL> create table t as select * from dba_objects;<br /><br />Table created.<br /><br /></pre><br /><br /><pre><br />SQL> update t set status=decode(mod(rownum,2),0,'VALID','INVALID');<br /><br />50623 rows updated.<br /><br /></pre><br /><br /><pre><br />SQL> commit;<br /><br />Commit complete.<br /><br /></pre><br /><br /><pre><br />SQL> select status, count(1) from t group by status;<br /><br />STATUS COUNT(1)<br />------- ----------<br />INVALID 25312<br />VALID 25311<br /><br /></pre><br /><br /><pre><br />SQL> create index ind_t on t(object_id, status);<br /><br />Index created.<br /><br /></pre><br /><br /><pre><br />SQL> exec dbms_stats.gather_table_stats(user,'T',cascade=>true);<br /><br />PL/SQL procedure successfully completed.<br /><br /><br /></pre><br /><br /><pre><br />SQL> select * from t where object_id=100 and status='INVALID';<br /><br /><br />Execution Plan<br />----------------------------------------------------------<br />Plan hash value: 4013845416<br /><br />-------------------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |<br />-------------------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 1 | 94 | 2 (0)| 00:00:01 |<br />| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 94 | 2 (0)| 00:00:01 |<br />|* 2 | INDEX RANGE SCAN | IND_T | 1 | | 1 (0)| 00:00:01 |<br />-------------------------------------------------------------------------------------<br /><br />Predicate Information (identified by operation id):<br />---------------------------------------------------<br /><br /> 2 - access("OBJECT_ID"=100 AND "STATUS"='INVALID')<br /><br /><br />Statistics<br />----------------------------------------------------------<br /> 1 recursive calls<br /> 0 db block gets<br /> 4 consistent gets<br /> 0 physical reads<br /> 0 redo size<br /> 1207 bytes sent via SQL*Net to client<br /> 396 bytes 
received via SQL*Net from client<br /> 2 SQL*Net roundtrips to/from client<br /> 0 sorts (memory)<br /> 0 sorts (disk)<br /> 1 rows processed<br /></pre><br /><br />The query executed with 4 consistent gets. Now lets create the index with reverse order and check the performance.<br /><br /><pre><br />SQL> drop index ind_t;<br /><br />Index dropped.<br /><br /></pre><br /><br /><pre><br />SQL> create index ind_t on t(status, object_id);<br /><br />Index created.<br /><br /></pre><br /><br /><pre><br />SQL> exec dbms_stats.gather_table_stats(user,'T',cascade=>true);<br /><br />PL/SQL procedure successfully completed.<br /><br /></pre><br /><br /><pre><br />SQL> select * from t where object_id=100 and status='INVALID';<br /><br /><br />Execution Plan<br />----------------------------------------------------------<br />Plan hash value: 4013845416<br /><br />-------------------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |<br />-------------------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 1 | 94 | 2 (0)| 00:00:01 |<br />| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 94 | 2 (0)| 00:00:01 |<br />|* 2 | INDEX RANGE SCAN | IND_T | 1 | | 1 (0)| 00:00:01 |<br />-------------------------------------------------------------------------------------<br /><br />Predicate Information (identified by operation id):<br />---------------------------------------------------<br /><br /> 2 - access("STATUS"='INVALID' AND "OBJECT_ID"=100)<br /><br /><br />Statistics<br />----------------------------------------------------------<br /> 1 recursive calls<br /> 0 db block gets<br /> 4 consistent gets<br /> 0 physical reads<br /> 0 redo size<br /> 1207 bytes sent via SQL*Net to client<br /> 396 bytes received via SQL*Net from client<br /> 2 SQL*Net roundtrips to/from client<br /> 0 sorts (memory)<br /> 0 sorts (disk)<br /> 1 rows processed<br /><br 
/></pre><br /><br />Even after changing the column order, the query executed with 4 consistent gets, so there is no performance degradation. <br /><br />So we have proved that for this query the column order has no performance impact. <br /><br />But the column order may still cause performance degradation elsewhere. Let's assume the application also runs some query with only object_id as a filter. If there is no separate index on the object_id column, Oracle will use this composite index, and here the column ordering does have an impact. <br /><br />We will execute the following query with the index ordered (object_id, status) and check the execution plan. <br /><br /><pre><br />SQL> select * from t where object_id=100;<br /><br /><br />Execution Plan<br />----------------------------------------------------------<br />Plan hash value: 4013845416<br /><br />-------------------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |<br />-------------------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 1 | 94 | 3 (0)| 00:00:01 |<br />| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 94 | 3 (0)| 00:00:01 |<br />|* 2 | INDEX RANGE SCAN | IND_T | 1 | | 2 (0)| 00:00:01 |<br />-------------------------------------------------------------------------------------<br /><br />Predicate Information (identified by operation id):<br />---------------------------------------------------<br /><br /> 2 - access("OBJECT_ID"=100)<br /><br /><br />Statistics<br />----------------------------------------------------------<br /> 1 recursive calls<br /> 0 db block gets<br /> 4 consistent gets<br /> 0 physical reads<br /> 0 redo size<br /> 1207 bytes sent via SQL*Net to client<br /> 396 bytes received via SQL*Net from client<br /> 2 SQL*Net roundtrips to/from client<br /> 0 sorts (memory)<br /> 0 sorts (disk)<br /> 1 rows processed<br /><br /></pre><br /><br />It is doing an index range scan with 4
consistent gets. Let's reverse the column order to (status, object_id) and check the execution plan.<br /><br /><pre><br />SQL> select * from t where object_id=100;<br /><br /><br />Execution Plan<br />----------------------------------------------------------<br />Plan hash value: 2246305531<br /><br />-------------------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |<br />-------------------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 1 | 94 | 4 (0)| 00:00:01 |<br />| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 94 | 4 (0)| 00:00:01 |<br />|* 2 | INDEX SKIP SCAN | IND_T | 1 | | 3 (0)| 00:00:01 |<br />-------------------------------------------------------------------------------------<br /><br />Predicate Information (identified by operation id):<br />---------------------------------------------------<br /><br /> 2 - access("OBJECT_ID"=100)<br /> filter("OBJECT_ID"=100)<br /><br /><br />Statistics<br />----------------------------------------------------------<br /> 1 recursive calls<br /> 0 db block gets<br /> 8 consistent gets<br /> 0 physical reads<br /> 0 redo size<br /> 1207 bytes sent via SQL*Net to client<br /> 396 bytes received via SQL*Net from client<br /> 2 SQL*Net roundtrips to/from client<br /> 0 sorts (memory)<br /> 0 sorts (disk)<br /> 1 rows processed<br /></pre><br /><br />Oracle has opted for an index skip scan, and the consistent gets are double those of the previous execution plan.<br /><br />In this example the leading column (STATUS) has only two distinct values; if the number of distinct values were higher, Oracle might ignore this index altogether and opt for a full table scan.<br /><br />To sum up, when a query filters on all the columns of a composite index, the column order of that index does not matter.
But it can impact other queries that do not filter on all of those columns.

Query and Transitive Closure (2009-12-18)

I received some comments on my last post, "Query Performance & Join Conditions", asking about transitive closure. The example I gave there was not about transitive closure. Oracle applies transitive closure when a literal value is present in a join condition: when two tables are joined on a column and that column has a literal filter condition on one table, the optimizer applies the same filter to the joined column of the other table. <br /><br />To demonstrate, we will look at the following test case.<br /><br /><pre><br />SQL> create table t1 as select rownum "N", rownum*30 "M" from dual connect by level <=10000;<br /><br />Table created.<br /></pre><br /><pre><br />SQL> create table t2 as select rownum+20 "R", rownum*50 "T" from dual connect by level <=10000;<br /><br />Table created.<br /><br /></pre><br /><pre><br />SQL> create index ind_t1 on t1(n);<br /><br />Index created.<br /><br /></pre><br /><pre><br />SQL> create index ind_t2 on t2(r);<br /><br />Index created.<br /><br /></pre><br /><pre><br />SQL> exec dbms_stats.gather_table_stats(user,'T1',cascade=>true);<br /><br />PL/SQL procedure successfully completed.<br /><br /></pre><br /><pre><br />SQL> exec dbms_stats.gather_table_stats(user,'T2',cascade=>true);<br /><br />PL/SQL procedure successfully completed.<br /><br /></pre><br /><pre><br />SQL> set autot trace exp stat<br />SQL> select n, m, t<br /> 2 from t1, t2<br /> 3 where t1.n = t2.r<br /> 4 and t1.n=30;<br /><br /><br />Execution Plan<br />----------------------------------------------------------<br />Plan hash value: 863060763<br /><br
/>----------------------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |<br />----------------------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 1 | 16 | 4 (0)| 00:00:01 |<br />| 1 | MERGE JOIN CARTESIAN | | 1 | 16 | 4 (0)| 00:00:01 |<br />| 2 | TABLE ACCESS BY INDEX ROWID | T1 | 1 | 8 | 2 (0)| 00:00:01 |<br />|* 3 | INDEX RANGE SCAN | IND_T1 | 1 | | 1 (0)| 00:00:01 |<br />| 4 | BUFFER SORT | | 1 | 8 | 2 (0)| 00:00:01 |<br />| 5 | TABLE ACCESS BY INDEX ROWID| T2 | 1 | 8 | 2 (0)| 00:00:01 |<br />|* 6 | INDEX RANGE SCAN | IND_T2 | 1 | | 1 (0)| 00:00:01 |<br />----------------------------------------------------------------------------------------<br /><br />Predicate Information (identified by operation id):<br />---------------------------------------------------<br /><br /> 3 - access("T1"."N"=30)<br /> 6 - access("T2"."R"=30)<br /><br /></pre><br /><pre><br /><br />Statistics<br />----------------------------------------------------------<br /> 0 recursive calls<br /> 0 db block gets<br /> 7 consistent gets<br /> 0 physical reads<br /> 0 redo size<br /> 512 bytes sent via SQL*Net to client<br /> 396 bytes received via SQL*Net from client<br /> 2 SQL*Net roundtrips to/from client<br /> 1 sorts (memory)<br /> 0 sorts (disk)<br /> 1 rows processed<br /></pre><br />Check the "Predicate Information" section of the above execution plan. Although I specified only n=30, Oracle has also picked up r=30. Now one might ask: what is the use of this, and will it impact performance? 
<br /><br />Based on the input filter condition, Oracle may opt for a better execution plan.<br />To demonstrate, let us re-create the t2 table with only 4 distinct values in the R column and execute the above query again.<br /><br /><pre><br />SQL> create table t1 as select rownum "N", rownum*30 "M" from dual connect by level <=10000;<br /><br />Table created.<br /><br /></pre><br /><pre><br />SQL> create table t2 as select mod(rownum,4) "R", rownum*50 "T" from dual connect by level <=10000;<br /><br />Table created.<br /><br /></pre><br /><pre><br />SQL> create index ind_t1 on t1(n);<br /><br />Index created.<br /><br /></pre><br /><pre><br />SQL> create index ind_t2 on t2(r);<br /><br />Index created.<br /><br /></pre><br /><pre><br />SQL> exec dbms_stats.gather_table_stats(user,'T1',cascade=>true);<br /><br />PL/SQL procedure successfully completed.<br /><br /></pre><br /><pre><br />SQL> exec dbms_stats.gather_table_stats(user,'T2',cascade=>true);<br /><br />PL/SQL procedure successfully completed.<br /><br /></pre><br /><pre><br />SQL> set autot trace exp stat<br />SQL> select n, m, t<br /> 2 from t1, t2<br /> 3 where t1.n = t2.r<br /> 4 and t2.r=3;<br /><br />2500 rows selected.<br /><br /><br />Execution Plan<br />----------------------------------------------------------<br />Plan hash value: 2470703826<br /><br />---------------------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |<br />---------------------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 2500 | 37500 | 9 (12)| 00:00:01 |<br />|* 1 | HASH JOIN | | 2500 | 37500 | 9 (12)| 00:00:01 |<br />| 2 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 8 | 2 (0)| 00:00:01 |<br />|* 3 | INDEX RANGE SCAN | IND_T1 | 1 | | 1 (0)| 00:00:01 |<br />|* 4 | TABLE ACCESS FULL | T2 | 2500 | 17500 | 6 (0)| 00:00:01 |<br />---------------------------------------------------------------------------------------<br
/><br />Predicate Information (identified by operation id):<br />---------------------------------------------------<br /><br /> 1 - access("T1"."N"="T2"."R")<br /> 3 - access("T1"."N"=3)<br /> 4 - filter("T2"."R"=3)<br /></pre><br /><br />As per the above plan, thanks to the transitively generated filter condition (N=3), Oracle opted for an index scan on T1. This is transitive closure at work. Now, what would have happened had transitive closure not taken place?<br />To see this, let us modify the query as follows -<br /><br /><pre><br />SQL> select n, m, t<br /> 2 from t1, (select /*+ no_merge */ * from t2 where r=3) t2<br /> 3 where t1.n = t2.r;<br /><br />2500 rows selected.<br /><br /></pre><br /><pre><br />Execution Plan<br />----------------------------------------------------------<br />Plan hash value: 157564253<br /><br />----------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |<br />----------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 2500 | 90000 | 14 (8)| 00:00:01 |<br />|* 1 | HASH JOIN | | 2500 | 90000 | 14 (8)| 00:00:01 |<br />| 2 | VIEW | | 2500 | 70000 | 6 (0)| 00:00:01 |<br />|* 3 | TABLE ACCESS FULL| T2 | 2500 | 17500 | 6 (0)| 00:00:01 |<br />| 4 | TABLE ACCESS FULL | T1 | 10000 | 80000 | 7 (0)| 00:00:01 |<br />----------------------------------------------------------------------------<br /><br />Predicate Information (identified by operation id):<br />---------------------------------------------------<br /><br /> 1 - access("T1"."N"="T2"."R")<br /> 3 - filter("R"=3)<br /></pre><br /><br />With view merging suppressed, no filter is propagated to T1, so Oracle falls back to a full table scan of T1 and a hash join.<br /><br /><strong>Conclusion : </strong>Transitive closure is a feature that comes into play when a query combines a join condition with a filter predicate on the join column. 
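<br /><br />It is not limited to equality literals either. As a quick sketch against the same T1/T2 tables (I have not reproduced the full plan here, so verify the Predicate Information section on your own version):<br /><br /><pre><br />SQL> select n, m, t<br /> 2 from t1, t2<br /> 3 where t1.n = t2.r<br /> 4 and t1.n < 25;<br /><br />-- expected in Predicate Information: a generated filter("T2"."R"<25)<br />-- alongside the access("T1"."N"="T2"."R") join predicate<br /></pre>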
Oracle will consider transitive closure for equality as well as non-equality (range) predicates.Sandeep Redkarhttp://www.blogger.com/profile/09850590013696861549noreply@blogger.com0tag:blogger.com,1999:blog-6890489880673875730.post-91687565025590274162009-09-22T15:43:00.003+05:302009-09-22T15:47:26.728+05:30Query Performance & Join ConditionsOracle database performance is measured by how application queries impact host resources. Oracle generates a plan based on the input values, and that plan determines how efficiently resources are used.<br /><br />While working on a performance issue, a user reported that one of the modules was not functioning and seemed to have a problem with a query. When we traced the session, we found one query doing an FTS while retrieving only a few records. It was one of the top resource-consuming queries in the trace file. <br /><br />The query looked quite normal, so we checked the indexes, table statistics, and so on. The statistics had been collected recently, so they were not the problem. <br /><br />After investigation, it was found that one of the join conditions was not correctly defined, and the developer confirmed this. When a query joins multiple tables, we have to be very careful about equality predicates.<br /><br />Here I want to recall one fact about Oracle -<br />By the rules of mathematics, when we say a=b and b=c, we can conclude a=c. In Oracle this is not always the case: the optimizer does not necessarily conclude a=c.<br /><br />To illustrate this, let us create a test case -<br /><br />Create two tables, insert some random records into them, and create the required indexes. As the query provided came from production, I am not using the same table names here. 
Also I have picked up only the problematic part of the query.<br /><br /><pre><br />SQL> create table cu_all (custid number, addr varchar2(200), ph number, cano number, acctype varchar2(10));<br /><br />Table created.<br /><br />SQL> create table ca_receipt (custid number, caamt number, cadt date, totbal number);<br /><br />Table created.<br /><br />SQL><br />SQL> insert into cu_all<br /> 2 select lvl,<br /> 3 dbms_random.string('A',30),<br /> 4 round(dbms_random.value(1,100000)),<br /> 5 round(dbms_random.value(1,10000)),<br /> 6 dbms_random.string('A',10)<br /> 7 from (select level "LVL" from dual connect by level <=200000);<br /><br />200000 rows created.<br /><br />SQL> insert into ca_receipt<br /> 2 select round(dbms_random.value(1,10000)),<br /> 3 round(dbms_random.value(1,100000)),<br /> 4 sysdate - round(dbms_random.value(1,100000)),<br /> 5 round(dbms_random.value(1,100000))<br /> 6 from (select level "LVL" from dual connect by level <=500000);<br /><br />500000 rows created.<br /><br />SQL> create unique index pk_cu_all_ind on cu_all(custid);<br /><br />Index created.<br /><br />SQL> create index ind2_cu_all on cu_all(CANO);<br /><br />Index created.<br /><br />SQL> create index ind_ca_receipt_custid on ca_receipt(custid);<br /><br />Index created.<br /><br />SQL> exec dbms_stats.gather_table_stats(user,'CU_ALL', cascade=>true);<br /><br />PL/SQL procedure successfully completed.<br /><br />SQL> exec dbms_stats.gather_table_stats(user,'CA_RECEIPT', cascade=>true);<br /><br />PL/SQL procedure successfully completed.<br /></pre><br /><br /><br />Now let us execute the query with trace on. 
This is the similar query which was provided to me.<br /><br /><br /><pre><br />SQL> set autot trace<br />SQL> SELECT ca.*, cu.*<br /> 2 FROM ca_receipt CA,<br /> 3 cu_all CU<br /> 4 WHERE CA.CUSTID = CU.CUSTID<br /> 5 AND CA.CUSTID IN (SELECT CUSTID FROM cu_all START WITH custid = 2353<br /> 6 CONNECT BY PRIOR CUSTID = CANO)<br /> 7 ORDER BY ACCTYPE DESC;<br /><br />289 rows selected.<br /></pre><br /><Pre><br />Execution Plan<br />----------------------------------------------------------<br />Plan hash value: 3186098611<br /><br />-------------------------------------------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |<br />-------------------------------------------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 1000 | 81000 | 504 (2)| 00:00:07 |<br />| 1 | SORT ORDER BY | | 1000 | 81000 | 504 (2)| 00:00:07 |<br />|* 2 | HASH JOIN | | 1000 | 81000 | 503 (2)| 00:00:07 |<br />| 3 | NESTED LOOPS | | | | | |<br />| 4 | NESTED LOOPS | | 1000 | 26000 | 112 (1)| 00:00:02 |<br />| 5 | VIEW | VW_NSO_1 | 20 | 100 | 21 (0)| 00:00:01 |<br />| 6 | HASH UNIQUE | | 20 | 180 | | |<br />|* 7 | CONNECT BY WITH FILTERING | | | | | |<br />| 8 | TABLE ACCESS BY INDEX ROWID | CU_ALL | 1 | 9 | 2 (0)| 00:00:01 |<br />|* 9 | INDEX UNIQUE SCAN | PK_CU_ALL_IND | 1 | | 1 (0)| 00:00:01 |<br />| 10 | NESTED LOOPS | | | | | |<br />| 11 | CONNECT BY PUMP | | | | | |<br />| 12 | TABLE ACCESS BY INDEX ROWID| CU_ALL | 20 | 180 | 21 (0)| 00:00:01 |<br />|* 13 | INDEX RANGE SCAN | IND2_CU_ALL | 20 | | 1 (0)| 00:00:01 |<br />|* 14 | INDEX RANGE SCAN | IND_CA_RECEIPT_CUSTID | 50 | | 2 (0)| 00:00:01 |<br />| 15 | TABLE ACCESS BY INDEX ROWID | CA_RECEIPT | 50 | 1050 | 52 (0)| 00:00:01 |<br />| 16 | TABLE ACCESS FULL | CU_ALL | 200K| 10M| 389 (1)| 00:00:05 |<br 
/>-------------------------------------------------------------------------------------------------------------<br /><br />Predicate Information (identified by operation id):<br />---------------------------------------------------<br /><br /> 2 - access("CA"."CUSTID"="CU"."CUSTID")<br /> 7 - access("CANO"=PRIOR "CUSTID")<br /> 9 - access("CUSTID"=2353)<br /> 13 - access("CANO"=PRIOR "CUSTID")<br /> 14 - access("CA"."CUSTID"="CUSTID")<br /><br /><br />Statistics<br />----------------------------------------------------------<br /> 1 recursive calls<br /> 0 db block gets<br /> 2249 consistent gets<br /> 25 physical reads<br /> 0 redo size<br /> 11748 bytes sent via SQL*Net to client<br /> 729 bytes received via SQL*Net from client<br /> 21 SQL*Net roundtrips to/from client<br /> 7 sorts (memory)<br /> 0 sorts (disk)<br /> 289 rows processed<br /></pre><br />If you look at the query, it seems to be a normal one. <br /><br />But here is the problem - <br /><br />The query involves two tables, CA and CU. The subquery's result set is joined to the CA table, and the CA table in turn joins to the CU table on the same column.<br />Because the subquery is joined to CA, the cardinality estimate changes, and Oracle opts for an FTS when joining to the CU table again.<br />This was causing the performance bottleneck. To resolve the issue, I changed the join condition so that the subquery is joined to CU instead of CA. <br /><br />Now if we check, the following is the proper execution plan. 
Also, the consistent gets have been reduced to 797 from the 2249 in the original query.<br /><br /><pre><br />SQL> SELECT ca.*, cu.*<br /> 2 FROM ca_receipt CA,<br /> 3 cu_all CU<br /> 4 WHERE CA.CUSTID = CU.CUSTID<br /> 5 AND CU.CUSTID IN (SELECT CUSTID FROM cu_all START WITH custid = 2353<br /> 6 CONNECT BY PRIOR CUSTID = CANO)<br /> 7 ORDER BY ACCTYPE DESC;<br /><br />289 rows selected.<br /><br /><br />Execution Plan<br />----------------------------------------------------------<br />Plan hash value: 3713271440<br /><br />-------------------------------------------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |<br />-------------------------------------------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 1000 | 81000 | 133 (2)| 00:00:02 |<br />| 1 | SORT ORDER BY | | 1000 | 81000 | 133 (2)| 00:00:02 |<br />| 2 | NESTED LOOPS | | | | | |<br />| 3 | NESTED LOOPS | | 1000 | 81000 | 132 (1)| 00:00:02 |<br />| 4 | NESTED LOOPS | | 20 | 1200 | 42 (3)| 00:00:01 |<br />| 5 | VIEW | VW_NSO_1 | 20 | 100 | 21 (0)| 00:00:01 |<br />| 6 | HASH UNIQUE | | 20 | 180 | | |<br />|* 7 | CONNECT BY WITH FILTERING | | | | | |<br />| 8 | TABLE ACCESS BY INDEX ROWID | CU_ALL | 1 | 9 | 2 (0)| 00:00:01 |<br />|* 9 | INDEX UNIQUE SCAN | PK_CU_ALL_IND | 1 | | 1 (0)| 00:00:01 |<br />| 10 | NESTED LOOPS | | | | | |<br />| 11 | CONNECT BY PUMP | | | | | |<br />| 12 | TABLE ACCESS BY INDEX ROWID| CU_ALL | 20 | 180 | 21 (0)| 00:00:01 |<br />|* 13 | INDEX RANGE SCAN | IND2_CU_ALL | 20 | | 1 (0)| 00:00:01 |<br />| 14 | TABLE ACCESS BY INDEX ROWID | CU_ALL | 1 | 55 | 1 (0)| 00:00:01 |<br />|* 15 | INDEX UNIQUE SCAN | PK_CU_ALL_IND | 1 | | 0 (0)| 00:00:01 |<br />|* 16 | INDEX RANGE SCAN | IND_CA_RECEIPT_CUSTID | 50 | | 2 (0)| 00:00:01 |<br />| 17 | TABLE ACCESS BY INDEX ROWID | CA_RECEIPT | 50 | 1050 | 52 (0)| 00:00:01 |<br
/>-------------------------------------------------------------------------------------------------------------<br /><br />Predicate Information (identified by operation id):<br />---------------------------------------------------<br /><br /> 7 - access("CANO"=PRIOR "CUSTID")<br /> 9 - access("CUSTID"=2353)<br /> 13 - access("CANO"=PRIOR "CUSTID")<br /> 15 - access("CU"."CUSTID"="CUSTID")<br /> 16 - access("CA"."CUSTID"="CU"."CUSTID")<br /><br /><br />Statistics<br />----------------------------------------------------------<br /> 1 recursive calls<br /> 0 db block gets<br /> 797 consistent gets<br /> 1 physical reads<br /> 0 redo size<br /> 11748 bytes sent via SQL*Net to client<br /> 729 bytes received via SQL*Net from client<br /> 21 SQL*Net roundtrips to/from client<br /> 7 sorts (memory)<br /> 0 sorts (disk)<br /> 289 rows processed<br /></pre><br /><br />Hence, to sum up, Oracle gives us output based on our input values and conditions. Any query involving multiple tables must have proper join conditions.Sandeep Redkarhttp://www.blogger.com/profile/09850590013696861549noreply@blogger.com0tag:blogger.com,1999:blog-6890489880673875730.post-63734411073575594952009-06-21T13:01:00.006+05:302009-06-21T13:10:27.480+05:30ORA-600 [ktadrprc-1]During a visit to one of our customers, I found the following error while dropping an old partition from a partitioned table.<br /><br />ORA-00600: internal error code, arguments: [ktadrprc-1], [], [], [], [], [], [], []<br /><br />A Metalink search showed that this error can occur when there is a mismatch between seg$ and ind$ entries. Here the mismatch was caused by an index entry with no segment - a "nosegment" (virtual) index, a feature Oracle introduced for testing index selectivity without actually creating the index. 
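<br /><br />As an aside: by default the optimizer ignores nosegment (virtual) indexes. They are commonly enabled per session through the hidden parameter "_use_nosegment_indexes" - an undocumented and unsupported setting, so the sketch below (with hypothetical object names) belongs on test systems only:<br /><br /><pre><br />SQL> create index ind_virt on some_table(some_col) nosegment;<br />SQL> alter session set "_use_nosegment_indexes" = true;<br />SQL> explain plan for select * from some_table where some_col = :b1;<br />-- the plan may now consider IND_VIRT, showing whether a real index would be used<br /></pre>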
Following is the test case for the same – <br /><br />Create a partitioned table<br /><br /><pre><br /><br />SQL> create table partt (n number, dt date, v varchar2(20)) partition by range (dt)<br /> 2 (<br /> 3 partition p1 values less than (to_date('01-JAN-2009','DD-MON-YYYY')),<br /> 4 partition p2 values less than (to_date('02-JAN-2009','DD-MON-YYYY')),<br /> 5 partition p3 values less than (to_date('03-JAN-2009','DD-MON-YYYY'))<br /> 6 );<br /><br />Table created.<br /><br /></pre><br /><br><br /><br />Create one normal index and one nosegment index.<br /><br /><pre><br /><br />SQL> create index indpart on partt(n) local;<br /><br />Index created.<br /><br />SQL> create index indpartx on partt(v) local nosegment;<br /><br />Index created.<br /><br /></pre><br /><br />Note that, virtual index entry cannot be found in dba_indexes but is available in dba_ind_columns.<br /><br /><pre><br /><br />SQL> select owner, index_name, table_name from dba_indexes where table_name='PARTT';<br /><br />OWNER INDEX_NAME TABLE_NAME<br />---------- ------------------------------ ------------------------------<br />SYS INDPART PARTT<br /><br />1 row selected.<br /><br />SQL> select index_owner, index_name, table_owner, table_name, column_name<br /> 2 from dba_ind_columns where table_name='PARTT';<br /><br />INDEX_OWNER INDEX_NAME TABLE_OWNER TABLE_NAME COLUMN_NAME<br />------------ ------------ ------------ ------------ ------------<br />SYS INDPARTX SYS PARTT V<br />SYS INDPART SYS PARTT N<br /><br />2 rows selected.<br /><br /><br /></pre><br /><br /><br />Now, when we try to drop one of the partitions, we will get the error.<br /><br /><pre><br /><br />SQL> alter table partt drop partition p1;<br />alter table partt drop partition p1<br />*<br />ERROR at line 1:<br />ORA-00600: internal error code, arguments: [ktadrprc-1], [], [], [], [], [], [], []<br /><br /></pre><br /><br /><b> Solution:</b><br /><br />To resolve this error, when we drop the virtual index, the partition got 
dropped.<br /><br /><pre><br /><br />SQL> drop index indpartx;<br /><br />Index dropped.<br /><br />SQL> alter table partt drop partition p1;<br /><br />Table altered.<br /><br /></pre><br /><b>Note : </b><br /><br />The solution given above is valid for (and was tested on) 10gR2. On earlier versions (Oracle 8i and 9i), even dropping the index gives the following ORA-600 error. <br /><br /><pre><br /><br />SQL> drop index indpartx;<br /><br />drop index indpartx<br /> *<br />ERROR at line 1:<br />ORA-00600: internal error code, arguments: [4823], [], [], [], [], [], [], []<br /><br /></pre><br />Whereas on version 8.1.7.4, dropping the partition may give the following error –<br /><br /><pre><br />SQL> alter table partt drop partition p1;<br />alter table partt drop partition p1<br />*<br />ERROR at line 1:<br />ORA-00600: internal error code, arguments: [4610], [], [], [], [], [], [], []<br /><br /></pre><br /><br /><b>Conclusion:</b><br /><br />Be careful about creating virtual indexes on partitioned tables just for testing purposes. On a very big production table, you may end up in unwanted trouble.Sandeep Redkarhttp://www.blogger.com/profile/09850590013696861549noreply@blogger.com0tag:blogger.com,1999:blog-6890489880673875730.post-6755796720932810552009-01-26T20:16:00.003+05:302009-01-26T20:26:32.940+05:30Missing Tables in Execution PlanOracle 10g introduced a new optimizer feature: a query transformation commonly known as table (join) elimination. With it, Oracle may internally remove redundant tables that are involved only in join predicates. <br /><br />To demonstrate this, the following test case creates – <br /> 1. A table with a primary key.<br /> 2. A table with a foreign key referencing the table above.<br /> 3. 
Query that eliminates the primary key table and hence leads to faster execution.<br /><br /><pre><br />SQL> create table pk_table (id number, name varchar2(20));<br /><br />Table created.<br /></pre><br /><pre><br /><br />SQL> alter table pk_table add constraint pk_id primary key(id);<br /><br />Table altered.<br /></pre><br /><pre><br /><br />SQL> create table fk_table (id number, phone number, addr varchar2(200));<br /><br />Table created.<br /></pre><br /><pre><br /><br />SQL> alter table fk_table add constraint fk_id foreign key (id) references pk_table (id);<br /><br />Table altered.<br /><br /></pre><br /><pre><br />SQL> begin<br /> 2 for x in 1..100 loop<br /> 3 insert into pk_table values (x, dbms_random.string('A',20));<br /> 4 end loop;<br /> 5 commit;<br /> 6* end;<br />SQL> /<br /><br />PL/SQL procedure successfully completed.<br /><br /></pre><br /><pre><br /><br />SQL> begin<br /> 2 for x in 1..500 loop<br /> 3 insert into fk_table values (round(dbms_random.value(1,100)), dbms_random.value(1,100000), dbms_random.string('A',100));<br /> 4 end loop;<br /> 5 commit;<br /> 6 end;<br /> 7 /<br /><br />PL/SQL procedure successfully completed.<br /></pre><br /><pre><br /><br />SQL> exec dbms_stats.gather_table_stats (user,'PK_TABLE', cascade=>true);<br /><br />PL/SQL procedure successfully completed.<br /><br /></pre><br /><pre><br />SQL> exec dbms_stats.gather_table_stats (user,'FK_TABLE', cascade=>true);<br /><br />PL/SQL procedure successfully completed.<br /></pre><br /><pre><br /><br />SQL> select b.id, b.phone, b.addr<br /> 2 from pk_table a,<br /> 3 fk_table b<br /> 4 where a.id=b.id;<br /></pre><br /><pre><br /><br />SQL> @?/rdbms/admin/utlxpls<br /><br />PLAN_TABLE_OUTPUT<br />--------------------------------------------------------------------------------<br />Plan hash value: 1108179040<br /><br />------------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |<br 
/>------------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 500 | 62500 | 5 (0)| 00:00:01 |<br />|* 1 | TABLE ACCESS FULL| FK_TABLE | 500 | 62500 | 5 (0)| 00:00:01 |<br />------------------------------------------------------------------------------<br /><br />Predicate Information (identified by operation id):<br />---------------------------------------------------<br /><br />PLAN_TABLE_OUTPUT<br />--------------------------------------------------------------------------------<br /><br /> 1 - filter("B"."ID" IS NOT NULL)<br /><br />13 rows selected.<br /></pre><br />As shown above, the primary key table is missing from the execution plan: the table PK_TABLE is redundant. As we all know, every value in the foreign key column must exist in the primary key table, so to fetch data from the foreign key table there is no need to visit the primary key table at all. <br /><br />The optimizer added one extra filter condition, “B.ID IS NOT NULL”. This preserves data consistency when the transformation takes place, since rows with a NULL foreign key would not have survived the join. <br /><br />Starting from Oracle 11g, the optimizer has advanced to eliminate anti joins as well. For example, 
check the plan for following query in 10g and 11g.<br /><br /><strong>Query</strong><br />select * from fk_table b where not exists (select 1 from pk_table a where a.id=b.id);<br /><br /><strong>Plan in 10g</strong><br /><pre><br />PLAN_TABLE_OUTPUT<br />---------------------------------------------------------------------------------<br />Plan hash value: 2976039779<br /><br />-------------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |<br />-------------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 1 | 128 | 5 (0)| 00:00:01 |<br />| 1 | NESTED LOOPS ANTI | | 1 | 128 | 5 (0)| 00:00:01 |<br />| 2 | TABLE ACCESS FULL| FK_TABLE | 500 | 62500 | 5 (0)| 00:00:01 |<br />|* 3 | INDEX UNIQUE SCAN| PK_ID | 100 | 300 | 0 (0)| 00:00:01 |<br />-------------------------------------------------------------------------------<br /><br />Predicate Information (identified by operation id):<br />---------------------------------------------------<br /><br /> 3 - access("A"."ID"="B"."ID")<br /></pre><br /><strong>Plan in 11g</strong><br /><pre><br />PLAN_TABLE_OUTPUT<br />-------------------------------------------------------------------------------<br />Plan hash value: 1108179040<br /><br />------------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |<br />------------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 1 | 125 | 5 (0)| 00:00:01 |<br />|* 1 | TABLE ACCESS FULL| FK_TABLE | 1 | 125 | 5 (0)| 00:00:01 |<br />------------------------------------------------------------------------------<br /><br />Predicate Information (identified by operation id):<br />---------------------------------------------------<br /><br /> 1 - filter("B"."ID" IS NULL)<br /></pre><br />This elimination is applicable to view as well. 
That means that if a view selects all columns from the primary and foreign key tables, and a query accesses only the foreign key columns through that view, the optimizer will still eliminate the primary key table.<br /><br /><strong>Limitations </strong><br /><br />Interesting as this optimizer change is, it does not apply to multi-column primary/foreign keys. <br /><br />To sum up, the optimizer itself now takes care of such unnecessary joins, eliminating table accesses to improve query performance.Sandeep Redkarhttp://www.blogger.com/profile/09850590013696861549noreply@blogger.com0tag:blogger.com,1999:blog-6890489880673875730.post-12188719146566611822008-09-15T22:41:00.001+05:302008-09-15T22:44:08.129+05:30Materialized view fast refresh and performance.Oracle uses materialized views to replicate data to remote databases in a distributed environment; they can also be used to cache expensive queries in a data warehouse environment. Slow fast (incremental) refresh is a frequently reported issue. There can be many reasons, bugs among them. One common remedy is to create an index on the snaptime$$ column of the corresponding MLOG$ table, which speeds up the fast refresh; it is needed whenever the MLOG$ table has grown large for any reason. <br /><br />Recently, while looking into one such issue, I found that the following query was being executed (internally by Oracle) on the primary database. 
<br /><br />Query<br /><pre><br />SELECT /*+ */ "A2"."CO_ID","A2"."TMCODE","A2"."SPCODE","A2"."SNCODE","A2"."CS_SEQNO",<br /> "A2"."CS_CHANNEL_NUM","A2"."CS_STAT_CHNG","A2"."CS_ON_CBB","A2"."CS_DATE_BILLED",<br /> "A2"."CS_REQUEST","A2"."SN_CLASS","A2"."CS_OVW_SUBSCR","A2"."CS_SUBSCRIPT",<br /> "A2"."CS_OVW_ACCESS","A2"."CS_OVW_ACC_PRD","A2"."CS_OVW_ACC_FIRST","A2"."CS_ACCESS",<br /> "A2"."CS_PENDING_STATE","A2"."CS_CHANNEL_EXCL","A2"."CS_DIS_SUBSCR","A2"."CS_ADV_CHARGE",<br /> "A2"."CS_SRV_TYPE","A2"."SUBPAYER","A2"."USGPAYER","A2"."ACCPAYER","A2"."CS_ENTDATE",<br /> "A2"."CS_OVW_LAST","A2"."INSTALL_DATE","A2"."TRIAL_END_DATE","A2"."CS_ADV_CHARGE_END_DATE",<br /> "A2"."PRM_VALUE_ID","A2"."CURRENCY","A2"."CS_ADV_CHARGE_CURRENCY",<br /> "A2"."REC_VERSION","A2"."SRV_SUBTYPE" <br />FROM "CONTR_SERVICES" "A2", <br /> (SELECT DISTINCT "A3"."CO_ID" "CO_ID","A3"."SNCODE" "SNCODE","A3"."CS_SEQNO" "CS_SEQNO" <br /> FROM "SYSADM"."MLOG$_CONTR_SERVICES" "A3" <br /> WHERE "A3"."SNAPTIME$$">:1 <br /> AND "A3"."DMLTYPE$$"<>'D') "A1" <br />WHERE "A2"."CO_ID"="A1"."CO_ID" <br />AND "A2"."SNCODE"="A1"."SNCODE" <br />AND "A2"."CS_SEQNO"="A1"."CS_SEQNO";<br /></pre><br /><pre><br />Plan Table<br />-------------------------------------------------------------------------------------<br />| Operation | Name | Rows | Bytes| Cost | Pstart| Pstop |<br />-------------------------------------------------------------------------------------<br />| SELECT STATEMENT | | 748K| 87M|1382913 | | |<br />| HASH JOIN | | 748K| 87M|1382913 | | |<br />| VIEW | | 751K| 27M| 3585 | | |<br />| SORT UNIQUE | | 751K| 16M| 3585 | | |<br />| TABLE ACCESS BY INDEX |MLOG$_CONTR_SER| 751K| 16M| 16 | | |<br />| INDEX RANGE SCAN |MLOGSNAPTIME | 751K| | 655 | | |<br />| TABLE ACCESS FULL |CONTR_SERVICES | 125M| 9G| 332467 | | |<br />-------------------------------------------------------------------------------------<br /></pre><br />Though there was an index available on SNAPTIME$$ table, the refresh was taking 
time because the base table was huge (around 17 GB) and the plan was doing an FTS on it. <br /><pre><br />SQL> select owner, segment_name, bytes/1024/1024 "SIZE" from dba_segments<br /> 2 where segment_name='CONTR_SERVICES';<br /><br />OWNER SEGMENT_NAME SIZE<br />---------- ------------------------------ ----------<br />SYSADM CONTR_SERVICES 17216.0156<br /></pre><br />The MLOG$ table was also very big (around 750 MB), holding millions of records. Oracle opted for the FTS because of the table statistics available on the MLOG$_CONTR_SERVICES table. These were the available stats – <br /><pre><br />SQL> select owner, table_name, partitioned, num_rows, blocks, global_stats<br /> 2 from dba_tables where table_name='MLOG$_CONTR_SERVICES';<br /><br />OWNER TABLE_NAME PAR NUM_ROWS BLOCKS GLO<br />------------ ------------------------------ --- --------------- ---------- ---<br />SYSADM MLOG$_CONTR_SERVICES NO 22,531,900 95364 YES<br /></pre><br />In most cases, a fast materialized view refresh is driven by the primary key of the base table. When fast refreshes run regularly, only a small number of rows is replicated each time, and ideally the refresh should use an index scan on the primary key. In the case above, just a few thousand rows were supposed to be replicated. <br /><br />The customer had a dynamic analyze script that was also analyzing the MLOG$ tables. For good fast refresh performance, we need to purge the MLOG$ table regularly, and stats on it must be gathered while the table is empty. 
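<br /><br />One way to enforce that, assuming Oracle 10g or later where DBMS_STATS.LOCK_TABLE_STATS is available, is to gather the stats right after a refresh has emptied the log and then lock them (a sketch, not taken from the original case):<br /><br /><pre><br />SQL> exec dbms_stats.gather_table_stats('SYSADM','MLOG$_CONTR_SERVICES');<br />SQL> exec dbms_stats.lock_table_stats('SYSADM','MLOG$_CONTR_SERVICES');<br />-- locked stats keep a generic analyze script from re-gathering them while the log is full<br /></pre>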
To prove this point, I set customized stats on the MLOG$ table.<br /><pre><br />SQL> exec dbms_stats.set_table_stats ('SYSADM','MLOG$_CONTR_SERVICES',NUMROWS=>0, NUMBLKS=>0);<br /><br />PL/SQL procedure successfully completed.<br />.<br />.<br />SQL> select owner, table_name, partitioned, num_rows, blocks, global_stats<br /> 2 from dba_tables where table_name='MLOG$_CONTR_SERVICES';<br /><br />OWNER TABLE_NAME PAR NUM_ROWS BLOCKS GLO<br />------------ ------------------------------ --- --------------- ---------- ---<br />SYSADM MLOG$_CONTR_SERVICES NO 0 0 YES<br /></pre><br />After setting the above stats, the plan changed.<br /><pre><br />Plan Table<br />-------------------------------------------------------------------------------------<br />| Operation | Name | Rows | Bytes| Cost | Pstart| Pstop |<br />-------------------------------------------------------------------------------------<br />| SELECT STATEMENT | | 1 | 122 | 27 | | |<br />| NESTED LOOPS | | 1 | 122 | 27 | | |<br />| VIEW | | 1 | 39 | 24 | | |<br />| SORT UNIQUE | | 1 | 23 | 24 | | |<br />| TABLE ACCESS BY INDEX |MLOG$_CONTR_SER| 1 | 23 | 1 | | |<br />| INDEX RANGE SCAN |MLOGSNAPTIME | 1 | | 655 | | |<br />| TABLE ACCESS BY INDEX RO|CONTR_SERVICES | 125M| 9G| 3 | | |<br />| INDEX UNIQUE SCAN |PKCONTR_SERVICE| 125M| | 2 | | |<br />-------------------------------------------------------------------------------------<br /></pre><br />This time the refresh did a proper index scan via the primary key, and the fast refresh completed successfully within minutes.<br /><br />To sum up, statistics play an important role and must be used properly. For materialized view refreshes, the following things need to be remembered:<br /><br />1. The MLOG$ table must be analyzed when the table is empty. <br />2. 
Regular purging of the MLOG$ table is required wherever its size grows.Sandeep Redkarhttp://www.blogger.com/profile/09850590013696861549noreply@blogger.com0tag:blogger.com,1999:blog-6890489880673875730.post-34792337224766693442008-07-27T16:35:00.005+05:302008-07-27T16:57:32.219+05:30Do we need stats on all columns?Recently I visited one of our customers for a performance review. Under their stats-gathering policy, they used the following method to gather stats on tables.<br /><pre><br />dbms_stats.gather_table_stats (user, tabname, cascade=>true, method_opt=>'FOR ALL INDEXED COLUMNS 75');<br /></pre><br />As can be seen from the command, the method_opt input gathers stats on all indexed columns with a bucket size of 75. This means stats on the remaining, non-indexed columns are not collected. This is bad practice: stats on a table's columns are crucial, particularly when those columns are used in queries. <br /><br />When I discussed this with the customer, they held the following misconceptions.<br /><br /> 1. The stats on indexed columns are available, which is enough to decide on index scans.<br /> 2. When a column has no index, why would Oracle need stats on it? It will use an FTS for such queries anyway.<br /><br />This is purely a misconception: even if a column used in a query is not indexed, its correct cardinality is crucial in determining a correct join order. 
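<br /><br />A commonly recommended alternative, shown here as a sketch (tabname as in the customer's command above), is to collect basic stats on every column and let Oracle decide where histograms are needed:<br /><br /><pre><br />dbms_stats.gather_table_stats (user, tabname, cascade=>true, method_opt=>'FOR ALL COLUMNS SIZE AUTO');<br /></pre>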
To prove this point, I have created the following small test case.<br /><br />I created two tables (test1 and test2) and indexes on them.<br /><pre><br />SQL> create table test1 as<br /> 2 select mod(LVL, 3) ID, LVL VAL from<br /> 3 (select level "LVL" from dual connect by level <= 100000);<br /><br />Table created.<br /><br />SQL> create index ind_test1 on test1 (id);<br /><br />Index created.<br /><br />SQL> create table test2 as<br /> 2 select lvl "ID", dbms_random.string('A',10) "STR" from (select<br /> 3 level "LVL" from dual connect by level <=50000);<br /><br />Table created.<br /><br />SQL> create index ind_test2 on test2 (id);<br /><br />Index created.<br /></pre><br />Let us generate optimizer statistics based on the policy the customer used.<br /><pre><br />SQL> exec dbms_stats.gather_table_stats (user, 'TEST1', cascade=>true, method_opt=>'FOR ALL INDEXED COLUMNS SIZE 75');<br /><br />PL/SQL procedure successfully completed.<br /><br />SQL> exec dbms_stats.gather_table_stats (user, 'TEST2', cascade=>true, method_opt=>'FOR ALL INDEXED COLUMNS SIZE 75');<br /><br />PL/SQL procedure successfully completed.<br /></pre><br />The stats on the tables are<br /><pre><br /><br />SQL> select owner, table_name, partitioned, num_rows, blocks<br /> 2 from dba_tables<br /> 3 where table_name in ('TEST1','TEST2')<br /> 4 and owner = 'TEST';<br /><br />OWNER TABLE_NAME PAR NUM_ROWS BLOCKS<br />---------- ---------- --- ---------- ----------<br />TEST TEST1 NO 100000 186<br /><br />TEST TEST2 NO 50000 156<br /></pre><br /><pre><br />break on table_name skip 1<br />select table_name, column_name,num_distinct, num_nulls, density<br />from dba_tab_columns<br />where table_name in ('TEST1','TEST2')<br />and owner='TEST'<br />order by table_name;<br /><br />TABLE_NAME COLUMN_NAM NUM_DISTINCT NUM_NULLS DENSITY<br />---------- ---------- ------------ ---------- ----------<br />TEST1 ID 3 0 .000005<br /> VAL <br /><br />TEST2 ID 50000 0 .00002<br /> STR <br /></pre><br />Based on the 
input to dbms_stats, stats are collected only on indexed columns and not for other non-indexed columns.<br />The following query is used to check the execution plan.<br /><pre><br />SQL> set autot trace<br />SQL> select a.id, a.val, b.str<br /> 2 from test1 a,<br /> 3 test2 b<br /> 4 where a.id = b.id<br /> 5 and a.val = 40;<br /><br />Elapsed: 00:00:00.02<br /><br />Execution Plan<br />----------------------------------------------------------<br /> 0 SELECT STATEMENT Optimizer=CHOOSE (Cost=46 Card=667 Bytes=77372)<br /> 1 0 HASH JOIN (Cost=46 Card=667 Bytes=77372)<br /> 2 1 TABLE ACCESS (FULL) OF 'TEST1' (Cost=19 Card=1000 Bytes=16000)<br /> 3 1 TABLE ACCESS (FULL) OF 'TEST2' (Cost=17 Card=50000 Bytes=5000000)<br /><br /><br />Statistics<br />----------------------------------------------------------<br /> 0 recursive calls<br /> 0 db block gets<br /> 327 consistent gets<br /> 315 physical reads<br /> 0 redo size<br /> 489 bytes sent via SQL*Net to client<br /> 499 bytes received via SQL*Net from client<br /> 2 SQL*Net roundtrips to/from client<br /> 0 sorts (memory)<br /> 0 sorts (disk)<br /> 1 rows processed<br /><br /></pre><br />Now, let’s delete and gather new statistics on the same tables.<br /><pre><br />SQL> exec dbms_stats.delete_table_stats (user, 'TEST1');<br /><br />PL/SQL procedure successfully completed.<br /><br />SQL> exec dbms_stats.delete_table_stats (user, 'TEST2');<br /><br />PL/SQL procedure successfully completed.<br /><br />SQL> exec dbms_stats.gather_table_stats (user, 'TEST1', cascade=>true, method_opt=>'FOR ALL COLUMNS');<br /><br />PL/SQL procedure successfully completed.<br /><br />SQL> exec dbms_stats.gather_table_stats (user, 'TEST2', cascade=>true, method_opt=>'FOR ALL COLUMNS');<br /><br />PL/SQL procedure successfully completed.<br /></pre><br />Let us check the stats again.<br /><pre><br />TABLE_NAME COLUMN_NAM NUM_DISTINCT NUM_NULLS DENSITY<br />---------- ---------- ------------ ---------- ----------<br />TEST1 ID 3 0 
.000005<br /> VAL 100000 0 .00001<br /><br />TEST2 ID 50000 0 .00002<br /> STR 50000 0 .00002<br /></pre><br />The plan for the same query is<br /><pre><br />SQL> select a.id, a.val, b.str<br /> 2 from test1 a,<br /> 3 test2 b<br /> 4 where a.id = b.id<br /> 5 and a.val = 40;<br /><br />Execution Plan<br />----------------------------------------------------------<br /> 0 SELECT STATEMENT Optimizer=CHOOSE (Cost=21 Card=1 Bytes=22)<br /> 1 0 TABLE ACCESS (BY INDEX ROWID) OF 'TEST2' (Cost=2 Card=1 Bytes=15)<br /> 2 1 NESTED LOOPS (Cost=21 Card=1 Bytes=22)<br /> 3 2 TABLE ACCESS (FULL) OF 'TEST1' (Cost=19 Card=1 Bytes=7)<br /> 4 2 INDEX (RANGE SCAN) OF 'IND_TEST2' (NON-UNIQUE) (Cost=1 Card=1)<br /><br /><br />Statistics<br />----------------------------------------------------------<br /> 0 recursive calls<br /> 0 db block gets<br /> 183 consistent gets<br /> 174 physical reads<br /> 0 redo size<br /> 489 bytes sent via SQL*Net to client<br /> 499 bytes received via SQL*Net from client<br /> 2 SQL*Net roundtrips to/from client<br /> 0 sorts (memory)<br /> 0 sorts (disk)<br /> 1 rows processed<br /></pre><br />The comparison between the two plans is - <br /><table border=1 cellpadding=2 cellspacing=1><br /><th>Plan when stats are gathered <br /><th>Cardinality of TEST1 table<br /><th>Consistent Gets<br /><tr><td>Only on indexed columns <td>1000 <td>327<br /><tr><td>On all columns<td> 1 <td> 183<br /></table><br /><br /><strong>Why such a difference between the two execution plans?</strong><br /><br />Cardinality plays a very big role in choosing an execution plan. In the first case, the stats are not available on the VAL column of table TEST1. <br /><br />The cardinality when statistics are available only on indexed columns <br /><br />The filter condition is on the VAL column of table TEST1, and as there are no stats available on this column, Oracle takes the default selectivity of 1/100 and calculates the cardinality as <br /><pre><br />num_rows*selectivity = 100000*(1/100) = 1000<br /></pre><br />As per the plan, TEST1 is the driving table and its cardinality is very high, hence Oracle opted for a FTS of the TEST2 table, whose cardinality is calculated as<br /><pre><br />num_rows = 50000<br /></pre><br />When stats are available on all columns of both tables, the cardinality for table TEST1 is calculated as <br /><pre><br />num_rows * ("ID column selectivity" * "VAL column selectivity")<br />= round (100000 * (1/3 * 1/100000))<br />= 0<br /></pre><br />As the cardinality cannot be 0, it is treated as 1.<br /><br />As the cardinality for the TEST1 table is very low (1), an index scan is chosen for TEST2, and its cardinality is calculated as<br /><pre><br />num_rows * (1/num_distinct) <br />= round (50000 * (1/50000)) <br />= 1<br /></pre><br />From the above discussion it is concluded that stats are necessary on all columns. In this test case we gathered the stats with histograms, but the same result can be obtained without histograms, as the column stats get updated either way. <br /><br />To sum up, stats on all columns are required for an optimal execution plan.Sandeep Redkarhttp://www.blogger.com/profile/09850590013696861549noreply@blogger.com0tag:blogger.com,1999:blog-6890489880673875730.post-82729234842680471682008-05-18T13:00:00.009+05:302008-05-22T20:33:18.990+05:30Functions and Performance.Functions play an important role in a development environment. Functions can be implicit (defined by Oracle) or explicit (defined by the user in PL/SQL code). Developers use PL/SQL functions because they simplify large code. <br /><br />A function may accept values and must return a value. 
User-defined functions are easy to use, but they may degrade performance badly. Hence it is recommended to use functions only when they are unavoidable. Below, I demonstrate the performance benefit of a simple query over user-defined functions. <br /><br />If the function's condition is not satisfied for a row, the function returns a null value; the number of output rows is still the number of rows returned by the main query. For example, if the main query returns 1000 rows and the function's condition is satisfied for only 400 of them, the output will still have 1000 rows, with the function column showing 600 nulls and 400 values returned by the function. Hence, when merging a function into the main query as a join, we must use an outer join to preserve this behavior. The example below also uses a select sub-query that plays the role of the function. <br /><pre><br />SQL> create table fn_tables as select * from dba_tables;<br /><br />Table created.<br /><br />SQL> create table fn_indexes as select * from dba_indexes;<br /><br />Table created.<br /><br />SQL> create index fn_ind_tables on fn_tables (table_name);<br /><br />Index created.<br /><br />SQL> exec dbms_stats.gather_table_stats(user, 'FN_TABLES', cascade=>true);<br /><br />PL/SQL procedure successfully completed.<br /><br />SQL> exec dbms_stats.gather_table_stats(user, 'FN_INDEXES', cascade=>true);<br /><br />PL/SQL procedure successfully completed.<br /><br />SQL> create or replace function fn_test (tabowner varchar2, tabname varchar2)<br /> 2 return varchar2 is<br /> 3 tbsname varchar2(200);<br /> 4 begin<br /> 5 select tablespace_name into tbsname<br /> 6 from fn_tables<br /> 7 where owner=tabowner<br /> 8 and table_name=tabname;<br /> 9 return tbsname;<br /> 10 end;<br /> 11 /<br /><br />Function created.<br /></pre><br /><pre><br />SQL> set autot trace<br />SQL> select a.owner, a.index_name, a.status,<br /> 2 fn_test(a.owner, a.table_name) "TBS_NAME"<br /> 3 from fn_indexes a<br /> 4 /<br /><br />1072 rows selected.<br /><br /><br
/>Execution Plan<br />----------------------------------------------------------<br />Plan hash value: 2626245312<br /><br />--------------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |<br />--------------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 1072 | 51456 | 9 (0)| 00:00:01 |<br />| 1 | TABLE ACCESS FULL| FN_INDEXES | 1072 | 51456 | 9 (0)| 00:00:01 |<br />--------------------------------------------------------------------------------<br /><br /><br />Statistics<br />----------------------------------------------------------<br /> 1097 recursive calls<br /> 0 db block gets<br /><strong> 3308 consistent gets</strong><br /> 0 physical reads<br /> 0 redo size<br /> 36706 bytes sent via SQL*Net to client<br /> 1162 bytes received via SQL*Net from client<br /> 73 SQL*Net roundtrips to/from client<br /> 0 sorts (memory)<br /> 0 sorts (disk)<br /> 1072 rows processed<br /></pre><br /><pre><br />SQL> select a.owner, a.index_name, a.status,<br /> 2 (select tablespace_name from fn_tables b<br /> 3 where b.owner=a.owner<br /> 4 and a.table_name=b.table_name) "TBS_NAME"<br /> 5 from fn_indexes a<br /> 6 /<br /><br />1072 rows selected.<br /><br /><br />Execution Plan<br />----------------------------------------------------------<br />Plan hash value: 321380953<br /><br />-------------------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost| Time |<br />-------------------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 1072 | 51456 | 9| 00:00:01 |<br />|* 1 | TABLE ACCESS BY INDEX ROWID| FN_TABLES | 1 | 29 | 2| 00:00:01 |<br />|* 2 | INDEX RANGE SCAN | FN_IND_TABLES| 1 | | 1| 00:00:01 |<br />| 3 | TABLE ACCESS FULL | FN_INDEXES | 1072 | 51456 | 9| 00:00:01 |<br 
/>-------------------------------------------------------------------------------------<br /><br />Predicate Information (identified by operation id):<br />---------------------------------------------------<br /><br /> 1 - filter("B"."OWNER"=:B1)<br /> 2 - access("B"."TABLE_NAME"=:B1)<br /><br /><br />Statistics<br />----------------------------------------------------------<br /> 0 recursive calls<br /> 0 db block gets<br /><strong> 1573 consistent gets</strong><br /> 0 physical reads<br /> 0 redo size<br /> 36706 bytes sent via SQL*Net to client<br /> 1162 bytes received via SQL*Net from client<br /> 73 SQL*Net roundtrips to/from client<br /> 0 sorts (memory)<br /> 0 sorts (disk)<br /> 1072 rows processed<br /></pre><br /><pre><br />SQL> select a.owner, a.index_name, a.status, b.tablespace_name "TBS_NAME"<br /> 2 from fn_indexes a,<br /> 3 fn_tables b<br /> 4 where b.owner(+) = a.owner<br /> 5 and b.table_name(+) = a.table_name<br /> 6 /<br /><br />1072 rows selected.<br /><br /><br />Execution Plan<br />----------------------------------------------------------<br />Plan hash value: 893717710<br /><br />------------------------------------------------------------------------------------<br />| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |<br />------------------------------------------------------------------------------------<br />| 0 | SELECT STATEMENT | | 1072 | 82544 | 18 (6)| 00:00:01 |<br />|* 1 | HASH JOIN RIGHT OUTER| | 1072 | 82544 | 18 (6)| 00:00:01 |<br />| 2 | TABLE ACCESS FULL | FN_TABLES | 879 | 25491 | 8 (0)| 00:00:01 |<br />| 3 | TABLE ACCESS FULL | FN_INDEXES | 1072 | 51456 | 9 (0)| 00:00:01 |<br />------------------------------------------------------------------------------------<br /><br />Predicate Information (identified by operation id):<br />---------------------------------------------------<br /><br /> 1 - access("B"."OWNER"(+)="A"."OWNER" AND<br /> "B"."TABLE_NAME"(+)="A"."TABLE_NAME")<br /><br /><br />Statistics<br 
/>----------------------------------------------------------<br /> 0 recursive calls<br /> 0 db block gets<br /><strong> 135 consistent gets</strong><br /> 0 physical reads<br /> 0 redo size<br /> 36706 bytes sent via SQL*Net to client<br /> 1162 bytes received via SQL*Net from client<br /> 73 SQL*Net roundtrips to/from client<br /> 0 sorts (memory)<br /> 0 sorts (disk)<br /> 1072 rows processed<br /><br /></pre><br />Placing the above values in tabular format:<br /><br /><table border=1 cellpadding=2 cellspacing=1><br /><th>Query No.<th> Query Criteria <th> Cost <th> Recursive Calls <th> Consistent Gets<br /><tr><td> 1 <td> With Function <td> 9 <td> 1097 <td> 3308<br /><tr><td> 2 <td> With Sub-Query <td> 9 <td> 0 <td> 1573<br /><tr><td> 3 <td> With Simple Query <td> 18 <td> 0 <td> 135<br /></table><br /><br />As shown in the table, though the cost of the third query is higher, its number of consistent gets is reduced tremendously. When using the function, the recursive calls and consistent gets are very high. When using the sub-query, both values are reduced, but this query still performs poorly compared to the third query. <br /><br />Recently I received the following query from one of our customers. 
Due to confidentiality, I will not post the original query from the customer site; instead, I have constructed a similar query for this example.<br /><br /><pre><br />create or replace function get_minamount (cust number) <br />return number is<br />amt number;<br /><br />begin<br /> select min (amount) into amt <br /> from contract <br /> where customer=cust and status = 'a';<br /><br /> if (amt is null) then<br /> select min (amount) into amt <br /> from contract <br /> where customer=cust and status = 'd';<br /> end if;<br /><br /> if (amt is null) then<br /> select min (amount) into amt <br /> from contract <br /> where customer=cust and status = 's';<br /> end if;<br /><br /> Return amt;<br /><br />End;<br />/<br /></pre><br /><br />The main query is as follows.<br /><pre><br />select customer, name, address, <br /> get_minamount (customer) "MIN_AMT"<br />from customer_master <br />where custcode like 'Large%';<br /></pre><br /><br />The customer_master table has millions of records, and this query was taking around 8 hours to execute. The suggested rewrite of the query is as follows. <br /><br /><pre><br />select customer, name, address, min_amt, status<br />from (select a.customer, name, address, status, <br /> min(case when status in ('a','s','d') then amount end) <br /> over (partition by b.customer, b.status) "MIN_AMT", <br /> rank() over (partition by b.customer <br /> order by (case when status = 'a' and amount is not null then 1<br /> when status = 'd' and amount is not null then 2<br /> when status = 's' and amount is not null then 3 <br /> end), rownum) "RNK"<br /> from customer_master a, <br /> contract b<br /> where a.customer = b.customer(+))<br />where rnk = 1;<br /></pre><br /><br />After implementing the rewrite, the performance gain was very high: the modified query takes around 20 minutes to execute, against the original 8 hours. <br /><br />To sum up, user-defined functions may contribute to performance degradation of a query. 
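The rewrite works because the rank() expression encodes the same status-priority fallback ('a' first, then 'd', then 's') that the PL/SQL function performs procedurally, so the answer is computed in one pass over the join instead of up to three lookups per customer row. The equivalence of the two approaches can be sketched in plain Python with hypothetical data (the rows below are illustrative, not the customer's):

```python
# Hypothetical contract rows: (customer, status, amount).
contracts = [
    (1, 'a', 500), (1, 'a', 300), (1, 'd', 100),
    (2, 'd', 250), (2, 's', 90),
    (3, 's', 40),
]

def get_minamount(cust):
    """Mirror of the PL/SQL function: try status 'a', then 'd', then 's'."""
    for status in ('a', 'd', 's'):
        amts = [amt for c, s, amt in contracts if c == cust and s == status]
        if amts:
            return min(amts)
    return None

def get_minamount_oneshot(cust):
    """Mirror of the analytic rewrite: rank rows by status priority,
    then take the minimum amount within the best-ranked status."""
    prio = {'a': 1, 'd': 2, 's': 3}
    rows = [(prio[s], amt) for c, s, amt in contracts if c == cust and s in prio]
    if not rows:
        return None
    best = min(p for p, _ in rows)            # highest-priority status present
    return min(amt for p, amt in rows if p == best)

print(get_minamount(2))  # no status 'a' rows, so it falls back to 'd' -> 250
```

In the SQL version, the single outer join plus window functions replace the per-row PL/SQL calls, which is where the time saving comes from.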
A simple query gives major performance benefits over a query with functions. Query performance is broadly inversely proportional to the number of consistent gets, so to improve performance we must minimize consistent gets.Sandeep Redkarhttp://www.blogger.com/profile/09850590013696861549noreply@blogger.com0tag:blogger.com,1999:blog-6890489880673875730.post-11095012439952713322008-05-04T10:15:00.003+05:302008-05-04T10:46:54.019+05:30Migration from Filesystem to ASM Files.This is a continuation of my previous blog, in which we discussed interacting with ASM and OS database files. <br /><br />ASM is an integrated volume manager and file system for Oracle Database files. It simplifies and automates storage management, increases storage utilization, and delivers predictable performance. Because of this, many of us want to migrate filesystem databases to ASM. If the database is very large and critical, we may not be able to afford the migration downtime. In this blog, I discuss a procedure that dramatically reduces downtime during migration from a raw/OS filesystem to ASM using Oracle Data Guard and RMAN. This blog will also be helpful for creating a standby database, and it explains how Data Guard can help reduce the downtime of the migration. <br /><br />Following is the summarised procedure:<br /><br />1. Create a standby (Data Guard) of the production database.<br />2. Migrate the standby database to ASM.<br />3. Do the switchover to migrate the database to ASM. <br /><br />As the summary suggests, the migration is quite simple. The detailed procedure follows.<br /><br /><strong>PREPARE FOR STANDBY DATABASE</strong><br /><br />In this demonstration, I have named the primary instance “ORA10G” and the standby database “ORA10GSTD”.<br /><br />Create a backup of the database using RMAN. 
<br /><pre><br />RMAN> connect target /<br /><br />connected to target database: ORA10G (DBID=3970225046)<br /><br />RMAN> backup database include current controlfile for standby;<br /><br />Starting backup at 08-APR-08<br />using target database control file instead of recovery catalog<br />allocated channel: ORA_DISK_1<br />channel ORA_DISK_1: sid=154 devtype=DISK<br />channel ORA_DISK_1: starting full datafile backupset<br />channel ORA_DISK_1: specifying datafile(s) in backupset<br />input datafile fno=00001 name=D:\ORACLE\DATAFILES\ORA10G\SYSTEM01.DBF<br />input datafile fno=00002 name=D:\ORACLE\DATAFILES\ORA10G\UNDOTBS01.DBF<br />input datafile fno=00003 name=D:\ORACLE\DATAFILES\ORA10G\SYSAUX01.DBF<br />input datafile fno=00004 name=D:\ORACLE\DATAFILES\ORA10G\USERS01.DBF<br />channel ORA_DISK_1: starting piece 1 at 08-APR-08<br />channel ORA_DISK_1: finished piece 1 at 08-APR-08<br />piece handle=D:\ORACLE\ORA10GHOME\DATABASE\01JDBCU5_1_1 tag=TAG20080408T230957 comment=NONE<br />channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56<br />channel ORA_DISK_1: starting full datafile backupset<br />channel ORA_DISK_1: specifying datafile(s) in backupset<br />including standby control file in backupset<br />including current SPFILE in backupset<br />channel ORA_DISK_1: starting piece 1 at 08-APR-08<br />channel ORA_DISK_1: finished piece 1 at 08-APR-08<br />piece handle=D:\ORACLE\ORA10GHOME\DATABASE\02JDBCVT_1_1 tag=TAG20080408T230957 comment=NONE<br />channel ORA_DISK_1: backup set complete, elapsed time: 00:00:05<br />Finished backup at 08-APR-08<br /></pre><br /><br />Make the tnsnames.ora changes on both the production and standby servers so the two servers can communicate.<br /><br />Create the parameter file, copy it to the standby server, and make the appropriate changes for the standby database.<br /><pre><br />SQL> create pfile from spfile;<br /></pre><br />We assume the ASM instance is already created, mounted, and ready for use. 
We are using a diskgroup named ORADG. Also make sure this diskgroup has sufficient space available.<br /><br />Following are the standby parameters we have to modify.<br /><br />Edit the control_files parameter to an appropriate location. This location should be in the ASM diskgroup. <br /><pre><br />control_files = '+ORADG/ORA10G/controlfiles/control01.ctl'<br /></pre><br />Add or edit the DB_UNIQUE_NAME parameter to uniquely identify the database.<br /><pre><br />db_unique_name = ORA10GSTD ---- Instance name of Standby.<br /></pre><br />Add or edit the LOG_ARCHIVE_CONFIG parameter to identify the primary and standby databases by their unique names. The parameter value lists the primary and standby instance names.<br /><pre> <br />log_archive_config = 'dg_config=(ORA10G,ORA10GSTD)'<br /></pre><br />Set the following parameters to point to the ASM diskgroup. <br /><pre><br />db_create_file_dest = '+ORADG'<br />db_create_online_log_dest_1 = '+ORADG'<br /></pre><br />The other parameters that need to change are – <br /><pre><br />instance_name = ORA10gSTD<br />fal_server = ora10g --- TNSNAMES entry for Primary<br />fal_client = ora10gSTD --- TNSNAMES entry for Standby<br /></pre><br />Create the password file.<br /><pre><br />$ orapwd file=$ORACLE_HOME/dbs/orapw<SID> password=<SYS PWD><br /></pre><br /><strong>INITIATE THE STANDBY DATABASE IN ASM</strong><br /><br />After confirming the ASM instance is up and the parameter and password files are prepared, we now initiate the standby database.<br /><pre><br />SQL> startup force nomount<br />ORACLE instance started.<br /><br />Total System Global Area 301989888 bytes<br />Fixed Size 1248672 bytes<br />Variable Size 96469600 bytes<br />Database Buffers 201326592 bytes<br />Redo Buffers 2945024 bytes<br /></pre><br />RMAN provides a single command that initiates the standby database using the source database backup. 
Connect to the production database using RMAN and initiate standby database.<br /><pre><br />C:\> rman target sys/oracle@ora10g<br /><br />Recovery Manager: Release 10.2.0.1.0 - Production <br /><br />Copyright (c) 1982, 2005, Oracle. All rights reserved.<br /><br />connected to target database: ORA10G (DBID=3970225046)<br /><br />RMAN> connect auxiliary /<br /><br />connected to auxiliary database: ORA10G (not mounted)<br /><br />RMAN> duplicate target database for standby;<br /><br />Starting Duplicate Db at 08-APR-08<br />using target database control file instead of recovery catalog<br />allocated channel: ORA_AUX_DISK_1<br />channel ORA_AUX_DISK_1: sid=157 devtype=DISK<br /><br />contents of Memory Script:<br />{<br /> restore clone standby controlfile;<br /> sql clone 'alter database mount standby database';<br />}<br />executing Memory Script<br /><br />Starting restore at 08-APR-08<br />using channel ORA_AUX_DISK_1<br /><br />channel ORA_AUX_DISK_1: starting datafile backupset restore<br />channel ORA_AUX_DISK_1: restoring control file<br />channel ORA_AUX_DISK_1: reading from backup piece D:\ORACLE\ORA10GHOME\DATABASE\02JDBCVT_1_1<br />channel ORA_AUX_DISK_1: restored backup piece 1<br />piece handle=D:\ORACLE\ORA10GHOME\DATABASE\02JDBCVT_1_1 tag=TAG20080408T230957<br />channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:17<br />output filename=+ORADG/ora10g/controlfiles/control01.ctl<br />Finished restore at 08-APR-08<br /><br />sql statement: alter database mount standby database<br />released channel: ORA_AUX_DISK_1<br /><br />contents of Memory Script:<br />{<br /> set newname for clone tempfile 1 to new;<br /> switch clone tempfile all;<br /> set newname for clone datafile 1 to new;<br /> set newname for clone datafile 2 to new;<br /> set newname for clone datafile 3 to new;<br /> set newname for clone datafile 4 to new;<br /> restore<br /> check readonly<br /> clone database<br /> ;<br />}<br />executing Memory Script<br /><br />executing 
command: SET NEWNAME<br /><br />renamed temporary file 1 to +ORADG in control file<br /><br />executing command: SET NEWNAME<br /><br />executing command: SET NEWNAME<br /><br />executing command: SET NEWNAME<br /><br />executing command: SET NEWNAME<br /><br />Starting restore at 08-APR-08<br />allocated channel: ORA_AUX_DISK_1<br />channel ORA_AUX_DISK_1: sid=157 devtype=DISK<br /><br />channel ORA_AUX_DISK_1: starting datafile backupset restore<br />channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set<br />restoring datafile 00001 to +ORADG<br />restoring datafile 00002 to +ORADG<br />restoring datafile 00003 to +ORADG<br />restoring datafile 00004 to +ORADG<br />channel ORA_AUX_DISK_1: reading from backup piece D:\ORACLE\ORA10GHOME\DATABASE\01JDBCU5_1_1<br />channel ORA_AUX_DISK_1: restored backup piece 1<br />piece handle=D:\ORACLE\ORA10GHOME\DATABASE\01JDBCU5_1_1 tag=TAG20080408T230957<br />channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:46<br />Finished restore at 08-APR-08<br /><br />contents of Memory Script:<br />{<br /> switch clone datafile all;<br />}<br />executing Memory Script<br /><br />datafile 1 switched to datafile copy<br />input datafile copy recid=5 stamp=651539654 filename=+ORADG/ora10gstd/datafile/system.257.651539549<br />datafile 2 switched to datafile copy<br />input datafile copy recid=6 stamp=651539654 filename=+ORADG/ora10gstd/datafile/undotbs1.258.651539549<br />datafile 3 switched to datafile copy<br />input datafile copy recid=7 stamp=651539654 filename=+ORADG/ora10gstd/datafile/sysaux.259.651539551<br />datafile 4 switched to datafile copy<br />input datafile copy recid=8 stamp=651539654 filename=+ORADG/ora10gstd/datafile/users.260.651539551<br />Finished Duplicate Db at 08-APR-08<br /></pre><br />We have successfully restored the files from production to the standby database in the ASM diskgroup. Modify the log_archive_dest_2 parameter so that archive log files are transferred from primary to standby. 
Then place the standby database in managed recovery mode so the archive logs are applied automatically.<br /><pre><br />SQL> recover managed standby database disconnect;<br /></pre><br /><strong>DO SWITCHOVER TO MIGRATE DATABASE TO ASM</strong><br /><br />This is the final step of the migration. We will do the switchover to migrate the database to ASM. First check that both primary and standby are in sync. <br /><br />On Primary Database – <br /><pre><br />Select max(sequence#) "MAXSEQ" from v$log_history;<br /><br />MAXSEQ<br />---------<br /> 124<br /></pre><br />On Standby Database – <br /><pre><br />Select max(sequence#) "MAXSEQ" from v$log_history;<br /><br />MAXSEQ<br />---------<br /> 124<br /></pre><br />As both primary and standby databases are in sync, we can do the switchover on the standby. <br /><pre><br />SQL> alter database commit to switchover to primary;<br /><br />Database altered.<br /><br />SQL> shut immediate;<br />ORA-01109: database not open<br /><br /><br />Database dismounted.<br />ORACLE instance shut down.<br /><br /><br />SQL> startup<br />ORACLE instance started.<br /><br />Total System Global Area 301989888 bytes<br />Fixed Size 1248672 bytes<br />Variable Size 96469600 bytes<br />Database Buffers 201326592 bytes<br />Redo Buffers 2945024 bytes<br />Database mounted.<br />Database opened.<br /></pre><br />We have completed the ASM migration.Sandeep Redkarhttp://www.blogger.com/profile/09850590013696861549noreply@blogger.com0tag:blogger.com,1999:blog-6890489880673875730.post-69782016676653253062008-04-20T19:14:00.012+05:302008-04-20T21:40:47.577+05:30Manual Switchover.Data Guard maintains the standby databases as transactionally consistent copies of the production database. 
Then, if the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, minimizing the downtime associated with the outage.<br /><br />Due to low network bandwidth, or when using Standard Edition, we maintain a manual Data Guard setup in which we transfer the archive log files manually to the DR site and then apply them there. In such cases, it is very difficult to use Oracle's internal switchover mechanism. This document is prepared for customers who have a manual Data Guard setup and want to do a switchover.<br /><br />This document assumes the following pre-requisites.<br /><br /> All Data Guard pre-requisites (same OS and Oracle version, etc.) are fulfilled.<br /> A manual Data Guard setup is in place, with a standby database built from a standby controlfile.<br /> All the archive logs are properly applied on the DR site.<br /><br /><strong>Switch-over Process</strong><br /><br />1. Shut down the primary database.<br />2. Make sure all the archive logs are transferred and applied on the DR site.<br />3. Shut down the standby database.<br />4. Copy the controlfile and online redo log files of the primary database to the DR site.<br />5. Copy the standby controlfile from the DR site to the primary site.<br />6. Start up the new primary database (the current standby). Here we may have to do recovery to apply the changes in the online redo log files. This must be a complete recovery.<br />7. Start up the new standby database (the current primary).<br />8. Check the databases & their roles.<br /><br />This completes the switchover activity, and the same steps can be followed for the switch-back activity.<br /><br />Following is the demonstration of the above-mentioned steps. 
We will create a table and check its availability after the switchover.<br /><br /><strong>Primary Database –</strong><br /><br /><pre>SQL> create table std_test_1 as select * from dba_objects;<br /><br /><br />Table created.<br /><br /><br />SQL> insert into std_test_1 select * from std_test_1;<br /><br /><br />6165 rows created.<br /><br /><br />SQL> insert into std_test_1 select * from std_test_1;<br /><br />12330 rows created.<br /><br />SQL> commit;<br /><br />Commit complete.<br /></pre><br />Note down the current sequence number of the primary database.<br /><pre><br />SQL> select max(sequence#) from v$log_history;<br /><br />MAX(SEQUENCE#)<br />--------------<br /> 235<br /></pre><br />Now we will check the status of the standby database (its current sequence number).<br /><pre><br />SQL> select max(sequence#) from v$log_history;<br /><br />MAX(SEQUENCE#)<br />--------------<br /> 233<br /></pre><br />The standby database is lagging behind the primary database, so we will recover the<br />standby database and sync it with the primary.<br /><pre><br />SQL> recover standby database;<br />ORA-00279: change 50652 generated at 04/02/2008 22:32:37 needed for thread 1<br />ORA-00289: suggestion : D:\ORACLE\DATAFILES\ORA9ISTD\ARCH\ORA9I_234.DBF<br />ORA-00280: change 50652 for thread 1 is in sequence #234<br /><br />Specify log: {<ret>=suggested filename AUTO CANCEL}<br />ORA-00279: change 50750 generated at 04/02/2008 22:36:08 needed for thread 1<br />ORA-00289: suggestion : D:\ORACLE\DATAFILES\ORA9ISTD\ARCH\ORA9I_235.DBF<br />ORA-00280: change 50750 for thread 1 is in sequence #235<br />ORA-00278: log file 'D:\ORACLE\DATAFILES\ORA9ISTD\ARCH\ORA9I_234.DBF' no longer needed for this recovery<br /><br />Specify log: {<ret>=suggested filename AUTO CANCEL}<br />ORA-00279: change 50759 generated at 04/02/2008 22:36:14 needed for thread 1<br />ORA-00289: suggestion : D:\ORACLE\DATAFILES\ORA9ISTD\ARCH\ORA9I_236.DBF<br />ORA-00280: change 50759 for thread 1 is in sequence #236<br />ORA-00278: 
log file 'D:\ORACLE\DATAFILES\ORA9ISTD\ARCH\ORA9I_235.DBF' no longer needed for this recovery<br /><br />Specify log: {<ret>=suggested filename AUTO CANCEL}<br />ORA-00308: cannot open archived log<br />'D:\ORACLE\DATAFILES\ORA9ISTD\ARCH\ORA9I_236.DBF'<br />ORA-27041: unable to open file<br />OSD-04002: unable to open file<br />O/S-Error: (OS 2) The system cannot find the file specified.<br /></pre><br />Check the current status of the standby database.<br /><pre><br />SQL> select max(sequence#) from v$log_history;<br /><br />MAX(SEQUENCE#)<br />--------------<br /> 235<br /></pre><br />Now we are ready to proceed with the switchover process. To start, we will first<br />shut down both the primary and standby databases.<br /><br /><strong>Primary –</strong><br /><pre><br />SQL> shut immediate;<br />Database closed.<br />Database dismounted.<br />ORACLE instance shut down.<br /></pre><br /><strong>Standby –</strong><br /><pre><br />SQL> shut immediate;<br />ORA-01109: database not open<br /><br />Database dismounted.<br />ORACLE instance shut down.<br /></pre><br />We are doing a manual switchover, so we need to copy the control and<br />redo log files from the primary to the standby database. 
At the same time, we have to<br />copy the controlfile from the standby database location to the primary database.<br /><br />Assuming the copy process is over, we will start up the current primary (former<br />standby) database.<br /><pre><br />SQL> startup nomount<br />ORACLE instance started.<br /><br />Total System Global Area 101784276 bytes<br />Fixed Size 453332 bytes<br />Variable Size 75497472 bytes<br />Database Buffers 25165824 bytes<br />Redo Buffers 667648 bytes<br /><br />SQL><br />SQL> alter database mount;<br /><br />Database altered.<br /><br />SQL> alter database open;<br /><br />alter database open<br />*<br />ERROR at line 1:<br />ORA-01113: file 1 needs media recovery<br />ORA-01110: data file 1: 'D:\ORACLE\DATAFILES\ORA9ISTD\SYSTEM01.DBF'<br /></pre><br />Here we get a recovery error because this was the standby database; the changes<br />lying in the current redo logs (which we have copied) are yet to be applied on the current<br />primary. The important thing to note here is that this recovery must be a complete<br />recovery. After this, we will open the database and check its status.<br /><pre><br />SQL> recover database;<br /><br />Media recovery complete.<br /><br />SQL> alter database open;<br /><br />Database altered.<br /><br />SQL> select max(sequence#) from v$log_history;<br /><br />MAX(SEQUENCE#)<br />--------------<br /> 235<br /></pre><br />As shown above, the recovery has completed and the current primary (former<br />standby) database is up. We will check for the test table we created.<br /><pre><br />SQL> select count(1) from std_test_1;<br /><br /> COUNT(1)<br />----------<br /> 24660<br /></pre><br />We have successfully completed one phase of the switchover process. 
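<br /><br />Before proceeding, it is also worth confirming that the new primary has actually taken over the PRIMARY role. A quick sanity check (the output shown is what we would expect at this point, assuming the switchover so far has succeeded):<br /><pre><br />SQL> select open_mode, database_role from v$database;<br /><br />OPEN_MODE DATABASE_ROLE<br />---------- ----------------<br />READ WRITE PRIMARY<br /></pre><br />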
Now we<br />will start up the current standby (former primary) database to proceed further.<br /><pre><br />SQL> startup nomount<br />ORACLE instance started.<br /><br />Total System Global Area 101784276 bytes<br />Fixed Size 453332 bytes<br />Variable Size 75497472 bytes<br />Database Buffers 25165824 bytes<br />Redo Buffers 667648 bytes<br /><br />SQL> alter database mount standby database;<br /><br />Database altered.<br /><br />SQL> select max(sequence#) from v$log_history;<br /><br />MAX(SEQUENCE#)<br />--------------<br /> 235<br /><br />SQL> select open_mode, database_role from v$database;<br /><br />OPEN_MODE DATABASE_ROLE<br />---------- ----------------<br />MOUNTED PHYSICAL STANDBY<br /></pre><br />The primary and standby databases are in sync. We will generate archive log<br />files at the primary site and apply them at the standby database.<br /><pre><br />SQL> alter system switch logfile;<br /><br />System altered.<br /><br />SQL> alter system switch logfile;<br /><br />System altered.<br /><br />SQL> alter system switch logfile;<br /><br />System altered.<br /><br />SQL> select max(sequence#) from v$log_history;<br /><br />MAX(SEQUENCE#)<br />--------------<br /> 238<br /><br />SQL> select open_mode, database_role from v$database;<br /><br />OPEN_MODE DATABASE_ROLE<br />---------- ----------------<br />READ WRITE PRIMARY<br /></pre><br />We have generated three archive log files here, and now we will copy and apply<br />them at the standby database.<br /><pre><br />SQL> recover standby database;<br />ORA-00279: change 50759 generated at 04/02/2008 22:36:14 needed for thread 1<br />ORA-00289: suggestion : D:\ORACLE\DATAFILES\ORA9I\ARCH\ORA9I_236.DBF<br />ORA-00280: change 50759 for thread 1 is in sequence #236<br /><br />Specify log: {<ret>=suggested filename AUTO CANCEL}<br />ORA-00279: change 51098 generated at 04/02/2008 22:49:37 needed for thread 1<br />ORA-00289: suggestion : D:\ORACLE\DATAFILES\ORA9I\ARCH\ORA9I_237.DBF<br />ORA-00280: change 
51098 for thread 1 is in sequence #237<br />ORA-00278: log file 'D:\ORACLE\DATAFILES\ORA9I\ARCH\ORA9I_236.DBF' no<br />longer needed for this recovery<br /><br />Specify log: {<ret>=suggested filename AUTO CANCEL}<br />ORA-00279: change 51119 generated at 04/02/2008 22:49:50 needed for thread 1<br />ORA-00289: suggestion : D:\ORACLE\DATAFILES\ORA9I\ARCH\ORA9I_238.DBF<br />ORA-00280: change 51119 for thread 1 is in sequence #238<br />ORA-00278: log file 'D:\ORACLE\DATAFILES\ORA9I\ARCH\ORA9I_237.DBF' no<br />longer needed for this recovery<br /><br />Specify log: {<ret>=suggested filename AUTO CANCEL}<br />ORA-00279: change 51126 generated at 04/02/2008 22:49:51 needed for thread 1<br />ORA-00289: suggestion : D:\ORACLE\DATAFILES\ORA9I\ARCH\ORA9I_239.DBF<br />ORA-00280: change 51126 for thread 1 is in sequence #239<br />ORA-00278: log file 'D:\ORACLE\DATAFILES\ORA9I\ARCH\ORA9I_238.DBF' no<br />longer needed for this recovery<br /><br />Specify log: {<ret>=suggested filename AUTO CANCEL}<br />ORA-00308: cannot open archived log 'D:\ORACLE\DATAFILES\ORA9I\ARCH\ORA9I_239.DBF'<br />ORA-27041: unable to open file<br />OSD-04002: unable to open file<br />O/S-Error: (OS 2) The system cannot find the file specified.<br /></pre><br />The recovery has completed successfully, and with it our switchover<br />process.<br /><br />It is important to note that for a standby database it is mandatory to have the database in force logging mode. This feature was introduced by Oracle in version 9i. For pre-9i versions, check the <strong>UNRECOVERABLE_TIME </strong>column of the <strong>V$DATAFILE</strong> view. This value must be less than the standby database creation time for all existing datafiles. If any datafile has a value greater than the standby creation time, we need to restore a backup of that datafile and continue with recovery. The unrecoverable_time value is updated by Oracle for any nologging operation. 
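<br /><br />As a sketch, the pre-9i check described above could look like the following (the cutoff date is an assumption – substitute the actual creation time of your standby):<br /><pre><br />SQL> select file#, name, unrecoverable_time<br />  2  from v$datafile<br />  3  where unrecoverable_time > to_date('02-APR-2008 20:00','DD-MON-YYYY HH24:MI');<br /><br />-- Any row returned here identifies a datafile that had a nologging<br />-- operation after the standby was created and must be re-backed up.<br /><br />-- From 9i onwards, nologging changes can be prevented altogether:<br />SQL> alter database force logging;<br /></pre><br />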
To avoid such nologging operations, ensure that <strong>all objects are in logging mode.</strong><br /><br />Note – The above-mentioned steps can be used for the switchback process as well.Sandeep Redkarhttp://www.blogger.com/profile/09850590013696861549noreply@blogger.com2tag:blogger.com,1999:blog-6890489880673875730.post-3618864202907436542008-03-02T12:11:00.010+05:302008-03-02T13:05:09.786+05:30ASM File Handling.Automatic Storage Management (ASM) is a feature of Oracle Database 10g that provides integrated cluster file system and volume management capabilities at no additional cost. ASM provides a foundation for highly efficient storage management with kernelized asynchronous I/O, direct I/O, redundancy, striping, and an easy way to manage storage.<br /><br />ASM disk groups are not visible outside the database for regular file system administration tasks such as copying and creating directories. Backing up or migrating the database therefore always seems to be a difficult task, but in reality it is not. Oracle has provided various methods to deal with ASM files, discussed in the following sections –<br /><br /><strong>ASM FILES HANDLING USING FTP<br /></strong><br />Starting with Oracle 10g Release 2, Oracle provides an option to transfer files from ASM to locations outside the diskgroups via FTP and an HTTP web browser.<br /><br />To access ASM files, the file-mapping functionality provided by the Oracle XML DB feature can be used. The following is the method to set up FTP and HTTP access and transfer ASM files to regular OS files. FTP uses 21 as its default port and HTTP uses port 80. ASM access does not use the default services, so it is required to use two different unused ports; for example, we will use port 8080 for HTTP and port 2100 for FTP.<br /><br />1. Install the XML DB feature. 
<br /><br /><blockquote> <pre> # cd $ORACLE_HOME/rdbms/admin<br /> # sqlplus "/ as sysdba"<br /><br /> SQL> @catqm.sql XDB SYSAUX TEMP<br /><br /> Here XDB is the password for the XDB user.<br /> SYSAUX is the default tablespace for the xdb user.<br /> TEMP is the temporary tablespace for the xdb user.<br /><br /> SQL> @catxdbdbca 2100 8080<br /><br /> Here 2100 is the FTP port.<br /> 8080 is the HTTP port. </pre><br /> Check for any invalid XDB objects and compile them, if any. <br /><br /> <pre><br /> SQL> select object_name, object_type, status <br /> from dba_objects<br /> where owner='XDB' and status='INVALID';<br /><br /> SQL> select comp_name, status, version from dba_registry<br /> where comp_name='Oracle XML Database';</pre></blockquote><br /><br />2. Configure FTP and HTTP Ports.<br /><br /> <blockquote><pre> SQL> execute dbms_xdb.sethttpport(8080);<br /> SQL> execute dbms_xdb.setftpport(2100);</pre><br /><br /> You can also check the ports being used.<br /><br /> <pre> Select dbms_xdb.gethttpport(), dbms_xdb.getftpport() <br /> from dual;</pre></blockquote><br /><br />3. 
Listener and Instance modifications<br /><br /> <blockquote>For a Single Instance – <br /><pre><br /> SQL> alter system set dispatchers = '(PROTOCOL=TCP) <br /> (SERVICE=<sid>XDB)' scope=both;</pre><br /><br /> For a RAC Instance – <br /><br /> <pre> SQL> alter system set dispatchers = '(PROTOCOL=TCP) <br /> (SERVICE=<sid1>XDB)' scope=both sid='INSTANCE1';<br /><br /> SQL> alter system set dispatchers = '(PROTOCOL=TCP) <br /> (SERVICE=<sid2>XDB)' scope=both sid='INSTANCE2';</pre><br /> <br /> Restart the listener.<br /><br /> <pre> # lsnrctl stop<br /> # lsnrctl start</pre><br /><br /> The listener should now show the HTTP and FTP port entries.<br /><br /> <pre># lsnrctl status<br /><br /> ………… Truncated some entries ……………<br /><br /> Listening Endpoints Summary...<br /><br /> (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=test)(PORT=1521)))<br /> (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))<br /> (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=test)(PORT=8080))(Presentation=HTTP)(Session=RAW))<br /> (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=test)(PORT=2100))(Presentation=FTP)(Session=RAW))</pre></blockquote><br /><br />4. Use FTP at the OS level to interact with ASM files.<br /><br /> <blockquote><pre> C:\> ftp <br /> ftp> open test 2100 ---- Here test is the server name and 2100 is the FTP port.<br /> Username: system ---- system is the database user.<br /> Password: ****** ---- Password of the database system user. <br /><br /> ftp> cd /sys/asm</pre><br /><br /> Now change to the directory containing the files, and the ls, get, and put commands can be used as in a normal OS-level FTP session.</blockquote><br /><br /><strong>DBMS_FILE_TRANSFER Utility</strong><br /><br />Oracle also provides another feature to move files: the DBMS_FILE_TRANSFER utility, introduced to copy files to and from ASM. 
If you want to copy a file that belongs to a tablespace, you must put that tablespace into read-only mode or take it offline. Because of this limitation, you cannot copy the data files associated with the system, undo, sysaux or temporary tablespaces. The utility treats the file as a binary file during the copy; character-set conversion is not performed.<br /><br />The following is the method to transfer files from the file system to ASM.<br /><br />List the file to copy.<br /><br /><pre>SQL> select file_name from dba_data_files <br /> 2 where tablespace_name='USERS';</pre><br /><br />Create directories for source and destination.<br /><br /><pre>SQL> create directory source_osfs as '/oracle/datafile';<br />SQL> create directory dest_asmfs as '+asmdsk1/orcl/datafile';<br />SQL> grant read, write on source_osfs to testuser;<br />SQL> grant read, write on dest_asmfs to testuser;</pre><br /><br />Connect to the user and transfer the file.<br /><br /><pre>SQL> connect testuser/testuser@ora10gdb<br />SQL> alter tablespace users read only;<br /> -OR-<br />SQL> alter tablespace users offline;</pre><br /><br /><pre>SQL> exec dbms_file_transfer.copy_file (source_directory_object=>'source_osfs', source_file_name=>'users01.dbf', destination_directory_object=>'dest_asmfs', destination_file_name=>'users01.dbf');</pre><br /><br />The file has been copied, so the datafile path can now be changed using the "alter database rename file" command. Finally, bring the tablespace back online for use.<br /><br /><pre>SQL> alter tablespace users read write;<br /> -OR-<br />SQL> alter tablespace users online;</pre><br /><br />We can use the get_file and put_file procedures of the dbms_file_transfer package to copy files across databases using a database link.<br /><br />Limitations – <br /><br />1. The file that we want to copy must be a multiple of 512 bytes; otherwise we get the following error. 
Also, the maximum size of a file to transfer is 2 terabytes.<br /><br /><pre>ERROR at line 1: <br />ORA-19505: failed to identify file "/oracle/datafile/users01.dbf" <br />ORA-27046: file size is not a multiple of logical block size <br />Additional information: 1 <br />ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 84 <br />ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 193 <br />ORA-06512: at line 2 </pre><br /><br />2. Up to Oracle 10g Release 1, only the following transfers are allowed:<br /><br />ASM to ASM.<br />OS file to OS file.<br /><br />This restriction is removed in Oracle 10g Release 2; DBMS_FILE_TRANSFER has been enhanced to also support the following combinations:<br /><br />ASM to OS file.<br />OS file to ASM.<br /><br /><strong>RMAN & ASM FILES</strong><br /><br />So far we have covered two methods to copy files between ASM and the OS file system. Finally, how can we forget the RMAN utility for interacting with ASM files? The following is the method to copy files using RMAN; we will copy a file from one diskgroup to another.<br /><br /><blockquote>1. Identify the file.<br /><br /><pre>SQL> select file_name from dba_data_files <br /> 2 where tablespace_name='TBS_TEST';<br /><br />FILE_NAME<br />---------------------------------------------------<br />+ORA_DG/oraprod/datafile/tbs_test.1442.642809147</pre><br /><br />2. Take the file offline and copy it using the RMAN utility.<br /><br /><pre>SQL> alter database datafile <br /> 2 '+ORA_DG/oraprod/datafile/tbs_test.1442.642809147' offline;<br /><br /># rman target /<br /><br />RMAN> COPY DATAFILE '+ORA_DG/oraprod/datafile/tbs_test.1442.642809147' TO '+ORA_DG_NEW';</pre><br /><br />3. Now that the file has been copied, we can update the dictionary with the new ASM diskgroup location.<br /><br /><pre>SQL> ALTER DATABASE RENAME FILE<br /> 2 '+ORA_DG/oraprod/datafile/tbs_test.1442.642809147'<br /> 3 TO '+ORA_DG_NEW/oraprod/datafile/tbs_test.264.59829765';</pre><br /><br />We got the destination file name from the RMAN copy output. 
<br /><br />4. Use RMAN to switch the datafile to the copy.<br /><br /><pre>RMAN> switch datafile '+ORA_DG_NEW/oraprod/datafile/tbs_test.264.59829765' to copy;</pre><br /><br />5. Finally, recover the file and bring it online.<br /><br /><pre>SQL> recover datafile '+ORA_DG_NEW/oraprod/datafile/tbs_test.264.59829765';<br /><br />SQL> alter database datafile '+ORA_DG_NEW/oraprod/datafile/tbs_test.264.59829765' online;</pre><br /><br />6. Delete the old file from the ASM diskgroup. <br /><br /><pre>$ export ORACLE_SID=+ASM<br />$ sqlplus "/ as sysdba" <br />SQL> alter diskgroup ora_dg drop file '+ORA_DG/oraprod/datafile/tbs_test.1442.642809147';</pre></blockquote>Sandeep Redkarhttp://www.blogger.com/profile/09850590013696861549noreply@blogger.com0tag:blogger.com,1999:blog-6890489880673875730.post-90200028807858732932007-12-29T17:08:00.000+05:302007-12-29T17:45:31.096+05:30Index Internals.Tables can grow large, and when they do, it becomes difficult for users to quickly find the data they need. Oracle offers indexes as a method of speeding up database performance when accessing tables with a lot of data. The b-tree index is the traditional and most useful indexing mechanism. It stores the data in a tree-like structure.<br /><br />The b-tree index structure is made up of a root block, branch blocks and leaf blocks. The root block is the entry point where a search for data in the index starts. An index contains one and only one root block, and the root block is always physically the same block. The branch blocks are the next level below the root block and hold pointers to the leaf blocks of the index.<br /><br />The leaf blocks are the lowest level of the index and contain the indexed column values and the corresponding ROWIDs. The leaf blocks form a doubly linked list: each leaf block is linked to the blocks on its left and right, making it possible to scan in both directions through a range of values in the index. 
The index entries are always kept in order. <br /><br />An Oracle index always maintains a balanced structure. To understand this, it is necessary to understand the block split operation. There are two types of block splits.<br /><br /><strong>50-50 BLOCK SPLIT </strong><br /><br />A 50-50 block split can occur on insert of a non-maximum value when the corresponding block is full. (An update of an indexed column is internally a delete followed by an insert.) The split operation steps are as follows –<br /><br />1. Request a new block from the free-list/bitmap structure (depending on whether the tablespace is non-ASSM or ASSM).<br />2. Redistribute the existing block so that the upper half of its entries move to the newly requested block.<br />3. Insert the column value in the appropriate block.<br />4. Update the leaf block pointers so that the previously full block's right pointer points to the new block, and the new block's right pointer points to the block that was formerly on the right of the previously full block.<br />5. Finally, update the branch block to reference the previously full block and add a new entry to point to the new leaf block.<br /><br />A similar operation applies to branch and root block splits, though these are more expensive as they involve updating the corresponding next-level pointers as well. A root block split allocates two new blocks, the data is evenly distributed between them, and the root block is updated to point to these new blocks; this is why the root block always remains physically the same block. A root block split increases the height of the index by 1. <br /><br /><strong>90-10 BLOCK SPLIT</strong><br /><br />A 90-10 block split can occur when the new indexed column entry is the maximum value. 
In this case, a new block is requested and the corresponding branch blocks are updated accordingly.<br /><br /><strong>Can deleted space of an index be reused?</strong><br /><br />There are multiple answers to this question – <br /> 1. The index will never use deleted space.<br /> 2. The index will use deleted space only if the same column value is inserted again.<br /><br />But in reality, both of the above statements are myths. The index will use deleted space even when the newly inserted value is not the same.<br /><br /><strong>Test Case – </strong><br />To validate the above statement, we will create a test table and insert some records into it. The temp table here contains sequential values.<br /><br /><pre><br />SQL> create table temp as select rownum "A" from dba_objects a, dba_objects;<br /><br />Table created.</pre><pre><br /><br />SQL> create table test (a number, b number);<br /><br />Table created.</pre><pre><br />SQL> create index ind_test on test(a);<br /><br />Index created.</pre><pre><br />SQL> insert into test select a, a+50000 from temp where a >10000 and a<=20000;<br /><br />10000 rows created.</pre><pre><br />SQL> commit;<br /><br />Commit complete.</pre><pre><br />SQL> analyze index ind_test validate structure;<br /><br />Index analyzed.</pre><pre><br /><br />SQL> select name, lf_rows, del_lf_rows, used_space <br /> From index_stats where name='IND_TEST';</pre><pre><br />NAME LF_ROWS DEL_LF_ROWS USED_SPACE<br />------------------------------ ---------- ----------- ----------<br />IND_TEST 10000 0 160127<br /></pre><br /><br />As the LF_ROWS column shows, there are 10000 entries present in the index. 
Now check the values after deleting some records.<br /><br /><pre><br />SQL> delete from test where a > 14000 and a <= 16000;<br /><br />2000 rows deleted.</pre><pre><br /><br />SQL> commit;<br /><br />Commit complete.</pre><pre><br /><br />SQL> analyze index ind_test validate structure;</pre><pre><br /><br />Index analyzed.</pre><pre><br /><br />SQL> select name, lf_rows, del_lf_rows, used_space from index_stats <br /> where name='IND_TEST';</pre><pre><br /><br />NAME LF_ROWS DEL_LF_ROWS USED_SPACE<br />---------- ---------- ----------- ----------<br />IND_TEST 10000 2000 160127</pre><pre><br /><br />SQL> select count(*) from test;<br /><br /> COUNT(*)<br />----------<br /> 8000<br /></pre><br /><br />Here the number of rows (LF_ROWS – DEL_LF_ROWS) = (10000-2000) = 8000, which matches the row count of the test table. <br /><br />To check whether the index reuses deleted space, we will add new records whose values differ from both the current and the deleted rows.<br /><br /><pre><br />SQL> insert into test select a, a+50000 from temp where a > 20000 and a<=22000;<br /><br />2000 rows created.</pre><pre><br /><br />SQL> commit;<br /><br />Commit complete.</pre><pre><br /><br />SQL> analyze index ind_test validate structure;<br /><br />Index analyzed.</pre><pre><br /><br /><br />SQL> select name, lf_rows, del_lf_rows, used_space from index_stats <br /> where name='IND_TEST';</pre><pre><br /><br />NAME LF_ROWS DEL_LF_ROWS USED_SPACE<br />---------- ---------- ----------- ----------<br />IND_TEST 10500 500 168146</pre><pre><br /><br />SQL><br />SQL> select count(1) from test;<br /><br /> COUNT(1)<br />----------<br /> 10000<br /><br /></pre><br /><br />We added 2000 rows in total. As the del_lf_rows column shows, only 500 deleted entries remain, which means that out of 2000 deleted entries, the index has reused the space of 1500. Now check the status after re-inserting some of the previously deleted values. 
We had deleted records with values between 14000 and 16000, and now we will insert 500 values from that range.<br /><br /><pre><br />SQL> insert into test select a, a+1000 <br /> From temp where a > 14500 and a <= 15500 and mod(a,2)=0;<br /><br />500 rows created.</pre><pre><br /><br />SQL> commit;<br /><br />Commit complete.</pre><pre><br /><br />SQL> analyze index ind_test validate structure;<br /><br />Index analyzed.</pre><pre><br /><br />SQL> select name, lf_rows, del_lf_rows, used_space <br /> From index_stats where name='IND_TEST';</pre><pre><br /><br />NAME LF_ROWS DEL_LF_ROWS USED_SPACE<br />---------- ---------- ----------- ----------<br />IND_TEST 10500 0 168141<br /><br /></pre><br /><br />The above statistics show that the index will reuse the deleted space.<br /><br /><strong>Conclusion – </strong><br /><br />1. Row deletions either empty a block entirely or free some space within it. <br />2. Empty blocks can be reused for any rows, irrespective of the previously deleted column values. <br />3. Space available in a partially filled block can be reused if the new column value falls between the lower and upper existing values in that block.<br /><br />In many cases it has been observed that DBAs rebuild indexes to reclaim deleted space, but the test case above shows that deleted space is reused without any rebuilding. So, if this is one of the primary reasons to rebuild indexes, DBAs may now wonder whether they really need to rebuild at all.Sandeep Redkarhttp://www.blogger.com/profile/09850590013696861549noreply@blogger.com4