Sunday, May 18, 2008

Functions and Performance.

Functions play an important role in any development environment. They can be built-in (defined by Oracle) or user-defined (written by the developer in PL/SQL). Developers like PL/SQL functions because they simplify large pieces of code.

A function may accept values and must return a value. User-defined functions are easy to use, but they can degrade performance badly, so it is recommended to use them only when they are unavoidable. Below I demonstrate the performance benefit of a plain query over a query that calls a user-defined function.

When the function's query finds no matching row, the function returns NULL for that row, so the number of output rows is still the number of rows returned by the main query. For example, if the main query returns 1000 rows and the function finds a match for only 400 of them, the output still has 1000 rows; the function column shows NULL for 600 rows and the function's result for the other 400. Hence we must use an outer join whenever we fold such a function back into a plain query, otherwise the non-matching rows would disappear. The example below also uses a scalar sub-query in the SELECT list to play the role of the function.


SQL> create table fn_tables as select * from dba_tables;

Table created.

SQL> create table fn_indexes as select * from dba_indexes;

Table created.

SQL> create index fn_ind_tables on fn_tables (table_name);

Index created.

SQL> exec dbms_stats.gather_table_stats(user, 'FN_TABLES', cascade=>true);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(user, 'FN_INDEXES', cascade=>true);

PL/SQL procedure successfully completed.

SQL> create or replace function fn_test (tabowner varchar2, tabname varchar2)
2 return varchar2 is
3 tbsname varchar2(200);
4 begin
5 select tablespace_name into tbsname
6 from fn_tables
7 where owner=tabowner
8 and table_name=tabname;
9 return tbsname;
10 end;
11 /

Function created.


SQL> set autot trace
SQL> select a.owner, a.index_name, a.status,
2 fn_test(a.owner, a.table_name) "TBS_NAME"
3 from fn_indexes a
4 /

1072 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 2626245312

--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1072 | 51456 | 9 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| FN_INDEXES | 1072 | 51456 | 9 (0)| 00:00:01 |
--------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
1097 recursive calls
0 db block gets
3308 consistent gets
0 physical reads
0 redo size
36706 bytes sent via SQL*Net to client
1162 bytes received via SQL*Net from client
73 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1072 rows processed


SQL> select a.owner, a.index_name, a.status,
2 (select tablespace_name from fn_tables b
3 where b.owner=a.owner
4 and a.table_name=b.table_name) "TBS_NAME"
5 from fn_indexes a
6 /

1072 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 321380953

-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1072 | 51456 | 9| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID| FN_TABLES | 1 | 29 | 2| 00:00:01 |
|* 2 | INDEX RANGE SCAN | FN_IND_TABLES| 1 | | 1| 00:00:01 |
| 3 | TABLE ACCESS FULL | FN_INDEXES | 1072 | 51456 | 9| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("B"."OWNER"=:B1)
2 - access("B"."TABLE_NAME"=:B1)


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1573 consistent gets
0 physical reads
0 redo size
36706 bytes sent via SQL*Net to client
1162 bytes received via SQL*Net from client
73 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1072 rows processed


SQL> select a.owner, a.index_name, a.status, b.tablespace_name "TBS_NAME"
2 from fn_indexes a,
3 fn_tables b
4 where b.owner(+) = a.owner
5 and b.table_name(+) = a.table_name
6 /

1072 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 893717710

------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1072 | 82544 | 18 (6)| 00:00:01 |
|* 1 | HASH JOIN RIGHT OUTER| | 1072 | 82544 | 18 (6)| 00:00:01 |
| 2 | TABLE ACCESS FULL | FN_TABLES | 879 | 25491 | 8 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | FN_INDEXES | 1072 | 51456 | 9 (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("B"."OWNER"(+)="A"."OWNER" AND
"B"."TABLE_NAME"(+)="A"."TABLE_NAME")


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
135 consistent gets
0 physical reads
0 redo size
36706 bytes sent via SQL*Net to client
1162 bytes received via SQL*Net from client
73 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1072 rows processed


The relevant values from the three runs are summarized in the table below.


Query No.   Query Criteria        Cost   Recursive Calls   Consistent Gets
---------   -------------------   ----   ---------------   ---------------
        1   With Function            9              1097              3308
        2   With Sub-Query           9                 0              1573
        3   With Simple Query       18                 0               135


As the table shows, although the cost of the third query is the highest, its consistent gets are dramatically lower. With the function, both recursive calls and consistent gets are very high. The scalar sub-query version removes the recursive calls and roughly halves the consistent gets, but it still performs worse than the plain outer-join query.

Recently I received the following kind of query from one of our customers. For confidentiality I am not posting the original query; the example below is a similar query I have constructed.
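To make the example self-contained, here is a minimal sketch of the two tables it assumes; the column names are taken from the queries below, while the data types and sizes are my assumptions.

create table customer_master (
  customer number,           -- customer id, as passed to get_minamount
  name     varchar2(100),
  address  varchar2(200),
  custcode varchar2(30)      -- customer category, e.g. values starting with 'Large'
);

create table contract (
  customer number,           -- references customer_master.customer (assumption)
  status   varchar2(1),      -- 'a', 'd' or 's' as used below
  amount   number
);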


create or replace function get_minamount (cust number)
return number is
  amt number;
begin
  select min(amount) into amt
  from contract
  where customer = cust and status = 'a';

  if (amt is null) then
    select min(amount) into amt
    from contract
    where customer = cust and status = 'd';
  end if;

  if (amt is null) then
    select min(amount) into amt
    from contract
    where customer = cust and status = 's';
  end if;

  return amt;
end;
/


The main query is as follows.

select customer, name, address,
       get_minamount (customer) "MIN_AMT"
from customer_master
where custcode like 'Large%';


The customer_master table has millions of rows, and this query was taking around 8 hours to run: for every row returned, the function fires one to three separate SELECTs against the contract table. The rewrite suggested for this query is as follows.


select customer, name, address, min_amt, status
from (select a.customer, name, address, status,
             min(case when status in ('a','d','s') then amount end)
               over (partition by a.customer, b.status) "MIN_AMT",
             rank() over (partition by a.customer
                          order by (case when status = 'a' and amount is not null then 1
                                         when status = 'd' and amount is not null then 2
                                         when status = 's' and amount is not null then 3
                                    end), rownum) "RNK"
      from customer_master a,
           contract b
      where a.customer = b.customer(+)
        and a.custcode like 'Large%')
where rnk = 1;


After implementing this rewrite the gain was dramatic: the modified query completes in around 20 minutes instead of roughly 8 hours. Instead of firing the function's SELECTs once per customer, the analytic version scans customer_master and contract once; RANK() picks, for each customer, a row of the preferred status in the same order the function used ('a', then 'd', then 's'), and the MIN() ... OVER value for that row is the minimum amount for that status.

To sum up, user-defined functions can contribute significantly to the degradation of a query, and a plain SQL rewrite usually gives a major performance benefit over the function-based version. Query performance is closely tied to the number of consistent gets, so to improve performance we should aim to minimize them.
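To reproduce the comparison on your own statements, SQL*Plus can report just the run-time statistics without printing the rows; a small sketch (candidate_query.sql is a hypothetical script holding whichever version you are testing):

SQL> set autotrace traceonly statistics
SQL> @candidate_query.sql
SQL> set autotrace off

TRACEONLY still executes the query and fetches every row, it only suppresses their display, so the consistent gets figure remains representative.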

Sunday, May 4, 2008

Migration from Filesystem to ASM Files.

This is a continuation of my previous post, in which we discussed interacting with ASM and OS database files.

ASM is an integrated volume manager and file system for Oracle Database files. It simplifies and automates storage management, increases storage utilization and delivers predictable performance, which is why many of us want to migrate filesystem databases to ASM. If the database is very large and critical, we may not be able to afford the migration downtime. In this post I discuss a procedure that dramatically reduces the downtime of a migration from raw devices or an OS filesystem to ASM using Oracle Data Guard and RMAN. The same steps are also useful for simply creating a standby database.

The summarized procedure is:

1. Create a standby (Data Guard) database of the production database.
2. Restore the standby database into ASM.
3. Perform a switchover to complete the migration to ASM.

As the summary suggests, the migration is quite simple. The detailed procedure follows.

PREPARE FOR STANDBY DATABASE

In this demonstration, the primary instance is named ORA10G and the standby database ORA10GSTD.

Create a backup of the database using RMAN.

RMAN> connect target /

connected to target database: ORA10G (DBID=3970225046)

RMAN> backup database include current controlfile for standby;

Starting backup at 08-APR-08
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=154 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=D:\ORACLE\DATAFILES\ORA10G\SYSTEM01.DBF
input datafile fno=00002 name=D:\ORACLE\DATAFILES\ORA10G\UNDOTBS01.DBF
input datafile fno=00003 name=D:\ORACLE\DATAFILES\ORA10G\SYSAUX01.DBF
input datafile fno=00004 name=D:\ORACLE\DATAFILES\ORA10G\USERS01.DBF
channel ORA_DISK_1: starting piece 1 at 08-APR-08
channel ORA_DISK_1: finished piece 1 at 08-APR-08
piece handle=D:\ORACLE\ORA10GHOME\DATABASE\01JDBCU5_1_1 tag=TAG20080408T230957 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including standby control file in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 08-APR-08
channel ORA_DISK_1: finished piece 1 at 08-APR-08
piece handle=D:\ORACLE\ORA10GHOME\DATABASE\02JDBCVT_1_1 tag=TAG20080408T230957 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:05
Finished backup at 08-APR-08


Add tnsnames.ora entries on both servers (production and standby) so that each database can connect to the other.
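A minimal sketch of the two entries; the host names and the port are assumptions for this environment:

ORA10G =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = ORA10G))
  )

ORA10GSTD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = ORA10GSTD))
  )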

Create a parameter file, copy it to the standby server and make the appropriate changes for the standby database.

SQL> create pfile from spfile;

We assume that the ASM instance is already created, mounted and ready for use. We are using a disk group named ORADG; make sure this disk group has sufficient free space available.
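A quick way to verify the free space, run against the ASM instance (just a verification sketch):

select name, state, total_mb, free_mb
from v$asm_diskgroup
where name = 'ORADG';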

The standby parameters that we have to modify are listed below.

Point the control_files parameter to a location inside the ASM disk group.

control_files = '+ORADG/ora10g/controlfiles/control01.ctl'

Add or edit the DB_UNIQUE_NAME parameter to give the standby database a unique name.

db_unique_name = ORA10GSTD ---- unique name of the standby database

Add or edit the LOG_ARCHIVE_CONFIG parameter so that the primary and standby databases can identify each other; its value lists the unique names of both databases.
 
log_archive_config = 'dg_config=(ORA10G,ORA10GSTD)'

Set the following parameters to point to the ASM disk group.

db_create_file_dest = '+ORADG'
db_create_online_log_dest_1 = '+ORADG'

The other parameters that need to change are:

instance_name = ORA10gSTD
fal_server = ora10g --- TNSNAMES entry for Primary
fal_client = ora10gSTD --- TNSNAMES entry for Standby
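Putting it together, the Data Guard and ASM related part of the standby parameter file might look like the sketch below (db_name stays the same as on the primary; memory and other settings carried over from the primary pfile are omitted here):

db_name                     = ORA10G
db_unique_name              = ORA10GSTD
instance_name               = ORA10GSTD
control_files               = '+ORADG/ora10g/controlfiles/control01.ctl'
db_create_file_dest         = '+ORADG'
db_create_online_log_dest_1 = '+ORADG'
log_archive_config          = 'dg_config=(ORA10G,ORA10GSTD)'
fal_server                  = ORA10G      # tnsnames entry for the primary
fal_client                  = ORA10GSTD   # tnsnames entry for the standby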

Create a password file for the standby.

$ orapwd file=$ORACLE_HOME/dbs/orapw password=
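Since the transcript above is from a Windows host, where the password file conventionally lives in %ORACLE_HOME%\database and is named PWD<sid>.ora, a hedged equivalent for this environment (the standby's Oracle home path is an assumption; the password matches the SYS password used in the RMAN connection later):

C:\> orapwd file=D:\oracle\ora10ghome\database\PWDora10gstd.ora password=oracle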

INITIATE THE STANDBY DATABASE IN ASM

Once the ASM instance is confirmed to be up and the parameter and password files are in place, we start the standby instance in NOMOUNT mode.

SQL> startup force nomount
ORACLE instance started.

Total System Global Area 301989888 bytes
Fixed Size 1248672 bytes
Variable Size 96469600 bytes
Database Buffers 201326592 bytes
Redo Buffers 2945024 bytes

RMAN provides a single command that builds the standby database from the source database's backup. Connect to the production database as the target, connect to the standby instance as the auxiliary, and create the standby.

C:\> rman target sys/oracle@ora10g

Recovery Manager: Release 10.2.0.1.0 - Production

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: ORA10G (DBID=3970225046)

RMAN> connect auxiliary /

connected to auxiliary database: ORA10G (not mounted)

RMAN> duplicate target database for standby;

Starting Duplicate Db at 08-APR-08
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=157 devtype=DISK

contents of Memory Script:
{
restore clone standby controlfile;
sql clone 'alter database mount standby database';
}
executing Memory Script

Starting restore at 08-APR-08
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece D:\ORACLE\ORA10GHOME\DATABASE\02JDBCVT_1_1
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=D:\ORACLE\ORA10GHOME\DATABASE\02JDBCVT_1_1 tag=TAG20080408T230957
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:17
output filename=+ORADG/ora10g/controlfiles/control01.ctl
Finished restore at 08-APR-08

sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1

contents of Memory Script:
{
set newname for clone tempfile 1 to new;
switch clone tempfile all;
set newname for clone datafile 1 to new;
set newname for clone datafile 2 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 4 to new;
restore
check readonly
clone database
;
}
executing Memory Script

executing command: SET NEWNAME

renamed temporary file 1 to +ORADG in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 08-APR-08
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=157 devtype=DISK

channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to +ORADG
restoring datafile 00002 to +ORADG
restoring datafile 00003 to +ORADG
restoring datafile 00004 to +ORADG
channel ORA_AUX_DISK_1: reading from backup piece D:\ORACLE\ORA10GHOME\DATABASE\01JDBCU5_1_1
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=D:\ORACLE\ORA10GHOME\DATABASE\01JDBCU5_1_1 tag=TAG20080408T230957
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:46
Finished restore at 08-APR-08

contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy recid=5 stamp=651539654 filename=+ORADG/ora10gstd/datafile/system.257.651539549
datafile 2 switched to datafile copy
input datafile copy recid=6 stamp=651539654 filename=+ORADG/ora10gstd/datafile/undotbs1.258.651539549
datafile 3 switched to datafile copy
input datafile copy recid=7 stamp=651539654 filename=+ORADG/ora10gstd/datafile/sysaux.259.651539551
datafile 4 switched to datafile copy
input datafile copy recid=8 stamp=651539654 filename=+ORADG/ora10gstd/datafile/users.260.651539551
Finished Duplicate Db at 08-APR-08

We have successfully restored the files from production into the ASM disk group on the standby. Now modify the log_archive_dest_2 parameter on the primary so that archived logs are shipped to the standby, and then place the standby database in managed recovery mode so the logs are applied automatically.
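A minimal sketch of the primary-side destination setting; the service name must match the tnsnames.ora entry for the standby, and the ASYNC/VALID_FOR attributes are my assumptions:

alter system set log_archive_dest_2 =
  'SERVICE=ORA10GSTD ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORA10GSTD'
  scope=both;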

SQL> recover managed standby database disconnect;
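To confirm that archived logs are being shipped and applied, a simple check on the standby is sketched below:

select sequence#, applied
from v$archived_log
order by sequence#;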

DO SWITCHOVER TO MIGRATE DATABASE TO ASM

This is the final step of the migration: we perform a switchover so that the ASM-based standby becomes the new primary. First verify that the primary and standby are in sync.

On Primary Database –

select max(sequence#) "MAXSEQ" from v$log_history;

MAXSEQ
---------
124

On Standby Database –

select max(sequence#) "MAXSEQ" from v$log_history;

MAXSEQ
---------
124

As both the primary and standby databases are in sync, we can perform the switchover. (The primary is converted first with alter database commit to switchover to physical standby; the command below is then issued on the standby, which becomes the new, ASM-based primary.)

SQL> alter database commit to switchover to primary;

Database altered.

SQL> shut immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.


SQL> startup
ORACLE instance started.

Total System Global Area 301989888 bytes
Fixed Size 1248672 bytes
Variable Size 96469600 bytes
Database Buffers 201326592 bytes
Redo Buffers 2945024 bytes
Database mounted.
Database opened.

This completes the migration to ASM.