Thursday, March 13, 2014

Parallel processes and their details

The following script shows the parallel (PX) processes currently running in the Oracle instance, and how many slave processes are active.
For example: if we are moving a table from one tablespace to another with the PARALLEL clause, the script will show how many slaves are running in parallel for that operation.
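For instance, a move such as the following (table, tablespace and degree are illustrative) would spawn the parallel slaves that the script reports on:

-- Hypothetical example: move a table using 8 parallel slaves
ALTER TABLE scott.big_table MOVE TABLESPACE new_ts PARALLEL 8;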

column child_wait  format a30
column parent_wait format a30
column server_name format a4  heading 'Name'
column x_status    format a10 heading 'Status'
column schemaname  format a10 heading 'Schema'
column x_sid format 9990 heading 'Sid'
column x_pid format 9990 heading 'Pid'
column p_sid format 9990 heading 'Parent'

break on p_sid skip 1

select x.server_name
     , x.status as x_status
     , x.pid as x_pid
     , x.sid as x_sid
     , w2.sid as p_sid
     , v.osuser
     , v.schemaname
     , w1.event as child_wait
     , w2.event as parent_wait
from  v$px_process x       -- one row per parallel execution slave
    , v$lock l             -- PS locks tie slaves to their coordinator
    , v$session v
    , v$session_wait w1    -- slave's current wait event
    , v$session_wait w2    -- coordinator's current wait event
where x.sid <> l.sid(+)
and   to_number (substr(x.server_name,2)) = l.id2(+)
and   x.sid = w1.sid(+)
and   l.sid = w2.sid(+)
and   x.sid = v.sid(+)
and   nvl(l.type,'PS') = 'PS'
order by 1,2
/

Wednesday, February 26, 2014

Oracle database migration using Datapump APIs

Abstract
Migrating a huge Oracle database (terabytes in size) from one server to another with minimal downtime always carries uncertainty. A physical copy using RMAN, transportable tablespaces or a snapshot carries the existing fragmentation over to the migrated database, while the export/import method creates a dump file that must be transferred and imported again, which means a longer downtime. Oracle introduced the Datapump APIs, and using a DB_LINK the export and import progress simultaneously.

Problem
An 18 TB data warehouse environment needed to be migrated to a new database server and upgraded from 11.1.0.1 to 11.2.0.3.

Solution
There are migration methods such as RMAN, transportable tablespaces and hot backup copy, but none of them eliminates fragmentation in the database. The one method that de-fragments the database is export/import. But traditional export (expdp) creates a dump file, which we then need to transfer (SCP) to the destination server and import (impdp) into the database. This is the safest method and eliminates fragmentation, since a Datapump export is a logical backup and SQL statements are used to import the data rather than physical blocks. But it takes longer, because the export, the file transfer and the import do not progress simultaneously.
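For reference, the traditional flow looks roughly like this (schema, directory and host names are placeholders):

expdp system/*** SCHEMAS=app_schema DIRECTORY=DP_DIR DUMPFILE=app.dmp LOGFILE=app_exp.log
scp /u01/dp_dir/app.dmp oracle@target-host:/u01/dp_dir/
impdp system/*** SCHEMAS=app_schema DIRECTORY=DP_DIR DUMPFILE=app.dmp LOGFILE=app_imp.log

Each step has to finish before the next one starts, which is where the downtime accumulates.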

Oracle introduced the Datapump APIs, and using them we can handle the export, the transfer of data and the import simultaneously.
This also eliminates fragmentation in the database after the migration.

If range-partitioned tables exist, export/import the older (READ ONLY) partitions ahead of the downtime window, as sketched below. This again reduces the downtime.
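A sketch of pre-migrating one such partition (table, partition and file names are illustrative):

expdp dbmig/dbmig TABLES=app_schema.sales:SALES_2012 DIRECTORY=DP_DIR DUMPFILE=sales_2012.dmp LOGFILE=sales_2012_exp.log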

Requirement
  • Destination server with Oracle binaries installed and a database created
  • DB_LINK in the destination database – it communicates with the source database and transfers the exported data from source to destination
  • Database user with IMP_FULL_DATABASE in the destination
  • Database user with EXP_FULL_DATABASE in the source
  • Directory object created in the destination database to write the log file
  • Before proceeding with this method, create empty tablespaces on the destination database for the schemas we are going to migrate. Give the tablespaces proper sizes with room to extend (see the sketch after this list); otherwise the migration will halt.
  • Create all the roles related to the users we are going to migrate.
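A minimal sketch of that preparation, with illustrative names, sizes and paths:

-- Hypothetical tablespace and role; match names and sizes to the source database
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/TARGET/app_data01.dbf' SIZE 10G
  AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE ROLE app_read_role;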
Drawback of this method

This method does not migrate LONG columns of a table. Hence we exclude the tables with LONG columns in the script, and have to export and import those tables manually later.
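The affected tables can be listed on the source with a query like this:

SELECT owner, table_name, column_name
FROM   dba_tab_columns
WHERE  data_type IN ('LONG', 'LONG RAW');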
Footprint

The Datapump APIs reduced the database migration downtime by around 30-35%. They also eliminated fragmentation: the 18 TB database was migrated as 16 TB.

How to migrate

Source: the database you export the data from (SOURCE.world)
Target: the database you import the data into

Step 1 : Create DB user DBMIG in both target and source

CREATE USER DBMIG IDENTIFIED BY dbmig DEFAULT TABLESPACE USERS ACCOUNT UNLOCK;
 
Step 2 : Grant DBMIG the IMP_FULL_DATABASE role (in the target) and EXP_FULL_DATABASE (in the source)
grant IMP_FULL_DATABASE to DBMIG;
grant EXP_FULL_DATABASE to DBMIG;
 
Step 3 : Copy the source TNS entry (SOURCE.world) into the target's tnsnames.ora file and test it using tnsping
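The copied entry would look something like this (host and port are assumptions):

SOURCE.world =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = source-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = SOURCE.world))
  )

$ tnsping SOURCE.world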
 
Step 4 : Create DB link DBMIG_LINK1 in the target database, connecting to the source as the DBMIG user

CREATE PUBLIC DATABASE LINK DBMIG_LINK1 CONNECT TO DBMIG IDENTIFIED BY dbmig USING 'SOURCE.world';
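A quick sanity check that the link connects:

SELECT * FROM dual@DBMIG_LINK1;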
 
Step 5 : Create directory DBMIG_DIR1 in the target with read/write grants

CREATE DIRECTORY DBMIG_DIR1 AS '<DIRECTORY_PATH>';
GRANT READ, WRITE ON DIRECTORY DBMIG_DIR1 TO DBMIG;
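Verify it from DBA_DIRECTORIES:

SELECT directory_name, directory_path
FROM   dba_directories
WHERE  directory_name = 'DBMIG_DIR1';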

Step 6 : Set up the Mig1.sh file as follows -
Edit MAILID with your email id

Mig1.sh



#!/bin/ksh
MAILID=XXXXXXXXXXXXXXXX
SCRIPT=Mig1.sh

sqlplus dbmig/dbmig << EOF > mig1.log
set echo on timing on time on
@mig1.sql
EOF
mailx -s "Script $SCRIPT Completed - Here is the log1 :" $MAILID < mig1.log