Abstract
Migrating a huge Oracle database (terabytes in size) from one server to another with minimal downtime always seems to obey the uncertainty principle. A physical copy using RMAN, transportable tablespaces, or a snapshot carries the existing fragmentation over to the migrated database, while the export/import alternative creates a dump file that must be transferred and then imported again, which means a longer downtime. Oracle's DataPump API, used over a DB_LINK, lets the export and the import progress simultaneously.
Problem
An 18 TB data warehouse environment needed to be migrated to a new database server and upgraded from version 11.1.0.1 to 11.2.0.3.
Solution
There are migration methods such as RMAN, transportable tablespaces, and hot backup copy, but none of them eliminates the fragmentation in the database. The one method that de-fragments the database is export/import. The traditional export (expdp) creates a dump file, which we then need to transfer (SCP) to the destination server and import (impdp). This is the safest method and eliminates fragmentation, since a DataPump export is a logical backup and the import runs SQL rather than copying physical blocks. But this method takes a longer time, because the export, the transfer of the dump file, and the import do not progress simultaneously.
Oracle introduced the DataPump API, and with the API the export, the transfer of data, and the import can all be handled simultaneously over a database link. It also eliminates the fragmentation in the database after the migration.
If range-partitioned tables exist, export/import the older (READ ONLY) partitions ahead of the downtime window, as sketched below. This further reduces the downtime.
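For example, history tables that are already read-only can be pulled across the same database link before the outage. A minimal sketch, assuming the DBMIG_LINK1 link created in the steps below; the table names SALES_2010 and SALES_2011 are hypothetical:

DECLARE
  h     number;
  state VARCHAR2(30);
BEGIN
  -- Network-mode table import that reads directly from the source
  h := DBMS_DATAPUMP.OPEN('IMPORT', 'TABLE', 'DBMIG_LINK1');
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCHEMA_NAME'')');
  -- Hypothetical read-only history tables; adjust to your schema
  DBMS_DATAPUMP.METADATA_FILTER(h, 'NAME_EXPR', 'IN (''SALES_2010'',''SALES_2011'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);
  dbms_output.put_line('Pre-migration job state: ' || state);
END;
/

Any table migrated this way should then be excluded from the main job (for example with an additional NAME_EXPR filter) so it is not imported twice during the downtime window.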
Requirement
- Destination server with Oracle binaries installed and a database created
- DB_LINK in the destination database – communicates with the source database and transfers the exported data from source to destination
- Database user with IMP_FULL_DATABASE in the destination
- Database user with EXP_FULL_DATABASE in the source
- Directory object created in the destination database to write the log file
- Before proceeding with this method, create empty tablespaces on the destination database for the schemas you are going to migrate, and give them proper extendable sizes, otherwise the migration will halt (see the sketch after this list)
- Create all the roles related to the users you are going to migrate
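A minimal sketch of this pre-creation step; the tablespace name, datafile path, sizes, and role name are hypothetical placeholders:

-- Pre-create the application tablespace with room to grow
CREATE TABLESPACE APP_DATA
  DATAFILE '/u02/oradata/TARGET/app_data01.dbf' SIZE 10G
  AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

-- Recreate any roles the schema grants depend on
CREATE ROLE APP_READ_ROLE;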
This method will not migrate the LONG columns of a table, hence we exclude tables with LONG objects in the script. Those tables have to be exported and imported manually later.
Footprint
The DataPump API reduced the database migration downtime by around 30-35%. It also eliminated fragmentation: the 18 TB database came across as 16 TB after the migration.
How to migrate
Source: from where you need to export the data (SOURCE.world)
Target: to where you need to import the data
Step 1: Create DB user DBMIG in both target and source
Create user DBMIG identified by dbmig default tablespace USERS account unlock;
Step 2: Grant DBMIG the IMP_FULL_DATABASE and EXP_FULL_DATABASE roles
grant IMP_FULL_DATABASE to DBMIG;   -- in the target database
grant EXP_FULL_DATABASE to DBMIG;   -- in the source database
Step 3: Copy the source TNS entry (SOURCE.world) into the target's tnsnames.ora file and test it using tnsping
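A minimal tnsnames.ora sketch for the source entry; host, port, and service name are placeholders for your environment:

SOURCE.world =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = source-host.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = SOURCE))
  )

Then verify connectivity with: tnsping SOURCE.world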
Step 4: Create DB link DBMIG_LINK1 in the target database pointing to the source (connecting as the DBMIG user)
CREATE PUBLIC DATABASE LINK DBMIG_LINK1 CONNECT TO DBMIG IDENTIFIED BY dbmig USING 'SOURCE.world';
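Before going further, it is worth a quick sanity check that the link resolves and authenticates (run as DBMIG on the target):

select sysdate from dual@DBMIG_LINK1;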
Step 5: Create directory DBMIG_DIR1 in the target with read/write grants
create directory DBMIG_DIR1 as '<DIRECTORY_PATH>';
grant read, write on DIRECTORY DBMIG_DIR1 to DBMIG;
Step 6: Set up the Mig1.sh file as follows.
Edit MAILID with your email id.
Mig1.sh
#!/bin/ksh
MAILID=XXXXXXXXXXXXXXXX
SCRIPT=$(basename "$0")            # script name, used in the mail subject below
sqlplus dbmig/dbmig << EOF > mig1.log
set echo on timing on time on
@mig1.sql
EOF
mailx -s "Script $SCRIPT Completed - Here is the log1 :" $MAILID < mig1.log
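Because the import can run for many hours, it is safer to launch the script in the background so a dropped terminal session does not kill the job. One way to do this (the nohup log file name is arbitrary):

nohup ./Mig1.sh > mig1_nohup.log 2>&1 &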
Step 7: Place the mig1.sql file in the same path where Mig1.sh is set up.
Edit SCHEMA_NAME with your database schema name.
(You can also adjust the parallel degree to your server's CPU capacity via DBMS_DATAPUMP.SET_PARALLEL; it is currently set to 2.)
mig1.sql
set serveroutput on;
execute dbms_output.enable(5000000);

DECLARE
  handle1       number;          -- Data Pump job handle
  handle2       number;
  ind           number;
  percent_done  number;          -- last reported completion percentage
  job_state     VARCHAR2(30);
  le            ku$_LogEntry;    -- WIP/error log lines from GET_STATUS
  js            ku$_JobStatus;
  jd            ku$_JobDesc;
  sts           ku$_Status;      -- composite status returned by GET_STATUS
  iscn          number;
BEGIN
  -- Open a network-mode schema import that pulls data over the DB link
  handle1 := DBMS_DATAPUMP.OPEN('IMPORT', 'SCHEMA', 'DBMIG_LINK1');
  -- Job log file, written to the DBMIG_DIR1 directory object
  DBMS_DATAPUMP.ADD_FILE(handle1, 'dbmig_dir1.out', 'DBMIG_DIR1', '', DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Import only the listed schema(s); include metadata and replace existing tables
  DBMS_DATAPUMP.METADATA_FILTER(handle1, 'SCHEMA_EXPR', 'IN (''SCHEMA_NAME'')');
  DBMS_DATAPUMP.SET_PARAMETER(handle1, 'INCLUDE_METADATA', 1);
  DBMS_DATAPUMP.SET_PARAMETER(handle1, name => 'TABLE_EXISTS_ACTION', value => 'REPLACE');
  DBMS_DATAPUMP.SET_PARALLEL(handle1, 2);
  -- Optional filters: uncomment to exclude statistics, grants, constraints, etc.
  -- DBMS_DATAPUMP.METADATA_FILTER(handle1,'EXCLUDE_PATH_EXPR','like''%/TABLE/INDEX/STATISTICS/INDEX_STATISTICS''');
  -- DBMS_DATAPUMP.METADATA_FILTER(handle1,'EXCLUDE_PATH_EXPR','like''%/TABLE/STATISTICS/TABLE_STATISTICS''');
  -- DBMS_DATAPUMP.METADATA_FILTER(handle1,'EXCLUDE_PATH_EXPR','like''%/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT''');
  -- DBMS_DATAPUMP.METADATA_FILTER(handle1,'EXCLUDE_PATH_EXPR','like''%/TABLE/CONSTRAINT/%''');
  -- DBMS_DATAPUMP.METADATA_FILTER(handle1,'EXCLUDE_PATH_EXPR','like''%/TABLE/COMMENT''');
  -- DBMS_DATAPUMP.METADATA_FILTER(handle1,'EXCLUDE_PATH_EXPR','like''%/TABLE/TRIGGER''');
  -- DBMS_DATAPUMP.METADATA_FILTER(handle1,'EXCLUDE_PATH_EXPR','=''INDEX''');
  -- DBMS_DATAPUMP.METADATA_FILTER(handle1,'INCLUDE_PATH_EXPR','=''TABLE''');
  -- Exclude tables that contain LONG columns; they are migrated manually later
  DBMS_DATAPUMP.METADATA_FILTER(handle1, name => 'NAME_EXPR',
    value => 'NOT IN (select table_name from dba_tab_columns where data_type like ''LONG%'')',
    object_type => 'TABLE');
  DBMS_DATAPUMP.START_JOB(handle1);

  -- Poll the job, printing progress and log messages until it finishes
  percent_done := 0;
  job_state := 'UNDEFINED';
  while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
    dbms_datapump.get_status(handle1,
      dbms_datapump.ku$_status_job_error +
      dbms_datapump.ku$_status_job_status +
      dbms_datapump.ku$_status_wip, -1, job_state, sts);
    js := sts.job_status;
    -- Report progress only when the percentage changes
    if js.percent_done != percent_done then
      dbms_output.put_line('*** Job percent done = ' || to_char(js.percent_done));
      percent_done := js.percent_done;
    end if;
    -- Collect work-in-progress or error messages, if any
    if (bitand(sts.mask, dbms_datapump.ku$_status_wip) != 0) then
      le := sts.wip;
    elsif (bitand(sts.mask, dbms_datapump.ku$_status_job_error) != 0) then
      le := sts.error;
    else
      le := null;
    end if;
    if le is not null then
      ind := le.FIRST;
      while ind is not null loop
        dbms_output.put_line(le(ind).LogText);
        ind := le.NEXT(ind);
      end loop;
    end if;
  end loop;
  dbms_output.put_line('Job1 has completed');
  dbms_output.put_line('Final job state = ' || job_state);
  dbms_datapump.detach(handle1);
END;
/
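While mig1.sql is running, the job can also be watched from another session. A standard check against the Data Pump job views (the job name is system-generated, since no job name was passed to DBMS_DATAPUMP.OPEN):

select owner_name, job_name, operation, job_mode, state
  from dba_datapump_jobs;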
Note 1: This method cannot export tables that contain LONG columns, so those tables have to be exported and imported manually later. The following SQL (Check_LONG_objects.sql) lists the tables that have a LONG data type:

select table_name, owner
  from dba_tab_columns
 where data_type like 'LONG%'
   and owner not in ('SYS','SYSTEM','SYSAUX');
Note 2: If you need to include multiple schemas in this script, edit the following
from
DBMS_DATAPUMP.METADATA_FILTER(handle1, 'SCHEMA_EXPR', 'IN (''SCHEMA_NAME'')');
to
DBMS_DATAPUMP.METADATA_FILTER(handle1, 'SCHEMA_EXPR', 'IN (''SCHEMA1'',''SCHEMA2'',''SCHEMA3'')');
For more details, refer to Oracle Support Doc ID 553337.1: Export/Import DataPump Parameter VERSION - Compatibility of Data Pump Between Different Oracle Versions [Video]