Automating and delegating Data Pump imports with PL/SQL

We recently bought another company and are in the process of integrating our systems with theirs. One requirement was to import their daily business data into our Business Intelligence system for reporting. We agreed they would provide an Oracle Data Pump export file for us to import. We wanted to automate this and have a non-privileged application owner perform the import, so I created a stored procedure owned by SYSTEM that runs the import with SYSTEM's privileges, and granted execute on the procedure to the application owner. The dbms_datapump package has lots of options and it was fun exploring them to find the best way of doing this.

Setup –

Run as sys :

grant execute on dbms_lock to system;  -- the procedure below uses DBMS_LOCK.sleep to poll the job

Run as system:

CREATE OR REPLACE DIRECTORY SOURCE AS '/nfsmount/exp/incoming';

CREATE OR REPLACE PROCEDURE SYSTEM.SCHEMA_IMPORT (dumpfilename IN VARCHAR2)
IS
   h1            NUMBER;                 -- Data Pump job handle
   job_state     VARCHAR2 (30);
   status        ku$_Status;             -- Data Pump status
   job_not_exist EXCEPTION;
   PRAGMA EXCEPTION_INIT (job_not_exist, -31626);
BEGIN
   -- Create a schema-mode import job with a system-generated name
   h1 := DBMS_DATAPUMP.open (operation => 'IMPORT',
                             job_mode  => 'SCHEMA',
                             job_name  => NULL);

   -- Truncate and reload any tables that already exist in the target schema
   DBMS_DATAPUMP.set_parameter (h1, 'TABLE_EXISTS_ACTION', 'TRUNCATE');

   -- Dump file and log file both live in the SOURCE directory
   DBMS_DATAPUMP.add_file (h1, dumpfilename, 'SOURCE');
   DBMS_DATAPUMP.add_file (h1,
                           dumpfilename || '.log',
                           'SOURCE',
                           NULL,
                           DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

   -- Import their schema's objects into ours
   DBMS_DATAPUMP.metadata_remap (h1, 'REMAP_SCHEMA', 'FROMSCHEMA', 'TOSCHEMA');

   DBMS_DATAPUMP.start_job (h1);

   -- Poll until the job completes or stops
   job_state := 'UNDEFINED';

   BEGIN
      WHILE (job_state != 'COMPLETED') AND (job_state != 'STOPPED')
      LOOP
         status := DBMS_DATAPUMP.get_status (
                      handle  => h1,
                      mask    =>   DBMS_DATAPUMP.ku$_status_job_error
                                 + DBMS_DATAPUMP.ku$_status_job_status
                                 + DBMS_DATAPUMP.ku$_status_wip,
                      timeout => -1);
         job_state := status.job_status.state;
         DBMS_LOCK.sleep (10);
      END LOOP;
   EXCEPTION
      -- ORA-31626: the job no longer exists, i.e. it finished and detached
      WHEN job_not_exist
      THEN
         DBMS_OUTPUT.put_line ('job finished');
   END;

   COMMIT;
END;
/
grant execute on schema_import to APP_OWNER;

Execution –

Run as APP_OWNER:

EXECUTE SYSTEM.SCHEMA_IMPORT('FILENAME.DMP')
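
While the procedure runs, you can watch the job from another session. One quick way, assuming you have access to the DBA views, is:

-- Shows each Data Pump job's name, state and degree of parallelism
SELECT owner_name, job_name, state, degree
  FROM dba_datapump_jobs;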

Migrating to different endianness

I recently moved a 2 TB database from Oracle 11.1 on Linux to Oracle 11.2 on AIX using impdp with the network_link and parallel options. The data was imported in about two hours and the indexes took another seven. The database stayed open throughout, but we kept the users off the system while the import ran.

Here is the command I used on the new server:

nohup impdp system/secret NETWORK_LINK=olddb FULL=y PARALLEL=25 &

Then you can attach to the running job to see which parallel worker is importing which table, and raise the degree of parallelism if you want:

impdp system attach
Import> status
Import> parallel=30
I had created all the datafiles in the new database and created a db link named olddb beforehand. The servers were on the same local area network and the network was the bottleneck.
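
For reference, the link creation looked roughly like this; the host, port and service name below are placeholders, not the real ones:

-- Run as SYSTEM on the new server before starting the import
CREATE DATABASE LINK olddb
  CONNECT TO system IDENTIFIED BY secret
  USING '//oldserver:1521/olddb';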


ORA-31693: Table data object "schema"."table" failed to load/unload and is being skipped due to error:

I often get ORA-31693 when doing Data Pump imports. The error can be caused by many things; the solution is to fix the underlying problem and re-import the affected tables. It's time-consuming to sift through hundreds of lines in an import log file to find the skipped tables, so I use this script, which generates a list of tables ready for the next import script's TABLES= parameter.


grep failed import.log | awk '{ print $5 }' | sed 's/"//g' | awk '{ print $1 " ,"}'
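
The output looks something like this (the table names here are made up), ready to paste into the TABLES= parameter:

SCHEMA1.ORDERS ,
SCHEMA1.ORDER_LINES ,
SCHEMA1.INVOICES ,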

ORA-31696: unable to export/import TABLE_DATA:"schema"."table" using client specified AUTOMATIC method

Part of my job is to support a Business Intelligence development team. They often need certain test schemas refreshed from production. I've been using Data Pump, and on one schema I always get ORA-31696 for the same three tables. It's due to the way I refresh the tables: I truncate them first and then load them using the content=data_only parameter. That works for most tables, but these three each contain a LONG column. The fix is to run a second import using table_exists_action=replace and TABLES= set to the list of skipped tables.
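
The second pass ends up looking something like this; the dump file, directory and table names below are placeholders:

impdp system/secret DUMPFILE=prod_schema.dmp DIRECTORY=DATA_PUMP_DIR \
  TABLES=BI_DEV.T_LONG1,BI_DEV.T_LONG2,BI_DEV.T_LONG3 \
  TABLE_EXISTS_ACTION=REPLACE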