Datapump


Slow starting impdp from NFS Share

Unfortunately I lost the logs for this issue, but I will try to document it here for reference.

My customer has ExaCC with various 2-node clusters.

  • Export ACFS mount point as NFS from cluster1
  • Mount NFS mount point on cluster2, cluster3 and cluster4

He did an export from cluster1 to the ACFS mount point.

Everything worked fine until mid-December, when an impdp reading a dump file from the NFS mount point seemed to hang when it was started from cluster3 or cluster4. From cluster2 it was still fine.

A few days later, impdp was slow everywhere, except locally on cluster1.

The behavior was very bizarre:

  • impdp starts and shows its timestamp
  • exactly 5 minutes later the first output appears: “W-1 Startup took 1 second”
  • exactly 5 minutes after that comes the second line: “W-1 Master table … successfully loaded/unloaded”
  • and 5 minutes later the rest runs through quickly.

The NFS mount point seemed fine; tests with the ‘dd’ command did not show any slowness.
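
For reference, the kind of quick check I mean is something like the following, where /nfs/dumps and the test file name are placeholders:

dd if=/dev/zero of=/nfs/dumps/ddtest.tmp bs=1M count=1024 oflag=direct    # write test, bypassing the page cache
dd if=/nfs/dumps/ddtest.tmp of=/dev/null bs=1M iflag=direct               # read test
rm /nfs/dumps/ddtest.tmp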

I started investigating by enabling Data Pump tracing, as explained by Daniel Hansen on his Databases are Fun blog:

alter system set events 'sql_trace {process: pname = dw | process: pname = dm} level=8';

The trace files generated in the diagnostic trace directory did not help much; they are mostly useful for performance problems.
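
To locate them, the trace directory can be taken from V$DIAG_INFO; the Data Pump master (DM) and worker (DW) process traces end up there:

SQL> select value from v$diag_info where name = 'Diag Trace';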

Then I started an “strace” on the PID of the impdp process:

strace -p <pid> -o /tmp/strace.out
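
A slightly extended variant, assuming a single impdp client is running on the host, also follows child processes and adds timestamps:

strace -f -tt -p $(pgrep -f impdp | head -1) -o /tmp/strace.out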

There I could see some “ECONNREFUSED” errors for connections to one of the IPs of cluster1. But a few lines above, the same connection appeared without any error.

Quite strange. Finally, with the help of a system administrator, we found out that the nfs-server was not running on one of the cluster1 nodes, and that the NFS mount point was using a hostname that would dynamically resolve to either one node of cluster1 or the other. After making sure nfs-server was running on both cluster1 nodes, the problem was solved and impdp started quickly again.
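
With hindsight, a few simple checks like these (hostnames are placeholders) would have pointed at the root cause much earlier:

nslookup cluster1-nfs          # round-robin DNS name: returns the addresses of both cluster1 nodes
showmount -e cluster1-node1    # list the NFS exports each node actually answers for
showmount -e cluster1-node2
systemctl status nfs-server    # run locally on each cluster1 node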

Learnings:

  • Use the clusterware to manage exportfs (srvctl add exportfs); see the sketch below.
  • Make use of VIPs, which move from one node to another, instead of round-robin DNS entries.
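
For completeness, a minimal sketch of what registering the export with the clusterware (HANFS over ACFS) could look like. The HAVIP id, VIP name, path and client list here are hypothetical; check the srvctl reference of your Grid Infrastructure version for the exact options:

srvctl add havip -id havip1 -address cluster1-hanfs-vip
srvctl add exportfs -name dpexp -id havip1 -path /acfs/dumps -clients cluster2,cluster3,cluster4 -options rw
srvctl start havip -id havip1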


Oracle automatic maintenance, expdp and ORA-01466

At a client's request I implemented a daily consistent export of a few schemas, using something like:

expdp "/ as sysdba" SCHEMAS=user1 dumpfile=export_%1_%datetime%.dmp logfile=export_%1_%datetime%.log flashback_time=systimestamp

It was set to run at 22:00 every night.

The next day the log showed errors like:

ORA-31693: Table data object "USER1"."TABLE_TEST" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01466: unable to read data - table definition has changed

Looking closer, it was due to the automatic statistics gathering that runs within Oracle's daily maintenance window. The LAST_ANALYZED column of DBA_TABLES had changed, as had LAST_DDL_TIME in DBA_OBJECTS. I did not expect this, nor have I seen anyone else complain about this interference between the maintenance windows and a Data Pump export. Strange!
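
A quick way to see the interference is to compare the statistics and DDL timestamps of the exported tables with the time the export started (USER1 being the schema from the example above):

SQL> select t.table_name, t.last_analyzed, o.last_ddl_time
     from dba_tables t
     join dba_objects o
       on o.owner = t.owner and o.object_name = t.table_name and o.object_type = 'TABLE'
     where t.owner = 'USER1'
     order by o.last_ddl_time desc;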

To avoid any problems in the future, we decided to change the maintenance window start time and length.
On Oracle 11g and 12c there is one window per day. So to change it to, say, 19:00 with a duration of 2 hours, you do:

EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('MONDAY_WINDOW',   'REPEAT_INTERVAL', 'freq=daily;byday=MON;byhour=19;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('MONDAY_WINDOW',   'DURATION',        NUMTODSINTERVAL(2, 'hour'));
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('TUESDAY_WINDOW',  'REPEAT_INTERVAL', 'freq=daily;byday=TUE;byhour=19;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('TUESDAY_WINDOW',  'DURATION',        NUMTODSINTERVAL(2, 'hour'));
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('WEDNESDAY_WINDOW','REPEAT_INTERVAL', 'freq=daily;byday=WED;byhour=19;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('WEDNESDAY_WINDOW','DURATION',        NUMTODSINTERVAL(2, 'hour'));
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('THURSDAY_WINDOW', 'REPEAT_INTERVAL', 'freq=daily;byday=THU;byhour=19;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('THURSDAY_WINDOW', 'DURATION',        NUMTODSINTERVAL(2, 'hour'));
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('FRIDAY_WINDOW',   'REPEAT_INTERVAL', 'freq=daily;byday=FRI;byhour=19;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('FRIDAY_WINDOW',   'DURATION',        NUMTODSINTERVAL(2, 'hour'));
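
The new schedule can then be verified with:

SQL> select window_name, repeat_interval, duration
     from dba_scheduler_windows
     order by window_name;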

Datapump elapsed time on 11.2.0.4

Today I noticed a small difference when you export/import using Data Pump and connect to an 11.2.0.4 database: the elapsed time is shown at the end.

Both times I was using the binary from Oracle 11.2.0.4:

Export: Release 11.2.0.4.0 - Production on Sun Mar 22 21:07:25 2015

But the output is slightly different at the end of the job:

Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
...
Job "RMAN"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Mar 22 21:10:07 2015 elapsed 0 00:02:21

Or:

Connected to: Oracle Database 11g Release 11.2.0.3.0 - Production
...
Job "RMAN"."SYS_EXPORT_SCHEMA_03" successfully completed at 21:07:43



Datapump, remap_schema and passwords

Something that is quite logical once you think about it, but that surprises you at first.

When you do expdp/impdp with the remap_schema parameter, the password of the user created on the target database will not be a valid one.

This is because on Oracle 10g the password hash is calculated using the username as a seed. When you import a user with impdp, the password hash is not recalculated, so when the username changes (via the remap_schema parameter), the original password no longer works for the new schema.

Clean environment:

SQL> select username, password
from dba_users 
where username='T1';
no rows selected

Create user T1 with a simple password…

SQL> grant create session to t1 identified by t1;
Grant succeeded.

Check the password hash value:

SQL> select username, password 
from dba_users 
where username='T1';
USERNAME                       PASSWORD
------------------------------ ------------------------------
T1                             2A6EC3E5F234DF52

We can connect:

SQL> connect t1/t1
Connected.

Run the export…

$expdp schemas=t1
Export: Release 10.2.0.4.0 - 64bit Production on Thursday, 19 April, 2012 11:16:30
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Username: / as sysdba

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_SCHEMA_01":  /******** AS SYSDBA schemas=t1
...

Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at 11:17:00

And import with remap_schema:

$impdp remap_schema=t1:t2
Import: Release 10.2.0.4.0 - 64bit Production on Thursday, 19 April, 2012 11:18:30
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01":  /******** AS SYSDBA remap_schema=t1:t2
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 11:18:35

We see now that the password hash of the two users is the same:

SQL> select username, password from dba_users where username in ('T1','T2');
USERNAME                       PASSWORD
------------------------------ ------------------------------
T1                             2A6EC3E5F234DF52
T2                             2A6EC3E5F234DF52

But connecting with the original password does not work:

SQL> connect t2/t1
ERROR:
ORA-01017: invalid username/password; logon denied
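
A simple way out, not shown above, is to reset the password of the remapped user after the import (assuming you are allowed to change it):

SQL> alter user t2 identified by t1;

After that, connect t2/t1 works.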