ORA-17628: Oracle error 19505 returned by remote Oracle server

Today I was working on a procedure for duplicating databases in a client environment. One error that popped up from time to time was the one below:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/13/2014 18:49:01
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of backup command on t1 channel at 05/13/2014 18:49:00
ORA-17628: Oracle error 19505 returned by remote Oracle server

On several websites you can find different causes:

– Control file location specification was wrong;

– db_file_name_convert was not specified;

– FRA diskgroup not existing on the target;

– etc.

In my experiments I had the following two cases:

– the destination database directory did not exist on the DGDATA diskgroup (+DGDATA/<db_unique_name>);

– I did not connect to the auxiliary using standard network naming (I was using EZConnect).
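
The fixes were accordingly: pre-create the directory on the diskgroup and connect to the auxiliary through a tnsnames.ora alias. A minimal sketch (the connect aliases are hypothetical):

ASMCMD> mkdir +DGDATA/<db_unique_name>

RMAN> connect target sys@src_db
RMAN> connect auxiliary sys@aux_db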


Impossible to change virtual column

In my series on using Data Pump with remap_schema and virtual columns, I ended up yesterday with a new case:

– it is impossible to change a virtual column after the referenced function no longer exists.

This bug is present in 11.2.0.3 and corrected in 11.2.0.4.

Example:

SQL> grant connect,resource to u1 identified by u1;
Grant succeeded.

SQL> connect u1/u1
Connected.

SQL> create or replace function f1 (v1 number)
  return number DETERMINISTIC as
begin
  return v1;
end;
/  

Function created.

-- Create table with virtual column referencing the function

SQL> create table t1 (c1 number, c2 number as (f1(c1)));
Table created.

SQL> drop function f1;
Function dropped.

-- Try to change the virtual column. Impossible!
SQL> alter table t1 drop column c2;

alter table t1 drop column c2
*
ERROR at line 1:
ORA-00904: "U1"."F1": invalid identifier

-- Workaround is to recreate the function...
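
-- A sketch of the workaround: recreate the function with the same
-- signature, then the virtual column can be dropped

SQL> create or replace function f1 (v1 number)
  return number DETERMINISTIC as
begin
  return v1;
end;
/

Function created.

SQL> alter table t1 drop column c2;
Table altered.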

-- clean up
SQL> connect / as sysdba
Connected.

SQL> drop user u1 cascade;
User dropped.

Primary key not unique

Today I discovered that primary key indexes do not need to be unique. This makes a mess when checking supplemental logging for GoldenGate.
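
To spot primary key constraints backed by a non-unique index, a query along these lines should work (a sketch joining each constraint to its backing index):

SQL> select c.owner, c.table_name, c.constraint_name
       from dba_constraints c, dba_indexes i
      where c.constraint_type = 'P'
        and i.owner = c.index_owner
        and i.index_name = c.index_name
        and i.uniqueness = 'NONUNIQUE';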

Here is the example:

SQL> create table t(c1 number, c2 number);
Table created.


SQL> create index i on t(c1,c2);
Index created.


SQL> alter table t add primary key (c1,c2) using index i;
Table altered.


SQL> select uniqueness from dba_indexes where table_name='T';
UNIQUENES
---------
NONUNIQUE


SQL> select CONSTRAINT_NAME,CONSTRAINT_TYPE from dba_constraints where table_name='T';
CONSTRAINT_NAME                C
------------------------------ -
SYS_C001632047                 P


SQL> insert into t values (1,2);
1 row created.


SQL> commit;
Commit complete.


SQL> insert into t values (1,2);
insert into t values (1,2)
*
ERROR at line 1:
ORA-00001: unique constraint (SYS.SYS_C001632047) violated


SQL> alter table t drop constraint SYS_C001632047;
Table altered.


SQL> insert into t values (1,2);
1 row created.


SQL> commit;
Commit complete.


SQL> alter table t add primary key (c1,c2) using index i;
alter table t add primary key (c1,c2) using index i
*
ERROR at line 1:
ORA-02437: cannot validate (SYS.SYS_C001632048) - primary key violated


SQL> alter table t add primary key (c1,c2) using index i novalidate;
Table altered.


SQL> select uniqueness from dba_indexes where table_name='T';
UNIQUENES
---------
NONUNIQUE


SQL> select * from t;
        C1         C2
---------- ----------
         1          2
         1          2


SQL> select VALIDATED from dba_constraints where table_name='T';
VALIDATED
-------------
NOT VALIDATED


SQL> insert into t values (1,2);
insert into t values (1,2)
*
ERROR at line 1:
ORA-00001: unique constraint (SYS.SYS_C001632049) violated


SQL> drop table t;
Table dropped.

Tablespaces and partitions

Today I discussed with a database user who wanted to have different tablespaces for different partitions.

I had to disagree with his points, even though I think the split of tablespaces on a DWH system can sometimes be useful for future migrations (which could then be done tablespace by tablespace).

However, this is again the question of whether or not to keep data and indexes in the same tablespace. For many years I have seen no sense in the separation: there is no performance gain, only more administrative work. A colleague once told me that, when you need a quicker recovery from backup, you could recover just the data tablespace and not the indexes. We did not have Data Guard in use at the time; nowadays, if you need a quick recovery, you use Data Guard.
– What if you want to recover one table from a backup of 1 month ago?
– OK, there might still be extreme situations where more, smaller tablespaces make sense.

But in my life as a DBA, I want database users (my customers) not to have to care about the keyword "tablespace" in their DML/DDL. My strategy is to give one default tablespace per application.
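
As a minimal sketch (names are hypothetical; assuming Oracle Managed Files, so no datafile path is needed):

SQL> create tablespace app1_data datafile size 10g autoextend on;
SQL> alter user app1 default tablespace app1_data quota unlimited on app1_data;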

I also found a nice discussion on Charles Hooper's blog about whether or not to split tablespaces:
http://hoopercharles.wordpress.com/2012/02/17/tables-and-their-indexes-should-be-located-on-different-disks-what-is-wrong-with-this-quote/


GoldenGate, RAC One Node and the PROCESSTHREADS parameter

I use GoldenGate in classic capture mode to replicate several RAC One Node databases to an external system.
Since the first relocation of the RAC One Node database, there are two redo log threads (because the DB was temporarily a 2-node RAC).

Now the current thread# has changed and GoldenGate complains that it cannot access the redo logs anymore.

What I had to do was:

1. Check which is the current thread:

SQL> select thread# from v$log where status='CURRENT'; --> current=1

2. Add the following parameter to the extract parameter file (see the sketch after this list):
THREADOPTIONS PROCESSTHREADS EXCEPT 1

3. Recreate the extract with:

delete extract e_xx
add extract e_xx, tranlog, threads 2, begin now
add exttrail /oraacfs/goldengate/gg/dirdat/e_xx/xx, extract e_xx

4. Check with
info e_xx
that the RBA for one of the threads increases over time.
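
For reference, the extract parameter file could then look like this minimal sketch (the login and table mapping are hypothetical):

extract e_xx
userid ggadmin, password xxxx
THREADOPTIONS PROCESSTHREADS EXCEPT 1
exttrail /oraacfs/goldengate/gg/dirdat/e_xx/xx
table app.*;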

All this should not be needed when running in integrated capture mode.


GoldenGate – SCN translation

In GoldenGate, when you do "SEND <extract> status" you get an SCN field which does not correspond to the database SCN:
GGSCI> send e_test status
Sending STATUS request to EXTRACT E_TEST ...
EXTRACT E_TEST (PID 9372244)
Current status: Recovery complete: Processing data
Current read position:
Redo thread #: 1
Sequence #: 198315
RBA: 335376
Timestamp: 2013-11-11 16:11:57.000000
SCN: 78.924145222
Current write position:
Sequence #: 1548
RBA: 401683
Timestamp: 2013-11-11 16:12:31.811533
Extract Trail: /acfs/goldengate/dirdat/test/tt


SQL> select current_scn from v$database;
CURRENT_SCN
-----------
335931612762


To translate that SCN into the database SCN, this is the calculation to make:


78 * (2^32) + 924145222 = 335931594310

or with a hexadecimal translation:

78 to hexa        ==> 4E
924145222 to hexa ==> 37155646

Concatenate both and then:
4E37155646 to decimal ==> 335931594310
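
The same translation can be done directly in SQL; both of these return 335931594310:

SQL> select 78 * power(2,32) + 924145222 from dual;
SQL> select to_number('4E37155646', 'XXXXXXXXXX') from dual;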

Well, with the most recent versions of GoldenGate you can simply do "INFO <extract>" and you get the database SCN between parentheses.

GGSCI> info e_test
EXTRACT  E_TEST  Last Started 2013-11-13 08:08   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:04 ago)
Log Read Checkpoint  Oracle Redo Logs
                  2013-11-13 09:57:40  Seqno 200447, RBA 16256592
                  SCN 78.1163096680 (336170545768)

GoldenGate – Refresh single table

With the recent versions of GoldenGate (>11.2) you can see the current SCN at which the extract process is reading, even when it is stopped:

GGSCI> INFO E_TEST
EXTRACT  E_TEST  Last Started 2013-11-13 03:50   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
Log Read Checkpoint  Oracle Redo Logs
                   2013-11-13 09:32:34  Seqno 508717, RBA 4366336
                   SCN 78.1156360444 (336163809532)

This makes it easier to refresh a single table in a GoldenGate replication, if you can allow a small delay. This need can arise if you discover that a table does not match anymore, or if you need to add a new table to the configuration. For that, in the case of an Oracle-to-Oracle replication, you do:
1 – Stop the extract process and do INFO <extract> (see above). Note down the SCN between parentheses.
2 – Add the table to the parameter file (or remove an existing TABLEEXCLUDE on the extract process or MAPEXCLUDE on the replicat process).
3 – Export the table from the source with the FLASHBACK_SCN parameter pointing to the SCN taken in step 1.
4 – Make sure the replicat process has no delay with the command LAG <replicat>. You should get "At EOF, no more records to process."
GGSCI> LAG R_TEST
Sending GETLAG request to REPLICAT R_TEST ...
Last record lag: 4 seconds.
At EOF, no more records to process.
5 – Import the table at the destination (with TABLE_EXISTS_ACTION=REPLACE, if needed).
6 – Restart the extract process.
You can make the operation simpler if you have a database link between destination and source. In that case you skip step 3 and use FLASHBACK_SCN directly in step 5:
impdp NETWORK_LINK=source_db TABLES=test.table1 EXCLUDE=TRIGGER FLASHBACK_SCN=<scn>
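
Without a database link, steps 3 and 5 could look like this sketch (the directory and dump file names are hypothetical; the SCN comes from the INFO output above):

expdp system DIRECTORY=dp_dir DUMPFILE=table1.dmp TABLES=test.table1 FLASHBACK_SCN=336163809532
impdp system DIRECTORY=dp_dir DUMPFILE=table1.dmp TABLES=test.table1 TABLE_EXISTS_ACTION=REPLACE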

Oracle DBA role granted privileges per version

Below you can find the default DBA role privileges per database version since Oracle 10.2. It is based on Enterprise Edition with Java and XML installed (on 12.1 you can no longer choose what to install).
On Oracle 12.1 there are 3 new roles: CAPTURE_ADMIN, EM_EXPRESS_ALL and OPTIMIZER_PROCESSING_RATE. Comments about these new roles/privileges can be found below the table.

granted_role                admin_option  version
CAPTURE_ADMIN               yes           >12.1
DATAPUMP_EXP_FULL_DATABASE                >11.1
DATAPUMP_IMP_FULL_DATABASE                >11.1
DELETE_CATALOG_ROLE         yes
EM_EXPRESS_ALL                            >12.1
EXECUTE_CATALOG_ROLE        yes
EXP_FULL_DATABASE
GATHER_SYSTEM_STATISTICS
IMP_FULL_DATABASE
JAVA_ADMIN
JAVA_DEPLOY
OPTIMIZER_PROCESSING_RATE                 >12.1
SCHEDULER_ADMIN             yes
SELECT_CATALOG_ROLE         yes
WM_ADMIN_ROLE
XDBADMIN
XDB_SET_INVOKER                           >11.1
XDBWEBSERVICES                            <10.2

Comments on the new roles:

CAPTURE_ADMIN: Provides the privileges necessary to create and manage privilege analysis policies.

EM_EXPRESS_ALL: Enables users to connect to Oracle Enterprise Manager (EM) Express and use all the functionality provided by EM Express (read and write access to all EM Express features). The EM_EXPRESS_ALL role includes the EM_EXPRESS_BASIC role.

OPTIMIZER_PROCESSING_RATE: Provides privileges to execute the GATHER_PROCESSING_RATE, SET_PROCESSING_RATE, and DELETE_PROCESSING_RATE procedures in the DBMS_STATS package. These procedures manage the processing rate of a system for automatic degree of parallelism (Auto DOP). Auto DOP uses these processing rates to determine the optimal degree of parallelism for a SQL statement.

The query used is:
set pages 20 lines 200
col grantee for a10
col granted_role for a40
select * from dba_role_privs where grantee='DBA' order by 1,2;

Duplicate problem with non-standard cache sizes

Today I had a problem creating a standby database. Just after RMAN shut down and restarted the standby to copy the files, I was getting:

RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-04014: startup failed: ORA-04031: unable to allocate 4128 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","object queue hash buckets")

After investigating, the problem was that I had decreased the size of the SGA on the target database (and put it on Automatic Memory Management) but had not decreased the non-standard cache settings.
The solution was to add the decrease of the non-standard cache to the duplicate command (on the source it is 800M):
  SET memory_target '1G'
  SET db_32k_cache_size '50M'
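
In context, the duplicate command then looked roughly like this sketch (the standby duplication clause and the channel setup are assumptions based on this scenario):

run {
  allocate channel t1 device type disk;
  allocate auxiliary channel a1 device type disk;
  duplicate target database for standby from active database
    spfile
      SET memory_target '1G'
      SET db_32k_cache_size '50M';
}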