Miguel Anjo


New mandatory unified audit policy on 19.26

This feature was just backported from Oracle 23ai: the new ORA$MANDATORY audit policy was added with the Oracle 19.26 RU. This policy is not visible in AUDIT_UNIFIED_POLICIES or AUDIT_UNIFIED_ENABLED_POLICIES.
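
A quick way to confirm this on a patched database is to query the policy views directly – a minimal sketch; per the note above, both queries come back empty:

select policy_name from AUDIT_UNIFIED_POLICIES         where policy_name = 'ORA$MANDATORY';
select policy_name from AUDIT_UNIFIED_ENABLED_POLICIES where policy_name = 'ORA$MANDATORY';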

After patching the database to 19.26, you will see entries like these in UNIFIED_AUDIT_TRAIL:

SYS@CDB2.CDB$ROOT> select EVENT_TIMESTAMP, SYSTEM_PRIVILEGE_USED, ACTION_NAME 
from UNIFIED_AUDIT_TRAIL 
where UNIFIED_AUDIT_POLICIES='ORA$MANDATORY' 
order by EVENT_TIMESTAMP;

                  EVENT_TIMESTAMP     SYSTEM_PRIVILEGE_USED       ACTION_NAME
_________________________________ _________________________ _________________
02-FEB-2025 21:54:56.192982000    SYSDBA                    LOGON
02-FEB-2025 21:54:56.216549000    SYSDBA                    SELECT
02-FEB-2025 21:55:00.381577000    SYSDBA, ALTER DATABASE    ALTER DATABASE
02-FEB-2025 21:55:00.393882000    SYSDBA                    LOGOFF
...

The actions that are audited by the ORA$MANDATORY policy are described in the Oracle 23ai documentation.

What I find interesting is that the “ALTER DATABASE MOUNT” during startup is audited, so we get a good history of database startups.
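
For example, a sketch of a query to pull that startup history out of the audit trail (assuming the filter on ACTION_NAME is enough to isolate the mount commands):

select EVENT_TIMESTAMP, DBUSERNAME, ACTION_NAME, SQL_TEXT
from UNIFIED_AUDIT_TRAIL
where UNIFIED_AUDIT_POLICIES = 'ORA$MANDATORY'
and ACTION_NAME = 'ALTER DATABASE'
order by EVENT_TIMESTAMP;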

(more…)

How to change Goldengate Adminclient default editor permanently

Goldengate Microservices architecture replaced the “ggsci” tool with “adminclient”. This new client has a few limitations and does not work well with “rlwrap” – my favorite tool for keeping command history between sessions.

The Adminclient provides some options you can easily change after starting the tool:

$ ./adminclient
...
OGG (not connected) 1> show
Current directory: /home/oracle
COLOR            : OFF
DEBUG            : OFF
EDITOR           : vi
PAGER            : more
VERBOSE          : OFF

OGG (not connected) 2> set color ON
OGG (not connected) 3> set pager less

OGG (not connected) 4> show

Current directory: /home/oracle
COLOR            : ON
DEBUG            : OFF
EDITOR           : vi
PAGER            : less
VERBOSE          : OFF

However, keeping the settings across sessions is not very straightforward. The way to do it is to set environment variables:

$ export ADMINCLIENT_COLOR=ON  # ON, OFF in uppercase!

Use ADMINCLIENT_DEBUG and ADMINCLIENT_VERBOSE for the DEBUG and VERBOSE settings respectively.
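
For example (assuming the same uppercase ON/OFF convention as for the color setting):

export ADMINCLIENT_DEBUG=OFF
export ADMINCLIENT_VERBOSE=OFF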

For the editor and pager, the variables are simply:

export EDITOR=nano
export PAGER=less

Note that the EDITOR variable is also used by other clients, for example by the sqlplus “edit” command.

So the way I do it is to set everything within an alias:

alias gg="cd $OGG_HOME/bin; EDITOR=nano PAGER=less ADMINCLIENT_COLOR=ON $RLWRAP ./adminclient ; cd -"

And inside .bash_profile or something that sets the environment:

RLWRAP="$(command -v rlwrap)" && RLWRAP="${RLWRAP} -c"

export OGG_HOME="/u00/app/oracle/product/ogg/21.15"
alias gg="cd $OGG_HOME/bin; EDITOR=nano PAGER=less ADMINCLIENT_COLOR=ON $RLWRAP ./adminclient ; cd -" 


Slow starting impdp from NFS Share

Unfortunately I lost the logs for this issue, but I will try to document it for reference.

My customer has ExaCC with various 2-node clusters.

  • Export ACFS mount point as NFS from cluster1
  • Mount NFS mount point on cluster2, cluster3 and cluster4

He did an export from cluster1 to the ACFS mount point.

Everything was working fine until mid-December, when impdp reading a dump file from the NFS mount point seemed to hang when called from cluster3 and cluster4. From cluster2 it was still fine.

A few days later, impdp was slow everywhere except locally on cluster1.

The behavior was very bizarre:

  • impdp starts and shows the timestamp
  • exactly 5 minutes later the first output appears: “W-1 Startup took 1 second”
  • exactly 5 minutes after that comes the second line: “W-1 Master table … successfully loaded/unloaded”
  • and 5 minutes later the rest runs, quickly.

The NFS mount point seemed OK; ‘dd’ command tests did not show any slowness.
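
The tests were simple sequential reads of the dump file straight from the NFS mount, something along these lines (hypothetical path):

dd if=/nfs/dumps/expdp_full_01.dmp of=/dev/null bs=1M count=2048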

I started investigating by enabling Data Pump tracing, as explained by Daniel Hansen on his Databases are Fun blog:

alter system set events 'sql_trace {process: pname = dw | process: pname = dm} level=8';
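
To switch the tracing off again afterwards, the same event syntax with “off” should work (a sketch, not something from the original post):

alter system set events 'sql_trace {process: pname = dw | process: pname = dm} off';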

The trace files generated in the diagnostics directory did not help much – they are mostly useful for performance problems.

Then I started an “strace” on the PID of the impdp client:

strace -p <pid> -o /tmp/strace.out
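
To find that PID and see where the time goes, something like this helps (a sketch; pgrep may return more than one PID if several imports are running, and -tt simply adds a timestamp to each syscall, handy for spotting the 5-minute gaps):

pgrep -f impdp
strace -tt -p <pid> -o /tmp/strace.out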

There I could see some “ECONNREFUSED” errors on connections to one of the IPs of cluster1. But a few lines above, the same connection appeared without error.

Quite strange. Finally, with the help of a system administrator, we found out that the nfs-server was not running on one of the cluster1 nodes, and the NFS mount was using a hostname that resolved dynamically to either one node or the other of cluster1. After making sure nfs-server was running on both cluster1 nodes, the problem was solved and impdp was fast to start again.

Learnings:

  • Use the clusterware to manage exportfs – srvctl add exportfs (see the sketch below)
  • Make use of VIPs which move from one node to another instead of round-robin DNS entries.
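
A rough sketch of the clusterware-managed export (names, paths and client lists are hypothetical – check srvctl add havip -h and srvctl add exportfs -h for the exact options of your version):

srvctl add havip -id havip1 -address acfs-nfs-vip.local.wsl
srvctl add exportfs -id havip1 -name dumps -path /u02/app/oracle/acfs_dumps -clients cluster2,cluster3,cluster4
srvctl start havip -id havip1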


ORA-64307 when creating compressed table and /home

My customer running on ExaCC (Exadata Cloud@Customer) was getting “ORA-64307: Exadata Hybrid Columnar Compression is not supported for tablespaces on this storage type” on one of his test databases.

I tested connecting as SYS: no problem. Then I tried using his tablespace and indeed got the error.

It took some going around to check what was different about the user tablespace compared to the others. I tested a tablespace I created myself and it worked.

Strange. Until I found that… some datafiles were not in ASM!

It seems the ASM disk group was almost full and the client DBA had just put the datafiles somewhere else!
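
A quick way to spot the offenders is to look for datafiles whose path is not an ASM alias (tablespace name below is hypothetical):

select file_name
from dba_data_files
where tablespace_name = 'USERS_DATA'
and file_name not like '+%';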


JumpHost Matryoshka

My client just added an extra jumphost on the way to the server. So now I have to connect, to connect, to connect, and then open the connection. 🙂
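
For what it is worth, OpenSSH can at least chain the hops in one command with -J (hostnames below are made up):

ssh -J user@jump1.example.com,user@jump2.example.com oracle@dbserver.example.com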


Warning: OPatchauto ignores disabled components – possible licensing issues

For many years at my customer I have been using “opatchauto” to perform out-of-place patching of Oracle Restart (GI+RDBMS).

My customer is concerned about database users using unlicensed options, like Partitioning. To avoid this, at installation time the Partitioning option is disabled using chopt, as described in Doc ID 948061.1.
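
For reference, the chopt call is as simple as this (run it with the databases and listeners of that home stopped, since it relinks the oracle binary):

$ORACLE_HOME/bin/chopt disable partitioning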

Today during a check we noticed that the Partitioning option was enabled everywhere, which is not the client standard! We found out the origin of the problem was the out-of-place patching with “opatchauto”.
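
A quick way to check is the options view – ‘Partitioning’ should report FALSE when the option is unlinked from the binary:

SQL> select parameter, value from v$option where parameter = 'Partitioning';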

The big advantage of using “opatchauto” is that it easily allows either single-step or two-step out-of-place patching. We just write the names of the new Oracle Homes in a properties file and it does the following:

  • Clones the current GI + RDBMS homes to the new homes (prepare clone)
  • Patches the new homes (prepare clone)
  • Stops GI and the DBs (switch clone)
  • Switches GI and the DBs from the current homes to the new homes (switch clone)
  • Restarts everything (switch clone)
  • Runs datapatch on the DBs, if not standby (switch clone)

This allows decreasing the patching downtime, even without RAC, to about 10 minutes, using the two-step (prepare clone + switch clone) operation.
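
For orientation, the two-step calls look roughly like this (patch location and properties file are placeholders, and the exact flags should be verified with opatchauto apply -help – treat this as a sketch, not the exact syntax):

opatchauto apply <RU_patch_dir> -prepare-clone -silent <clone.properties>
# ... later, inside the downtime window:
opatchauto apply <RU_patch_dir> -switch-clone -silent <clone.properties>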

Here are the steps to reproduce the bug:

(more…)

Solve “OGG-08224 Error: CONTAINER option was specified though the database does not support containers” error

Quick post to add info about the following Goldengate error:

OGG (http://localhost:9300 test1 as ogg_pdb1@CDB2) 10> REGISTER EXTRACT E_TEST1 DATABASE CONTAINER (pdb1)

2024-12-08T17:16:58Z ERROR OGG-08224 Error: CONTAINER option was specified though the database does not support containers.

This means that you are connected directly to the PDB, and not to CDB$ROOT.

To register Goldengate 21 extracts you need to connect to the Root container with a common user.

OGG (http://localhost:9300 test1 as ogg_pdb1@CDB2) 12> DBLOGIN USERIDALIAS ogg_cdb2
Successfully logged into database CDB$ROOT.

OGG (http://localhost:9300 test1 as ogg_cdb2@CDB2/CDB$ROOT) 13> REGISTER EXTRACT E_TEST1 DATABASE CONTAINER (pdb1)
2024-12-08T17:20:36Z  INFO    OGG-02003  Extract group E_TEST1 successfully registered with database at SCN 8039188.

Well, in the future this will no longer be true, as newer versions of Goldengate and the database will work only at PDB level.


Using AI to confirm a wrongly cabled Exadata switch – or how to fix the verify_roce_cables.py script for Python 3

One of the preparation steps when installing an Exadata X10M is to verify that the cabling of the RoCE switches is correctly done. The next step is to upgrade the Cisco switches with the latest firmware. During my intervention for Tradeware at the customer, the former did not work, as the provided script is not compatible with Python 3, and the latter complained about wrong cabling.

Here I show how I studied the wrong cabling of the X10M switches and how I used Claude.ai (ChatGPT and other AI tools would probably also work) to quickly fix the Python script provided by Oracle.

(more…)

Oracle postpones release of 23ai on-premises to 2H CY2024

Oracle just updated the Release Schedule of Current Database Releases (Doc ID 742060.1) and changed the release date of database version 23ai on-premises to the next half-year. Let’s see how many months and how much bug fixing that means. 🙂

Update on 20.06.2024 – “Added new release dates for Oracle Autonomous Database – Dedicated Exadata Infrastructure, Autonomous Database on Exadata Cloud@Customer, ODA, Exadata and Linux-x86 64”


The DBT-16051 when creating a standby database using DBCA is still around, 7 years later.

Sometimes I ask myself why some bugs are never solved. When looking for DBT-16051 we find a blog post from Franck Pachot from more than 7 years ago. He shows that with Oracle 12.2 you can “create” standby databases directly with dbca – but that it only does a duplicate for standby and nothing more.

I decided to try with 19.22 to see how the situation evolved. It didn’t.

The first thing I got was a DBT-16051 error:

$ dbca -createDuplicateDB -gdbName anjodb01 -primaryDBConnectionString "anjovm01.local.wsl/anjodb01_s1.local.wsl" -sid anjodb01 -createAsStandby -dbUniqueName anjodb01_s2 -silent
Enter SYS user password:
*****
[FATAL] [DBT-16051] Archive log mode is not enabled in the primary database.
   ACTION: Primary database should be configured with archive log mode for creating a duplicate or standby database.

A quick check shows the primary is correctly in archivelog mode. The problem is the Easy Connect string. The string I gave, “anjovm01.local.wsl/anjodb01_s1.local.wsl”, works well with sqlplus, but not with dbca. There you need to specify the port, even when you are just using the default one:

$ dbca -createDuplicateDB -gdbName anjodb01 -primaryDBConnectionString "anjovm01.local.wsl:1521/anjodb01_s1.local.wsl" -sid anjodb01 -createAsStandby -dbUniqueName anjodb01_s2 -silent
Enter SYS user password:
*****
[WARNING] [DBT-10331] Specified SID Name (anjodb01) may have a potential conflict with an already existing database on the system.
   CAUSE: The specified SID Name without the trailing numeric characters ({2}) may have a potential conflict with an already existing database on the system.
   ACTION: Specify a different SID Name that does not conflict with existing databases on the system.
Prepare for db operation
22% complete
Listener config step
44% complete
Auxiliary instance creation
67% complete
RMAN duplicate
89% complete
Post duplicate database operations
100% complete

The warning DBT-10331 appears because I have an “anjodb02” database on the same VM, and this could create a problem, as they share the prefix “anjodb”. I do not expect that to be a problem on a single-instance environment, though.

And it starts the new standby in ‘read only’ mode, which requires adequate licenses.

SQL> select name, db_unique_name, database_role, open_mode, dataguard_broker from v$database;

NAME      DB_UNIQUE_NAME                 DATABASE_ROLE    OPEN_MODE            DATAGUAR
--------- ------------------------------ ---------------- -------------------- --------
ANJODB01  ANJODB01_S2                    PHYSICAL STANDBY READ ONLY            DISABLED
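
To avoid leaving it open read-only, one option is to bounce the standby into MOUNT and start managed recovery – a minimal sketch, assuming no broker configuration:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database recover managed standby database disconnect from session;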

For the moment, I’ll stay with my set of scripts which do the operations in the right way.