Oracle


Warning: OPatchauto ignores disabled components – possible licensing issues

For many years at my customer's site I have been using "opatchauto" to perform out-of-place patching of Oracle Restart (GI + RDBMS).

My customer is concerned about database users using unlicensed options, like Partitioning. To avoid this, the Partitioning option is disabled at installation time using chopt, as described in Doc ID 948061.1.

Today during a check we noticed that the Partitioning option was activated everywhere, which is not the client's standard! We found out that the origin of the problem was the out-of-place patching with "opatchauto".

The big advantage of using "opatchauto" is that it easily allows either single-step or two-step out-of-place patching. We just write the names of the new Oracle Homes in a properties file and it does the following:

  • Clones the current GI + RDBMS homes to the new homes (prepare clone)
  • Patches the new homes (prepare clone)
  • Stops GI and the DBs (switch clone)
  • Switches GI and the DBs from the current homes to the new homes (switch clone)
  • Restarts everything (switch clone)
  • Runs Datapatch on the DBs, if not standby (switch clone)

With the two-step (prepare clone + switch clone) operation, this reduces the patching downtime without RAC to about 10 minutes.
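For reference, here is a sketch of the two-step variant. I'm assuming the -prepare-clone and -switch-clone options of recent OPatch releases – check "opatchauto apply -help" on your version before relying on them:

# Step 1 – prepare: clone the homes and patch the clones (no downtime)
$ sudo ${ORACLE_HOME}/OPatch/opatchauto apply -phBaseDir /staging/RU_19c_OCT2024/ -prepare-clone -silent /staging/clone.properties

# Step 2 – switch: stop, switch and restart GI and the DBs on the new homes (short downtime)
$ sudo ${ORACLE_HOME}/OPatch/opatchauto apply -switch-clone -silent /staging/clone.properties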

Here are the steps to reproduce the bug:

– install GI + RDBMS Oracle 19c (e.g. 19.24) on the server

– on the RDBMS ORACLE_HOME, disable the Partitioning component as described in Doc ID 948061.1:

$ chopt disable partitioning
Writing to /u00/app/oracle/product/19.24.0/install/disable_partitioning_2025-01-09_11-40-06AM.log...
/usr/bin/make -f /u00/app/oracle/product/19.24.0/rdbms/lib/ins_rdbms.mk part_off ORACLE_HOME=/u00/app/oracle/product/19.24.0
/usr/bin/make -f /u00/app/oracle/product/19.24.0/rdbms/lib/ins_rdbms.mk ioracle ORACLE_HOME=/u00/app/oracle/product/19.24.0

– Check – Partitioning is disabled:

$ ar -tv /u00/app/oracle/product/19.24.0/rdbms/lib/libknlopt.a | grep -E 'kkpoban|ksnkkpo'
rw-r--r-- 1000/1001 5240 Apr 17 07:25 2019 ksnkkpo.o
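
Once an instance is running from this home, the option status can also be cross-checked from inside the database (a quick sanity query, not part of the original procedure):

SQL> select parameter, value from v$option where parameter = 'Partitioning';
-- expected VALUE = FALSE after the chopt disable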

– Perform out-of-place patching of the DB from the GI ORACLE_HOME (single-step example here):

$ cat /staging/clone.properties
/u00/app/grid/19.24.0=/u00/app/grid/19.25.0
/u00/app/oracle/product/19.24.0=/u00/app/oracle/product/19.25.0

$ echo $ORACLE_HOME
/u00/app/grid/19.24.0

$ sudo ${ORACLE_HOME}/OPatch/opatchauto apply -phBaseDir /staging/RU_19c_OCT2024/ -outofplace -silent /staging/clone.properties

OPatchauto session is initiated at Thu Jan  9 12:30:25 2025

System initialization log file is /u00/app/grid/19.24.0/cfgtoollogs/opatchautodb/systemconfig2025-01-09_12-30-30PM.log.

Session log file is /u00/app/grid/19.24.0/cfgtoollogs/opatchauto/opatchauto2025-01-09_12-30-34PM.log
The id for this session is GAEM

Executing OPatch prereq operations to verify patch applicability on home /u00/app/grid/19.24.0
Patch applicability verified successfully on home /u00/app/grid/19.24.0


Executing OPatch prereq operations to verify patch applicability on home /u00/app/oracle/product/19.24.0
Patch applicability verified successfully on home /u00/app/oracle/product/19.24.0


Executing patch validation checks on home /u00/app/grid/19.24.0
Patch validation checks successfully completed on home /u00/app/grid/19.24.0


Executing patch validation checks on home /u00/app/oracle/product/19.24.0
Patch validation checks successfully completed on home /u00/app/oracle/product/19.24.0


Verifying SQL patch applicability on home /u00/app/oracle/product/19.24.0
Skipping SQL patch step execution on standby database : C0201Z01_ENG1
No sqlpatch prereq operations are required on the local node for this home
No step execution required.........

Copying the files from the existing oracle home /u00/app/grid/19.24.0 to a new location. Please wait...

Copying the files from the existing oracle home /u00/app/oracle/product/19.24.0 to a new location. Please wait...
Clone of oracle home /u00/app/grid/19.24.0 is /u00/app/grid/19.25.0 on host vm01
Copying the files from the existing oracle home /u00/app/grid/19.24.0 to a new location is successful.

Clone of oracle home /u00/app/oracle/product/19.24.0 is /u00/app/oracle/product/19.25.0 on host vm01
Copying the files from the existing oracle home /u00/app/oracle/product/19.24.0 to a new location is successful.


Unlocking CRS clone home for home /u00/app/grid/19.24.0.
Prepatch operation log file location: /u00/app/oracle/crsdata/vm01/crsconfig/hapatch_2025-01-09_12-49-31AM.log
Unlocked CRS clone home successfully for home /u00/app/grid/19.24.0.


Creating clone for oracle home /u00/app/grid/19.24.0.
Clone operation successful for oracle home /u00/app/grid/19.24.0.


Creating clone for oracle home /u00/app/oracle/product/19.24.0.
Clone operation successful for oracle home /u00/app/oracle/product/19.24.0.


Performing post clone operation for oracle home /u00/app/grid/19.24.0.
Performing post clone operation was successful for oracle home /u00/app/grid/19.24.0.


Performing post clone operation for oracle home /u00/app/oracle/product/19.24.0.
Performing post clone operation was successful for oracle home /u00/app/oracle/product/19.24.0.


Start applying binary patch on home /u00/app/grid/19.25.0
Binary patch applied successfully on home /u00/app/grid/19.25.0


Start applying binary patch on home /u00/app/oracle/product/19.25.0
Binary patch applied successfully on home /u00/app/oracle/product/19.25.0


Running rootadd_rdbms.sh on home /u00/app/grid/19.25.0
Successfully executed rootadd_rdbms.sh on home /u00/app/grid/19.25.0


Running rootadd_rdbms.sh on home /u00/app/oracle/product/19.25.0
Successfully executed rootadd_rdbms.sh on home /u00/app/oracle/product/19.25.0


Update nodelist in the inventory for oracle home /u00/app/grid/19.25.0.
Update nodelist in the inventory is completed for oracle home /u00/app/grid/19.25.0.


Bringing down database service on home /u00/app/oracle/product/19.25.0
Database service successfully brought down on home /u00/app/oracle/product/19.25.0


Performing postpatch operations on CRS - starting CRS service on home /u00/app/grid/19.25.0
Postpatch operation log file location: /u00/app/oracle/crsdata/vm01/crsconfig/hapatch_2025-01-09_01-27-10PM.log
CRS service started successfully on home /u00/app/grid/19.25.0


Preparing home /u00/app/oracle/product/19.25.0 after database service restarted
No step execution required.........


Confirm that all resources have been started from home /u00/app/grid/19.25.0.
All resources have been started successfully from home /u00/app/grid/19.25.0.


Modifying  the database(s) to use new location home /u00/app/oracle/product/19.25.0.
All database(s) modified successfully to run from new location home /u00/app/oracle/product/19.25.0.


Trying to apply SQL patch on home /u00/app/oracle/product/19.25.0
Skipping SQL patch step execution on standby database : C0201Z01_ENG1
No SQL patch operations are required on local node for this home


OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:vm01
SIHA Home:/u00/app/grid/19.24.0
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /staging/RU_19c_OCT2024/36866740/36916690/36758186
Log: /u00/app/grid/19.24.0/cfgtoollogs/opatchauto/core/opatch/opatch2025-01-09_12-33-55PM_1.log
Reason: /staging/RU_19c_OCT2024/36866740/36916690/36758186 is not required to be applied to oracle home /u00/app/grid/19.24.0


==Following patches were SUCCESSFULLY applied:

Patch: /staging/RU_19c_OCT2024/36866740/36878697
Log: /u00/app/grid/19.25.0/cfgtoollogs/opatchauto/core/opatch/opatch2025-01-09_12-51-54PM_1.log

Patch: /staging/RU_19c_OCT2024/36866740/36916690/36912597
Log: /u00/app/grid/19.25.0/cfgtoollogs/opatchauto/core/opatch/opatch2025-01-09_12-51-54PM_1.log

Patch: /staging/RU_19c_OCT2024/36866740/36916690/36917397
Log: /u00/app/grid/19.25.0/cfgtoollogs/opatchauto/core/opatch/opatch2025-01-09_12-51-54PM_1.log

Patch: /staging/RU_19c_OCT2024/36866740/36916690/36917416
Log: /u00/app/grid/19.25.0/cfgtoollogs/opatchauto/core/opatch/opatch2025-01-09_12-51-54PM_1.log

Patch: /staging/RU_19c_OCT2024/36866740/36916690/36940756
Log: /u00/app/grid/19.25.0/cfgtoollogs/opatchauto/core/opatch/opatch2025-01-09_12-51-54PM_1.log


Host:vm01
SIDB Home:/u00/app/oracle/product/19.24.0
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /staging/RU_19c_OCT2024/36866740/36916690/36917397
Reason: This patch is not applicable to this specified target type - "oracle_database"

Patch: /staging/RU_19c_OCT2024/36866740/36916690/36758186
Reason: This patch is not applicable to this specified target type - "oracle_database"

Patch: /staging/RU_19c_OCT2024/36866740/36916690/36940756
Reason: This patch is not applicable to this specified target type - "oracle_database"


==Following patches were SUCCESSFULLY applied:

Patch: /staging/RU_19c_OCT2024/36866740/36878697
Log: /u00/app/oracle/product/19.25.0/cfgtoollogs/opatchauto/core/opatch/opatch2025-01-09_13-09-38PM_1.log

Patch: /staging/RU_19c_OCT2024/36866740/36916690/36912597
Log: /u00/app/oracle/product/19.25.0/cfgtoollogs/opatchauto/core/opatch/opatch2025-01-09_13-09-38PM_1.log

Patch: /staging/RU_19c_OCT2024/36866740/36916690/36917416
Log: /u00/app/oracle/product/19.25.0/cfgtoollogs/opatchauto/core/opatch/opatch2025-01-09_13-09-38PM_1.log


Patching session reported following warning(s):
_________________________________________________

[Note]: Please verify the database is running from the desired Oracle home, if not then manually execute
 $ORACLE_HOME/bin/srvctl modify database command to fix the problem


Out of place patching clone home(s) summary
____________________________________________
Host : vm01
Actual Home : /u00/app/grid/19.24.0
Version:19.0.0.0.0
Clone Home Path : /u00/app/grid/19.25.0

Host : vm01
Actual Home : /u00/app/oracle/product/19.24.0
Version:19.0.0.0.0
Clone Home Path : /u00/app/oracle/product/19.25.0


OPatchauto session completed at Thu Jan  9 13:32:41 2025
Time taken to complete the session 62 minutes, 16 seconds

– Check – the Partitioning component is enabled on the new Oracle Home:

$ ar -tv /u00/app/oracle/product/19.25.0/rdbms/lib/libknlopt.a | grep -E 'kkpoban|ksnkkpo'
rw-r--r-- 1000/1001 5144 Oct 11 13:17 2024 kkpoban.o

This problem also applies to the other components: Advanced Analytics, OLAP and Real Application Testing.
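
Until a fix is available, a possible workaround (just a sketch, not an official procedure) is to re-disable the options on the cloned RDBMS home right after the switch. chopt relinks the oracle binary, so the databases running from that home must be stopped first and restarted afterwards – replace <dbname> with your database(s):

$ srvctl stop database -db <dbname>

$ export ORACLE_HOME=/u00/app/oracle/product/19.25.0
$ $ORACLE_HOME/bin/chopt disable partitioning
$ $ORACLE_HOME/bin/chopt disable oaa
$ $ORACLE_HOME/bin/chopt disable olap
$ $ORACLE_HOME/bin/chopt disable rat

$ srvctl start database -db <dbname>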

It probably also happens when using opatchauto with RAC (not tested), as described in Doc ID 2419319.1.

I've opened a bug/SR today on the Oracle Support site and will post an update here when I have more information.


Using AI to confirm a wrongly cabled Exadata switch – or how to fix the verify_roce_cables.py script for Python 3

One of the preparation steps when installing an Exadata X10M is to verify that the cabling of the RoCE switches is correct. The next step is to upgrade the Cisco switches with the latest firmware. During my intervention for Tradeware at the customer, the first step didn't work because the provided script is not compatible with Python 3, and the latter complained about wrong cabling.

Here I show how I analyzed the wrong cabling of the X10M switches and how I used Claude.ai (ChatGPT and other AI tools would probably also work) to quickly fix the Python script provided by Oracle.

(more…)

The DBT-16051 when creating a standby database using DBCA is still around – 7 years later

Sometimes I ask myself why some bugs never get solved. When looking for DBT-16051 we find a blog post from Franck Pachot from more than 7 years ago. He shows that with Oracle 12.2 you can "create" standby databases directly with DBCA, but that the tool only does a duplicate for standby and nothing more.

I decided to try with 19.22 to see how the situation has evolved. It hasn't.

The first thing I got was a DBT-16051 error:

$ dbca -createDuplicateDB -gdbName anjodb01 -primaryDBConnectionString "anjovm01.local.wsl/anjodb01_s1.local.wsl" -sid anjodb01 -createAsStandby -dbUniqueName anjodb01_s2 -silent
Enter SYS user password:
*****
[FATAL] [DBT-16051] Archive log mode is not enabled in the primary database.
   ACTION: Primary database should be configured with archive log mode for creating a duplicate or standby database.

A quick check shows the primary is correctly in archivelog mode. The real problem is the Easy Connect string. The string I gave, "anjovm01.local.wsl/anjodb01_s1.local.wsl", works well with SQL*Plus, but not with DBCA. There you need to specify the port, even when you are just using the default one:

$ dbca -createDuplicateDB -gdbName anjodb01 -primaryDBConnectionString "anjovm01.local.wsl:1521/anjodb01_s1.local.wsl" -sid anjodb01 -createAsStandby -dbUniqueName anjodb01_s2 -silent
Enter SYS user password:
*****
[WARNING] [DBT-10331] Specified SID Name (anjodb01) may have a potential conflict with an already existing database on the system.
   CAUSE: The specified SID Name without the trailing numeric characters ({2}) may have a potential conflict with an already existing database on the system.
   ACTION: Specify a different SID Name that does not conflict with existing databases on the system.
Prepare for db operation
22% complete
Listener config step
44% complete
Auxiliary instance creation
67% complete
RMAN duplicate
89% complete
Post duplicate database operations
100% complete

The DBT-10331 warning appears because I have an "anjodb02" database on the same VM, and this could create a problem since they share the prefix "anjodb". I don't expect that to be an issue in a single-instance environment, though.

Also, it leaves the new standby open in 'read only' mode, which requires adequate licenses.

SQL> select name, db_unique_name, database_role, open_mode, dataguard_broker from v$database;

NAME      DB_UNIQUE_NAME                 DATABASE_ROLE    OPEN_MODE            DATAGUAR
--------- ------------------------------ ---------------- -------------------- --------
ANJODB01  ANJODB02_S2                    PHYSICAL STANDBY READ ONLY            DISABLED
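
To avoid keeping it open read-only, the standby can be restarted in mount mode and managed recovery started – standard commands, shown here just as a sketch for the non-broker case:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database recover managed standby database disconnect from session;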

For the moment, I'll stick with my own set of scripts, which do the operations the right way.


How to use Zabbix to monitor Oracle DB – a quick test run

Currently I'm testing different solutions for monitoring (mostly) Oracle databases. One solution on the shortlist is Zabbix.

Zabbix is open source and currently has quite an active community helping out.

On the Zabbix website there are nice quick download and setup instructions.

I'll use an Oracle Linux 9 VM under Windows WSL2 to install Zabbix. For this exercise, I'll configure it with PostgreSQL. The database installation step is missing from those instructions, so here are all the steps I've done.

(more…)

DBCA doesn't like a personalized SQL*Plus prompt

The database creation (19c) using DBCA was hanging at 33%:

[ 2024-04-24 10:31:41.770 CEST ] Prepare for db operation
DBCA_PROGRESS : 8%
[ 2024-04-24 10:31:41.808 CEST ] Creating and starting Oracle instance
DBCA_PROGRESS : 11%
DBCA_PROGRESS : 12%
DBCA_PROGRESS : 17%
[ 2024-04-24 10:32:11.412 CEST ] Creating database files
DBCA_PROGRESS : 18%
DBCA_PROGRESS : 25%
[ 2024-04-24 10:32:11.702 CEST ] Creating data dictionary views
DBCA_PROGRESS : 28%
DBCA_PROGRESS : 33%

The catproc0.log shows many connection errors:

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Apr 24 11:36:32 2024
Version 19.21.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.

connect SYS/SP2-0640: Not connected
connect SYS/  2  SP2-0640: Not connected
connect SYS/SP2-0640: Not connected
connect SYS/connect SYS/  2  SP2-0640: Not connected
connect SYS/connect SYS/  2  SP2-0640: Not connected
connect SYS/connect SYS/
connect SYS/connect SYS/  2  SP2-0640: Not connected
connect SYS/connect SYS/SP2-0640: Not connected
connect SYS/SP2-0640: Not connected
connect SYS/  2  SP2-0640: Not connected
connect SYS/connect SYS/connect SYS/connect SYS/connect SYS/connect SYS/connect SYS/connect SYS/connect SYS/connect SYS/connect SYS/connect SYS/connect SYS/connect SYS/SQL> SQL> SQL> SQL> SQL> SQL>   2  SP2-0640: Not connected
...

The generated trace file shows at the end:

[progressPage.flowWorker] [ 2024-04-24 10:32:17.334 CEST ] [PerlEngine.execute:220]  /u01/app/oracle/product/19.0.0/dbhome_1/rdbms/admin/catproc.sql
[progressPage.flowWorker] [ 2024-04-24 10:32:17.334 CEST ] [PerlEngine.execute:232]  Execing PERL process...
[progressPage.flowWorker] [ 2024-04-24 10:32:17.334 CEST ] [PerlEngine.execute:274]  m_bReaderStarted: false
[progressPage.flowWorker] [ 2024-04-24 10:32:17.334 CEST ] [PerlEngine.execute:278]  Starting Reader Thread... 
[progressPage.flowWorker] [ 2024-04-24 10:32:17.334 CEST ] [PerlEngine.execute:332]  Inputing secret arg: 0
[progressPage.flowWorker] [ 2024-04-24 10:32:17.334 CEST ] [PerlEngine.execute:332]  Inputing secret arg: 1
[Thread-233] [ 2024-04-24 10:32:17.379 CEST ] [PerlEngine.readPerlOutput:597]  perl engine output:Enter Password: 
[Thread-233] [ 2024-04-24 10:32:17.379 CEST ] [PerlEngine.readPerlOutput:597]  perl engine output:Enter Password: 

This was part of an automation script I wrote, which is started during the VM installation using Ansible. It had been working fine until recently. What might have happened?

I ended up opening an SR and searching a bit more, until I found the culprit: in a previous step an ORACLE_PATH variable is added to .bash_profile, and I had recently changed login.sql to show the name of the DB and PDB the user is connected to. However, I forgot to test the case where the database is down, in which SQL*Plus was simply showing:

$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Apr 24 11:36:32 2024
Version 19.21.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.

>

For some reason DBCA did not like that. OK, it was partly my fault – lack of testing. The solution was to make sure that even with the database down (or with no database at all), SQL*Plus shows a meaningful prompt.
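
As an illustration, a minimal login.sql sketch (not my actual file) that only uses SQL*Plus predefined variables, so the prompt is set even when the instance is down or no database exists yet:

-- login.sql: build the prompt from predefined variables only,
-- with no query against the database that could fail or hang
set sqlprompt "_USER'@'_CONNECT_IDENTIFIER> "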

In a future blog post I'll show how to personalize the SQL*Plus and SQLcl prompt in a way that is compatible with DBCA.


Oracle Engineered Systems (Community) Day

In a full room in the centre of Zurich, Switzerland, two great events dedicated to ODA, Exadata, ExaCC, Exadata at OCI or Azure, PCA and other derivatives took place on March 20th, 2024. The morning session, organized and sponsored by Tradeware AG, was the 21st Oracle Engineered Systems Community meeting, followed by the 11th Swiss Oracle Engineered Systems Day, organized by Oracle.

Manfred Drozd from Peakmarks showed the results of his performance benchmarks with the new Exadata X10M and how it improves performance by up to 35% compared with the X7. Peakmarks specializes in benchmarking Oracle workloads, helping to choose the right number of CPUs when migrating to a new platform.

In Switzerland, Twint is a relatively new payment platform, meanwhile used by more than 50% of the population at least once a month! Erich Steiger showed us how, on Exadata, performance masks application problems, making them quite difficult to spot. But after convincing the developers to avoid unnecessary queries, two application bugs could finally be found, making the existing Exadata good enough for some more years. Twint is currently looking at the ExaCC solution and its dynamic scaling to absorb the midday peaks.

Mike Dietrich from Oracle presented the new M5 cross-platform migration script and how it was used to migrate a 200 TB database from a Solaris SuperCluster to Exadata. More information is on Mike's website.

After lunch, Ashish Ray presented Oracle's strategy in terms of Exadata, ExaCC and AI. He showed how Oracle looks forward to the future Exascale architecture, whose elastic infrastructure might remove the limit on the number of VMs in an Exadata.

Swisscom – a major Swiss infrastructure provider – had Roger Fehr and Roger Staubli presenting their ExaCC strategy and how, together with two core banking solutions (Avaloq and Finnova), they are doing the certification against the Cloud@Customer solution.

Later in the day, Marcus Praetzas from Deutsche Bank showed the converged strategy they follow and how ExaCC and Autonomous DB on Exadata allow them to use Oracle as the main database for almost all types of usage.

As a recap of this intensive day, it was clear that many Swiss and German customers are happy with Oracle's Exadata Cloud@Customer solution and intend to keep it in their strategy.


Install Oracle Linux 9 and Oracle Database 19c on Windows WSL 1

For a few days now I've had an Oracle Database 19c running on my laptop (a 4-year-old i5 with 16 GB of memory) under the Windows Subsystem for Linux with the latest Oracle Linux 9.3. It works great for testing functionality, and it survives sleep and restart without problems, even without cleanly shutting down the database first.

While there is no snapshot capability like with VirtualBox, it is possible to export the running image and re-import it later.
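
For example, from a Windows terminal (standard wsl.exe commands; the distribution name "OracleLinux9" and the paths are just examples – check yours with "wsl -l -v"):

# export the running distribution to a tar file (a poor man's snapshot)
wsl --export OracleLinux9 D:\wsl-backup\ol9-db19c.tar

# re-import it later as a new (or replacement) distribution
wsl --import OracleLinux9-restore D:\WSL\ol9-restore D:\wsl-backup\ol9-db19c.tar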

Below I describe the main steps to quickly install the system.

(more…)

Swiss OUC – Database Circle Conference 2024

I had the pleasure of attending the first Swiss Oracle User Community conference in Zurich over the last two days. It was a great event at the Oracle offices in the "Circle" at Zurich Airport, and an opportunity to see some ex-Trivadis colleagues – Ludo, Frank, Dani – and meet all the Oracle conference addicts.

The two parallel tracks created a dilemma for me every hour: which presentation would I miss?

Here are some of the highlights I noticed – the things that were important or new to me; I certainly won't cover all the knowledge that was in the air.

Swiss OUC day 1

Dominic Giles (Oracle) gave an overview of the new 23c Generative AI features and more:

  • The new VECTOR datatype for AI data models was introduced (and delayed the 23c release). It allows using ML models and calculating the distance between vectors with different methods.
  • AI Assist will soon allow using natural language to create SQL specific to the data model of “our” database. It will also be available on premises – similar to, but easier than, “Select AI” from Autonomous DB.
  • The SQL Developer extension for VS Code allows using DB connections directly in VS Code – already available.

Martin Klier explained well and visually how networks, subnets and routers work, and what happens when the way to the destination is cut. A really good reminder.

Øyvind Isene shared his story of being the sole developer of a DB automation API, leaving the following messages:

  • “You don’t have the time to make all the errors yourself. Learn from others’ errors.” –> share your errors with others
  • Use Logger when scripting; use ORDS for DB automation

Neil Chandler reminded us of some Oracle security fundamentals, like: always enable Oracle Network Encryption in sqlnet.ora and add a password verify function to the default database profile.

Martin Klier, again, said that performance is rarely the result of an accident. So, when doing performance analysis, document every action you take, even if it only shows that the problem is/was not there; do not use averages and pay attention to skewness.

Chris Saxon and Jasmin Fluri did a small theatre play around SQL version management and explained why they use either Flyway or Liquibase (integrated into SQLcl) for that.

Swiss OUC day 2

In a plenary session, Chris Saxon gave an example-based overview of the 23c new features for developers (a few of them are sketched in SQL after the list):

  • New DB_DEVELOPER_ROLE – a combination of CONNECT + RESOURCE + some other privileges.
  • CREATE IF NOT EXISTS and DROP IF EXISTS DDLs to avoid errors on repeated execution (CREATE IF NOT EXISTS might hide that an existing object is different, though)
  • No more need for “from dual”
  • Domain datatype extensions – named column types with integrated constraints and defaults, usable schema-wide, with the possibility of display- and order-specific functions (for instance an INTERVAL DAY datatype that displays in minutes)
  • Annotations for columns/tables/domains – a kind of extension of comments, in name-value pair format
  • Multiple insert values in a single command – also helpful to generate data
  • GROUP BY using a column alias
  • sum/floor/ceil/avg on INTERVAL and other datatypes
  • fuzzy_match function to replace LIKE when looking for similar results (without needing Oracle Text)
  • JSON converted to inserts
  • Boolean usage everywhere (e.g. if a function returns boolean, it is possible to do “select … from … where func_name(input)”)
  • SQL_TRANSPILER parameter injects the content of a function directly into the SQL, allowing the optimizer to better improve the query (default false)
  • GRANT … ANY … on a schema
  • Reminder to use NO AUTHENTICATION for owner schemas and proxy users instead of sharing passwords.
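
Out of curiosity I put a few of these into a quick SQL sketch (my own example against a 23c database, not taken from the talk):

-- IF NOT EXISTS / IF EXISTS and the boolean datatype
create table if not exists t (id number, flag boolean);

-- several rows in one INSERT ... VALUES
insert into t values (1, true), (2, false);

-- no FROM DUAL needed
select sysdate;

-- boolean directly in the WHERE clause, GROUP BY on a column alias
select mod(id, 2) as id_group, count(*) from t where flag group by id_group;

drop table if exists t;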

Mike Dietrich went on with the 23c new features for Data Pump:

  • New dumpfile format with special functionality for OCI Object Storage
  • INCLUDE and EXCLUDE can be combined in the same impdp/expdp
  • All transportable exports/imports can now run in parallel
  • Checksums for dumpfiles
  • REMAP_TABLESPACE=%:USERS (wildcard usage)
  • Recommendation to gather SYS + SYSTEM schema stats instead of using “gather_dictionary_stats”
  • Always install the Data Pump merge patch. It is not part of the RU because it is not RAC-rolling installable

Neil Chandler shared the story of a bank that wanted to replace an end-of-life Exadata with something cheaper. Neil spent some time investigating all the resource usage – mainly the CPU of the storage servers plus the IO saved by cell offloading – and then calculated the costs. Conclusion: if they want to keep the performance, there is no cheaper option using commodity hardware or an open-source DB.

Franck Pachot, in a tmux terminal session, explained that high 'load' values in the Linux 'top' command do not necessarily mean the machine is overloaded. Basically, my takeaways were:

  • Recent kernels include PSI (Pressure Stall Information), not enabled by default, which shows better whether there is pressure and where it is happening (see the quick check after this list).
  • Different types of IO – specifically a type of direct IO – can count towards the load average.
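
A quick way to look at PSI on a kernel where it is enabled (the files live under /proc/pressure; on some distributions the psi=1 kernel boot parameter is needed first):

$ grep . /proc/pressure/cpu /proc/pressure/io /proc/pressure/memory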

Erik van Roon is a fan of SQLcl and convinced the audience by showing why it is superior to SQL*Plus. Check out many of its features here, here or here.

Fernando Simon moved from Exadata to ExaCC and shared his thoughts about the new experience:

  • ExaCC offers good automation – click-type actions for inexperienced DBAs
  • ExaCC lacks basic checks, like whether there is enough space to back up the OH before a patch [why does it back up the OH? doesn’t it do out-of-place?]
  • No integration of self-created OH images
  • CPU scale-up works well and is immediate, but scale-down has a hard minimum wait time of 2h since the last scale-up
  • There is no documented tool that allows creating a DB with all parameters (block size, charset, silent mode with password)
  • Even the Zero Downtime Migration tool resorts to the workaround of creating a dummy DB, dropping it manually and creating it again manually with DBCA. DBs created directly with DBCA are not seen in OCI; there is no ‘discovery’ command
  • No possibility to use DBs with a different domain in the name on the same ExaCC without unofficial workarounds
  • ExaCC performs all admin operations serially, which can be slow

Dani Schneider showed the new SQL Analysis Report and how it gives hints about bad SQL when running an explain plan.

Kamil Stawiarski presented “Jas-min” – a play on the name of one of the conference organizers – a JSON AWR/Statspack miner, and showed how this tool can help in cases where you have flashback queries and 'cursor: pin S wait on X' waits. Basically: do not use flashback queries in the application if the query will be used concurrently.

Thanks a lot to Gianni, Jasmin and Julian from the Swiss Oracle User Community for this great conference. I hope to see you again next year!


Nice table for Oracle OLVM: VMs, hosts, clusters and vcpu pinning 2

On my client's infrastructure, for improved performance, we need to make sure each VM is pinned to different physical cores.

For the moment we make the changes using the web interface. However, to read the current status easily, I've written a Python script that draws a nice table showing where each VM is running and which cores its vCPUs are pinned to.

The result is something like this:

(more…)

Get history of Switchover and Failover of an Oracle database

For physical standbys, there is no DBA_* view that I know of which tracks the history of switchovers and failovers. For logical standbys there is DBA_LOGSTDBY_HISTORY.

Ways to find the information are the Data Guard broker log – when the broker is in use – or the database alert log together with the following grep command:

$ grep -EB1 '^ALTER DATABASE (SWITCHOVER|FAILOVER) TO' alert_*.log

2023-03-06T14:12:38.905705+01:00
ALTER DATABASE SWITCHOVER TO 'mydb2'
--
2023-03-06T14:37:43.209653+01:00
ALTER DATABASE FAILOVER TO MYDB1
--
2023-03-06T14:38:03.352141+01:00
ALTER DATABASE SWITCHOVER TO PRIMARY (MYDB1)
--
2023-03-11T16:00:22.224218+01:00
ALTER DATABASE SWITCHOVER TO 'mydb2'
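
When the plain-text alert logs have already been rotated or purged, the same search can be done against the ADR with adrci – a sketch only, adjust the homepath to your database:

$ adrci
adrci> set homepath diag/rdbms/mydb1/MYDB1
adrci> show alert -p "message_text like 'ALTER DATABASE SWITCHOVER%' or message_text like 'ALTER DATABASE FAILOVER%'"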