In a packed room in the centre of Zurich, Switzerland, two great events dedicated to ODA, Exadata, ExaCC, Exadata at OCI or Azure, PCA and other derivatives took place on March 20th 2024. The morning session, organized and sponsored by Tradeware AG, was the 21st Oracle Engineered Systems Community meeting, followed by the 11th Swiss Oracle Engineered Systems Day, organized by Oracle.
Manfred Drodz from Peakmarks showed the results of his performance benchmarks with the new Exadata X10M and how performance improves by up to 35% compared with the X7. Peakmarks specializes in benchmarking Oracle workloads, helping to choose the right number of CPUs when migrating to a new platform.
In Switzerland, Twint is a relatively new payment platform, meanwhile used by more than 50% of the population at least once a month! Erich Steiger showed us how Exadata's performance masks application problems, making them quite difficult to spot. But after convincing the developers to stop running unnecessary queries, two application bugs were finally found, buying the existing Exadata a few more years. Twint is currently looking at the ExaCC solution and its dynamic scaling to absorb the midday peaks.
Mike Dietrich from Oracle presented the new M5 cross-platform migration script and how it was used to migrate a 200TB database from a Solaris SuperCluster to Exadata. More information is on Mike’s website.
After lunch, Ashish Ray presented the Oracle strategy for Exadata, ExaCC and AI. He showed how Oracle looks forward with the future ExaScale architecture, whose elastic infrastructure might remove the limit on the number of VMs in an Exadata.
Swisscom – a major Swiss infrastructure provider – had Roger Fehr and Roger Staubli presenting their ExaCC strategy and how, together with two core banking solutions (Avaloq and Finova), they are doing their certification against the Cloud@Customer solution.
Later in the day, Marcus Praetzas from Deutsche Bank showed the converged strategy they follow and how ExaCC and Autonomous DB on Exadata allow them to use Oracle as the main database for almost all types of usage.
As a recap of this intensive day, it was clear that many Swiss and German customers are happy with Oracle’s Exadata Cloud@Customer solution and intend to keep it in their strategy.
For a few days now I’ve had an Oracle Database 19c running on my laptop (a 4-year-old i5 processor with 16GB of memory) under the Windows Subsystem for Linux with the latest Oracle Linux 9.3. It works great for testing functionality, and it survives sleep and restart without problems, even without first shutting down the database cleanly.
While there is no snapshot possibility like with VirtualBox, it is possible to export the running image and re-import it later.
Below I describe the main steps to quickly install the system.
I had the pleasure of attending the first Swiss Oracle User Community conference in Zurich over the last two days. A great event at the Oracle offices in the “Circle” at Zurich Airport. It was an opportunity to see some ex-Trivadis colleagues – Ludo, Frank, Dani – and to meet all the Oracle conference addicts.
The two parallel sessions gave me a dilemma every hour: which presentation would I miss?
Here are some of the highlights I noticed – these are the points that were important or new for me; I certainly will not cover all the knowledge that was in the air.
Swiss OUC day 1
Dominic Giles (Oracle) gave an overview of the new 23c Generative AI features and more:
The new VECTOR datatype for AI data models was introduced and delayed the 23c release. It allows using ML models and calculating the distance between vectors with different methods.
AI Assist will soon allow using natural language to create SQL specific to the data model of “our” database. It will also be available on-premises – similar to, but easier than, “Select AI” from Autonomous DB.
The SQL Developer extension for VS Code allows using DB connections directly in VS Code – already available:
Martin Klier explained well and visually how networks, subnets and routers work, and what happens when the path to the destination is cut. A really good reminder.
Øyvind Isene shared his story of being the sole developer of a DB automation API, leaving the following messages:
“You don’t have the time to make all the errors yourself. Learn from others’ errors.” –> share your errors with others
Use Logger when scripting; Use ORDS for DB automation
Again Martin Klier: performance is rarely the result of an accident. So, when doing performance analysis, document every action you take, even if it just shows that the problem is/was not there; do not use averages, and pay attention to skewness.
Chris Saxon and Jasmin Fluri did a small theatre play around SQL version management and explained why they use either Flyway or Liquibase (integrated into SQLcl) for that.
Swiss OUC day 2
Chris Saxon, in a plenary session, gave an example-based overview of the 23c new features for developers:
new DB_DEVELOPER_ROLE – a combination of CONNECT + RESOURCE + some other privileges
CREATE IF NOT EXISTS and DROP IF EXISTS DDLs to avoid errors on repeated execution (note that CREATE IF NOT EXISTS silently does nothing when the object already exists, even with a different definition, so it might hide things)
No more need for “FROM dual”
Domain datatype extensions – allow named column types with integrated constraints and defaults, used schema-wide, with extensions for display- and order-specific functions (for instance an INTERVAL DAY datatype that displays in minutes)
Annotations for columns/tables/domains – a kind of extension of comments, in name–value pair format
Multiple INSERT values in a single command – also helpful for generating data
GROUP BY using a column alias
sum/floor/ceil/avg on interval and other datatypes
FUZZY_MATCH function to replace LIKE when looking for similar results (without needing Oracle Text)
JSON converted to INSERT statements
BOOLEAN usable everywhere (e.g. if a function returns BOOLEAN, it is possible to do “select … from … where func_name(input)”)
SQL_TRANSPILER parameter injects the content of a function directly into the SQL statement, allowing the optimizer to better improve the query (default false)
GRANT … ANY … on Schema
Reminder to use NO AUTHENTICATION for owner schemas and proxy users instead of sharing passwords.
Mike Dietrich went on with the 23c new features in Data Pump:
New dumpfile format with special functionalities for OCI Object Storage
INCLUDE and EXCLUDE can now be combined in the same impdp/expdp run
All transportable export/import operations can now run in parallel
Checksums for dump files
REMAP_TABLESPACE=%:USERS (wildcard usage)
Recommendation to gather SYS + SYSTEM schema stats instead of using “gather_dictionary_stats”
Always install the Data Pump merge patch. It is not part of an RU because it is not RAC-rolling installable
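As a hypothetical sketch of the first points, a Data Pump parameter file combining INCLUDE and EXCLUDE with dump file checksums might look like this (the schema and table names are invented; CHECKSUM and CHECKSUM_ALGORITHM are available from Oracle 21c on):

```
# expdp parameter file - hypothetical sketch
SCHEMAS=HR
INCLUDE=TABLE:"IN ('EMPLOYEES','DEPARTMENTS')"
EXCLUDE=STATISTICS
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=hr_%U.dmp
CHECKSUM=YES
CHECKSUM_ALGORITHM=SHA256
```

On import, the checksum can then be verified before the dump file is touched, which is useful when dump files travel through object storage.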
Neil Chandler shared the story of a bank that wanted to replace an end-of-life Exadata with something cheaper. Neil spent some time investigating all resource usage – mainly the CPU of the storage servers plus the IO saved by cell offloading – and then calculated the costs. Conclusion: if they want to keep the performance, there is no cheaper option using commodity hardware or an open-source DB.
Franck Pachot, in a tmux terminal session, explained that high values for ‘load’ in the Linux ‘top’ command do not mean the machine is overloaded. My main takeaways were:
Recent kernels include the PSI (Pressure Stall Information) facility, not enabled by default, which can show better whether there is pressure and where it is happening.
Different types of IO – specifically a type of direct IO – can count towards the load average.
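On kernels with PSI enabled, the pressure data shows up under /proc/pressure/{cpu,memory,io} in a simple key=value format. A minimal sketch of reading that format (the sample line is illustrative, not from a real system):

```python
# Parse one line of the /proc/pressure/{cpu,memory,io} format, e.g.:
#   some avg10=0.00 avg60=0.12 avg300=0.05 total=123456
def parse_psi_line(line: str) -> dict:
    scope, *fields = line.split()  # scope is "some" or "full"
    values = dict(f.split("=") for f in fields)
    return {"scope": scope, **{k: float(v) for k, v in values.items()}}

sample = "some avg10=0.00 avg60=0.12 avg300=0.05 total=123456"
stats = parse_psi_line(sample)
print(stats["avg60"])  # → 0.12
```

The avg10/avg60/avg300 fields are the percentage of time tasks were stalled over the last 10, 60 and 300 seconds, which is what tells you whether the "load" is actually pressure.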
Erik van Roon is a fan of SQLcl and convinced the audience by showing why it is superior to SQL*Plus. Check many of its features here or here or here.
Fernando Simon moved from Exadata to ExaCC and shared his thoughts about the new experience:
ExaCC offers good automation – click-style actions for inexperienced DBAs
ExaCC lacks basic checks, like whether there is enough space to back up the OH before a patch [why does it back up the OH? does it not patch out-of-place?]
No integration of self-created OH images
CPU scale-up works well and is immediate, but scale-down has a hard minimum wait time of 2h since the last scale-up
There is no documented tool that allows creating a DB with all parameters (block size, charset, silent mode with password)
Even the Zero Downtime Migration tool resorts to the workaround of creating a dummy DB, dropping it manually and creating it again manually using DBCA. DBs created directly with DBCA will not be seen in OCI; there is no ‘discovery’ command
No possibility to use DBs with a different domain in the name in the same ExaCC without unofficial workarounds
ExaCC runs all admin operations serially, which can be slow
Dani Schneider showed the new SQL Analysis Report and how it provides hints about bad SQL when running an explain plan.
On my client’s infrastructure, for improved performance, we need to make sure each VM is pinned to different physical cores.
For the moment we make the changes using the web interface. However, to easily read the current status, I wrote a Python script that draws a nice table showing where each VM is running and to which cores its vCPUs are pinned.
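The script itself is tied to that infrastructure's management API, but the table-drawing part can be sketched roughly like this (the VM names, hosts and pinnings below are a hard-coded example, not real data):

```python
# Draw a simple text table of VM -> host and vCPU-to-core pinning.
# In the real script the data comes from the management web interface;
# here it is a hard-coded illustrative example.
def pinning_table(vms: dict) -> str:
    header = f"{'VM':<10} {'Host':<8} {'vCPU -> core'}"
    lines = [header, "-" * len(header)]
    for name, info in sorted(vms.items()):
        pins = ", ".join(f"{v}->{c}" for v, c in info["pins"].items())
        lines.append(f"{name:<10} {info['host']:<8} {pins}")
    return "\n".join(lines)

example = {
    "vm01": {"host": "node1", "pins": {0: 2, 1: 3}},
    "vm02": {"host": "node2", "pins": {0: 4, 1: 5}},
}
print(pinning_table(example))
```

A plain-text table like this makes overlapping pinnings (two VMs sharing a physical core) easy to spot at a glance.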
It all started with the connections from Goldengate to DB2 not working anymore:
2023-09-11 15:31:27 WARNING OGG-00552 Database operation failed: Couldn't connect to DB2_DB1. ODBC error: SQLSTATE 08001 native database error -30073. [IBM][CLI Driver] SQL30073N "0x0000" Parameter value "0x0000" is not supported. SQLSTATE=58017.
Failed to open data source DB2_DB1 for user GG01.
The DB2 administrators said: “We changed nothing. Maybe you changed something?” And the DB2 documentation says nothing about this error.
The next email from the DB2 admins: “well, we did activate the option TCPALVER=SERVER_ENCRYPT and maybe that is the problem”.
Indeed, that was the problem. With that option on the server side:
SERVER_ENCRYPT (recommended)
A user ID and password are required for connection requests. Kerberos tickets are also accepted. Also, one of the following conditions must be true:
The user ID and password is AES (Advanced Encryption Standard)-encrypted.
…
Non-encrypted security credentials are not accepted unless the connection is secured by the TCP/IP network. DES-based (Data Encryption Standard) encryption is also considered insecure.
OK, I needed to activate AES encryption of the username and password on my client side. Not so difficult:
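In my case that meant telling the IBM CLI driver to use encrypted authentication. A hypothetical sketch of the relevant db2cli.ini entry (the data source name matches the one in the error above; check the IBM documentation for the exact keywords of your driver version – with SERVER_ENCRYPT, the AES/DES choice of the credential encryption is negotiated with the server):

```
; db2cli.ini - hypothetical sketch
[DB2_DB1]
Authentication=SERVER_ENCRYPT
```

After this change the driver sends the user ID and password encrypted, which is what the TCPALVER=SERVER_ENCRYPT setting on the server now demands.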
For physical standbys, there is no DBA_* view that I know of which tracks the history of switchovers and failovers. For logical standbys there is DBA_LOGSTDBY_HISTORY.
Ways to find the information are the Data Guard Broker log – when the Broker is in use – or the database alert log with the following grep command:
$ grep -EB1 '^ALTER DATABASE (SWITCHOVER|FAILOVER) TO' alert_*.log
2023-03-06T14:12:38.905705+01:00
ALTER DATABASE SWITCHOVER TO 'mydb2'
--
2023-03-06T14:37:43.209653+01:00
ALTER DATABASE FAILOVER TO MYDB1
--
2023-03-06T14:38:03.352141+01:00
ALTER DATABASE SWITCHOVER TO PRIMARY (MYDB1)
--
2023-03-11T16:00:22.224218+01:00
ALTER DATABASE SWITCHOVER TO 'mydb2'
These days I’ve been trying Goldengate 21 for DB2 on z/OS (Extract). And there are still surprises!
Last year, with Goldengate 21.6, it would simply corrupt trailfiles if the extract abended.
This year, with Goldengate 21.10, after adding a new extract, the extract abends without any error or explanation. The report file would remain with just this:
2023-06-21 15:11:31 INFO OGG-25184
Using DB2 subsystem DBZOS.
Only in /var/log/messages could one see:
Jun 21 15:11:31 oraclevm1 kernel: extract[281329]: segfault at 0 ip 00007f1c5cca609f sp 00007ffcabd65c28 error 4 in libc-2.17.so[7f1c5cc0b000+1c4000]
An SR was opened, and Oracle said it is fixed with Goldengate 21.11, which would be released in two weeks. Well… almost. With Goldengate 21.11 the extract still crashes, but only about five minutes after the start command. And the error in /var/log/messages is a little different:
Aug 28 13:29:50 oraclevm1 kernel: extract[154775]: segfault at 170 ip 00005569d4916a82 sp 00007ffe69d1e600 error 4 in extract[5569d4415000+aaa000]
Oracle says – yes, sorry. Please try the workaround to create the extract with:
Checking with Oracle, the information is that when doing:
ADD EXTRACT e_ab, TRANLOG, BEGIN NOW
It stores the time we run the command and, when we start the extract, it scans the DB2 log to find the LSN (Log Sequence Number, DB2’s equivalent of the CSN) corresponding to that time. The bug that causes the segmentation fault is in this scan operation. Using EOF, it always uses the end of the transaction log at the time we start the Extract.
The Goldengate documentation will be updated soon to add that EOF is also valid for DB2 for z/OS.
With OPatch version 12.2.0.1.36 for databases (and version 13.9.4.2.11 for Middleware), a new utility was included: obfuscate.
This utility was released to work around the increased security requirements on database servers. We cannot escape having vulnerability scanners run there, and these scanners sometimes do not distinguish between used and unused files.
When patching a database, backup copies of the modified files are kept in $ORACLE_HOME/.patch_storage. Their hashes sometimes trigger the vulnerability scanners, which then report 🚨server not patched ⚠️. Which is misleading.
Starting with OPatch 12.2.0.1.36, released together with the January 2023 Release Update, the backups of patched files are automatically obfuscated.
The new “opatch util obfuscate” allows doing the same for older patches. Let’s see how it works.
When you are a sensible person, you patch your Oracle database regularly. After a few years, maybe you notice that the $ORACLE_HOME has grown from ~7 GB when new to… 26 GB!
$ du -hs /u00/app/oracle/product/19.18.0
26G /u00/app/oracle/product/19.18.0
Looking closer, you find the culprit is the hidden directory $ORACLE_HOME/.patch_storage:
$ du -hs /u00/app/oracle/product/19.18.0/.patch_storage
17G /u00/app/oracle/product/19.18.0/.patch_storage
Today I show how a new OPatch option allowed me to regain 11 GB of disk space in a couple of minutes. In a supported way.
Last year I had already presented other options to recover space after Oracle patching. Below I recall them, but today’s post is about the latest and improved way the OPatch tool has to keep only the patch backups that are usually needed.