Miguel Anjo


We did change nothing. How to set up an AES-encrypted username and password in the DB2 client?

It all started with the connections from Goldengate to DB2 not working anymore:

2023-09-11 15:31:27  WARNING OGG-00552  Database operation failed: Couldn't connect to DB2_DB1. ODBC error: SQLSTATE 08001 native database error -30073. [IBM][CLI Driver] SQL30073N  "0x0000" Parameter value "0x0000" is not supported.  SQLSTATE=58017.
Failed to open data source DB2_DB1 for user GG01.

The DB2 administrators say: “We changed nothing. Maybe you changed something?” And the DB2 documentation says nothing about this error.

Next email from the DB2 admins: “well, we did activate the option TCPALVER=SERVER_ENCRYPT and maybe that is the problem”.

Indeed, that was the problem. With that option on the server side, the DB2 documentation states:

SERVER_ENCRYPT (recommended)


A user ID and password are required for connection requests. Kerberos tickets are also accepted. Also, one of the following conditions must be true:

The user ID and password is AES (Advanced Encryption Standard)-encrypted.

Non-encrypted security credentials are not accepted unless the connection is secured by the TCP/IP network. DES-based (Data Encryption Standard) encryption is also considered insecure.

OK, I needed to activate AES encryption of the username and password on my client side. Not so difficult:

db2cli writecfg add -dsn DB2_DB1  -parameter "Authentication=SERVER_ENCRYPT_AES"

And voilà, the connection to DB2 works again, and it is secured.
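
To double-check what was written (the setting ends up in db2dsdriver.cfg or db2cli.ini, depending on the client setup), db2cli can also validate the DSN entry; a quick sanity check, whose output varies by driver version:

db2cli validate -dsn DB2_DB1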


Get history of Switchover and Failover of an Oracle database

For physical standbys, there is no DBA_* view that I know of which tracks the history of switchovers and failovers. For logical standbys there is DBA_LOGSTDBY_HISTORY.

Ways to find the information are the Data Guard Broker log, when the Broker is in use, or the database alert log together with the following grep command:

$ grep -EB1 '^ALTER DATABASE (SWITCHOVER|FAILOVER) TO' alert_*.log

2023-03-06T14:12:38.905705+01:00
ALTER DATABASE SWITCHOVER TO 'mydb2'
--
2023-03-06T14:37:43.209653+01:00
ALTER DATABASE FAILOVER TO MYDB1
--
2023-03-06T14:38:03.352141+01:00
ALTER DATABASE SWITCHOVER TO PRIMARY (MYDB1)
--
2023-03-11T16:00:22.224218+01:00
ALTER DATABASE SWITCHOVER TO 'mydb2'
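
If you prefer SQL over grep, the alert log is also exposed through the V$DIAG_ALERT_EXT view on recent releases; a rough equivalent of the command above, to be adapted to your version:

SQL> select originating_timestamp, message_text
       from v$diag_alert_ext
      where message_text like 'ALTER DATABASE SWITCHOVER TO%'
         or message_text like 'ALTER DATABASE FAILOVER TO%';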


Goldengate 21 for DB2 on z/OS and segmentation fault

These days I’ve been trying Goldengate 21 for DB2 on z/OS (Extract). And there are still surprises!

Last year, with Goldengate 21.6, it would simply corrupt trail files if the Extract abended.

This year, with Goldengate 21.10, after adding a new Extract, the Extract abends without any error or explanation. The report file ends with just this:

2023-06-21 15:11:31 INFO OGG-25184

Using DB2 subsystem DBZOS.

Only in /var/log/messages could one see:

Jun 21 15:11:31 oraclevm1 kernel: extract[281329]: segfault at 0 ip 00007f1c5cca609f sp 00007ffcabd65c28 error 4 in libc-2.17.so[7f1c5cc0b000+1c4000]

An SR was opened, and Oracle said it is fixed with Goldengate 21.11, to be released in two weeks. Well… almost. With Goldengate 21.11 the Extract still crashes, but only about five minutes after the start command. And the error in /var/log/messages is a little different:

Aug 28 13:29:50 oraclevm1 kernel: extract[154775]: segfault at 170 ip 00005569d4916a82 sp 00007ffe69d1e600 error 4 in extract[5569d4415000+aaa000]

Oracle’s answer: yes, sorry. Please try the workaround of creating the Extract with:

ADD EXTRACT e_ab, TRANLOG, EOF

When I check the Goldengate 21 documentation for “ADD EXTRACT”, this option is not listed as valid for DB2 for z/OS, but OK. I try it and it works.
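
For context, the surrounding GGSCI steps are the usual ones; a sketch only (the trail name ./dirdat/ab is just an example and the Extract parameter file is not shown):

ADD EXTRACT e_ab, TRANLOG, EOF
ADD EXTTRAIL ./dirdat/ab, EXTRACT e_ab
START EXTRACT e_ab
INFO EXTRACT e_ab, DETAIL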

Checking with Oracle, the information is that when doing:

ADD EXTRACT e_ab, TRANLOG, BEGIN NOW

It stores the time we run the command and, when we start the Extract, it scans the DB2 log to find the LSN (Log Sequence Number, the DB2 equivalent of the CSN) corresponding to that time. The bug that causes the segmentation fault is in this scan operation. With EOF, positioning is always done at the end of the transaction log at the time we start the Extract.

The Goldengate documentation will be updated soon to add that EOF is also valid for DB2 for z/OS.


My ADRCI cheatsheet

Check last problems

adrci> show problem -last 10
adrci> show incident -last 10

adrci> show incident -p "incident_id=180781" -mode detail

Create Package

adrci> ips create package problem 5
adrci> ips generate package 1 in "/tmp"

Purge

Value for control is hours

adrci> show control
adrci> set control (SHORTP_POLICY = 720)   /* 30 days */
adrci> set control (LONGP_POLICY= 8760)   /* 1 year */

Value for purge is minutes

adrci> purge -age 1440  /* 1 day */
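
The purge can also be limited to a single artifact type, for example to remove only trace files older than 30 days (value still in minutes):

adrci> purge -age 43200 -type trace  /* 30 days, traces only */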

Fix “No ADR base is set”

Sometimes also “DIA-48494: ADR home is not set”

SQL> select value from v$diag_info where name='ADR Base';
VALUE
------------------
/u01/app/oracle

Create a ${ORACLE_HOME}/log/diag/adrci_dir.mif file with:

$ mkdir -p ${ORACLE_HOME}/log/diag/
$ printf "%s\0" "/u01/app/oracle" > ${ORACLE_HOME}/log/diag/adrci_dir.mif

DIA-48448: This command does not support multiple ADR homes

Use set home diag/xxxxx (remove the ORACLE_BASE path)

adrci> show problems -last 10

ADR Home = /u01/app/oracle/diag/rdbms/mydb1/MYDB1:
...

adrci>  set home diag/rdbms/mydb1/MYDB1

Opatch now obfuscates its own backups – the new “opatch util Obfuscate” option explained

With OPatch version 12.2.0.1.36 for databases (and version 13.9.4.2.11 for Middleware), a new utility was included: obfuscate.

This utility was released to work around the increased security requirements on database servers. We cannot escape having vulnerability scanners run there, and these scanners sometimes do not distinguish between used and unused files.

When patching a database, backup copies of the modified files are kept in $ORACLE_HOME/.patch_storage. Their hashes sometimes trigger the vulnerability scanners, which then report 🚨 server not patched ⚠️. Which is misleading.

Starting with OPatch 12.2.0.1.36, released together with the January 2023 Release Update, the backups of patched files are automatically obfuscated.

The new “opatch util obfuscate” command allows doing the same for older patches. Let’s see how it works.
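
From the option name, the basic invocation looks like this (my assumption; check opatch util -help in your OPatch version for the exact arguments):

$ $ORACLE_HOME/OPatch/opatch util obfuscate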

Read More

New “OPatch util DeleteInactivePatches” tool for reducing the $ORACLE_HOME size

If you are a sensible person, you patch your Oracle database regularly. After a few years, maybe you notice that the $ORACLE_HOME has grown from ~7 GB when freshly installed to… 26 GB!

$ du -hs /u00/app/oracle/product/19.18.0
26G     /u00/app/oracle/product/19.18.0

Looking closer, you find the culprit is the hidden directory $ORACLE_HOME/.patch_storage:

$ du -hs /u00/app/oracle/product/19.18.0/.patch_storage
17G     /u00/app/oracle/product/19.18.0/.patch_storage

Today I show how a new option from OPatch allowed me to regain 11 GB of disk space in a couple of minutes. In a supported way.

Last year I had already presented other options to recover space after Oracle patching. Below I recall them, but today’s post is about the latest and improved way the OPatch tool has to keep only the patch backups that are usually needed.
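
As with the obfuscate option, the call itself is short; a sketch based on the tool name, the exact switches may differ between OPatch versions:

$ $ORACLE_HOME/OPatch/opatch util deleteinactivepatches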

Read More

How to workaround Oracle Text primary key limitations (and DRG-10528)

One of my clients had a quite simple wish: to try Oracle Text for an existing application. Oracle Text allows using “standard SQL to index, search, and analyze text and documents stored in the Oracle database, in files, and on the web”.

It seemed simple, until we tried to implement it on a table named after the city where I studied: T_COIMBRA

create table T_COIMBRA (COL_ID timestamp not null, COL_TXT varchar2(100));

create unique index COIMBRA_IDX_ID on T_COIMBRA(COL_ID);

alter table T_COIMBRA add constraint PK_COIMBRA primary key (COL_ID) using index COIMBRA_IDX_ID;

create index COIMBRA_IDX_TXT ON T_COIMBRA(COL_TXT) indextype is ctxsys.context;

Nothing special, it seems. But we get an error:

create index COIMBRA_IDX_TXT ON T_COIMBRA(COL_TXT) indextype is ctxsys.context

Error report -
ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-20000: Oracle Text error:
DRG-10528: primary keys of type TIMESTAMP(6) are not allowed
ORA-06512: at "CTXSYS.DRUE", line 186
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 320
29855. 00000 -  "error occurred in the execution of ODCIINDEXCREATE routine"
*Cause:    Failed to successfully execute the ODCIIndexCreate routine.
*Action:   Check to see if the routine has been coded correctly.

Below I show how to work around this and keep an “unsupported” column as the unique identifier of the table.
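
As a quick control test that only the key datatype is the problem, the same DDL on a test table with a NUMBER key (hypothetical names) creates the CONTEXT index without complaint; this is not the workaround of this post, just a sanity check:

create table T_TEST (COL_ID number not null, COL_TXT varchar2(100));
create unique index TEST_IDX_ID on T_TEST(COL_ID);
alter table T_TEST add constraint PK_TEST primary key (COL_ID) using index TEST_IDX_ID;
create index TEST_IDX_TXT on T_TEST(COL_TXT) indextype is ctxsys.context;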

Read More

Upgrade to 19c: only 6 months left with support from Oracle

On 7 June 2023, Oracle updated the Release Schedule of Current Database Releases (Doc ID 742060.1), clarifying that it is possible only until December 2023 to get support from Oracle ACS to upgrade databases to version 19c, the current stable release.

Below is the updated slide from Oracle showing the support timeline of each version. Oracle 19c also has “Waived Extended Fee” support until 30 April 2025, even if that is not yet marked on the slide.

Don’t miss this opportunity!


Can ChatGPT steal our (Oracle DBA) job?

I would not say it will steal our job, but in some cases it makes it easier to write small pieces of code without having to think much. Well, it helps us be lazy first, before taking our job. The latest example, from today:

Request ChatGPT to write a trigger that enables BCT after the database opens.

The power is amazing. After looking at the result, you think about a small detail, ask again and it just answers:

Ask ChatGPT to improve so it triggers only when a switchover happened.

But one must always test the answers carefully. Like in this case, where the EVENT_TYPE column does not exist in V$DATAGUARD_STATUS. No idea where it got that from. I also got non-existing syntax and complex solutions that could easily be made simpler.
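
For the record, the kind of trigger being discussed would look more or less like this; my own hedged sketch, not ChatGPT’s output, enabling BCT when the database opens as primary and BCT is still disabled:

CREATE OR REPLACE TRIGGER trg_enable_bct
AFTER STARTUP ON DATABASE
DECLARE
  v_role   VARCHAR2(30);
  v_status VARCHAR2(30);
BEGIN
  -- act only on a primary where block change tracking is still disabled
  SELECT database_role INTO v_role FROM v$database;
  SELECT status INTO v_status FROM v$block_change_tracking;
  IF v_role = 'PRIMARY' AND v_status = 'DISABLED' THEN
    -- assumes db_create_file_dest is set; otherwise add USING FILE '<path>'
    EXECUTE IMMEDIATE 'ALTER DATABASE ENABLE BLOCK CHANGE TRACKING';
  END IF;
END;
/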


Root mailbox huge on Exadata after upgrade to OL7

Finally I got to work a bit more with an Exadata. And to discover its lesser-known “features”.

Our monitoring just detected “/ diskspace 6% free (1.9/23.5 GB) (DiskWarning)”. Looking for the guilty folder or file, I ended up here:

# for d in /*; do egrep " ${d} " /proc/mounts > /dev/null || du -sh ${d}; done
...
15G     /var

# cd /var/
# du -hs * | sort -h
...
12G     spool

# cd spool/
# du -hs * | sort -h
...
12G     mail

# cd mail/
# du -hs * | sort -h
...
12G     root

The huge file was /var/spool/mail/root – the mailbox of the root user.

Trying to open it just created another small problem:

# mail
/tmp: No space left on device

(A parenthesis here: on this Exadata VM we have DBFS running. The check that the CRS does for DBFS, the official mount-dbfs.sh script, also writes to /tmp. If /tmp is full when the DBFS check runs, it fails, and the clusterware moves DBFS to another node. Goldengate, which was using DBFS, just crashed.)

The root mailbox is just a text file, so using other tools I saw it is full of emails coming from “0logwatch”. This matches something in cron.daily:

# ls -l /etc/cron.daily/

-rwx------ 1 root root  408 Apr 15  2022 0logwatch

Looking on Metalink, we quickly end up on this note:

Output of Daily Cronjob 0logwatch Sent to Mail instead of Saving to File on Oracle Linux 7 (Doc ID 2564364.1)

The problem comes from an Exadata update, which did not take into account the syntax change of the /etc/logwatch/conf/logwatch.conf file on OL7. What is a pity is that later Exadata patches did not seem to care about fixing it.
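
A possible cleanup, assuming the queued mails themselves are not needed: truncate the mailbox and point the logwatch output to a file instead of mail. The two configuration lines below are standard logwatch directives given only as an illustration; Doc ID 2564364.1 describes the exact change expected by the 0logwatch script.

# cat /dev/null > /var/spool/mail/root
# vi /etc/logwatch/conf/logwatch.conf
Output = file
Filename = /var/log/logwatch.log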