Exadata


ORA-64307 when creating compressed table and /home

My customer running on ExaCC (Exadata Cloud@Customer) was getting “ORA-64307: Exadata Hybrid Columnar Compression is not supported for tablespaces on this storage type” on one of his test databases.

I first tested while connected as SYS and had no problem. Then I tried the same using his tablespace and indeed got the same ORA-64307 error.
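For reference, here is a minimal sketch of the kind of statement that triggers it (table and tablespace names are made up for illustration):

$ sqlplus -s / as sysdba <<'SQL'
-- HCC compression attempted in the customer's tablespace (name made up)
-- fails with: ORA-64307: Exadata Hybrid Columnar Compression is not supported
--             for tablespaces on this storage type
CREATE TABLE hcc_test
  TABLESPACE user_data
  COMPRESS FOR QUERY HIGH
  AS SELECT * FROM dba_objects;
SQL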

I went around in circles checking what was different about his tablespace compared to the others. I tested with a freshly created tablespace and it worked.

Strange. Until I found that… some datafiles were not in ASM!

It seems the ASM diskgroup was almost full and the client DBA had simply put some datafiles somewhere else, on a local filesystem (under /home, hence the title of this post). Tablespaces with datafiles outside Exadata storage cannot use HCC, hence the ORA-64307.
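To spot such files quickly, a query along these lines lists datafiles that are not stored in ASM (a generic sketch, not the exact check from this troubleshooting; ASM file names always start with '+'):

$ sqlplus -s / as sysdba <<'SQL'
SET LINESIZE 200 PAGESIZE 100
COLUMN file_name FORMAT A80
-- any datafile whose name does not start with '+' lives outside ASM
SELECT tablespace_name, file_name
  FROM dba_data_files
 WHERE file_name NOT LIKE '+%';
SQL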


Using AI to confirm a wrongly cabled Exadata switch – or how to fix verify_roce_cables.py script for Python3.

One of the preparation steps when installing an Exadata X10M is to verify that the cabling of the RoCE switches is correctly done. The next step is to upgrade the Cisco switches with the latest firmware. During my intervention for Tradeware at the customer, the first step didn't work because the provided script is not compatible with Python 3, and the second complained about wrong cabling.

Here I show how I investigated the wrong cabling of the X10M switches and how I used Claude.ai (ChatGPT and other AI tools would probably work too) to quickly fix the Python script provided by Oracle.


Oracle postpones release of 23ai on-premises to 2H CY2024

Oracle just updated the Release Schedule of Current Database Releases (Doc ID 742060.1) and moved the release date of database version 23ai on-premises to the second half of the year. Let's see how many months, and how much bug fixing, that means. 🙂

Update on 20.06.2024 – “Added new release dates for Oracle Autonomous Database – Dedicated Exadata Infrastructure, Autonomous Database on Exadata Cloud@Customer, ODA, Exadata and Linux-x86 64”


Root mailbox huge on Exadata after upgrade to OL7

Finally I got to work a bit more with an Exadata, and to discover some of its lesser-known “features”.

Our monitoring just detected “/ diskspace 6% free (1.9/23.5 GB) (DiskWarning)”. Looking for the guilty folder or file, and summing up each top-level directory that is not a separate mount point, I ended up here:

# for d in /*; do egrep " ${d} " /proc/mounts > /dev/null || du -sh ${d}; done
...
15G     /var

# cd /var/
# du -hs * | sort -h
...
12G     spool

# cd spool/
# du -hs * | sort -h
...
12G     mail

# cd mail/
# du -hs * | sort -h
...
12G     root

The huge file was /var/spool/mail/root – the mailbox of the root user.

Trying to open it just created another small problem:

# mail
/tmp: No space left on device

(A parenthesis here: on this Exadata VM we have DBFS running. The check done by the CRS for the DBFS resource, the official mount-dbfs.sh script, also writes to /tmp. If /tmp is full when the DBFS check runs, it fails, and the clusterware moves DBFS to another node. GoldenGate, which was using DBFS, just crashed.)
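If you still want to open the mailbox with mail itself, a small workaround, assuming your mail/mailx honours the TMPDIR environment variable (heirloom mailx on OL7 does), is to point its temporary files to a filesystem with enough free space:

# TMPDIR=/var/tmp mail -f /var/spool/mail/root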

The root mailbox is just a plain mbox text file, so with other tools (see the grep sketch after the listing below) I saw that it is full of emails coming from “0logwatch”. This matches something in cron.daily:

# ls -l /etc/cron.daily/

-rwx------ 1 root root  408 Apr 15  2022 0logwatch
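As for those “other tools”: since mbox is plain text and every message starts with a “From ” line, grep alone gives a quick overview of what fills the mailbox (a generic sketch, nothing Exadata-specific):

# grep -c "^From " /var/spool/mail/root
# grep "^Subject:" /var/spool/mail/root | sort | uniq -c | sort -rn | head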

Looking on Metalink, we quickly end up on this note:

Output of Daily Cronjob 0logwatch Sent to Mail instead of Saving to File on Oracle Linux 7 (Doc ID 2564364.1)

The problem comes from an Exadata update, which did not take into account the syntax change of the /etc/logwatch/conf/logwatch.conf file introduced with OL7. What is a pity is that later Exadata patches did not seem to bother fixing it.
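To see which old-style directives the file currently contains before adapting it along the lines of the MOS note, and to get the 12G back once the configuration is fixed, something like this works. The directive names to look for (Output and Filename in logwatch 7, versus the older Print and Save) are my assumption; check the note for the exact values Oracle recommends, and back the mailbox up first if its content matters:

# grep -Ei "output|filename|save|print|mailto" /etc/logwatch/conf/logwatch.conf
# > /var/spool/mail/root    # truncate the root mailbox to reclaim the space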