How to reclaim storage space on Oracle Autonomous Database

One should pay only for what one uses. This is also a motto of the cloud, and Oracle pushes this model with per-second billing.

Concerning disk space, however, it is not always that easy. While the price per terabyte keeps getting cheaper, sometimes you do a big cleanup of your database and would then like to pay only for the space actually being used.

On Oracle Autonomous Database it is the sum of the datafile sizes that counts.

Imagine now that you have a huge table and then drop it. The datafile space is not recovered.

In order to recover the space you need to:

  • Purge the recycle bin:

SQL> purge dba_recyclebin;

  • Reduce the size of the DATA tablespace datafile:

SQL> alter database datafile <file_id> resize yyyM;

Now, this will only be possible if there are no used extents at the end of the datafile. Otherwise, one can try alter table <table_name> move online; and then alter tablespace <tbs_name> coalesce; but this is not guaranteed to help (see the sketch below).
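
As a minimal sketch, assuming a single table MY_BIG_TABLE in the DATA tablespace and the DATA file_id 3255 from the output below (the table name and the target size are placeholders, not from a real system), the whole sequence could be:

-- Placeholder names and sizes, for illustration only
purge dba_recyclebin;
alter table MY_BIG_TABLE move online;        -- rebuild the table, releasing its old extents
alter tablespace DATA coalesce;              -- merge adjacent free extents
alter database datafile 3255 resize 100M;    -- shrink the datafile to the new target size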

During my tests I only had one table, which made things easier.

Let’s hope that Oracle either changes the way the used space is calculated or provides a way to continuously defragment a datafile and make its size dynamic.

To check the storage used on Autonomous Database and find the datafile file_id, you can run the following query:

-- Get Space used by tablespace and file_id
select TBS "File_ID-Tablespace",
  round(sum(bytes)/1024/1024/1024,2) USED_GB,
  round(sum(bytes)/max(PROPERTY_VALUE)*100) PCT 
from
  (select file_id||'-'||tablespace_name TBS, bytes 
    from DBA_DATA_FILES
    where tablespace_name!='SAMPLESCHEMA'),
  (select PROPERTY_VALUE 
    from DATABASE_PROPERTIES 
    where PROPERTY_NAME = 'MAX_PDB_STORAGE')
group by rollup(TBS);

FILE_ID-TABLESPACE USED_GB PCT 
------------------ ------- --- 
3252-SYSTEM           0.41   2 
3253-SYSAUX           3.16  16 
3254-UNDOTBS1         0.44   2 
3255-DATA              0.1   0 
3256-DBFS_DATA         0.1   0 
                       4.2  21 

-- Get Total space used by DB
select round(USED_BYTES/1024/1024/1024,2) USED_GB,
  round(MAX_BYTES/1024/1024/1024,2) MAX_GB,
  round(USED_BYTES/MAX_BYTES*100,2) PCT_USED 
from
  (select PROPERTY_VALUE MAX_BYTES
    from DATABASE_PROPERTIES 
    where PROPERTY_NAME = 'MAX_PDB_STORAGE'),
  (select sum(BYTES) USED_BYTES
    from DBA_DATA_FILES 
    where TABLESPACE_NAME != 'SAMPLESCHEMA');

USED_GB MAX_GB PCT_USED 
------- ------ -------- 
    4.2     20    21.01 

.anjo


Oracle Cloud: what parameters make ATP different from ADW?

Using the Free Tier of Oracle Cloud, I created one Autonomous DB of each type – one Autonomous Transaction Processing and one Autonomous Data Warehouse (Autonomous JSON was not yet available). Then I ran

select name, display_value
from v$parameter
where isdefault='FALSE'
order by 1;

on each of the DBs and got the following differences (empty means not set):

Parameter                        ATP              ADW
-------------------------------- ---------------- ----------------
optimizer_ignore_hints                            TRUE
optimizer_ignore_parallel_hints                   TRUE
parallel_degree_policy                            AUTO
parallel_min_degree                               CPU
pdb_lockdown                     OLTP             DWCS
pga_aggregate_target             3000M            5100M
resource_manager_plan            FORCE:OLTP_PLAN  FORCE:DWCS_PLAN
result_cache_max_result                           1
result_cache_mode                MANUAL           FORCE
sga_target                       8000M            3400M

Both databases (PDBs) share the same container database (CDB).
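
One way to verify this from inside each PDB is a query like the one below (a small sketch; it assumes SYS_CONTEXT is not restricted on your service):

-- Show the PDB name and the CDB it runs in (run in both the ATP and the ADW)
select sys_context('USERENV','CON_NAME') as pdb_name,
       sys_context('USERENV','CDB_NAME') as cdb_name
from dual;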

I also checked

select *
from database_properties;

but there were no differences in the initial state.

Something I found interesting: I had a two-month-old ATP when I created the ADW. I immediately saw that my old ATP was not using ASM, unlike the ADW, and that the ADW was a cluster database while the old ATP was a single instance.

I recreated the ATP to check if this remained the case. It did not: my new ATP was co-located on the same container database as the ADW, so the parameters are mostly the same, as we could see above.

For historical reasons, I leave here the parameter differences between the ATP created in June 2020 and the one created at the end of August 2020. For paths, only the differing parts are shown:

Parameter                    ATP June 2020             ATP August 2020
---------------------------- ------------------------- --------------------------------------------
audit_file_dest              /u01/…                    /u02/…
cluster_database             FALSE                     TRUE
control_files                /u01/…,/u05/…             +DATA/…,+RECO/…
db_create_file_dest          /u01/app/oracle/oradata   +DATA
db_name                      FBPOD1                    feio1pod
db_recovery_file_dest        /u05/fra                  +RECO
db_recovery_file_dest_size   5715584748749             21167764M
diagnostic_dest              /u01/app/oracle           /u02/app/oracle
drcp_connection_limit                                  600
external_keystore_credential /u01/…/wallets/tde_seps   +DATA/encrypt_wallet/tde_seps
gcs_server_processes         0                         4
instance_number                                        2
log_archive_dest_1                                     LOCATION=USE_DB_RECOVERY_FILE_DEST MANDATORY
log_archive_dest_state_4                               ENABLE
log_archive_dest_state_5                               ENABLE
log_archive_dest_state_6                               ENABLE
max_pdbs                                               4096
processes                    20000                     40000
sga_max_size                 321G                      301G
thread                                                 2

Oracle 11.1 on Windows – ORA-02778

I’ve had a funny challenge these days:

Recover an Oracle 11.1.0.6 on Windows installation. I have all the files on an external drive and I’m trying to make the DB run on my laptop.

  • I copied the files to my local disk;
  • Added the registry entries I found necessary;
  • Re-created the inventory;
  • Re-created an init file with the correct locations taken from the spfile.

Now I still had this error:

SQL> startup nomount
ORA-02778: Name given for the log directory is invalid

Nothing in the logs, and all directories mentioned in the pfile exist and are writable.

I downloaded Microsoft Process Monitor, configured it to trace only processes from the Oracle company, and looked at what they tried to read.

Finally I found the culprit: %ORACLE_HOME%\RDBMS\log was missing. Just the log subfolder.
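
Recreating the missing subfolder is enough to get past the error (a minimal sketch, assuming ORACLE_HOME points to the copied home):

mkdir "%ORACLE_HOME%\RDBMS\log"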

This happened during the xcopy to the local disk: I had excluded everything named “log”, as there were some huge log files. Unfortunately that also excluded this folder. After recreating it, things look better:

SQL> startup nomount
ORACLE instance started.

Total System Global Area 1720328192 bytes
Fixed Size                  2130160 bytes
Variable Size             989859600 bytes
Database Buffers          721420288 bytes
Redo Buffers                6918144 bytes
SQL>

SQL query for queue table with priorities

Queue tables are used to take care of events. There is one mechanism that inserts rows and another that processes the existing rows, usually deleting them at the end.

When possible, one should use the Oracle Advanced Queuing mechanism, which takes care of managing the queue: a simple request gives you the next element in line.

Some applications implement their own queuing systems, and these are a good way to learn how queues work.

Imagine a table with a list of elements and two specific columns:

  • order of arrival
  • priority

The first to be served is the one that, among those with the highest priority, was the first to arrive.
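
As a minimal sketch (EVENT_QUEUE, ARRIVAL_ID and PRIORITY are hypothetical names, and a lower PRIORITY value is assumed to mean more urgent), the query to pick the next element could look like this on 12c or later:

-- Next in line: best priority first, oldest arrival within that priority
select *
from event_queue
order by priority, arrival_id
fetch first 1 row only;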


Impact of Azure-Oracle OCI interconnect on a multi-cloud solution

Introduction

In June 2019, Oracle and Microsoft announced a cloud interoperability partnership which enables workloads to span Microsoft Azure and Oracle Cloud. By creating a first joint multi-cloud solution, the software giants can each continue to provide the best of their services. At the same time, customers do not need to decide which vendor to drop when moving their on-premises systems.

Being an Oracle Database Administrator myself, I wrote this article to check the impact of distributing resources across multiple clouds, with the databases remaining on Oracle Cloud Infrastructure.

It is not the aim of this article to discuss the cost of resources on either cloud.


OCI / Azure and Oracle DB: firewall and security rules

Many of the problems with VMs in the cloud are related to firewall or security rules. Here are some elements that might help.

These examples are based on Oracle Linux 7.7 virtual machines.

Firewall: check if it is running

When it is loaded and running (active), it shows like this:

[opc@alfama ~]$ sudo systemctl status firewalld 
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-05-15 05:19:12 GMT; 15min ago
Docs: man:firewalld(1)
Main PID: 1640 (firewalld)
Tasks: 2
Memory: 33.2M
CGroup: /system.slice/firewalld.service
└─1640 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid

When it is loaded but stopped:

[opc@alfama ~]$ sudo systemctl status firewalld -l
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Fri 2020-05-15 05:36:09 GMT; 7s ago
Docs: man:firewalld(1)
Process: 1640 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
Main PID: 1640 (code=exited, status=0/SUCCESS)

Firewall: check ports open

[opc@boavista ~]$ sudo firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: dhcpv6-client ssh
  ports: 1521/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

Firewall: open Oracle listener port

If we want to open the default listener port (1521), two commands are needed: one to add the rule permanently and another to reload the firewall so that the new rule becomes active:

[opc@mouraria bin]$ sudo firewall-cmd --zone=public --permanent --add-port=1521/tcp
[opc@mouraria bin]$ sudo firewall-cmd --reload

Firewall: allow ping responses

Ping uses a separate protocol (ICMP). To allow the VMs to answer pings, this rule needs to be added to the firewall:

[opc@mouraria bin]$ sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p icmp -s 0.0.0.0/0 -d 0.0.0.0/0 -j ACCEPT
[opc@mouraria bin]$ sudo systemctl restart firewalld.service

On OCI, to be able to ping between hosts, it might also be necessary to add an ingress security rule that allows pings. The most important parts are the protocol, ICMP, and type 8 (echo request); the source CIDR should not be larger than the VCN CIDR.
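
Purely as an illustration (the source CIDR below is a placeholder; use one that matches your own VCN), the ingress rule would contain roughly these values:

Source CIDR : 10.1.0.0/16   (not larger than the VCN CIDR)
IP Protocol : ICMP
Type        : 8 (Echo Request)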

Test if a port is open

To check if a remote port is open, I use the following commands. They work both with IPs and with hostnames, and also let you know when the port is open but the listener is not running (on that port):

[opc@alfama ~]$ ### PORT UNREACHABLE, SERVER DOWN/WRONG? ###
[opc@alfama ~]$ export CHECK_IP="10.1.2.4/1521"
[opc@alfama ~]$ timeout 1 bash -c '</dev/tcp/${CHECK_IP} && echo Port ${CHECK_IP} is open || echo Port ${CHECK_IP} is closed' || echo Connection timeout
Connection timeout

[opc@alfama ~]$ ### PORT OPEN, USING HOSTNAME ###
[opc@alfama ~]$ export CHECK_IP="luz.subnetpriv2/1521"
[opc@alfama ~]$ timeout 1 bash -c '</dev/tcp/${CHECK_IP} && echo Port ${CHECK_IP} is open || echo Port ${CHECK_IP} is closed' || echo Connection timeout
Port luz.subnetpriv2/1521 is open

[opc@alfama ~]$ ### PORT OPEN, USING IP ###
[opc@alfama ~]$ timeout 1 bash -c '</dev/tcp/${CHECK_IP} && echo Port ${CHECK_IP} is open || echo Port ${CHECK_IP} is closed' || echo Connection timeout
Port 10.1.5.3/1521 is open

[opc@alfama ~]$ ### PORT CLOSED ###
[opc@alfama ~]$ export CHECK_IP="luz.subnetpriv2/1522"
[opc@alfama ~]$ timeout 1 bash -c '</dev/tcp/${CHECK_IP} && echo Port ${CHECK_IP} is open || echo Port ${CHECK_IP} is closed' || echo Connection timeout
bash: connect: No route to host
bash: /dev/tcp/luz.subnetpriv2/1522: No route to host
Port luz.subnetpriv2/1522 is closed

[opc@alfama ~]$ ### PORT OPEN but LISTENER DOWN ###
[opc@alfama ~]$ timeout 1 bash -c '</dev/tcp/${CHECK_IP} && echo Port ${CHECK_IP} is open || echo Port ${CHECK_IP} is closed' || echo Connection timeout
bash: connect: Connection refused
bash: /dev/tcp/10.1.5.3/1521: Connection refused
Port 10.1.5.3/1521 is closed

Bonus

On Oracle OCI, one can access the other VMs of the VCN using <hostname>.<subnet>.

On Azure, the <hostname> alone is enough to reach other hosts of the VNet.
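
For example (luz.subnetpriv2 comes from the tests above; the Azure user and hostname are placeholders):

ssh opc@luz.subnetpriv2    # OCI: <hostname>.<subnet>
ssh azureuser@myvm         # Azure: <hostname> alone is enough within the VNet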


SSH tips & tricks

These days I’ve been playing with Oracle Cloud and Azure. It is best practice to have a bastion or jumphost server with a public IP address and to keep all the rest on private networks, only accessible internally.

Also, no passwords are used: only the SSH public key of the user is placed on the servers. This is usually the setup we aim for.

To start with, I use MobaXterm to access my servers. There I have configured my SSH private keys to be loaded:

MobaXterm configuration menu for SSH

Connect to the jumphost server and take your SSH key with you

Use the -A option when connecting:

ssh -A opc@bastion-server

Then you can ssh to the next server in the private subnet without a password. Below is a sketch of connecting without and with this option:
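
(A minimal sketch; bastion-server and private-server are the placeholder names used in this post.)

# Without -A: the private key stays on the workstation, so the next hop has no key to offer
ssh opc@bastion-server
ssh opc@private-server        # run on the bastion: fails at public key authentication

# With -A: the local SSH agent is forwarded to the bastion
ssh -A opc@bastion-server
ssh opc@private-server        # run on the bastion: key authentication now succeeds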

Connect to the private server directly via the jumphost

For this we use both the -A and -J options:

ssh -A -J opc@bastion-server opc@private-server

SCP directly to private server via JumpHost

Here we have to use the ProxyJump option of scp:

scp -o 'ProxyJump opc@bastion-server' file.zip opc@private-server:~/


Attention OPatch 12.2.0.1.19: ‘opatch util cleanup’ command deletes files from /etc, /bin, /lib

On January 28th 2020, I reported to Oracle in a Severity 1 SR that the latest OPatch version tries to delete system files from /etc, /bin and /lib.

I described the use case in the SR and provided the logs. In my case, the opatch cleanup command “just” deleted the configuration files of my backup client in /etc (opatch deletes files that are writable by the user who calls the command – ‘oracle’/‘grid’ in most cases).

The location from which ‘opatch util cleanup’ tries to delete files depends on where you call it from! So, if you call it from /, from $ORACLE_BASE or from $ORACLE_HOME/OPatch, you will get different behaviours!

It took two months until I got an answer to that SR, saying that a bug had been created and that it would be fixed in the next version of OPatch.

Today the April 2020 security patches, the latest Oracle patch bundles, were released. To my surprise, for Oracle versions 12.2, 18c and 19c there is the requirement to use OPatch version 12.2.0.1.19, the exact same version that has this dangerous bug!

The bug reported at the end of January has not yet been fixed! At least, the OPatch readme now contains a warning.

So, be careful until then and do not use ‘opatch util cleanup’. Anyway, this command is no longer needed, as a cleanup of the $ORACLE_HOME/.patch_storage directory (without deleting any wrong files) is done automatically at the end of patching.


Oracle 19c – TFA and Oracle Restart (SIHA)

It took Oracle two months and some persuasion effort to make the analyst read and believe what I had written in the Service Request, but the answer finally came from development:

TFA is disabled in all 19.* SIHA versions. Will let you know once enabled.
You can download TFA from MOS and install it.

My original request in February was:

I’m installing Oracle Restart in silent mode.

I see that TFA is provided both in GI and RDBMS homes under $ORACLE_HOME/suptools/tfa/release/tfa_home

How can I make it install THIS VERSION OF TFA (and not one downloaded from Oracle Support) automatically as part of a silent mode installation?

At the moment, when the root.sh from the RDBMS installation runs, it calls both
. /u00/app/oracle/product/19.6.0/install/utl/rootmacro.sh "$@"
and
/u00/app/oracle/product/19.6.0/suptools/tfa/release/tfa_home/install/roottfa.sh

The rootmacro.sh sets:
INSTALL_TFA=0

In the output of the root.sh script there is:

Oracle Trace File Analyzer (TFA) is available at : /u00/app/oracle/product/19.6.0/bin/tfactl

but this does not exist:
-bash: /u00/app/oracle/product/19.6.0/bin/tfactl: No such file or directory

So, I repeat the question:
How can I make it install the TFA provided as part of the GI/RDBMS home (and not one downloaded from Oracle Support) AUTOMATICALLY and AS PART OF A SILENT MODE INSTALLATION?