Today I had to explain why pga_aggregate_target was showing a value even though, in most cases, one does not need to care about it. Here is that explanation.
SQL> show parameter pga
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_limit                  big integer 3000M
pga_aggregate_target                 big integer 1G
One would think that pga_aggregate_target was explicitly set to 1G. However, when one checks the spfile, nothing is defined there:
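The spfile contents can be checked through V$SPPARAMETER; a minimal sketch of such a check (the LIKE filter is just an example):

```sql
-- Nothing specified in the spfile for the PGA parameters:
select name, value, isspecified
from v$spparameter
where name like 'pga%';
```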
It all started when I wanted to create a query to check which parameters are set on a PDB and how they differ from the CDB$ROOT container.
set pages 110
col pdb_name for a10
col name for a30
col value for a20
col pdb_value for a20
col root_value for a20
col source for a10
select a.pdb_name, a.name, a.value PDB_VALUE, b.value ROOT_VALUE,source from
(select pdb_name,name,value,a.con_id, decode(ismodified,'MODIFIED','PDB SPFILE','PDB$SEED') SOURCE
from v$system_parameter a left join dba_pdbs b on (a.CON_ID=b.pdb_id)
where a.con_id>2 and (ismodified='MODIFIED' or isdefault='FALSE')) a,
(select 'CDB$ROOT' pdb_name,name,value,con_id,null
from v$system_parameter where con_id=0) b
where a.name=b.name and a.con_id>2
order by 1,2;
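For context, a parameter set inside a PDB with scope=spfile does not land in the regular spfile but in the PDB's pseudo-spfile. A sketch of how such a value gets there (pdb01 and the 1G value are just examples):

```sql
alter session set container=pdb01;
-- Recorded in the PDB pseudo-spfile (pdb_spfile$), not in the CDB spfile:
alter system set pga_aggregate_target=1G scope=spfile;
```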
But I know there is also a view called pdb_spfile$ that shows the parameters in the PDB pseudo-spfiles:
col pdb_name for a10
col name for a20
col value$ for a20
select pdb_name, name, value$
from pdb_spfile$ left join dba_pdbs on (CON_UID=pdb_uid)
order by name;
The V$SYSTEM_PARAMETER view is well documented, while PDB_SPFILE$ is not.
Now, setting and unsetting parameters does not work the way I expected; it triggers some strange behaviours.
The problem with this simple profile is that we can lock ourselves, even as a common user, inside the locked-down PDB. Imagine that you want to enable this profile on several PDBs:
SQL> alter session set container=pdb01;
SQL> alter system set pdb_lockdown=lock_test;
SQL> alter session set container=samplepdb;
ORA-01031: insufficient privileges
Oops, you can no longer change the active container!
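A lockdown profile that disables ALTER SESSION produces exactly this lock-out, since ALTER SESSION SET CONTAINER is caught by the restriction too. A minimal sketch of how such a profile might have been created (the profile name lock_test comes from the example above; the disabled statement is my assumption):

```sql
-- Created in CDB$ROOT; once pdb_lockdown points to this profile,
-- even switching containers with ALTER SESSION fails with ORA-01031:
create lockdown profile lock_test;
alter lockdown profile lock_test disable statement = ('ALTER SESSION');
```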
The download page of Oracle OPatch has quite some room for improvement: adding an 'order by' on version and platform would be welcome. It should also be made clear that there are very few versions of it.
In fact, for the database, there are just two versions of OPatch! One OPatch version covers all supported database versions from 12.1 to 20c. For Oracle 11.2, under paid long-term support, there is a separate version.
So, in summary, here is the OPatch version you need to patch your DBs:
The information about which OPatch version is needed to apply the Database RUs and RURs is now part of the Patch Availability Document. For instance, for October 2020, this is what we can see:
Note 1: For Enterprise Manager (middleware) there is another OPatch version, 13.9.x which I don’t have experience with.
Note 2 – for the Oracle folks out there: given the current size of the Release Updates, maybe it would be worth including the latest OPatch version within them. It would not increase the size much and would avoid having to check whether we have the latest OPatch.
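To see which OPatch version an Oracle Home currently has, one can run the following from the shell (a sketch; $ORACLE_HOME must point to the Home you are about to patch):

```shell
# Print the OPatch version of this Oracle Home:
$ORACLE_HOME/OPatch/opatch version

# List the patches already applied there:
$ORACLE_HOME/OPatch/opatch lspatches
```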
One shall pay only for what one uses. This is also a motto of the Cloud, and Oracle, with per-second billing, pushes this model.
Concerning disk space, however, it is not always that easy. While terabyte prices are getting cheaper, sometimes you make a big cleanup of your database and then would like to pay only for what is actually used.
On Oracle Autonomous Databases it is the sum of the datafile sizes that counts.
Imagine now that you have a huge table and then drop it. The datafile space is not given back.
In order to recover the space you need to:
Purge the recycle bin:
SQL> purge dba_recyclebin;
Reduce the size of the DATA tablespace datafile:
SQL> alter database datafile <file_id> resize yyyM;
Now, this will only be possible if there are no used extents at the end of the datafile. Otherwise, one can try alter table <table_name> move online; and then alter tablespace <tbs_name> coalesce; but this is not guaranteed to help.
During my tests I only had one table, which made things easier.
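To see whether a resize is even possible, one can check which segments sit at the end of the datafile. A sketch using DBA_EXTENTS (the file_id 3255 is just the DATA file from my listing; adapt it to yours):

```sql
-- Which segments own the highest blocks of the datafile,
-- i.e. what prevents shrinking it:
select owner, segment_name, max(block_id + blocks - 1) max_block
from dba_extents
where file_id = 3255
group by owner, segment_name
order by max_block desc
fetch first 5 rows only;
```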
Let’s hope that Oracle either changes the way the used space is calculated or provides a way to (continuously) defragment a datafile and make its size dynamic.
To check the storage used on Autonomous Database and find the datafile file_id, you can run the following query:
-- Get space used by tablespace and file_id
select TBS "File_ID-Tablespace",
       round(sum(bytes)/1024/1024/1024,2) USED_GB,
       round(100*sum(bytes)/(select to_number(PROPERTY_VALUE)
                             from database_properties
                             where PROPERTY_NAME = 'MAX_PDB_STORAGE'),0) PCT
from (select file_id||'-'||tablespace_name TBS, bytes
      from dba_data_files)
group by rollup(TBS);
FILE_ID-TABLESPACE USED_GB PCT
------------------ ------- ---
3252-SYSTEM 0.41 2
3253-SYSAUX 3.16 16
3254-UNDOTBS1 0.44 2
3255-DATA 0.1 0
3256-DBFS_DATA 0.1 0
-- Get total space used by the DB
select round(USED_BYTES/1024/1024/1024,2) USED_GB,
       round(MAX_BYTES/1024/1024/1024,2) MAX_GB,
       round(100*USED_BYTES/MAX_BYTES,2) PCT_USED
from (select to_number(PROPERTY_VALUE) MAX_BYTES
      from database_properties
      where PROPERTY_NAME = 'MAX_PDB_STORAGE'),
     (select sum(BYTES) USED_BYTES
      from dba_data_files
      where TABLESPACE_NAME != 'SAMPLESCHEMA');
USED_GB MAX_GB PCT_USED
------- ------ --------
4.2 20 21.01
Using the Free Tier of Oracle Cloud, I created one Autonomous DB of each type: one Autonomous Transaction Processing and one Autonomous Data Warehouse (Autonomous JSON was not yet available). Then I ran
select name, display_value from v$parameter where isdefault='FALSE' order by 1;
on each of the DBs and got the following differences (empty means not set):
Both databases (PDBs) share the same container (CDB).
I also checked
select * from database_properties;
but there were no differences in the initial state.
Something I found interesting: I had a 2-month-old ATP when I created the ADW. I immediately saw that my old ATP was not using ASM, unlike the ADW, and that the ADW was a cluster database while the old ATP was a single instance.
I recreated the ATP to check whether this would remain the case, but no: my new ATP was co-located on the same database (CDB) as the ADW, so the parameters are mostly the same, as we could see above.
For historical reasons, I leave here the parameter differences between an ATP created in June and one created at the end of August 2020. For paths, only the differences are highlighted: