Is PGA_AGGREGATE_LIMIT set?

Today I had to explain why PGA_AGGREGATE_LIMIT was showing a value even though, in most cases, one does not have to care about it. Here is the same explanation.

When doing:

SQL> show parameter pga

NAME                                 TYPE                                         VALUE
------------------------------------ -------------------------------------------- ----------------------
pga_aggregate_limit                  big integer                                  3000M
pga_aggregate_target                 big integer                                  1G

One would think that pga_aggregate_limit is set to 3000M. However, when one checks the spfile, there is nothing defined:

SQL> show spparameter pga_aggregate_limit

SID      NAME                          TYPE                                         VALUE
-------- ----------------------------- -------------------------------------------- --------------------
*        pga_aggregate_limit           big integer

The answer comes from the documentation: when PGA_AGGREGATE_LIMIT is not defined in the spfile, or if its value is not big enough, it is calculated based on the PGA_AGGREGATE_TARGET or PROCESSES parameters.

The value used will be the greatest of:

  • value of PGA_AGGREGATE_LIMIT parameter
  • 2 times the value of PGA_AGGREGATE_TARGET parameter
  • 3 MB times the value of the PROCESSES parameter (i.e. 3 MB per process)

One can use the following query to show what value will be used by Oracle as PGA_AGGREGATE_LIMIT:

select PGA_AGGREGATE_LIMIT,
       PROCESSES_x_3,
       PGA_AGGREGATE_TARGET_x_2, 
       greatest(nvl(PGA_AGGREGATE_LIMIT,0), PROCESSES_x_3, PGA_AGGREGATE_TARGET_x_2) USED_VALUE 
from (select value/(1024*1024) PGA_AGGREGATE_LIMIT from v$spparameter where name='pga_aggregate_limit'),
     (select value*3 PROCESSES_x_3 from v$spparameter where name='processes'),
     (select value/(1024*1024)*2 PGA_AGGREGATE_TARGET_x_2 from v$spparameter where name='pga_aggregate_target');

PGA_AGGREGATE_LIMIT PROCESSES_X_3 PGA_AGGREGATE_TARGET_X_2 USED_VALUE
------------------- ------------- ------------------------ ----------
                             3000                     1024       3000

If you still use Automatic Memory Management (MEMORY_TARGET parameter), then the calculation is based on MEMORY_MAX_TARGET instead, which the query above does not take into account.
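
A quick way to check whether AMM is in play at all, and therefore whether this caveat applies, is to look at the memory parameters (a non-zero MEMORY_TARGET means AMM is active):

SQL> select name, display_value from v$parameter where name in ('memory_target','memory_max_target');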


Strange behaviours on setting PDB parameters

[Tests performed on Oracle 19.8]

It all started when I wanted to create a query to check which parameters are set on a PDB and how they differ from the CDB$ROOT container.

col pdb_name for a10 
col value for a20
col pdb_value for a20
col root_value for a20
select a.pdb_name, a.name, a.value PDB_VALUE, b.value ROOT_VALUE from
  (select pdb_name,name,value 
   from v$system_parameter a left join dba_pdbs b on (a.CON_ID=b.pdb_id)
   where a.con_id>2 and isdefault='FALSE') a,
  (select 'CDB$ROOT' pdb_name,name,value 
   from v$system_parameter where con_id=0) b
where a.name=b.name
order by 1,2;

But I know there is also a dictionary table called PDB_SPFILE$ that shows the parameters stored in the PDB pseudo-spfiles:

col pdb_name for a10 
col name for a20
col value$ for a20
select pdb_name,name,value$ 
from pdb_spfile$ left join dba_pdbs on (CON_UID=pdb_uid) 
where con_id>2 order by name;

V$SYSTEM_PARAMETER is well documented, while PDB_SPFILE$ is not.

Strange behaviours

ALTER SYSTEM … default SCOPE on PDB

The Oracle 19c documentation only mentions the default SCOPE for the CDB root:

"If a server parameter file was used to start up the database, then BOTH is the default. If a parameter file was used to start up the database, then MEMORY is the default, as well as the only scope you can specify."

On a PDB, the default SCOPE depends(!) on the action:

When setting a parameter, the default is SCOPE=BOTH:

TVDCDB1.PDB01 SQL> show parameter cursor_sharing
NAME             TYPE        VALUE
---------------- ----------- ----------------
cursor_sharing   string      EXACT

TVDCDB1.PDB01 SQL> show spparameter cursor_sharing
SID      NAME             TYPE        VALUE
-------- ---------------- ----------- ----------------
*        cursor_sharing   string

TVDCDB1.PDB01 SQL> alter system set cursor_sharing=FORCE;
System altered.

TVDCDB1.PDB01 SQL> show parameter cursor_sharing
NAME             TYPE        VALUE
---------------- ----------- ----------------
cursor_sharing   string      FORCE

TVDCDB1.PDB01 SQL> show spparameter cursor_sharing
SID      NAME             TYPE        VALUE
-------- ---------------- ----------- ----------------
*        cursor_sharing   string      FORCE

When resetting a parameter, the default is SCOPE=SPFILE:

TVDCDB1.PDB01 SQL> alter system reset cursor_sharing;
System altered.

TVDCDB1.PDB01 SQL> show parameter cursor_sharing
NAME             TYPE        VALUE
---------------- ----------- ----------------
cursor_sharing   string      FORCE

TVDCDB1.PDB01 SQL> show spparameter cursor_sharing
SID      NAME             TYPE        VALUE
-------- ---------------- ----------- ----------------
*        cursor_sharing   string

TVDCDB1.PDB01 SQL> alter system reset cursor_sharing scope=both;
System altered.

TVDCDB1.PDB01 SQL> show parameter cursor_sharing
NAME             TYPE        VALUE
---------------- ----------- ----------------
cursor_sharing   string      EXACT
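
Given this asymmetry, a safe habit inside a PDB is to state the SCOPE explicitly, so the result does not depend on the defaults above; for example:

TVDCDB1.PDB01 SQL> alter system set cursor_sharing=FORCE scope=both;
TVDCDB1.PDB01 SQL> alter system reset cursor_sharing scope=both;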

PDB_SPFILE$ not updated on parameter reset

When going back to the CDB root, I would expect the parameter to no longer appear in PDB_SPFILE$. Well, it is still there:

SQL> connect / as sysdba
Connected.

SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT

SQL> select pdb_name,name,value$ from pdb_spfile$ left join dba_pdbs on (CON_UID=pdb_uid) where name='cursor_sharing' order by name;

PDB_NAME   NAME                      VALUE$
---------- ------------------------- --------------------
PDB01      cursor_sharing            'FORCE'
1 row selected.

And it remains there even after restarting the PDB or the complete instance.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 2634018904 bytes
Fixed Size                  9138264 bytes
Variable Size            1526726656 bytes
Database Buffers          822083584 bytes
Redo Buffers                7634944 bytes
In-Memory Area            268435456 bytes
Database mounted.
Database opened.

SQL> select pdb_name,name,value$ from pdb_spfile$ left join dba_pdbs on (CON_UID=pdb_uid) where name='cursor_sharing' order by name;

PDB_NAME   NAME                      VALUE$
---------- ------------------------- --------------------
PDB01      cursor_sharing            'FORCE'
1 row selected.

And also after unplugging the PDB and plugging it back in!

SQL> alter pluggable database pdb01 close;
Pluggable database altered.

SQL> alter pluggable database pdb01 unplug into '/tmp/pdb01.xml';
Pluggable database altered.

SQL> select pdb_name,name,value$ from pdb_spfile$ left join dba_pdbs on (CON_UID=pdb_uid) where name='cursor_sharing' order by name;

PDB_NAME   NAME                      VALUE$
---------- ------------------------- --------------------
PDB01      cursor_sharing            'FORCE'
1 row selected.

SQL> drop pluggable database pdb01 keep datafiles;
Pluggable database dropped.

SQL> select pdb_name,name,value$ from pdb_spfile$ left join dba_pdbs on (CON_UID=pdb_uid) where name='cursor_sharing' order by name;

no rows selected

SQL> create pluggable database pdb01 using '/tmp/pdb01.xml' nocopy;
Pluggable database created.

SQL> select pdb_name,name,value$ from pdb_spfile$ left join dba_pdbs on (CON_UID=pdb_uid) where name='cursor_sharing' order by name;
PDB_NAME   NAME                      VALUE$
---------- ------------------------- --------------------
PDB01      cursor_sharing            'FORCE'
1 row selected.

SQL> alter pluggable database pdb01 open;
Pluggable database altered.

SQL> select pdb_name,name,value$ from pdb_spfile$ left join dba_pdbs on (CON_UID=pdb_uid) where name='cursor_sharing' order by name;
PDB_NAME   NAME                      VALUE$
---------- ------------------------- --------------------
PDB01      cursor_sharing            'FORCE'
1 row selected.

But the value is not used on the PDB itself:

SQL> alter session set container=pdb01;
Session altered.

SQL> show con_name
CON_NAME
------------------------------
PDB01

SQL> show spparameter cursor_sharing

SID      NAME             TYPE        VALUE
-------- ---------------- ----------- ----------------
*        cursor_sharing   string

SQL> show parameter cursor_sharing

NAME             TYPE        VALUE
---------------- ----------- ----------------
cursor_sharing   string      EXACT

Default values shown on V$SYSTEM_PARAMETER

The “annoying” part of using V$SYSTEM_PARAMETER is that it always shows the SEED's non-default parameters, even if they are not specified in the PDB pseudo-spfile:

SQL> create pluggable database pdb02 admin user pdb_admin identified by pdb_admin file_name_convert=('pdbseed','pdb02');
Pluggable database created.

SQL> alter pluggable database pdb02 open;
Pluggable database altered.

select a.pdb_name, a.name, a.value PDB_VALUE, b.value ROOT_VALUE from
  (select pdb_name,name,value 
   from v$system_parameter a left join dba_pdbs b on (a.CON_ID=b.pdb_id)
   where a.con_id>2 and isdefault='FALSE') a,
  (select 'CDB$ROOT' pdb_name,name,value 
   from v$system_parameter where con_id=0) b
where a.name=b.name and a.pdb_name='PDB02'
order by 1,2;


PDB_NAME   NAME                 PDB_VALUE            ROOT_VALUE
---------- -------------------- -------------------- --------------------
PDB02      db_securefile        PREFERRED            PREFERRED
PDB02      sga_target           0                    2634022912
PDB02      shared_pool_size     0                    637534208
PDB02      undo_tablespace      UNDOTBS1             UNDOTBS1

4 rows selected.

SQL> select 'PDB$SEED' pdb_name, name,value from v$system_parameter where con_id=2 and isdefault='FALSE';

PDB_NAME   NAME                 VALUE
---------- -------------------- --------------------
PDB$SEED   shared_pool_size     0
PDB$SEED   sga_target           0
PDB$SEED   undo_tablespace
PDB$SEED   db_securefile        PREFERRED

4 rows selected.

Summary: I would prefer to use PDB_SPFILE$, as it tries to imitate a real spfile (at least name-wise), but it does not seem to be really reliable.
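
For completeness, a sketch of a query to spot such leftover entries – rows present in PDB_SPFILE$ but with no matching non-default value in V$SYSTEM_PARAMETER (same joins as above; to be taken with care, as PDB_SPFILE$ is undocumented):

select p.pdb_name, s.name, s.value$
from pdb_spfile$ s join dba_pdbs p on (p.con_uid = s.pdb_uid)
where not exists
  (select 1 from v$system_parameter v
   where v.con_id = p.pdb_id and v.name = s.name and v.isdefault = 'FALSE');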


HugePages not used when starting DB with srvctl (but they are with sqlplus)

Once again I ended up with my client's database swapping. Why? After a quick investigation, I could see that HugePages were not used on the last restart of the database.

oracle@myvm1:./trace/ [oracle19] grep -B1 -A4 PAGESIZE alert*.log
2020-04-14T04:36:34.601494+02:00
  PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES  ERROR(s)
2020-04-14T04:36:34.601550+02:00
        4K       Configured              10              10        NONE
2020-04-14T04:36:34.601642+02:00
     2048K           247816            8193            8193        NONE
--
2020-10-13T22:59:28.856763+02:00
  PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES  ERROR(s)
2020-10-13T22:59:28.856818+02:00
        4K       Configured              10         4186122        NONE
2020-10-13T22:59:28.856925+02:00
     2048K           202479            8193              17        NONE

Why was that? I did use a normal start command:

oracle@myvm1:./trace/ [oracle19] srvctl start database -db mydb

Let's set the context. This is an Oracle Restart server, with role separation between the oracle and grid users.

The database used HugePages in the past, so they are configured (I skip the check here, but I did it).
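
For the record, the quick check is just a look at /proc/meminfo:

oracle@myvm1:./trace/ [oracle19] grep Huge /proc/meminfo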

I looked around and could see that the ulimits were correctly set for both users:

oracle@myvm1:./trace/ [oracle19] grep 'memlock' /etc/security/limits.d/99-grid-oracle-limits.conf
oracle soft memlock 1342177280
oracle hard memlock 1342177280
grid soft memlock 1342177280
grid hard memlock 1342177280

I could see the file had been changed after the previous DB restart. The server was not restarted in between.

But in the alert log I see that the ulimit seen by the database changed compared to the last restart:

oracle@myvm1:./trace/ [oracle19] grep -h -B1 "Per proc" alert*.log
2020-04-14T04:36:34.301642+02:00
 Per process system memlock (soft) limit = 1280G
--
2020-10-13T22:59:28.756763+02:00
 Per process system memlock (soft) limit = 64K

I decided to stop the database and start it again, but with sqlplus. And HugePages were used!

So, where could the problem be?

srvctl uses oraagent.bin to start databases, so it is important to check the limits used by this process:

grid@myvm1:~/ [grid19] grep memory /proc/$(pgrep oraagent.bin)/limits
Max locked memory         65536                65536                bytes

Here is the problem: oraagent.bin was started before the limits file was changed, and new limits are only valid for new sessions.

When we open a new session and start a DB with sqlplus, the instance gets this session's limits. But if we use srvctl, it gets the limits that were in place when oraagent.bin started.
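
In other words, the limit a sqlplus-started instance inherits is simply the one of our own shell, which we can check directly:

oracle@myvm1:./trace/ [oracle19] ulimit -l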

We are on a production server, so restarting the whole clusterware stack would be quite annoying. What is nice is that we can simply “kill” the oraagent.bin process and it will respawn. Even better if we do it with ohasd.bin.

grid@myvm1:~/ [grid19] ps -ef | grep ohasd.bin
grid       2854      1  0 Sep02 ?        03:23:19 /u00/app/grid/product/19/bin/ohasd.bin reboot

grid@myvm1:~/ [grid19] kill -9 2854

grid@myvm1:~/ [grid19] ps -ef | grep ohasd.bin
grid     185684      1 17 14:22 ?        00:00:01 /u00/app/grid/product/19/bin/ohasd.bin restart

grid@myvm1:~/ [grid19] psg pmon
grid       6458      1  0 Sep02 ?        00:03:38 asm_pmon_+ASM
oracle     7069      1  0 Oct13 ?        00:05:18 ora_pmon_mydb

And now the new limits are correctly set for the oraagent.bin:

grid@myvm1:~/ [grid19] grep memory /proc/$(pgrep oraagent.bin)/limits
Max locked memory         1374389534720        1374389534720        bytes

And further restarts with srvctl are using HugePages.

Lesson: check that the ulimits are correctly applied to oraagent.bin.

If you have a file with a list of servers separated by spaces or newlines, you can use a loop like the following to go through all of them:

for server in $(grep -v '^#' server_list.txt); do
  echo "=== $server"
  ssh -o ConnectTimeout=1 -o ConnectionAttempts=1 -o BatchMode=yes $server \
    "echo '99-grid - ' \$(grep 'grid soft memlock' /etc/security/limits.d/99-grid-oracle-limits.conf); echo 'oraagent - ' \$(grep memory /proc/\$(pgrep oraagent.bin)/limits)"
done


Get unlock from PDB Lockdown profiles

PDB Lockdown profiles allow you, on a multitenant database, to limit what a user can do inside a PDB.

One use case is when we want to prevent highly privileged users (like application DBAs) from performing ALTER SYSTEM or ALTER SESSION commands.

Quickly we would think we can create a PDB Lockdown profile like:

SQL> create lockdown profile lock_test;
SQL> alter lockdown profile lock_test disable statement=('ALTER SESSION');
SQL> alter lockdown profile lock_test disable statement=('ALTER SYSTEM');

The problem with this simple profile is that we can lock ourselves out, even as a common user, with the lockdown profile.
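
As a side note, the USERS clause that appears further down in this post could be used to keep common users out of the restriction – a sketch, not tested here:

SQL> alter lockdown profile lock_test disable statement=('ALTER SESSION') users=local;
SQL> alter lockdown profile lock_test disable statement=('ALTER SYSTEM') users=local;

But let's stay with the simple profile and see what happens.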

Imagine that you want to enable this simple profile on several PDBs:

SQL> alter session set container=pdb01;
Session altered.

SQL> alter system set pdb_lockdown=lock_test;
System altered.

SQL> alter session set container=samplepdb;
ERROR:
ORA-01031: insufficient privileges

Oops, you can no longer change the active container!

One has to reconnect to the CDB; from there one can go to the second PDB and apply the lockdown profile:

SQL> connect / as sysdba
Connected.

SQL> alter session set container=samplepdb;
Session altered.

SQL> alter system set pdb_lockdown=lock_test;
System altered.

What if I (the CDB DBA, with SYSDBA rights) now want to change a parameter on one of the PDBs?

It is not possible to disable the pdb_lockdown profile:

SQL> connect / as sysdba
Connected.

SQL> alter session set container=samplepdb;
Session altered.

SQL> alter session set pdb_lockdown='';
ERROR:
ORA-01031: insufficient privileges

SQL> alter system set pdb_lockdown='';
alter system set pdb_lockdown=''
*
ERROR at line 1:
ORA-01031: insufficient privileges

Two possible options are:

  • drop the PDB Lockdown profile in the root container:
SQL> connect / as sysdba
Connected.

SQL> drop lockdown profile lock_test ;
Lockdown Profile dropped.

SQL> alter session set container=samplepdb;
Session altered.

SQL> alter system set pdb_lockdown='';
System altered.
  • change the existing lockdown profile temporarily to allow the operation:
SQL> connect / as sysdba
Connected.

SQL> alter lockdown profile lock_test enable statement=('ALTER SYSTEM') users=common;
Lockdown Profile altered.

SQL> alter session set container=samplepdb;
Session altered.

SQL> alter system set pdb_lockdown='';
System altered.

The problem with these two solutions is that relaxing the lockdown profile applies to all the PDBs where it is enabled.

If we just want to be superusers temporarily and skip the lockdown profile, the solution is to disable it at session level on CDB$ROOT and then change to the other container. During our session we will not have any lockdown profile enabled.

SQL> -- Connected to CDB$ROOT
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT

SQL> -- Lockdown profiles enabled on PDBs
SQL> select b.name PDB,a.name, value$ from pdb_spfile$ a join v$pdbs b on b.con_uid=a.pdb_uid where a.name='pdb_lockdown';
PDB        NAME                 VALUE$
---------- -------------------- ---------------
SAMPLEPDB  pdb_lockdown         'LOCK_TEST'
PDB01      pdb_lockdown         'LOCK_TEST'

SQL> -- Lockdown profile disables alter system and alter session
SQL> select con_id,profile_name,rule,clause,status,users,except_users from cdb_lockdown_profiles;
    CON_ID PROFILE_NA RULE                      CLAUS STATUS  USERS  EXCEPT_USE
---------- ---------- ------------------------- ----- ------- ------ ----------
         1 LOCK_TEST  ALTER SESSION                   DISABLE ALL
         1 LOCK_TEST  ALTER SYSTEM                    DISABLE ALL

SQL> -- As common user on cdb$root we can change the session pdb_lockdown profile
SQL> alter session set pdb_lockdown='';
Session altered.

SQL> -- And this will go with us to the next container
SQL> alter session set container=PDB01;
Session altered.

SQL> show parameter pdb_lockdown
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pdb_lockdown                         string

SQL> alter system set open_cursors=300;
System altered.

SQL> -- If we reconnect, the lockdown profile is still there
SQL> connect / as sysdba
Connected.
SQL> alter session set container=PDB01;
Session altered.

SQL> show parameter pdb_lockdown
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pdb_lockdown                         string      LOCK_TEST

SQL> alter system set open_cursors=300;
alter system set open_cursors=300
*
ERROR at line 1:
ORA-01031: insufficient privileges

What is interesting to notice from this behaviour:

When changing a parameter at session level on CDB$ROOT, that parameter remains active when moving to other containers within the same session.

The parameters that were not changed at session level are adapted when changing to another container, taking the values set specifically for that container.
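
Whether the first point generalises beyond PDB_LOCKDOWN can be verified quickly with any other session-modifiable parameter – a sketch of such a test (parameter and value chosen arbitrarily):

SQL> -- in CDB$ROOT
SQL> alter session set ddl_lock_timeout=33;
SQL> alter session set container=PDB01;
SQL> show parameter ddl_lock_timeout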


Which OPatch to use? One OPatch version fits (almost) all.

The download page for Oracle OPatch has quite some room for improvement: some 'order by' on the version and platform columns would be welcome. It should also make clear that there are very few versions of it.

In fact, for the database, there are just two versions of OPatch: one that covers all supported database versions from 12.1 to 20c, and another for Oracle 11.2 under paid long-term support.

So, in summary, here is the OPatch version you need to patch your DBs:

Oracle Database version    OPatch version
11.2                       11.2.x
12.1                       12.2.x
12.2                       12.2.x
18c                        12.2.x
19c                        12.2.x
20c                        12.2.x

The download link is this one:

https://support.oracle.com/epmos/faces/ui/patch/PatchDetail.jspx?patchId=6880880
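
To see which OPatch version is currently in place in an Oracle Home before downloading, the standard command is:

$ORACLE_HOME/OPatch/opatch version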

The information about which OPatch version is needed to apply the Database RUs and RURs is now part of the Patch Availability Document, for instance the one for October 2020.

Note 1: For Enterprise Manager (middleware) there is another OPatch version, 13.9.x, which I don't have experience with.

Note 2 – for the Oracle folks out there: given the current size of the Release Updates, maybe it would be worth including the latest version of OPatch within them. It would not increase the size by much and would avoid the need to check whether we have the latest OPatch.


How to reclaim storage space on Oracle Autonomous Database

One shall pay only for what one uses. This is also a motto of the cloud, and Oracle pushes this model with per-second billing.

Concerning disk space, however, it is not always easy. While terabyte prices are getting cheaper, sometimes you do a big cleanup of your database and would then like to pay only for what is actually being used.

On Oracle Autonomous Database it is the sum of the datafile sizes that counts.

Imagine now that you have a huge table and you drop it. The datafile space is not recovered.

In order to recover the space you need to:

  • Purge the recycle bin:

SQL> purge dba_recyclebin

  • Reduce the size of the DATA tablespace datafile:

SQL> alter database datafile <file_id> resize yyyM;

Now, this will only be possible if there are no used extents at the end of the datafile. Otherwise, one can try alter table <table_name> move online; followed by alter tablespace <tbs_name> coalesce;, but this is not guaranteed to help.
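
For illustration, a minimal sketch of the move-and-resize attempt with hypothetical names (the DATA file_id 3255 and the target size are examples; see the query below for your own values):

SQL> alter table my_big_table move online;
SQL> alter database datafile 3255 resize 1G;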

During my tests I only had one table, which made things easier.

Let's hope that Oracle either changes the way the used space is calculated or provides a way to (continuously) defragment a datafile and make its size dynamic.

To check the storage used on Autonomous Database and find the datafile file_id, you can run the following query:

-- Get Space used by tablespace and file_id
select TBS "File_ID-Tablespace",
  round(sum(bytes)/1024/1024/1024,2) USED_GB,
  round(sum(bytes)/max(PROPERTY_VALUE)*100) PCT 
from
  (select file_id||'-'||tablespace_name TBS, bytes 
    from DBA_DATA_FILES
    where tablespace_name!='SAMPLESCHEMA'),
  (select PROPERTY_VALUE 
    from DATABASE_PROPERTIES 
    where PROPERTY_NAME = 'MAX_PDB_STORAGE')
group by rollup(TBS);

FILE_ID-TABLESPACE USED_GB PCT 
------------------ ------- --- 
3252-SYSTEM           0.41   2 
3253-SYSAUX           3.16  16 
3254-UNDOTBS1         0.44   2 
3255-DATA              0.1   0 
3256-DBFS_DATA         0.1   0 
                       4.2  21 

-- Get Total space used by DB
select round(USED_BYTES/1024/1024/1024,2) USED_GB,
  round(MAX_BYTES/1024/1024/1024,2) MAX_GB,
  round(USED_BYTES/MAX_BYTES*100,2) PCT_USED 
from
  (select PROPERTY_VALUE MAX_BYTES
    from DATABASE_PROPERTIES 
    where PROPERTY_NAME = 'MAX_PDB_STORAGE'),
  (select sum(BYTES) USED_BYTES
    from DBA_DATA_FILES 
    where TABLESPACE_NAME != 'SAMPLESCHEMA');

USED_GB MAX_GB PCT_USED 
------- ------ -------- 
    4.2     20    21.01 

.anjo


Oracle Cloud: what parameters make ATP different from ADW?

Using the Free Tier of Oracle Cloud I created one Autonomous DB of each type – one Autonomous Transaction Processing and one Autonomous Data Warehouse (Autonomous JSON is not yet available). Then I ran

select name, display_value
from v$parameter
where isdefault='FALSE'
order by 1;

on each of the DBs, and I got the following differences (empty means not set):

Parameter                           ATP                ADW
optimizer_ignore_hints                                 TRUE
optimizer_ignore_parallel_hints                        TRUE
parallel_degree_policy                                 AUTO
parallel_min_degree                                    CPU
pdb_lockdown                        OLTP               DWCS
pga_aggregate_target                3000M              5100M
resource_manager_plan               FORCE:OLTP_PLAN    FORCE:DWCS_PLAN
result_cache_max_result                                1
result_cache_mode                   MANUAL             FORCE
sga_target                          8000M              3400M

Both databases (PDBs) share the same container database (CDB).

I also checked

select *
from database_properties;

but there are no differences in the initial state.

Something I found interesting: I had a 2-month-old ATP when I created the ADW. I immediately saw that my old ATP was not using ASM, unlike the ADW, and that the ADW was a cluster database while the old ATP was single instance.

I recreated the ATP to check whether this remained the case. But no: my new ATP was co-located on the same container database as the ADW, so the parameters are mostly the same, as we could see above.

For historical reasons, I leave here the parameter differences between the ATP created in June and the one created at the end of August 2020. For paths, only the differences are shown (the rest is abbreviated):

Parameter                      ATP June 2020              ATP August 2020
audit_file_dest                /u01/…                     /u02/…
cluster_database               FALSE                      TRUE
control_files                  /u01/…,/u05/…              +DATA/…,+RECO/…
db_create_file_dest            /u01/app/oracle/oradata    +DATA
db_name                        FBPOD1                     feio1pod
db_recovery_file_dest          /u05/fra                   +RECO
db_recovery_file_dest_size     571558474874921167764M
diagnostic_dest                /u01/app/oracle            /u02/app/oracle
drcp_connection_limit                                     600
external_keystore_credential   /u01/…/wallets/tde_seps    +DATA/encrypt_wallet/tde_seps
gcs_server_processes           0                          4
instance_number                                           2
log_archive_dest_1                                        LOCATION=USE_DB_RECOVERY_FILE_DEST MANDATORY
log_archive_dest_state_4                                  ENABLE
log_archive_dest_state_5                                  ENABLE
log_archive_dest_state_6                                  ENABLE
max_pdbs                                                  4096
processes                      20000                      40000
sga_max_size                   321G                       301G
thread                                                    2

Oracle 11.1 on Windows – ORA-02778

I have a funny challenge these days:

Recover an Oracle 11.1.0.6 on Windows installation. I have all the files on an external drive and I'm trying to get the DB to run on my laptop.

  • I copied everything to my local disk;
  • Added the registry entries I found necessary;
  • Re-created the Inventory;
  • Re-created an init file, with the correct locations, from the spfile.

But I still had the error:

SQL> startup nomount
ORA-02778: Name given for the log directory is invalid

Nothing in the logs; all directories mentioned in the pfile exist and are writable.

I downloaded Microsoft Process Monitor, configured it to trace only Oracle processes and looked at what they tried to read.

Finally I found the culprit: %ORACLE_HOME%\RDBMS\log was missing. Just the log subfolder.
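
The fix is then a one-liner – recreate the empty folder (assuming ORACLE_HOME is set in the environment):

C:\> mkdir %ORACLE_HOME%\RDBMS\log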

During the xcopy to the local disk I had excluded all “log” files, as there were some huge ones. Unfortunately that also excluded this folder. Now it looks better:

SQL> startup nomount
ORACLE instance started.

Total System Global Area 1720328192 bytes
Fixed Size                  2130160 bytes
Variable Size             989859600 bytes
Database Buffers          721420288 bytes
Redo Buffers                6918144 bytes
SQL>

SQL query for queue table with priorities

Queue tables are used to take care of events. There is a mechanism that inserts rows and another that processes the existing rows, usually deleting them at the end.

When possible, one should use the Oracle Advanced Queuing mechanism, which takes care of managing the queue: a simple request gives you the next element in line.

Some applications develop their own queuing systems, and they are a good way to learn how queues work.

Imagine a table with a list of elements and two specific columns:

  • order of arrival
  • priority

The first to be served is the one that, among those with the highest priority, was the first to arrive.
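
A sketch of that selection rule in SQL, with a hypothetical table MY_QUEUE and columns PRIORITY and ARRIVAL_ID (assuming a higher number means higher priority):

select *
from my_queue
order by priority desc, arrival_id
fetch first 1 row only;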

(more…)