Recover OEM repository from old schema backup

Things do not always go right, and recently I had to recover the OEM repository using a one-month-old Data Pump backup. Only the SYSMAN schema was corrupted, so I did not touch the other schemas.

I probably still missed a few things, and this post is mostly a note to make it faster for other people going through the same process without a database backup…

A full restore of the database would certainly have been a much better and more efficient solution than the one below. 🙂

To recover the corrupted SYSMAN schema, where the OEM repository resides, I first dropped it and then restored it step by step.
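
The drop itself was roughly this (just a sketch, assuming the OMS is stopped first so nothing is still connected to the repository):

emctl stop oms -all

SQL> drop user sysman cascade;

Then the restore: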

# Restore user
impdp schemas=SYSMAN directory=BACKUP_DIR dumpfile=expdp_sysman_april2021.dmp include=USER
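
The same call can also be wrapped in a parfile to keep it readable; the file and log names here are only illustrative:

# sysman_user.par
schemas=SYSMAN
directory=BACKUP_DIR
dumpfile=expdp_sysman_april2021.dmp
logfile=impdp_sysman_user.log
include=USER

impdp system parfile=sysman_user.par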

# Restore system grants (as SYS):
SQL> grant execute on dbms_crypto to sysman;
SQL> grant execute on dbms_lock to sysman;
SQL> grant execute on dbms_rls to sysman;
SQL> grant execute on dbms_aq to sysman;
SQL> grant execute on dbms_aqadm to sysman;
SQL> grant execute on dbms_alert to sysman;

# Restore other object grants 
impdp directory=BACKUP_DIR dumpfile=expdp_sysman_april2021.dmp include=GRANT

# Restore data
impdp schemas=SYSMAN directory=BACKUP_DIR dumpfile=expdp_sysman_april2021.dmp 

# Restart queues
/* select 'exec DBMS_AQADM.START_QUEUE(queue_name => ''SYSMAN.'||name||''');' from dba_queues where enqueue_enabled like '%NO%' and owner='SYSMAN' order by 1; */
SQL> exec dbms_aqadm.start_queue('SYSMAN.MGMT_ADMINMSG_BUS',true,true);
SQL> exec dbms_aqadm.start_queue('SYSMAN.EM_CNTR_QUEUE',true,true);
SQL> exec dbms_aqadm.start_queue('SYSMAN.EM_LOADERJOB_QUEUE',true,true);
SQL> exec dbms_aqadm.start_queue('SYSMAN.EM_EVENT_BUS',true,true);
SQL> exec dbms_aqadm.start_queue('SYSMAN.EM_NOTIFY_Q',true,true);
SQL> exec dbms_aqadm.start_queue('SYSMAN.EM_GROUP_EVENT_Q',true,true);
SQL> exec DBMS_AQADM.START_QUEUE('SYSMAN.EM_JOB_STATUS_UPD_Q');
SQL> exec DBMS_AQADM.START_QUEUE('SYSMAN.MGMT_HOST_PING_Q');
SQL> exec DBMS_AQADM.START_QUEUE('SYSMAN.MGMT_LOADER_Q');
SQL> exec DBMS_AQADM.START_QUEUE('SYSMAN.MGMT_NOTIFY_INPUT_Q');
SQL> exec DBMS_AQADM.START_QUEUE('SYSMAN.MGMT_TASK_Q');
SQL> commit;
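
To make sure no queue was missed, the same dba_queues query used to generate the list above should now come back empty:

SQL> select name, enqueue_enabled, dequeue_enabled from dba_queues
     where owner = 'SYSMAN' and enqueue_enabled like '%NO%';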

# Perform partition maintenance (as SYSMAN):
SQL> exec gc_interval_partition_mgr.partition_maintenance;
SQL> exec mgmt_audit_admin.add_audit_partition;
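
Before starting the OMS, a quick check for invalid objects left behind in the SYSMAN schema does not hurt (@?/rdbms/admin/utlrp.sql recompiles them if needed):

SQL> select object_name, object_type from dba_objects
     where owner = 'SYSMAN' and status = 'INVALID';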

Start OEM again:

emctl start oms
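
and check that it came back up cleanly:

emctl status oms -details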

Unblock the agents (Setup – Manage Cloud Control – Agents – select the blocked agents and click “Unblock”), then on each monitored host clear the agent state, restart it and test:

emctl clearstate agent
emctl stop agent
emctl start agent
emctl control agent runCollection $(hostname -f):host Response
emctl status agent
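
If an agent still shows stale information afterwards, forcing an upload usually sorts it out:

emctl upload agent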

After this, OEM was running again without problems.


CPU pinning overview directly from physical server using OracleVM / Xen

When using Oracle VM, there is the possibility to use the ‘ovm_vmcontrol’ tool to check and set the CPU pinning. However, its output is not the best for getting an overview picture.

Today I wrote a short script to print this overview; it can be run directly on the physical server we are interested in:

[root@lxsrv0001 ~]# for vm in $(xm vcpu-list | egrep '^000' | cut -c1-32 | uniq); do 
  echo "${vm} - $(grep OVM_simple_name /OVS/Repositories/*/VirtualMachines/${vm}/vm.cfg | cut -d'=' -f2) - $(xm vcpu-list | grep -c ${vm}) CPUs - Pin to $(xm vcpu-list | grep -m1 ${vm} | rev | cut -d' ' -f1 | rev)"; 
done

The output will look something like this:

0004fb000006000031f0e14272fa90d1 - 'vmsrv01' - 6 CPUs - Pin to 0-5
0004fb0000060000b7fcb78fa7888d37 - 'vmsrv03' - 6 CPUs - Pin to 0-5
0004fb00000600004568a88b95d9ea3f - 'vmsrv05' - 4 CPUs - Pin to 6-10
0004fb00000600004ae9d1f0c8e4b8fb - 'vmsrv07' - 2 CPUs - Pin to 11-12

This is always useful when we need to quickly check the CPU pinning for the whole physical server.
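
And if a VM then needs to be re-pinned, xm can do that directly on the server as well; this only lasts until the VM is restarted, so the persistent setting still belongs in ovm_vmcontrol (or the cpus line in vm.cfg). The domain ID and CPU range below are just examples:

[root@lxsrv0001 ~]# xm vcpu-pin 0004fb000006000031f0e14272fa90d1 all 0-5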