Tuesday, 28 January 2014

How to Disable Oracle Label Security | OLS


OLS and the Audit table AUD$:

●● Installation of Label Security causes the audit table SYS.AUD$ to be dropped and recreated in the SYSTEM schema. Its existing contents are copied into the new SYSTEM.AUD$ table.
●● If you deinstall Label Security, AUD$ is recreated in the SYS schema and dropped from the SYSTEM schema. Again the contents are copied from one to the other before dropping.

Starting with version 11.2.0.1, installing the Enterprise Edition always installs all of its options; the Options selection in the installer only decides whether they are enabled. To enable or disable OLS afterwards, you can use chopt (shut down any database instances running from the home first, since chopt relinks the Oracle binaries).

chopt enable lbac
chopt disable lbac

This works on both Unix/Linux and Windows operating systems.

If you want to disable OLS auditing for a particular policy in 11gR2, you can use the following command:
EXEC SA_AUDIT_ADMIN.NOAUDIT ('AROLS', 'XXCTO', 'APPLY, REMOVE');

Syntax of the corresponding SA_AUDIT_ADMIN.AUDIT procedure:
PROCEDURE AUDIT (
 policy_name IN VARCHAR2,
 users IN VARCHAR2 DEFAULT NULL,
 option IN VARCHAR2 DEFAULT NULL,
 type IN VARCHAR2 DEFAULT NULL,
 success IN VARCHAR2 DEFAULT NULL);

Disabling Oracle Label Security for 12c

If Oracle Database Vault has been enabled, then do not disable Oracle Label Security.

SELECT VALUE FROM V$OPTION WHERE PARAMETER = 'Oracle Label Security';
SELECT PARAMETER, VALUE FROM V$OPTION WHERE PARAMETER = 'Oracle Database Vault';

sqlplus '/as sysdba'
EXEC LBACSYS.OLS_ENFORCEMENT.DISABLE_OLS;
SHUTDOWN IMMEDIATE
STARTUP

In an Oracle Real Application Clusters (Oracle RAC) or multitenant environment, repeat these steps on each Oracle RAC node or PDB on which you enabled Oracle Label Security.

To re-enable OLS later:
EXEC LBACSYS.OLS_ENFORCEMENT.ENABLE_OLS;

Removal of OLS Data Dictionary:

This 9i method still works for higher versions.

cd $ORACLE_HOME/rdbms/admin/
sqlplus "/ as sysdba"
START catnools.sql


In 11gR2, disabling OLS does not require downtime.

In 12c, disabling OLS with OLS_ENFORCEMENT.DISABLE_OLS requires a database restart, i.e. downtime.


Friday, 24 January 2014

What Does HRGLOBAL Do?


HRGLOBAL works entirely at the database tier; in short, it drives FNDLOAD, PYLOAD, FFXMLC, and various SQL executions. It should be executed just once, from the Admin node (for non-shared application tiers); do not be under the misconception that it must be run multiple times from different nodes.

Below is a short summary of the steps it performs, not necessarily in order. To repeat: HRGLOBAL only touches your database.

1. It disables HRMS access using pydsblhr.sql. This script can also be executed manually if one wants to restrict HRMS access.

2. It cleans orphan data using payorpcleans1.sql, payorpcleans2.sql, and payorpcleans3.sql.

3. Calls SQL files for online patching.

4. Records details of the current environment in files using hrglobal_info.sql and hrglobal_chkpreq.sql.

5. Executes FNDLOAD to load help data using the afscprof.lct file.

6. Runs PYLOAD and FNDLOAD to add legislative data.

7. Regenerates the balance data for the HR legislation.

8. Performs Fast Formula related activities using FFXMLC.

9. Performs country-specific HRMS database activities.

10. Re-enables HRMS access using pyenblhr.sql.


Monday, 20 January 2014

RFS[2]: no standby redo logfiles of size XXXXXX blocks available


Checking the standby redo logs (SRLs) on the standby database, all SRLs are ACTIVE, dated from weeks ago. Normally we see one in use per thread and the others UNASSIGNED.

STANDBY> select * from v$standby_log;
STANDBY> select STATUS, THREAD#, SEQUENCE# from v$standby_log;

On Dec 1, the SRLs created on LG were not archived (they were stuck), and hence remained ACTIVE and could no longer be assigned. At that time the primary was archiving every minute, with only one ARCH process available to archive to the standby. On the standby, log_archive_max_processes was set to 2, so only one ARCH process was archiving locally and was most likely unable to cope with the amount of archiving required.

1. On both the standby and the primary, set log_archive_max_processes=10:
alter system set log_archive_max_processes=10 scope=both;

Alternatively, if the logs have already been applied (as they had in this case), the old standby logfiles can be dropped and recreated to clear the problem.
alter database drop standby logfile '<logfile_name>';
alter database add standby logfile group x '<logfile_name>';

SQL> col MEMBER FORMAT A100
SQL> set linesize 200
SQL> SELECT GROUP#, STATUS, TYPE, MEMBER FROM V$LOGFILE WHERE TYPE='STANDBY';

Explanation of Various Parameters for Workflow Background Process Engine


ITEM TYPE:
Specify an item type to restrict this engine to activities associated with that item type. If you do not specify an item type, the engine processes any deferred activity regardless of its item type.

MINIMUM THRESHOLD:
Specify the minimum cost that an activity must have for this background engine to execute it, in hundredths of a second.

MAXIMUM THRESHOLD:
Specify the maximum cost that an activity can have for this background engine to execute it, in hundredths of a second.
By using Minimum Threshold and Maximum Threshold multiple background engines can be created to handle very specific types of activities. The default values for these arguments are 0 and 100 so that the background engine runs activities regardless of cost.

PROCESS DEFERRED: 
Specify whether this background engine checks for deferred activities. Setting this parameter to YES allows the engine to check for deferred activities.

PROCESS TIME OUT: 
Specify whether this background engine checks for activities that have timed out. Setting this parameter to YES allows the engine to check for timed out activities.

PROCESS STUCK: 
Specify whether this background engine checks for stuck processes. Setting this parameter to YES allows the engine to check for stuck processes.

FNDREVIVER - Theories and Concepts to remember


Theories on FNDREVIVER 

FNDREVIVER (also known as reviver.sh) handles momentary disconnects in which the concurrent managers and/or forms go down and forms later reconnects while the concurrent managers do not. FNDREVIVER revives the Internal Concurrent Manager (ICM) when it fails.

When the ICM can no longer get a database connection, it kills itself and spawns the reviver. The reviver loops every 30 seconds, attempting to log in to the database as the APPS user. Once login is successful, it starts the ICM again.

If the failure is due to a brief network outage, or database issue, the managers are restarted, so the client does not have to restart the managers manually.

The reviver is a recovery mechanism that runs in the background. In a Real Application Clusters (RAC) environment, when the primary node goes down and the ICM is set to migrate to the secondary node, the reviver parameter is passed to the secondary node.

The easiest way to determine if reviver.sh exists is by checking the $FND_TOP/bin directory.

The variable resides in the context file under 's_cp_reviver' and can be set to "enabled" or "disabled". Based on the value of s_cp_reviver in the context file, AFCPDNR is started with a value of either "enabled" or "disabled" .

The reviver is started when the ICM starts, by passing the parameter reviver="enabled". You do this on the node where you start the manager; if the ICM is set to migrate to the second node, the parameter is passed to the second node.
A common misconception is that users must start reviver.sh manually; that is not the intended use. It is enabled automatically when REVIVER_PROCESS="enabled" is passed via the adcmctl.sh concurrent manager startup script.

On a single node concurrent processing system, FNDREVIVER is the only way to recover from a database connection loss. 

On a two-node system there is another factor: the Internal Monitor (FNDIMON). FNDIMON will race to restart the internal manager in a multi-node setup, and by the time the reviver starts it will likely see that the ICM is already running and exit accordingly.

FNDIMON checks whether it can connect to the database in order to determine if the ICM is running, and if the database connection is not available it fails to run and exits accordingly. The reviver is a shell script which loops until a connection is obtained, and then starts the manager accordingly. The reviver's job is the last line of defense after a database connection failure, as FNDIMON only works when the database connection is available. 

In the event the ICM goes down due to a network outage, then the reviver would be needed to bring the ICM back up. 

Context File Parameters related to FNDREVIVER

The following parameters can be set in the context file, and then autoconfig should be re-run to enable reviver: 

Concurrent Processing Reviver Process (s_cp_reviver) [Allowed values are {enabled, disabled}]
<cp_reviver oa_var="s_cp_reviver">enabled</cp_reviver> 

Reviver Process PID Directory Location (s_fndreviverpiddir) 
This variable specifies the path where ICM reviver process pid file will be created. Oracle recommends using a local disk as the PID file location because the reviver process may run when the network is down. 
<fndreviverpiddir oa_var="s_fndreviverpiddir">/u02/oracle/visappl/fnd/11.5.0/log</fndreviverpiddir> 

High Water Mark - Some Useful Information to remember



  • The high water mark is the boundary between used and unused space in a segment. When a request for a new free block cannot be satisfied from the existing free lists, the block the high water mark points to becomes a used block, and the high water mark advances to the next block. In other words, the segment space to the left of the high water mark is used, and the space to the right of it is unused.

  • The high water mark is the level above which blocks have never been formatted to receive data.

  • When a table is created in a tablespace, an initial number of blocks/extents is allocated to it. Later, as rows are inserted, further extents are allocated accordingly.

  • Inserting records into the table raises the high water mark.

  • Deleting records does not lower the high water mark, and therefore does not raise EMPTY_BLOCKS. After deleting records, a query of dba_segments or dba_tables shows no change.

  • ALTER TABLE <TABLE_NAME> DEALLOCATE UNUSED; >>> would not bring the high water mark down.

  • The high water mark can be reset with TRUNCATE TABLE, by moving the table to another tablespace, or with Shrink Space.

           SQL> ALTER TABLE <tablename> SHRINK SPACE;

  • ALTER TABLE ... MOVE can be a good method to reset the HWM, even when the move occurs within the same tablespace.

  • High water mark after exporting/deleting/importing a table >>> NO, the HWM is not reset.

  • All Oracle segments have an upper boundary containing the data within the segment. This upper boundary is called the "high water mark" or HWM.

  • The high water mark is an indicator marking the point in a segment beyond which blocks are allocated but not yet used. It is reset to zero (the start of the segment) when a TRUNCATE command is issued. So you can have empty blocks below the high water mark, but that means those blocks have been used (and were probably emptied by deletes). Oracle does not move the HWM, nor does it shrink tables, as a result of deletes.

  • Data files do not have a high water mark; only segments have one.

  • Full table scans typically read up to the high water mark.
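As a minimal sketch of lowering the HWM with Shrink Space (the table name is illustrative; row movement must be enabled first):

```sql
-- Enable row movement, then shrink; COMPACT defers the HWM adjustment.
ALTER TABLE emp ENABLE ROW MOVEMENT;
ALTER TABLE emp SHRINK SPACE COMPACT;  -- compacts rows, keeps the HWM
ALTER TABLE emp SHRINK SPACE;          -- compacts and lowers the HWM
```

The two-step form lets you do the row compaction during business hours and take the brief HWM adjustment (which requires a short exclusive lock) in a quiet window.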



Delete Concurrent Program and Executable



BEGIN
  fnd_program.delete_program('AG_HR_TRANSFER_PROG', 'MKK Group Custom');
  fnd_program.delete_executable('AG_HR_TRANSFER_PROG', 'MKK Group Custom');
  COMMIT;
END;
/

MKK Group Custom >>> is the registered name of your custom application.

Recreate FND_CONCURRENT_QUEUES Information


Publishing this at the request of a friend; it applied to his environment after a clone.

Run FND_CONC_CLONE
EXEC FND_CONC_CLONE.SETUP_CLEAN;
COMMIT;
EXIT;

Run AutoConfig on all tiers, first on the DB tier and then on the Apps and Web tiers, to repopulate the required system tables.

Connect to SQLPLUS as APPS user and run the following statement :
select CONCURRENT_QUEUE_NAME from FND_CONCURRENT_QUEUES where CONCURRENT_QUEUE_NAME like 'FNDSM%';

If the above SQL does not return any rows, do the following:
cd $FND_TOP/patch/115/sql
START afdcm037.sql;

Check again that FNDSM entries now exist:
select CONCURRENT_QUEUE_NAME from FND_CONCURRENT_QUEUES where CONCURRENT_QUEUE_NAME like 'FNDSM%';

Run CMCLEAN.sql and start the Managers.

This would help after cloning if the Managers are not coming up.

Friday, 17 January 2014

Thumb rule for sizing UNDO Tablespace Size


Sizing an UNDO tablespace requires three pieces of data.
(UR) UNDO_RETENTION in seconds
(UPS) Number of undo data blocks generated per second
(DBS) Overhead varies based on extent and file size (db_block_size)

The undo space needed is calculated as:
UndoSpace = UR * (UPS * DBS)

This query would give you the required minimum size in MB:

SELECT (UR * (UPS * DBS)/1024/1024) AS "MB" FROM (SELECT value AS UR FROM v$parameter WHERE name = 'undo_retention'),
(SELECT undoblks/((end_time-begin_time)*86400) AS UPS
FROM v$undostat
WHERE undoblks = (SELECT MAX(undoblks) FROM v$undostat)),
(SELECT block_size AS DBS
FROM dba_tablespaces
WHERE tablespace_name = (SELECT UPPER(value) FROM v$parameter WHERE name = 'undo_tablespace'));

Thumb rule from past experience: OEM 12c Cloud Control suggests setting the size to 10 times this value. I have tested with multiple clients; 5 times works well in most cases, with observation afterwards.

Now, suppose your current undo_retention is 900 and you are increasing it to 9000. The required UNDO tablespace size (in MB) would be:
Result_Of_Above_Query * 5 * 10
The factor of 5 comes from the thumb rule; the factor of 10 comes from increasing UNDO_RETENTION to 10 times its current value.
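The arithmetic can be sketched as a quick calculation (a hypothetical helper, not an Oracle tool; plug in your own v$undostat figures):

```python
def undo_space_mb(undo_retention_s, undo_blocks_per_s, block_size_bytes):
    """Minimum UNDO tablespace size in MB: UR * UPS * DBS."""
    return undo_retention_s * undo_blocks_per_s * block_size_bytes / 1024 / 1024

# Example: retention 900 s, peak 100 undo blocks/s, 8 KB block size.
minimum_mb = undo_space_mb(900, 100, 8192)   # ~703 MB

# Thumb rule: 5x the minimum, scaled by the planned retention increase
# (900 -> 9000 is a factor of 10).
recommended_mb = minimum_mb * 5 * (9000 / 900)
print(round(minimum_mb), round(recommended_mb))
```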

Friday, 27 December 2013

iRecruitment Index Synchronization - All Details


Why iRecruitment Index Synchronization?

To keep the text indexes up to date for iRecruitment documents and job postings run the iRecruitment Index Synchronization process. Oracle iRecruitment uses Oracle Text to perform content-based searches on resumes and job posting details. When candidates upload resumes or managers post new job details, you must synchronize the index at a regular interval to keep the user searches accurate.

Recommended way to run iRecruitment Index Synchronization(from MOS Documents)

• Posting Index indicates index of job postings that managers post.
• Document Index indicates index of candidates' resumes.
• Online index rebuild - to run every 5 minutes

Online index rebuild - to run every 5 minutes
Note: If an online synchronization process starts before the previous one has completed, the process displays an error. Ensure that you schedule it to run 5 minutes after the previous run completes, not 5 minutes after it starts.
In the Online mode, the process adds new entries to the index, enabling simultaneous searches.

Full index rebuild - to run each night
In the Full mode, the process defragments the index, reducing its size, and optimizing the performance. The process does not add new entries to the index.

Impact if Not Done

Synchronization is recommended to keep iRecruitment searches accurate and performing well; it is a routine DBA activity and should be scheduled.

Monday, 11 November 2013

opatch auto - 11gR2


The OPatch utility has automated the patch application for the Oracle Grid Infrastructure (GI) home and the Oracle Database home.

On an 11gR2 RAC installation, if there is no existing database associated with the RDBMS home, then when applying a patch with the "opatch auto" command, OPatch patches the Grid Infrastructure home but not the RDBMS home.
If one or more databases are associated with the RDBMS home, OPatch patches both the Grid Infrastructure home and the RDBMS home.

opatch auto retrieves the DB home information from the configured databases, so if there is no existing database, "opatch auto" skips that RDBMS home while patching.

In order to patch an RDBMS home that has no database configured, use the "-oh" option of opatch auto. For example:
opatch auto <Patch Location> -oh /ora/oh1,/ora/oh2,/ora/oh3

Exadata Storage Server Patching - Some details


●● Exadata Storage Server patch is applied to all cell nodes.
●● Patching is launched from compute node 1 and will use dcli/ssh to remotely patch each cell node.

●● Exadata Storage Server Patch zip also contains Database Minimal Pack or Database Convenience Pack, which are applied to all compute nodes. This patch is copied to each compute node and run locally.

●● Applying the storage software on the cell nodes also changes the Linux version; applying the Database Minimal Pack on the compute nodes does NOT change the Linux version.
To upgrade Linux on a compute node, follow MOS Note 1284070.1.

A non-rolling patch apply is much faster because the patch is applied to all cell nodes simultaneously, and there is no exposure to a single disk failure. Please note, however, that this requires a full outage.

With a rolling patch apply, database downtime is not required, but the patch application time is much higher. The major risk is a disk failure while a cell is offline; ASM high redundancy reduces this exposure.

Grid disks offline >>> Patch Cel01 >>> Grid disks online
Grid disks offline >>> Patch Cel02 >>> Grid disks online
Grid disks offline >>> Patch Cel..n>>> Grid disks online

Rolling patch application can be a risky affair; please be apprised of the following:
Do not use the -rolling option to patchmgr for a rolling update or rollback without first applying the required fixes on the database hosts.
./patchmgr -cells cell_group -patch_check_prereq -rolling >>> Make sure this is successful and review the spool carefully.
./patchmgr -cells cell_group -patch -rolling

Non-rolling Patching Command:
./patchmgr -cells cell_group -patch_check_prereq
./patchmgr -cells cell_group -patch


How to Verify Cell Node is Patched Successfully

# imageinfo

Output of this command gives some good information, including Kernel Minor Version.

Active Image Version: 11.2.2.3.1.110429.1
Active Image Status: Success

If "Active Image Status" shows anything other than success, you need to look at validations.log and vldrun*.log; the image status is marked as failure when one or more validations report a failure.
Check the /var/log/cellos/validations.log and /var/log/cellos/vldrun*.log files for any failures.

If a specific validation failed, then the log will indicate where to look for the additional logs for that validation.
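As a sketch, the status check can be scripted. The helper below is hypothetical and scans saved imageinfo output rather than running on the cells; on a live system you would collect that output via dcli across the cell_group:

```shell
#!/bin/sh
# check_image_status: print OK if the saved imageinfo output reports
# "Active Image Status: success" (case-insensitive), FAILED otherwise.
check_image_status() {
  if grep -i 'Active Image Status' "$1" | grep -qi 'success'; then
    echo "OK"
  else
    echo "FAILED"
  fi
}
```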

Sunday, 10 November 2013

How to find the Cell Group in Exadata Storage Servers


cd /opt/oracle.SupportTools/onecommand
cat cell_group

#cat cell_group
xxxxxcel01
xxxxxcel02
xxxxxcel03
#

This means that when you start the patching, the patch is applied to cell node xxxxxcel01 first, then xxxxxcel02, and finally xxxxxcel03.
As I have an Exadata Quarter Rack, there are only 3 storage servers (cell nodes), and all of them are patched during cell node patching.

From the number of cells you can determine whether a Quarter, Half, or Full Rack Exadata is in place.
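That inference can be sketched as a small helper (a hypothetical script; the standard cell counts are 3 for a quarter rack, 7 for a half rack, and 14 for a full rack):

```shell
#!/bin/sh
# rack_size: guess the Exadata rack size from the number of lines
# (cell nodes) in a cell_group file.
rack_size() {
  n=$(wc -l < "$1" | tr -d ' \t')   # strip padding some wc versions add
  case "$n" in
    3)  echo "Quarter Rack" ;;
    7)  echo "Half Rack" ;;
    14) echo "Full Rack" ;;
    *)  echo "Unknown ($n cells)" ;;
  esac
}
```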



Exadata and Exalogic - Roadmap

Recently I have been working on Exadata/Exalogic, so I will be putting brief details here.

Two years back I was involved in a PoC; I will try to include practical details from that as well. I had not started this blog at the time.

Let me take you through the high-level architecture of a full-rack Exadata:

A full-rack Exadata Database Machine contains 8 database servers, 14 storage servers (7 cells at the bottom and 7 at the top), 3 InfiniBand switches, 1 Cisco switch, a KVM, and 2 PDUs.

There are 3 InfiniBand switches in an Exadata full-rack system; the lowest one is known as the spine switch.
Most Exadata full racks have 2 leaf switches and 1 spine switch; quarter and half racks may or may not have a spine switch.

There is one 48-port Ethernet switch between the two leaf InfiniBand switches. It serves as the management network: all the Exadata systems are plugged into it for remote administration and monitoring of the Database Machine.

Storage servers are also known as cell nodes, and database servers are also known as compute nodes.

Monday, 21 October 2013

Tuning of HTTP WebTier for Access Manager SSO Implementation


. $HOME/webtierenv.sh
opmnctl status -l

. $HOME/webtierenv.sh
cd $MW_HOME/Oracle_WT1/instances/instance1/diagnostics/logs/OPMN/opmn
grep 'Process Unreachable' opmn.log

. $HOME/webtierenv.sh
cd $MW_HOME/Oracle_WT1/instances/instance1/diagnostics/logs/OHS/ohs1
grep 'still did not exit, sending a SIGKILL' ohs1.log

Make sure no recent OHS restart has occurred.

$MW_HOME/Oracle_WT1/instances/instance1/config/OHS/ohs1/httpd.conf
Look for mpm_worker_module and change MaxClients to 300, ThreadsPerChild to 50

There are multiple tuning recommendations for the HTTP WebTier at the following link; they should be carried out as recommended performance tuning activities:
http://docs.oracle.com/cd/E23943_01/core.1111/e10108/http.htm

Find the complete Fusion Middleware tuning guide at:
http://docs.oracle.com/cd/E23943_01/core.1111/e10108/toc.htm




You have encountered an unexpected PLSQL Error, Please contact System Administrator


1. Enable FND Debug using the following profile options:

FND: Debug Log Enabled Yes
FND: Debug Log Filename <empty>
FND: Debug Log Level STATEMENT
FND: Debug Log Mode Asynchronous with Cross-Tier Sequencing
FND: Debug Log Module %

2. Run the following SQL and write down the number returned:
SQL> select max(log_sequence) from fnd_log_messages;

3. Reproduce the issue and run the following SQL again to get the relevant information:
SQL> select * from fnd_log_messages where log_sequence > NUMBER_IDENTIFIED_BEFORE_IN_SQL_STATEMENT_AT_STEP_2 order by log_sequence;

Verify in the fnd_log_messages, you can see the following:

106500495|fnd.plsql.oid.fnd_ldap_wrapper.create_user: |ORA-31202: DBMS_LDAP: LDAP client/server error: Invalid credentials|
106500496|fnd.plsql.oid.fnd_ldap_wrapper.create_user: |l_err_code :FND_SSO_UNEXP_ERROR, l_tmp_str :ORA-31202: DBMS_LDAP: LDAP client/server error: Invalid credentials|
106500497|fnd.plsql.APP_EXCEPTION.RAISE_EXCEPTION.dict_auto_log|Unabled to call fnd_ldap_wrapper.create_user due to the following reason:
An unexpected error occurred. Please contact your System Administrator. (USER_NAME=SHARMAJ1)|

If this is the case, then during cloning the OID registration carried over from your live system was most likely broken.

Do a fresh registration using txkrun.pl:
SQL> delete from fnd_user_preferences where user_name='#INTERNAL';
SQL> commit;
$FND_TOP/bin/txkrun.pl -script=SetSSOReg -registerinstance=yes
$FND_TOP/bin/txkrun.pl -script=SetSSOReg -registeroid=yes -provisiontype=3

Thursday, 17 October 2013

Set the DataSource Connection Continuity to avoid Admin/Managed Service Restart


In an IDM domain, if the AdminServer is started before the database is up and the database is subsequently brought up, the OPSS data source does not refresh, which prevents access to applications on the Admin Server.

Connection Creation Retry Frequency >>> needed if the data source is down before the Admin Server starts.
Test Connections on Reserve >>> required if the data source goes down after a successful start.

Navigation: WebLogic Console >>> Services >>> Data Sources >>> Click on the Data Sources >>> Connection Pool >>> Advanced

This should be done for OAM, OID and eBusiness AccessGate datasources.


Understanding "Managed Server Independence" in WebLogic Configuration


"Managed Server Independence" specifies whether this Managed Server can be started when the Administration Server is unavailable.

Check that all managed servers have "Managed Server Independence" enabled (it is by default):
Navigation: WebLogic Console >>> Environment >>> Servers >>> Click on the Name of the Managed Server >>> Configuration >>> Tuning >>> Advanced
Check if Managed Server Independence is Enabled

If "Managed Server Independence" is enabled on all managed servers, you can restart the AdminServer without any problem; the managed servers will continue to work.

I did this for the OAM domain, IDM domain, and eBusiness AccessGate domain, so if any of the Admin Servers is down, your eBusiness SSO login continues to work.

How to Configure OAM to Use Load Balancer URL


Navigation: OAM Console >>> System Configuration >>> Access Manager >>> Access Manager Settings

Put the Load Balancer details in this screen.

For me it was a BigIP F5 iRule setup; make sure the following is already in place, done by the F5 team:
https://mkktestLBR1.lbrdomain.local >>> Redirects to mkktestOAMserver1.unixdomain.local at 14100 port with SSL terminated at F5 Level




After Fusion/OID Installation /em URL is not Accessible


While trying to access http://mkktestOIDserver1.unixdomain.local:7001/em after installation, I got the following error:

Error 503--Service Unavailable 
From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
10.5.4 503 Service Unavailable
The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay. If known, the length of the delay may be indicated in a Retry-After header. If no Retry-After is given, the client SHOULD handle the response as it would for a 500 response.

Note: The existence of the 503 status code does not imply that a server must use it when becoming overloaded. Some servers may wish to simply refuse the connection.

Locate targets.xml file and take a backup of the original file:
$DOMAIN_HOME/sysman/state/targets.xml

Immediately after the first line of <Targets>, add the following element:
<Target TYPE="oracle_ias_farm" NAME="Farm_IDMDomain" DISPLAY_NAME="Farm_IDMDomain">
<Property NAME="MachineName" VALUE="mkktestOIDserver1.unixdomain.local"/>
<Property NAME="Port" VALUE="7001"/>
<Property NAME="Protocol" VALUE="t3"/>
<Property NAME="isLocal" VALUE="true"/>
<Property NAME="serviceURL" VALUE="service:jmx:t3://mkktestOIDserver1.unixdomain.local:7001/jndi/weblogic.management.mbeanservers.domainruntime"/>
<Property NAME="WebLogicHome" VALUE="/opt/oracle/IDMLIVE_MW_HOME/WebLogic/wlserver_10.3"/>
<Property NAME="DomainHome" VALUE="/opt/oracle/IDMLIVE_MW_HOME/WebLogic/user_projects/domains/IDMDomain"/>
</Target>

Restart Admin Server.

This resolved the issue.