Friday, 30 May 2014

RMAN feature to delete the archive logs applied on Standby

This RMAN feature automatically purges archived logs from the Fast Recovery Area (FRA) once they have been applied to the standby database.
APPLIED ON STANDBY: allows the fast recovery area to delete archived logs that have been applied on all mandatory standby destinations.

Pre-11g: set the following parameter and restart the database.
SQL> alter system set "_log_deletion_policy"='ALL' scope=spfile;

Configure the following parameter in RMAN (primary and standby):

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

Now size the FRA accordingly. When there is space pressure in the FRA, it deletes the applied archived logs, and you will see messages like the following in the alert log:
Deleted Oracle managed file +ARCH/AGPROD/archive_1_3926_820676398.arc
Deleted Oracle managed file +ARCH/AGPROD/archive_1_3927_820676398.arc

This helps with archive log maintenance and protects against accidental loss of archived logs that have not yet been applied.
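The setup above can be sketched end to end. The FRA destination and 100G size below are assumptions for illustration (the +ARCH disk group matches the alert log messages shown above); adjust to your own retention needs:

```sql
-- SQL*Plus, on primary and standby: size the FRA (example values)
ALTER SYSTEM SET db_recovery_file_dest      = '+ARCH' SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest_size = 100G    SCOPE=BOTH;
```

Then, in RMAN on both sites, set and verify the deletion policy:

```sql
-- RMAN prompt
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
SHOW ARCHIVELOG DELETION POLICY;
```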

Monday, 17 February 2014

Oracle Identity and Access Management

Recently, I conducted a training on Oracle Identity Management related products; here are the various topics I covered.


  • Technical insight on Oracle Access Manager, Oracle Internet Directory, Oracle Virtual Directory, WebGate, AccessGate and Windows Native Authentication. In this topic I mostly talked about my implementation/support experience with the IDM products and the lessons learned from each.
  • Installation and Implementation Steps for Identity Management Components.
  • Load Balancer in front of OAM and WebGate
  • Basic Performance Tuning on OAM, EAG, OID, HTTP WebTier, OVD
  • Proxy SSL Configuration for OAM, eBusiness, EAG Components
  • Integration of OAM with eBusiness Suite and WebCentre Content Management.
  • Provisioning of Identity Management with Fusion Apps
Keep Learning !!!!!

Tuesday, 11 February 2014

Security on Oracle Data Dictionary | O7_DICTIONARY_ACCESSIBILITY


The data dictionary tables and views for a given database are stored in the SYSTEM tablespace and are owned by the user SYS. Connecting to the database with the SYSDBA privilege gives full access to the data dictionary.

SQL> SELECT TABLE_NAME FROM DICTIONARY;

The initialization parameter O7_DICTIONARY_ACCESSIBILITY (when set to FALSE) secures the Oracle data dictionary.

Oracle Database provides highly granular privileges. One class of these, commonly referred to as the ANY privileges (for example, DROP ANY TABLE), can affect the data dictionary. You can protect the Oracle data dictionary from accidental or malicious use of an ANY privilege by setting the O7_DICTIONARY_ACCESSIBILITY initialization parameter to FALSE.

A database restart is required for a change to O7_DICTIONARY_ACCESSIBILITY to take effect.
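A minimal sketch of checking and setting the parameter; it is static, so the change goes to the spfile and takes effect after the restart:

```sql
-- Check the current value
SHOW PARAMETER o7_dictionary_accessibility

-- Static parameter: SCOPE=SPFILE, then restart
ALTER SYSTEM SET o7_dictionary_accessibility = FALSE SCOPE=SPFILE;

SHUTDOWN IMMEDIATE
STARTUP
```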

Monday, 10 February 2014

Concurrent Manager/Request Tuning(Supply Chain Cost Rollup - Print Report)



Recently I was engaged in a Concurrent Processing performance tuning exercise. The queue for the "Cost Rollup Manager" was growing rapidly.
This manager is responsible for running only "Supply Chain Cost Rollup - Print Report", which should normally complete in under 30 seconds.

Although the underlying code turned out to be the cause, I am writing this up because the same queries can be used to tune other managers and requests as well.

Find the Requests a Manager would Run

select  ptl.user_concurrent_program_name,qtl.user_concurrent_queue_name,t.request_id
  from Fnd_Concurrent_Requests t,
       FND_CONCURRENT_PROCESSES k,
       Fnd_Concurrent_Queues_TL QTL,
       Fnd_Concurrent_Programs_TL PTL 
  where k.concurrent_process_id = t.controlling_manager
    and QTL.Concurrent_Queue_Id = k.concurrent_queue_id
    and ptl.concurrent_program_id=t.concurrent_program_id
    AND QTL.LANGUAGE='US'
    AND PTL.USER_CONCURRENT_PROGRAM_NAME LIKE '%'
    AND qtl.user_concurrent_queue_name ='Cost Rollup Manager'
ORDER BY ptl.user_concurrent_program_name DESC;

Find the Requests Running More than 30 Minutes

undefine start_date
undefine end_date

set pages 10000
set verify off
column request_id format 99999999 heading 'REQUEST'
column user_name format a17
column phase format a10
column status format a12
column start_date format a5
column completion_date format a5 heading 'END'
column avg_run_time format 9999 heading 'AVG TIME'
column min_run_time format 9999 heading 'MIN TIME'
column max_run_time format 9999 heading 'MAX TIME'
column program_name format a50

select
    p.user_concurrent_program_name program_name,
    count(r.request_id),
    avg((nvl(r.actual_completion_date,sysdate) - r.actual_start_date) * 24 * 60) avg_run_time,
    min((nvl(r.actual_completion_date,sysdate) - r.actual_start_date) * 24 * 60) min_run_time,
    max((nvl(r.actual_completion_date,sysdate) - r.actual_start_date) * 24 * 60) max_run_time
from
    apps.fnd_concurrent_requests r,
    apps.fnd_concurrent_processes c,
    apps.fnd_concurrent_queues q,
    apps.fnd_concurrent_programs_vl p
where
    p.concurrent_program_id = r.concurrent_program_id
    and p.application_id = r.program_application_id
    and c.concurrent_process_id = r.controlling_manager
    and q.concurrent_queue_id = c.concurrent_queue_id
    and p.application_id >= &&ApplicationId
    and r.actual_start_date >= sysdate-31
    and r.status_code = 'C'
    and r.phase_code in ('C')
    and (nvl(r.actual_completion_date,r.actual_start_date) - r.actual_start_date) * 24 * 60 > 30
    and p.user_concurrent_program_name not like 'Gather%Statistics%'
    and (
      (nvl(r.actual_completion_date,r.actual_start_date) - r.actual_start_date) * 24 > 16
      or
      (r.actual_start_date-trunc(r.actual_start_date)) * 24 between 9 and 17
      or
      (r.actual_completion_date-trunc(r.actual_completion_date)) * 24 between 9 and 17
    )
group by p.user_concurrent_program_name
/

For a particular date range, how many requests were submitted for a particular concurrent program, and how long each took:

SELECT -- /*+ first_rows*/
  fcr.request_id req_id,
  fcp.concurrent_program_name conc_prg,
  PT.USER_CONCURRENT_PROGRAM_NAME USR_CONC_PRG,
  TO_CHAR (FCR.ACTUAL_START_DATE, 'DD-MON-YY HH24:MI:SS') START_DATE,
  ---NVL (TO_CHAR (fcr.actual_completion_date, 'mm-MON-yy HH24:MI:SS'), 'Not complete ') end_date,
  SUBSTR ( DECODE ( TRUNC (actual_completion_date - actual_start_date), 0, NULL, TRUNC (actual_completion_date - actual_start_date)
  || 'D' )
  || LPAD ( TRUNC(MOD ( (actual_completion_date - actual_start_date) * 24, 24 )), 2, 0 )
  || ':'
  || LPAD ( TRUNC(MOD ( (actual_completion_date - actual_start_date) * 24 * 60, 60 )), 2, 0 )
  || ':'
  || LPAD ( TRUNC(MOD ( (actual_completion_date - actual_start_date) * 24 * 60 * 60, 60 )), 2, 0 ), 1, 10 ) TIME,
  flv1.meaning phase,
  flv2.meaning status,
  fcr.argument_text parameters,
  fcr.oracle_process_id
FROM applsys.fnd_concurrent_programs fcp,
  applsys.fnd_concurrent_programs_tl pt,
  applsys.fnd_concurrent_requests fcr,
  fnd_lookup_values flv1,
  fnd_lookup_values flv2
WHERE FCR.CONCURRENT_PROGRAM_ID = FCP.CONCURRENT_PROGRAM_ID
AND TRUNC (fcr.ACTUAL_START_DATE) BETWEEN TO_DATE ('17-JAN-2014', 'DD-MON-YYYY') AND TO_DATE ('26-JAN-2014', 'DD-MON-YYYY')
AND fcr.program_application_id = fcp.application_id
AND fcp.application_id         = pt.application_id
AND fcp.concurrent_program_id  = pt.concurrent_program_id
AND pt.LANGUAGE                = 'US'
AND fcr.phase_code             = flv1.lookup_code
AND flv1.lookup_type           = 'CP_PHASE_CODE'
AND flv1.LANGUAGE              = 'US'
AND flv1.view_application_id   = 0
AND fcr.status_code            = flv2.lookup_code
AND flv2.lookup_type           = 'CP_STATUS_CODE'
AND flv2.LANGUAGE              = 'US'
AND FLV2.VIEW_APPLICATION_ID   = 0
AND PT.USER_CONCURRENT_PROGRAM_NAME LIKE '%Supply Chain Cost Rollup - Print Report%'
--ORDER BY FCR.ACTUAL_START_DATE DESC;
ORDER BY time DESC;

Solutions are outlined in the following notes:

Supply Chain Cost Rollup - Print Report Run Very Long time (Doc ID 1463306.1)
Supply Chain Cost Rollup - Print Report Performance Issues (Doc ID 1588101.1)

Enterprise Password Management/Self Service Password Management using Oracle Enterprise SSO


Recently I got a chance to work on Enterprise Password Management related activities using ESSO.

It can be broadly categorized as:
■ Application Password Change
■ Self-Service Windows Password Reset

I will elaborate on "Self-Service Windows Password Reset" in this section.

■ Self-Service Windows Password Reset

● Provides a fully integrated self-service Windows password reset solution for end-users, eliminating help desk calls and speeding the reset process. The user is challenged with a series of challenge questions which must be answered correctly in order for password reset to succeed.
● Challenge questions and acceptable answers, including the “weight” of each question, are administrator-configurable. 
● Self-service password reset functionality is accessed directly from the Windows logon dialog (integrated via GINA or credential provider link, depending on the OS version), and remotely via Web browser.

Questions and answers can be either specified by the administrator and stored directly within the ESSO-PR data store or retrieved dynamically via standard APIs from external systems, such as HR databases. Furthermore the ESSO PR Client can direct a user to the OIM KBA authentication engine to facilitate change password via that system. 

The weight of each question can be individually configured by the administrator using ESSO-PR’s confidence-based rating system, so that one question can count more towards granting the user access than another. Correct answers add to the user’s quiz score, while incorrect answers subtract from it but do not necessarily disqualify the user. Once the user correctly answers enough questions to pass the quiz, access to the account unlock and/or password reset functionality is granted.

Questions fall into three categories:
● Required questions
● Eliminator questions
● Optional questions

The administrator can assign individual questions to specific users or groups using the ESSO-PR Administrative Console.

Architecture diagram is presented below for your reference:




Thursday, 6 February 2014

Default Password Security Settings and Various Audits in 11g


If applications depend on the default password security settings of Oracle Database 10g Release 2 (10.2), you can revert to those settings until the applications are modified to work with the Release 11g password security settings. To do so, run the undopwd.sql script.

undopwd.sql: This script is called by DBCA to undo the 11g secure configuration changes to the password portion of the default profile. It reverts to the default 10gR2 settings. It is not intended to be run during upgrade, since that would undo all customer settings as well.

secconf.sql: This script enables the 11g default password security settings and turns on the audit options listed below.

select resource_name, limit from dba_profiles where profile='DEFAULT' and resource_type='PASSWORD';

Oracle Database 10gR2 Settings:

ALTER PROFILE DEFAULT LIMIT
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LIFE_TIME UNLIMITED
PASSWORD_GRACE_TIME UNLIMITED
PASSWORD_LOCK_TIME UNLIMITED
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX UNLIMITED
;

Oracle Database 11g Settings:

ALTER PROFILE DEFAULT LIMIT
PASSWORD_LIFE_TIME 180
PASSWORD_GRACE_TIME 7
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX UNLIMITED
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LOCK_TIME 1
;

Audit alter any table by access;
Audit create any table by access;
Audit drop any table by access;
Audit Create any procedure by access;
Audit Drop any procedure by access;
Audit Alter any procedure by access;
Audit Grant any privilege by access;
Audit grant any object privilege by access;
Audit grant any role by access;
Audit audit system by access;
Audit create external job by access;
Audit create any job by access;
Audit create any library by access;
Audit create public database link by access;
Audit exempt access policy by access;
Audit alter user by access;
Audit create user by access;
Audit role by access;
Audit create session by access;
Audit drop user by access;
Audit alter database by access;
Audit alter system by access;
Audit alter profile by access;
Audit drop profile by access;
Audit database link by access;
Audit system audit by access;
Audit profile by access;
Audit public synonym by access;
Audit system grant by access;
Audit directory by access;
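To verify which audit options are actually in effect after running secconf.sql (or the statements above), the standard data dictionary views can be queried; a sketch:

```sql
-- Statement-level audit options currently enabled
SELECT audit_option, success, failure
  FROM dba_stmt_audit_opts
 ORDER BY audit_option;

-- Privilege-level audit options currently enabled
SELECT privilege, success, failure
  FROM dba_priv_audit_opts
 ORDER BY privilege;
```

Options enabled with BY ACCESS show ACCESS in the SUCCESS and FAILURE columns.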

Wednesday, 5 February 2014

Oracle Database Security - Steps to achieve high Security


Recently, I was engaged in Oracle Database Security related tasks and will provide the details over multiple posts. For now, here is the road map I used.


Tuesday, 28 January 2014

How to Disable Oracle Label Security | OLS


OLS and the Audit table AUD$:

●● Installation of Label Security causes the audit table SYS.AUD$ to be dropped and recreated in the SYSTEM schema. Its existing contents are copied into the new SYSTEM.AUD$ table.
●● If you deinstall Label Security, AUD$ is recreated in the SYS schema and dropped from the SYSTEM schema. Again the contents are copied from one to the other before dropping.

Starting with version 11.2.0.1, when you install the Enterprise Edition, all options that belong to it are always installed; the Options selections in the installer only decide whether they are enabled. To enable or disable OLS afterwards, you can use chopt:

chopt enable lbac
chopt disable lbac

This works on both Unix/Linux and Windows operating systems.

If you want to disable OLS for a particular policy in 11gR2 you can use following command:
SA_AUDIT_ADMIN.NOAUDIT ('AROLS', 'XXCTO', 'APPLY, REMOVE');

Syntax:
PROCEDURE AUDIT (
 policy_name IN VARCHAR2,
 users IN VARCHAR2 DEFAULT NULL,
 option IN VARCHAR2 DEFAULT NULL,
 type IN VARCHAR2 DEFAULT NULL,
 success IN VARCHAR2 DEFAULT NULL);

Disabling Oracle Label Security for 12c

If Oracle Database Vault has been enabled, then do not disable Oracle Label Security.

SELECT VALUE FROM V$OPTION WHERE PARAMETER = 'Oracle Label Security';
SELECT PARAMETER, VALUE FROM V$OPTION WHERE PARAMETER = 'Oracle Database Vault';

sqlplus '/as sysdba'
EXEC LBACSYS.OLS_ENFORCEMENT.DISABLE_OLS;
SHUTDOWN IMMEDIATE
STARTUP

In an Oracle Real Application Clusters (Oracle RAC) environment or a multitenant environment, repeat these steps on each Oracle RAC node or PDB on which you enabled Oracle Label Security.

To re-enable OLS later:
EXEC LBACSYS.OLS_ENFORCEMENT.ENABLE_OLS;

Removal of OLS Data Dictionary:

This 9i method still works for higher versions.

cd $ORACLE_HOME/rdbms/admin/
sqlplus "/ as sysdba"
START catnools.sql


For 11gR2, this does not require downtime.

For 12c, it requires downtime.


Friday, 24 January 2014

What Does HRGLOBAL Do?


HRGLOBAL does its work only at the database end; in short, it runs FNDLOAD, PYLOAD, FFXMLC and various SQL scripts. It needs to be executed just once, from the Admin node (and on non-shared applications likewise just once). Do not be under the misconception that you must execute it multiple times from various nodes.

Below is a short summary of the steps it performs, though not necessarily in order. In summary, HRGLOBAL only touches your database.

1. It disables HRMS access using pydsblhr.sql. This script can also be run manually if one wants to restrict HRMS access.

2. It cleans up orphan data using payorpcleans1.sql, payorpcleans2.sql and payorpcleans3.sql.

3. Calls SQL files for online patching.

4. Puts the details of the current environment into files using hrglobal_info.sql and hrglobal_chkpreq.sql.

5. Executes FNDLOAD to load help data using the afscprof.lct file.

6. Runs PYLOAD and FNDLOAD to add legislative data.

7. Regenerates the balance data for HR legislations.

8. Performs Fast Formula related activities using FFXMLC.

9. Performs country-specific HRMS database activities.

10. Enables HRMS access using pyenblhr.sql.


Monday, 20 January 2014

RFS[2]: no standby redo logfiles of size XXXXXX blocks available


Checking the standby logs on the standby database, all SRLs were ACTIVE, dated from weeks ago. Normally we see one in use per thread and the others UNASSIGNED.

STANDBY> select * from v$standby_log;
STANDBY> select STATUS, THREAD#, SEQUENCE# from v$standby_log;

On Dec 1, SRLs were filled but never archived (stuck), and hence remained ACTIVE and could no longer be assigned. At that time the primary was archiving every minute, with only one ARCH process available to archive to the standby. On the standby, log_archive_max_processes was set to 2, hence only one ARCH process was archiving locally and was most likely unable to cope with the amount of archiving required.

1. On Standby/Primary set log_archive_max_processes=10
alter system set log_archive_max_processes=10 scope=both;

OR
Another way around this: if the logs have been applied, as they had in this case, the old standby logfiles can be dropped and recreated to clear the problem.
alter database drop standby logfile '<logfile_name>';
alter database add standby logfile group x '<logfile_name>';

SQL> col MEMBER FORMAT A100
SQL> set linesize 200
SQL> SELECT GROUP#, STATUS, TYPE, MEMBER FROM V$LOGFILE WHERE TYPE='STANDBY';
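As a rule of thumb, each thread should have at least one more standby redo log group than it has online redo log groups, sized the same as the online logs. A sketch to compare the two, using the standard v$log and v$standby_log views:

```sql
-- Online redo: group count and size per thread
SELECT thread#, COUNT(*) AS groups, MAX(bytes) AS bytes
  FROM v$log
 GROUP BY thread#;

-- Standby redo: group count and size per thread
SELECT thread#, COUNT(*) AS groups, MAX(bytes) AS bytes
  FROM v$standby_log
 GROUP BY thread#;
```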

Explanation of Various Parameters for Workflow Background Process Engine


ITEM TYPE:
Specify an item type to restrict this engine to activities associated with that item type. If you do not specify an item type, the engine processes any deferred activity regardless of its item type.

MINIMUM THRESHOLD:
Specify the minimum cost that an activity must have for this background engine to execute it, in hundredths of a second.

MAXIMUM THRESHOLD:
Specify the maximum cost that an activity can have for this background engine to execute it, in hundredths of a second.
By using Minimum Threshold and Maximum Threshold, multiple background engines can be created to handle very specific types of activities. The default values for these arguments are 0 and 100, so that the background engine runs activities regardless of cost.

PROCESS DEFERRED: 
Specify whether this background engine checks for deferred activities. Setting this parameter to YES allows the engine to check for deferred activities.

PROCESS TIME OUT: 
Specify whether this background engine checks for activities that have timed out. Setting this parameter to YES allows the engine to check for timed out activities.

PROCESS STUCK: 
Specify whether this background engine checks for stuck processes. Setting this parameter to YES allows the engine to check for stuck processes.
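Assuming the standard Oracle Workflow API, the parameters above correspond to the WF_ENGINE.BACKGROUND procedure, which the Background Process concurrent program wraps. A sketch of invoking it directly from SQL*Plus, with the default thresholds mentioned above:

```sql
BEGIN
  wf_engine.background(
    itemtype         => NULL,  -- NULL processes all item types
    minthreshold     => 0,     -- minimum activity cost (hundredths of a second)
    maxthreshold     => 100,   -- maximum activity cost
    process_deferred => TRUE,
    process_timeout  => TRUE,
    process_stuck    => TRUE);
END;
/
```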

FNDREVIVER - Theories and Concepts to remember


Theories on FNDREVIVER 

FNDREVIVER (also known as reviver.sh) handles momentary disconnects in the system where the concurrent managers and/or forms go down, and forms later reconnects while the concurrent managers do not. FNDREVIVER revives the Internal Concurrent Manager (ICM) when it fails.

When ICM can no longer get a database connection, it kills itself and spawns the reviver. Reviver loops every 30 seconds, attempting to login to the database as apps user. Once login is successful, it starts up the ICM again.

If the failure is due to a brief network outage, or database issue, the managers are restarted, so the client does not have to restart the managers manually.

The reviver is a recovery mechanism that runs in the background. In a Real Application Clusters (RAC) environment, when the primary node goes down and the ICM is set to migrate to the secondary node, the reviver parameter is passed to the secondary node.

The easiest way to determine if reviver.sh exists is by checking the $FND_TOP/bin directory.

The variable resides in the context file under 's_cp_reviver' and can be set to "enabled" or "disabled". Based on the value of s_cp_reviver in the context file, AFCPDNR is started with a value of either "enabled" or "disabled" .

The reviver is started when starting the ICM, by passing the parameter reviver="enabled". You do this on the node where you start the manager, and if the ICM is set to migrate to the second node, this parameter will be passed to the second node.
A common misconception is that users must start reviver.sh manually; however, this is not the intended use. It is automatically enabled when the parameter REVIVER_PROCESS="enabled" is passed via the adcmctl.sh concurrent manager startup script.

On a single node concurrent processing system, FNDREVIVER is the only way to recover from a database connection loss. 

On a two-node system, there is another factor: the Internal Monitor (FNDIMON). FNDIMON will race to restart the internal manager in a multi-node setup, and by the time the reviver starts, it will likely see that the ICM is already running and exit accordingly.

FNDIMON checks whether it can connect to the database in order to determine if the ICM is running, and if the database connection is not available it fails to run and exits accordingly. The reviver is a shell script which loops until a connection is obtained, and then starts the manager accordingly. The reviver's job is the last line of defense after a database connection failure, as FNDIMON only works when the database connection is available. 

In the event the ICM goes down due to a network outage, then the reviver would be needed to bring the ICM back up. 

Context File Parameters related to FNDREVIVER

The following parameters can be set in the context file, and then autoconfig should be re-run to enable reviver: 

Concurrent Processing Reviver Process (s_cp_reviver) [Allowed values are {enabled, disabled}]
<cp_reviver oa_var="s_cp_reviver">enabled</cp_reviver> 

Reviver Process PID Directory Location (s_fndreviverpiddir) 
This variable specifies the path where ICM reviver process pid file will be created. Oracle recommends using a local disk as the PID file location because the reviver process may run when the network is down. 
<fndreviverpiddir oa_var="s_fndreviverpiddir">/u02/oracle/visappl/fnd/11.5.0/log</fndreviverpiddir> 

High Water Mark - Some Useful Information to remember

High Water Mark


  • The high water mark is the boundary between used and unused space in a segment. As requests for new free blocks that cannot be satisfied by existing free lists are received, the block to which the high water mark points becomes a used block, and the high water mark is advanced to the next block. In other words, the segment space to the left of the high water mark is used, and the space to the right of it is unused.
  • The high water mark is the level above which blocks have never been formatted to receive data.
  • When a table is created in a tablespace, some initial number of blocks/extents is allocated to the table. Later, as the number of rows inserted increases, extents are allocated accordingly.
  • Inserting records into the table raises the high water mark.
  • Deleting records does not lower the high water mark, and therefore does not raise 'Empty_blocks'. After deleting records, a query of dba_segments or dba_tables shows no change.
  • ALTER TABLE <TABLE_NAME> DEALLOCATE UNUSED; does not bring the high water mark down (it only releases unused space above the HWM).
  • The high water mark can be reset with a TRUNCATE TABLE, by moving the table to another tablespace, or with SHRINK SPACE.

           SQL> ALTER TABLE <tablename> SHRINK SPACE;

  • ALTER TABLE ... MOVE can be a good method to lower the HWM, even when the move occurs within the same tablespace.
  • High water mark after exporting/deleting/importing a table: NO, the HWM is not reset.
  • All Oracle segments have an upper boundary containing the data within the segment. This upper boundary is called the "high water mark" or HWM.
  • The high water mark marks blocks that are allocated to a segment but not yet used. It is reset to "zero" (positioned at the start of the segment) when a TRUNCATE command is issued. So you can have empty blocks below the high water mark, but that means the block has been used (and is probably empty because of deletes). Oracle does not move the HWM, nor does it shrink tables, as a result of deletes.
  • Data files do not have a high water mark; only segments have one.
  • Full table scans typically read up to the high water mark.
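A minimal sketch of resetting the HWM with SHRINK SPACE (the table name is hypothetical; SHRINK SPACE requires an ASSM tablespace and row movement enabled on the table):

```sql
-- Row movement must be enabled before a shrink
ALTER TABLE my_big_table ENABLE ROW MOVEMENT;

-- Compact the segment and lower the HWM; CASCADE also shrinks dependent indexes
ALTER TABLE my_big_table SHRINK SPACE CASCADE;

-- Optionally turn row movement back off
ALTER TABLE my_big_table DISABLE ROW MOVEMENT;
```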



Delete Concurrent Program and Executable



Begin
  fnd_program.delete_program('AG_HR_TRANSFER_PROG', 'MKK Group Custom');
  fnd_program.delete_executable('AG_HR_TRANSFER_PROG', 'MKK Group Custom');
  commit;
End;
/

'MKK Group Custom' is your custom application's registered name.
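To confirm the program and executable are really gone, something like the following can be run against the standard FND tables (a sketch; the names match the example above and both queries should return no rows):

```sql
SELECT concurrent_program_name
  FROM fnd_concurrent_programs
 WHERE concurrent_program_name = 'AG_HR_TRANSFER_PROG';

SELECT executable_name
  FROM fnd_executables
 WHERE executable_name = 'AG_HR_TRANSFER_PROG';
```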

Recreate FND_CONCURRENT_QUEUES Information


Publishing this at the request of a friend; it was applicable for him after a clone.

Run FND_CONC_CLONE
EXEC FND_CONC_CLONE.SETUP_CLEAN;
COMMIT;
EXIT;

Run AutoConfig on all tiers, firstly on the DB tier and then the APPS tiers and Web tiers to repopulate the required system tables. 

Connect to SQLPLUS as APPS user and run the following statement :
select CONCURRENT_QUEUE_NAME from FND_CONCURRENT_QUEUES where CONCURRENT_QUEUE_NAME like 'FNDSM%';

If the above SQL does not return any value please do the following:
cd $FND_TOP/patch/115/sql
START afdcm037.sql;

Check again that FNDSM entries now exist:
select CONCURRENT_QUEUE_NAME from FND_CONCURRENT_QUEUES where CONCURRENT_QUEUE_NAME like 'FNDSM%';

Run CMCLEAN.sql and start the Managers.

This would help after cloning if the Managers are not coming up.

Friday, 17 January 2014

Thumb rule for sizing UNDO Tablespace Size


Sizing an UNDO tablespace requires three pieces of data:
(UR) UNDO_RETENTION in seconds
(UPS) Number of undo data blocks generated per second
(DBS) Database block size (db_block_size)

The undo space needed is calculated as:
UndoSpace = UR * (UPS * DBS)

This query gives you the required minimum size in MB:

SELECT (UR * (UPS * DBS)/1024/1024) AS "MB" FROM (SELECT value AS UR FROM v$parameter WHERE name = 'undo_retention'),
(SELECT undoblks/((end_time-begin_time)*86400) AS UPS
FROM v$undostat
WHERE undoblks = (SELECT MAX(undoblks) FROM v$undostat)),
(SELECT block_size AS DBS
FROM dba_tablespaces
WHERE tablespace_name = (SELECT UPPER(value) FROM v$parameter WHERE name = 'undo_tablespace'));

Thumb rule from past experience: OEM 12c Cloud Control suggests setting the size to 10 times this value. I have found at multiple clients that 5 times works well in most cases, with observation afterwards.

Now, suppose your current undo_retention is 900 and you are increasing it to 9000. The required UNDO tablespace size (in MB) would be:
Result_Of_Above_Query * 5 * 10
5 is the thumb-rule multiplier.
10 is needed because you are increasing UNDO_RETENTION to 10 times its current value.
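The arithmetic above can be sketched in a few lines; the sample figures (100 undo blocks per second, 8 KB blocks) are assumptions for illustration:

```python
def undo_space_mb(undo_retention_s, undo_blocks_per_s, block_size_bytes):
    """Minimum undo tablespace size in MB: UR * (UPS * DBS)."""
    return undo_retention_s * undo_blocks_per_s * block_size_bytes / 1024 / 1024

# Example: 900 s retention, 100 undo blocks/s, 8192-byte blocks
base_mb = undo_space_mb(900, 100, 8192)   # minimum size from the formula
sized_mb = base_mb * 5                    # thumb-rule multiplier
bumped_mb = sized_mb * 10                 # retention raised from 900 to 9000
print(base_mb, sized_mb, bumped_mb)       # 703.125 3515.625 35156.25
```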

Friday, 27 December 2013

iRecruitment Index Synchronization - All Details


Why iRecruitment Index Synchronization?

To keep the text indexes up to date for iRecruitment documents and job postings, run the iRecruitment Index Synchronization process. Oracle iRecruitment uses Oracle Text to perform content-based searches on resumes and job posting details. When candidates upload resumes or managers post new job details, you must synchronize the index at regular intervals to keep user searches accurate.

Recommended way to run iRecruitment Index Synchronization (from MOS documents):

• Posting Index indicates index of job postings that managers post.
• Document Index indicates index of candidates' resumes.
• Online index rebuild - to run every 5 minutes

Online index rebuild - to run every 5 minutes
Note: If an online synchronization process starts before the previous one has completed, the process will display an error. Ensure that you schedule it to run 5 minutes after the previous run completes, not 5 minutes after it starts.
In the Online mode, the process adds new entries to the index, enabling simultaneous searches.

Full index rebuild - to run each night
In the Full mode, the process defragments the index, reducing its size, and optimizing the performance. The process does not add new entries to the index.

Impact if Not Done

This is recommended to keep iRecruitment searches accurate and performant; it is a routine DBA activity and should be done.

Monday, 11 November 2013

opatch auto - 11gR2


The OPatch utility has automated patch application for the Oracle Grid Infrastructure (GI) home and the Oracle Database home.

On an 11gR2 RAC installation, if no database is associated with the RDBMS home, then when applying a patch using the "opatch auto" command, OPatch will patch the Grid Infrastructure home but not the RDBMS home.
But if one or more databases are associated with the RDBMS home, then OPatch will patch both the Grid Infrastructure home and the RDBMS home.

"opatch auto" retrieves the DB home information from the configured databases. So, if there is no existing database, "opatch auto" will skip that RDBMS home while patching.

In order to patch an RDBMS home that has no database configured, use the "-oh" option of opatch auto. For example:
opatch auto < Patch Location > -oh /ora/oh1,/ora/oh2,/ora/oh3

Exadata Storage Server Patching - Some details

Exadata Storage Server Patching

●● Exadata Storage Server patch is applied to all cell nodes.
●● Patching is launched from compute node 1 and will use dcli/ssh to remotely patch each cell node.

●● Exadata Storage Server Patch zip also contains Database Minimal Pack or Database Convenience Pack, which are applied to all compute nodes. This patch is copied to each compute node and run locally.

●● Applying the storage software on the cell nodes also changes the Linux version; applying the database minimal pack on the compute nodes does NOT change the Linux version.
To upgrade Linux on the compute nodes, follow MOS Note 1284070.1.

A non-rolling patch apply is much faster because you apply the patch on all cell nodes simultaneously, and there is no exposure to a single disk failure. Please note, this requires a full outage.

With a rolling patch apply, database downtime is not required, but the patch application time is much higher. The major risk is a disk failure during patching; consider ASM high redundancy to reduce disk-failure exposure.

Grid disks offline >>> Patch Cel01 >>> Grid disks online
Grid disks offline >>> Patch Cel02 >>> Grid disks online
Grid disks offline >>> Patch Cel..n>>> Grid disks online

Rolling patch application can be a risky affair; please be apprised of the following:
Do not use the -rolling option to patchmgr for a rolling update or rollback without first applying the required fixes on the database hosts.
./patchmgr -cells cell_group -patch_check_prereq -rolling >>> Make sure this is successful and review the spool carefully.
./patchmgr -cells cell_group -patch -rolling

Non-rolling Patching Command:
./patchmgr -cells cell_group -patch_check_prereq
./patchmgr -cells cell_group -patch


How to Verify Cell Node is Patched Successfully

# imageinfo

Output of this command gives some good information, including Kernel Minor Version.

Active Image Version: 11.2.2.3.1.110429.1
Active Image Status: Success

If "Active Image Status" shows anything other than success, you need to look at validations.log and vldrun*.log. The image status is marked as failure when a failure is reported in one or more validations.
Check the /var/log/cellos/validations.log and /var/log/cellos/vldrun*.log files for any failures.

If a specific validation failed, then the log will indicate where to look for the additional logs for that validation.

Sunday, 10 November 2013

How to find the Cell Group in Exadata Storage Servers


cd /opt/oracle.SupportTools/onecommand
cat cell_group

#cat cell_group
xxxxxcel01
xxxxxcel02
xxxxxcel03
#

This means that when you start the patching, it applies the patch on cell node xxxxxcel01, then on xxxxxcel02, and finally on xxxxxcel03.
Since I have an Exadata Quarter Rack, there are only 3 storage servers (cell nodes), and all are patched during cell node patching.

From the number of cells you can determine whether a Quarter, Half, or Full Rack Exadata is in place.