Monday 7 October 2013

Consolidated Recommendations for E-Business Suite Maintenance and Performance Improvement

1. Apply Recommended Performance Patches
Oracle E-Business Suite Recommended Performance Patches [ID 244040.1]

2. Gather Schema Statistics Schedule
Make sure Gather Schema Statistics is run on a periodic basis; we recommend scheduling it to run every 15 days.
Identify tables that are heavily updated/deleted, then gather statistics on those tables and rebuild their indexes at a 3-4 day interval, as sketched below.
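For reference, a minimal sketch of the two jobs described above using the standard FND_STATS APIs (the schedule itself is normally created through the "Gather Schema Statistics" concurrent program; XXDOT.XX_HIGH_DML_TAB and its index are placeholder names):

SQL> -- Every ~15 days: statistics for all EBS schemas
SQL> EXEC fnd_stats.gather_schema_stats('ALL');

SQL> -- Every 3-4 days: targeted statistics plus an index rebuild for heavy update/delete tables
SQL> EXEC fnd_stats.gather_table_stats('XXDOT', 'XX_HIGH_DML_TAB');
SQL> ALTER INDEX XXDOT.XX_HIGH_DML_TAB_N1 REBUILD ONLINE;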

3. Verify Optimizer Statistics Over a Period of 15 Days
Over a period of 15 days, monitor the optimizer statistics using bde_last_analyzed.sql.

This generates the output in HTML format which contains the following files:
>> bde_last_analyzed_ind_partitions.html
>> bde_last_analyzed_indexes.html
>> bde_last_analyzed_summary.html
>> bde_last_analyzed_tab_partitions.html
>> bde_last_analyzed_tables.html

Make sure there are NO stale statistics in the database; if any are found, identify the objects from the reports above and arrange table/index (object-level) statistics gathering for the affected objects.

File bde_last_analyzed.sql can be downloaded from Metalink Note: bde_last_analyzed.sql - Verifies CBO Statistics [ID 163208.1]
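As a quick cross-check of the bde_last_analyzed reports, a simple dictionary query can list the objects the database itself flags as stale (a sketch, assuming table monitoring is enabled, which is the default):

SQL> SELECT owner, table_name, last_analyzed, stale_stats
       FROM dba_tab_statistics
      WHERE stale_stats = 'YES'
        AND owner IN (SELECT oracle_username FROM fnd_oracle_userid)
      ORDER BY owner, table_name;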

4. Database Initialization Parameters as per Oracle Recommendations
Execute bde_chk_cbo.sql as APPS user and analyze the output.
File bde_chk_cbo.sql can be downloaded from Oracle Support Metalink Note: bde_chk_cbo.sql - EBS initialization parameters - Healthcheck [ID 174605.1]
This will confirm that all obsolete parameters are commented out and that the EBS database is using the latest parameter set recommended by Oracle.
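In addition to the bde_chk_cbo.sql output, a quick way to list every parameter that has been explicitly set (and so compare it against the recommended set) is:

SQL> SELECT name, value
       FROM v$parameter
      WHERE isdefault = 'FALSE'
      ORDER BY name;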

5. Tuning the Requirement of UNDO Tablespace
Regarding the parameters related to system undo: the database releases certified for use with Oracle E-Business Suite Release 12 only support the use of system managed undo (SMU).
SMU is more efficient than traditional rollback segments and reduces the possibility of snapshot too old (ORA-1555) errors.

>> The undo tablespace becomes crucial for the elapsed times of concurrent jobs and their corresponding commit windows.
>> Setting UNDO_RETENTION to a value higher than 900 (the default) is recommended if you experience "ORA-1555: Snapshot too old" errors.
>> Automatic undo is not supported for LOBs.

Error Message in Log File:
Adding new assignments ...
SHRD0018: Oracle database error trapped at function: GLBMBO - label: glbmbo4_err
  ORA-01555: snapshot too old: rollback segment number 448 with name "_S

The current value of UNDO_RETENTION was 900; it was changed to 14400 seconds (4 hours):
SQL> ALTER SYSTEM SET UNDO_RETENTION = 14400 SCOPE=BOTH; [ If a pfile/ifile is also used, make the same change there so that it persists ]

As this is changed from the default value of 900 to 14400, also add some extra space to the undo tablespace:
SQL> ALTER TABLESPACE APPS_UNDOTS1 ADD DATAFILE '+SUP01' AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;
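Before and after such a change, V$UNDOSTAT can be used to sanity-check the retention value: UNDO_RETENTION should comfortably exceed the longest-running query, and the error counters should stay at zero (a sketch over the statistics currently kept in memory):

SQL> SELECT MAX(maxquerylen)   AS longest_query_sec,
            SUM(ssolderrcnt)   AS ora_1555_errors,
            SUM(nospaceerrcnt) AS out_of_space_errors
       FROM v$undostat;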


6. Setting up the TEMP Tablespace

It is recommended that the temporary tablespace for Oracle E-Business Suite users be created using locally managed temp files with uniform extent sizes of 128K. The 128K extent size is recommended because numerous modules, such as Pricing and Planning, make extensive use of global temporary tables which also reside in the temporary tablespace. Since each user instantiates a temporary segment for these tables, large extent sizes may result in space allocation failures.

The following is an example of creating a locally managed temporary tablespace with temp files:

SQL> drop tablespace temp;
SQL> create temporary tablespace temp tempfile '<Temp DBF Name with Location>' size 4096M reuse extent management local uniform size 128K;
Add 3 similar tempfiles.
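Once the temp files are in place, usage can be monitored with a query such as the following, so that additional temp files are added before sorts start failing:

SQL> SELECT tablespace_name,
            ROUND(SUM(bytes_used)/1024/1024) AS used_mb,
            ROUND(SUM(bytes_free)/1024/1024) AS free_mb
       FROM v$temp_space_header
      GROUP BY tablespace_name;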

7. Database Initialization Parameter Sizing

The following sizing is suggested for 500+ active users on a live system; we have applied it in many production systems and they run very well:
*.sga_target = 14G
*.shared_pool_size = 3G
*.shared_pool_reserved_size = 300M
*.streams_pool_size = 150M
*.java_pool_size = 1G
*.pga_aggregate_target = 20G
*.result_cache_max_size = 150M
*.db_cache_size = 2G
*.large_pool_size=300M
*.db_keep_cache_size = 500M
*.db_writer_processes = 3
*.processes = 2500
*.sessions = 5000
*.open_cursors = 1500
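After applying the values, the effective settings can be confirmed from the instance (a simple sketch; sizes are shown in GB):

SQL> SELECT name, ROUND(TO_NUMBER(value)/1024/1024/1024, 2) AS gb
       FROM v$parameter
      WHERE name IN ('sga_target', 'pga_aggregate_target',
                     'shared_pool_size', 'db_cache_size', 'java_pool_size')
      ORDER BY name;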

8. Performance Improvement by Restricting the "Cancel Long-Running Queries" Feature

>> On occasion, a user may start a long-running query, grow tired of waiting, and close the browser. This section considers the implications of such an action, and describes a new feature that can help in this situation.
>> The middle-tier Forms Server process is basically single-threaded, and queries are normally executed in Blocking mode. If the client is killed, the Forms Server process still waits for the query to complete. This means that the Forms Server process will only time out when control has been returned from the database, at which point it will terminate when it can no longer communicate with the PC client. Cleanup will also take place for the database session.
>> The same principle applies in the case of a long-running transaction that calls a stored procedure from client-side PL/SQL. The key point is that in all such cases the query must run to completion before the middle tier process terminates and cleanup takes place.
>> If the query takes longer than about 5 seconds, a dialog box will appear that allows the user to cancel the query by clicking a Cancel button. This Cancel Query feature (sometimes called Non-blocking mode, as it is based on SQL*Net non-blocking functionality) is the only way to terminate a query without killing the Forms session.

>> Set the profile option FND:ENABLE_CANCEL_QUERY to Yes in order to enable this feature.
>> This can be set at site, application, responsibility or user level. Setting this profile to Yes also adds a call to set_form_property in the standard APPCORE when-new-form-instance event, which sets the INTERACTION_MODE property to NON_BLOCKING.
>> Some customers may wish to enable cancel query for power users only due to the additional overheads. In this case the profile should be set at User level for these users only.
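To review where the profile is currently set, the profile tables can be queried directly (a sketch, assuming the internal name of "FND: Enable Cancel Query" is FND_ENABLE_CANCEL_QUERY; verify the internal name on your release):

SQL> SELECT p.profile_option_name, v.level_id, v.profile_option_value
       FROM fnd_profile_options p,
            fnd_profile_option_values v
      WHERE p.profile_option_id   = v.profile_option_id
        AND p.profile_option_name = 'FND_ENABLE_CANCEL_QUERY';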

9. Faster Processing of Workflow Events

The "Workflow Background Process" program needs to be submitted in various permutation and combination. This would help in minimizing the load by avoiding picking up error or stuck workflow. Also, user wait time would be reduced. This has other many advantages like visibility of roles etc.

Type-1: Runs every 3 minutes with parameters:
Process Deferred: Yes
Process Timeout: No
Process Stuck: No

Type-2: Runs every 30 minutes with parameters:
Process Deferred: No
Process Timeout: Yes
Process Stuck: No

Type-3: Runs every 1 hour with parameters:
Process Deferred: No
Process Timeout: No
Process Stuck: Yes

Type-4: Runs every 3 hours with parameters:
Process Deferred: Yes
Process Timeout: Yes
Process Stuck: Yes

10. FNDOPP and FNDSM Tuning

Increase Parallel Threads of Standard Manager and Output Post Processor

The following measures are taken so that business users see their requests being processed faster (the current settings can be confirmed with the query below):
>> Sleep Seconds for the Standard Manager is reduced to 20 seconds
>> The number of parallel threads for FNDSM is increased to 20
>> The number of parallel threads for FNDOPP is increased to 6
>> Sleep Seconds for the Output Post Processor is reduced to 20 seconds
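The current settings for these managers can be checked with a query like the one below ('STANDARD' is the Standard Manager queue and 'FNDCPOPP' the Output Post Processor):

SQL> SELECT concurrent_queue_name, max_processes, sleep_seconds
       FROM fnd_concurrent_queues
      WHERE concurrent_queue_name IN ('STANDARD', 'FNDCPOPP');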

Increase Java Heap Size for OPP:

Determine what the heap size per OPP process is currently:
select DEVELOPER_PARAMETERS from FND_CP_SERVICES where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES
where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');

The default should be: J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx512m

Increase the heap size per process to 1024 MB:
update FND_CP_SERVICES set DEVELOPER_PARAMETERS = 'J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx1024m'
where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');

Stop the managers, run CMCLEAN, and start the managers again for the changes to take effect.

Recommendation: Reports built with large images will require additional space, so please ensure there is adequate temp space and that your code is optimized when running large reports/templates.

Implement Greater Heap Size for Large Template Reports:

Navigation: Responsibility: System Administrator >>> Concurrent >>> Program >>> Define >>> Query for the concurrent program

Set the value of the Executable Options to: -Xmx2048m

Advantages: After discussion with the developers, a higher heap size is assigned to selected large-template reports. This minimizes the risk of the OPP running out of memory and gives better performance.
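The saved value can be verified from the database afterwards (a sketch; 'XX_LARGE_REPORT' is a placeholder concurrent program short name):

SQL> SELECT concurrent_program_name, execution_options
       FROM fnd_concurrent_programs
      WHERE concurrent_program_name = 'XX_LARGE_REPORT';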

Use XML Publisher Scalable Features for Better Performance on .xls Template Reports:

Navigation: Responsibility: XML Publisher Administrator >>> Administration (Tab) >>> Configuration

Under Temporary Directory, pick a temporary file location on your concurrent processing node. This should be at least 5 GB, or 20x larger than the largest XML data file you generate.

Under FO Processing, set:
>> Use XML Publisher's XSLT processor set to True 
>> Enable scalable feature of XSLT processor set to False
>> Enable XSLT runtime optimization set to True


11. Schedule the Obsolete Session Removal Concurrent Program

Concurrent Program Name: Remove obsolete sessions from fnd_sessions
Parameters: NONE

It is common in Oracle Apps for sessions to still exist in FND_SESSIONS although the corresponding process has already vanished from GV$SESSION; sometimes the process is left defunct at the application level.
Cleaning these up gives better performance.
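A rough count of the rows the program will clean up can be obtained as below (a sketch, assuming FND_SESSIONS.SESSION_ID maps to the audited session id AUDSID in GV$SESSION; verify this mapping on your release):

SQL> SELECT COUNT(*) AS orphan_rows
       FROM fnd_sessions fs
      WHERE NOT EXISTS (SELECT 1
                          FROM gv$session s
                         WHERE s.audsid = fs.session_id);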

12. Indexes on All Custom Tables Created in the Custom Schema

During the implementation cycle we have ensured that all custom tables have proper indexes, so that every application fetching from custom tables performs well. All custom columns that are regularly used in query predicates should be indexed.

The following query is used to determine the indexes on the custom tables:
SQL> Select INDEX_NAME, TABLE_NAME, TABLE_TYPE from DBA_INDEXES where OWNER='XXDOT' and TABLE_NAME like 'XX%';
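To spot custom tables that have no index at all, the same dictionary views can be combined ('XXDOT' is the example custom schema used above):

SQL> SELECT t.table_name
       FROM dba_tables t
      WHERE t.owner = 'XXDOT'
        AND t.table_name LIKE 'XX%'
        AND NOT EXISTS (SELECT 1
                          FROM dba_indexes i
                         WHERE i.table_owner = t.owner
                           AND i.table_name  = t.table_name);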

13. JVM/OC4J - Application Performance Optimization

JVM: Guidelines to setup the Java Virtual Machine in Apps Ebusiness Suite 11i and R12 (Doc ID 362851.1)

Start with 4 JVMs for each service, with 1024 MB heap space each, for example:
      -Xmx1024M -Xms512M -XX:MaxPermSize=256M
      -XX:NewRatio=2 -XX:+PrintGCTimeStamps

14. Interpret Database AWR/Statspack Reports

Tune all the relevant SQL statements, especially those with the most buffer gets and the highest elapsed time.
Take the usual DBA steps, such as creating composite indexes and gathering statistics.

DBAs should try to eliminate all the bottlenecks mentioned in AWR/ADDM.
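Between AWR snapshots, the cursor cache gives a quick view of the heaviest statements (a sketch complementing the AWR "SQL ordered by Gets/Elapsed Time" sections):

SQL> SELECT *
       FROM (SELECT sql_id, buffer_gets, executions,
                    ROUND(elapsed_time/1e6) AS elapsed_sec
               FROM v$sqlstats
              ORDER BY buffer_gets DESC)
      WHERE ROWNUM <= 10;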

15. Schedule Purge Obsolete Workflow Runtime Data

The program "Purge Obsolete Workflow Runtime Data" calls database package WF_Purge.Total.  This program deletes old Workflow runtime data for completed Workflows. It is recommended that this program or package is run regularly as required.

From Workflow Administrator's Guide PDF Document:
The Oracle Applications Manager console helps you easily maintain the Oracle Workflow and Oracle XML Gateway database tables. Oracle Workflow and Oracle XML Gateway access several tables that can grow quite large with obsolete workflow information that is stored for all completed workflow processes, as well as obsolete information for XML transactions. The size of these tables and indexes can adversely affect performance. These tables should be purged on a regular basis, using the Purge Obsolete Workflow Runtime Data concurrent program.

Parameter Explanation:
Item Type: The item type to purge. Leaving this field blank defaults to purging the runtime data for all item types.
Item Key: The item key to purge. Leaving this field blank defaults to purging the runtime data for all item keys.
Age: Minimum age of data to purge, in days.
Persistence Type: The persistence type to be purged, either Temporary or Permanent. The default is Temporary.
Core Workflow Only: Enter 'Y' to purge only obsolete runtime data associated with work items, or 'N' to purge all obsolete runtime data as well as obsolete design data. The default is 'N'.
Commit Frequency: Enter the number of records to purge before the program commits data. To reduce rollback size and improve performance, set this parameter to commit data after a smaller number of records. The default is 500 records.
Signed Notifications: Enter 'N' to preserve signature evidence, including notifications that required electronic signatures and their associated signature information. Enter 'Y' to purge signature-related information. The default is 'N'.

Note: After performing a commit, the program resumes purging work items with the next subsequent begin date. In some cases, if additional items have the same begin date as the last item that was purged before a commit, the program may not purge all eligible items. To purge these remaining work items, simply rerun the program.
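The benefit of regular purging can be tracked by watching the size of the main Workflow runtime tables before and after each run (a sketch, assuming they are owned by APPLSYS):

SQL> SELECT segment_name, ROUND(SUM(bytes)/1024/1024) AS mb
       FROM dba_segments
      WHERE owner = 'APPLSYS'
        AND segment_name IN ('WF_ITEMS', 'WF_ITEM_ACTIVITY_STATUSES',
                             'WF_ITEM_ATTRIBUTE_VALUES', 'WF_NOTIFICATIONS')
      GROUP BY segment_name
      ORDER BY mb DESC;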


16. OS Watcher to Verify Hardware Performance

OSWatcher Black Box Analyzer User Guide (Doc ID 461053.1)

17. Requirement of Flushing the Shared Pool

On systems which use a lot of literal SQL, the shared pool is likely to fragment over time, such that the degree of concurrency which can be achieved diminishes. Flushing the shared pool will often restore performance for a while, as it can cause many small chunks of memory to be coalesced. After the flush there is likely to be an interim dip in performance, as the act of flushing also removes sharable SQL from the shared pool, and it does nothing to address the underlying cause of the fragmentation. The command to flush the shared pool is:

SQL> ALTER SYSTEM FLUSH SHARED_POOL;

Contrary to reports elsewhere, items kept in the shared pool using DBMS_SHARED_POOL.KEEP will NOT be flushed by this command. Any items (objects or SQL) actually pinned by sessions at the time of the flush will also be left in place.

NB: Flushing the shared pool will flush any cached sequences, potentially leaving gaps in the sequence range. DBMS_SHARED_POOL.KEEP('sequence_name','Q') can be used to KEEP sequences, preventing such gaps.
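Shared pool free memory can be watched before deciding to flush, and frequently used sequences can be kept first (a sketch; XXDOT.XX_ORDER_NUMBER_S is a placeholder sequence name):

SQL> SELECT pool, name, ROUND(bytes/1024/1024) AS mb
       FROM v$sgastat
      WHERE pool = 'shared pool'
        AND name = 'free memory';

SQL> EXEC DBMS_SHARED_POOL.KEEP('XXDOT.XX_ORDER_NUMBER_S', 'Q');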

18. Forms Dead Client Detection

>> Enable Forms Dead Client Detection using FORMS_TIMEOUT=10 (value specified in minutes). The default of 30 minutes is too high and can cause performance issues.
>> This terminates the Forms runtime processes (f60webmx in 11i, frmweb in R12) for dead clients.
>> Enable the Forms Abnormal Termination Handler.
>> Do not set FORMS_CATCHTERM.
>> The above two variables (FORMS_TIMEOUT and FORMS_CATCHTERM) can be changed via the CONTEXT_FILE (XML).
>> In the CONTEXT_FILE these parameters appear as s_f60catchterm and s_f60time for 11i, and as s_forms_catchterm and s_forms_time for R12.

>> Developers have reviewed the note "Few Basic Techniques to Improve Performance of Forms" [ID 221529.1] for all custom forms developed, to maximize performance.

Reference Metalink Note: Oracle Forms in Applications FAQ [ID 177610.1]

19. Maintenance on Concurrent Request Related Tables/Data

Follow all the recommendations mentioned in the following note:
http://mkkoracleapps.blogspot.co.uk/2013/08/performance-tuning-on-ebs-concurrent.html

Ensure "Purge Concurrent Request and/or Manager Data, FNDCPPUR,"  is run at regular intervals with "Entity" parameter as "ALL".  A high number of records in FND_CONCURRENT tables can degrade the performance.
Functions of FNDCPPUR: It purges data on FND_CONCURRENT tables and log/output files.

Name: Purge Concurrent Request and/or Manager Data
Short Name: FNDCPPUR
Description: Purges Concurrent requests and/or Concurrent Manager data and log/output files

1. Entity: ALL
2. Mode: Age
3. Mode Value : 10
4. Oracle ID: NULL
5. User Name: NULL
6. Responsibility Application: NULL
7. Responsibility: NULL
8. Program Application: NULL
9. Program: NULL
10. Manager Application: NULL
11. Manager: NULL
12. Report: Y
13. Purge Other: Y
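As mentioned above, the request backlog that FNDCPPUR will work through can be checked quickly with a query such as:

SQL> SELECT TRUNC(MONTHS_BETWEEN(SYSDATE, request_date)) AS age_months,
            COUNT(*) AS requests
       FROM fnd_concurrent_requests
      GROUP BY TRUNC(MONTHS_BETWEEN(SYSDATE, request_date))
      ORDER BY age_months;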
