Friday 11 October 2013

Performance Issue on an AP Workflow Approval (Custom Package)


A few days back I got a chance to work on a performance issue in an AP Workflow Approval package.
It was an ad hoc request from the team; although there was huge scope for performance improvement across the whole database, I concentrated only on the AP Workflow Approval part.

First, I analysed AWR reports from several days during business hours and noticed the following:
1. Multiple procedures of the same package appeared in "SQL ordered by Elapsed Time" and in "SQL ordered by CPU Time" (SQL IDs: 2v9yxuzfup5w3, 05jrjvak9tkry, 5cdb5xwmdggu2)
2. A procedure of the same package appeared in "SQL ordered by User I/O Wait Time" (SQL ID: 05jrjvak9tkry)
3. Multiple procedures of the same package appeared in "SQL ordered by Gets" (SQL IDs: 05jrjvak9tkry, 2v9yxuzfup5w3)
4. A procedure of the same package appeared in "SQL ordered by Reads" (SQL ID: 05jrjvak9tkry)

It is always good practice for DBAs to look into the "SQL ordered by Gets" section and tune those statements accordingly.
Statements in "SQL ordered by Gets" are always candidates for tuning; that is the first rule of thumb of SQL performance tuning based on AWR analysis.

In this case, the following two procedures were the bottleneck for the whole package:
1. GET_APPROVER
2. AME_SUPERVISOR_LEVELS

So my initial target was to improve the performance of the GET_APPROVER and AME_SUPERVISOR_LEVELS procedures. Needless to say, both of these are present in "SQL ordered by Gets".

Now, there is a constraint: you cannot ask the developers to change the logic of the code for this kind of package, as you never know what else might get screwed up by the change. Also, the project manager may come to beat you :)
If something goes wrong the business can be impacted, and thorough testing on an already-live infrastructure is also not possible due to various constraints.

I started analysing the details of the two procedures using SQLT and Explain Plan.

Solution#1: Reorder WHERE Clause for PER_ALL_ASSIGNMENTS_F/PER_ASSIGNMENTS_X in the Custom Code

The first important observation from the analysis: most of the bottleneck was coming from PER_ALL_ASSIGNMENTS_F.
I noticed that the WHERE clauses of the SELECT statements against PER_ALL_ASSIGNMENTS_F/PER_ASSIGNMENTS_X were very haphazard:
in some cases the developers used the columns in one order, and in other cases in a different order.
It is always a best practice that the first condition in the WHERE clause should be on the PK, so I requested the developer to reorder the WHERE clauses as quoted below.

<<<<<<<<<<
You have used per_all_assignments_f/per_assignments_x in various SELECT statements. Put the WHERE clause in this order:

SELECT whatever FROM per_all_assignments_f/per_assignments_x
WHERE  person_id = ...
AND    position_id = ...
AND    assignment_type = ...
AND    primary_flag = ...
AND    effective_start_date = ...
AND    effective_end_date = ...
AND    supervisor_id = ...
>>>>>>>>>>>>>>

As this is a very minor change and doesn't require any functional testing, it was very much acceptable.
So this was the first change made.

Solution#2: CREATE Index on PER_ALL_ASSIGNMENTS_F, PAY_USER_ROWS_F, PAY_USER_COLUMNS, PAY_USER_TABLES

Next, I started looking at what was happening inside the code, especially for FULL TABLE SCANs. A full scan of a big table usually means there is a need for an INDEX.

It is always good for DBAs to keep checking which tables are being accessed via FULL TABLE SCAN,
and if the table size is high, to take action to minimise that. Look especially at COST, CPU_COST and IO_COST.
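
A minimal sketch of how such full scans can be spotted from the shared pool, together with their cost figures (the owner filter is an assumption for a typical eBusiness instance):

SELECT p.sql_id, p.object_owner, p.object_name,
       p.cost, p.cpu_cost, p.io_cost
FROM   v$sql_plan p
WHERE  p.operation = 'TABLE ACCESS'
AND    p.options   = 'FULL'
AND    p.object_owner IN ('HR', 'AP', 'APPLSYS')   -- schemas of interest here
ORDER  BY p.cost DESC;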

Now a need arose to create an INDEX on PER_ALL_ASSIGNMENTS_F. As this is a standard and very important table of core HR, a composite index is the safest bet.
Since the developer had rearranged the WHERE clause order throughout the package, I simply created a composite INDEX as follows:
CREATE INDEX HR.PER_ALL_ASSIGNMENTS_F_XXMKK_N1 ON HR.PER_ALL_ASSIGNMENTS_F
(PERSON_ID, POSITION_ID, ASSIGNMENT_TYPE, PRIMARY_FLAG, EFFECTIVE_START_DATE, EFFECTIVE_END_DATE, SUPERVISOR_ID);
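
To confirm the optimizer actually picks the new index up, the plan of one of the package's SELECTs can be checked before and after. A minimal sketch with a placeholder predicate set (the bind names and the exact SELECT are assumptions):

EXPLAIN PLAN FOR
SELECT assignment_id
FROM   hr.per_all_assignments_f
WHERE  person_id       = :p_person_id
AND    position_id     = :p_position_id
AND    assignment_type = 'E';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

The plan should now show an INDEX RANGE SCAN on PER_ALL_ASSIGNMENTS_F_XXMKK_N1 instead of a full table scan.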

In the code, there were 3 more tables where the existing indexes (created as part of the eBusiness installation) were not sufficient:
1. PAY_USER_ROWS_F
2. PAY_USER_COLUMNS
3. PAY_USER_TABLES

The need for an index could have been reduced (though not fully eliminated) if the WHERE clauses had been properly written/ordered. For example, the PAY_USER_COLUMNS select used the following order:
USER_TABLE_ID, USER_COLUMN_NAME, USER_COLUMN_ID

Now let me explain: USER_TABLE_ID is an FK, USER_COLUMN_ID is the PK and USER_COLUMN_NAME is a UK.
So in this case the order should have been USER_COLUMN_ID, USER_TABLE_ID, USER_COLUMN_NAME.

eBusiness by default has one index on USER_COLUMN_ID (PK), one on USER_TABLE_ID (FK1), and
PAY_USER_COLUMNS_UK2 UNIQUE (USER_COLUMN_NAME, USER_TABLE_ID, BUSINESS_GROUP_ID, LEGISLATION_CODE).

With the predicates the custom code uses, it is difficult to utilise the indexes mentioned above.
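
The existing index definitions can always be double-checked from the dictionary before deciding anything; a minimal sketch:

SELECT index_name, column_name, column_position
FROM   dba_ind_columns
WHERE  table_owner = 'HR'
AND    table_name  = 'PAY_USER_COLUMNS'
ORDER  BY index_name, column_position;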

Understanding the need, I created the following 3 indexes (the analysis was almost the same, with little difference, for all three tables):

CREATE INDEX HR.PAY_USER_ROWS_F_XXMKK_N1 ON HR.PAY_USER_ROWS_F (USER_TABLE_ID, USER_ROW_ID, EFFECTIVE_START_DATE, EFFECTIVE_END_DATE);
CREATE INDEX HR.PAY_USER_TABLES_XXMKK_N1 ON HR.PAY_USER_TABLES (USER_TABLE_ID, USER_TABLE_NAME);
CREATE INDEX HR.PAY_USER_COLUMNS_XXMKK_N1 ON HR.PAY_USER_COLUMNS (USER_TABLE_ID, USER_COLUMN_NAME, USER_COLUMN_ID);

Now the big question: how to make sure whether these indexes are getting used or not. I requested the developer to test the performance of the approval package.

You can set up index usage monitoring in the following way (note that V$OBJECT_USAGE only lists indexes owned by the schema you are connected as):
ALTER INDEX HR.PER_ALL_ASSIGNMENTS_F_XXMKK_N1 MONITORING USAGE;
SELECT * FROM V$OBJECT_USAGE;
ALTER INDEX HR.PER_ALL_ASSIGNMENTS_F_XXMKK_N1 NOMONITORING USAGE;

There is one more way to monitor the usage of an INDEX, which I will cover in a subsequent post.

Solution#3: De-fragment the tables periodically to reclaim unused space / improve performance

This should be done every quarter or half year.

WF_LOCAL_ROLES was highly fragmented and causing a lot of performance problems. When I executed my SQL for identifying the most eligible de-fragmentation candidates, WF_LOCAL_ROLES came out on top, and a huge benefit was noticed after reorganising it.
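
As a rough way to spot such fragmentation, the space allocated to a table can be compared against what its row statistics say it should need. A minimal sketch, assuming an 8K block size and reasonably fresh statistics:

SELECT table_name,
       ROUND(blocks * 8192 / 1024 / 1024)          AS allocated_mb,
       ROUND(num_rows * avg_row_len / 1024 / 1024) AS estimated_mb
FROM   dba_tables
WHERE  owner = 'APPLSYS'
AND    table_name LIKE 'WF_%'
AND    blocks > 0
ORDER  BY blocks * 8192 - num_rows * avg_row_len DESC;

A large gap between allocated_mb and estimated_mb is a good hint that a reorganisation will pay off.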

ALTER TABLE APPLSYS.WF_NOTIFICATIONS MOVE;
ALTER TABLE APPLSYS.WF_LOCAL_ROLES MOVE;
ALTER TABLE APPLSYS.WF_LOCAL_USER_ROLES MOVE;
ALTER TABLE APPLSYS.WF_USER_ROLE_ASSIGNMENTS MOVE;
ALTER TABLE AP.AP_INVOICES_ALL MOVE;

Alternatively, for tables in ASSM tablespaces, SHRINK SPACE reclaims the space without leaving the indexes unusable:

ALTER TABLE <TABLE_NAME> ENABLE ROW MOVEMENT;
ALTER TABLE <TABLE_NAME> SHRINK SPACE CASCADE;
ALTER TABLE <TABLE_NAME> DISABLE ROW MOVEMENT;

I would like to mention one more point on AP performance: AP.AP_LIABILITY_BALANCE is also highly fragmented; if it is de-fragmented, some other code would benefit as well.

ALTER TABLE AP.AP_LIABILITY_BALANCE MOVE;

Solution#4: Rebuild Indexes for Better Performance/After Row Movement

This should be done every quarter or half year, and after every row movement.

You need to rebuild the indexes associated with these tables, as ALTER TABLE ... MOVE leaves their status UNUSABLE. The following query lists them (DISTINCT, because dba_ind_columns has one row per indexed column):

SELECT DISTINCT TABLE_OWNER, INDEX_NAME FROM dba_ind_columns WHERE TABLE_NAME IN
('WF_NOTIFICATIONS','WF_LOCAL_ROLES','WF_LOCAL_USER_ROLES','WF_USER_ROLE_ASSIGNMENTS','AP_INVOICES_ALL','AP_LIABILITY_BALANCE')
ORDER BY INDEX_NAME;

ALTER INDEX <owner>.<index_name> REBUILD ONLINE;
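
The rebuild statements can also be generated straight from the dictionary, limited to the indexes that actually became unusable; a minimal sketch:

SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' REBUILD ONLINE;' AS rebuild_ddl
FROM   dba_indexes
WHERE  table_name IN ('WF_NOTIFICATIONS','WF_LOCAL_ROLES','WF_LOCAL_USER_ROLES',
                      'WF_USER_ROLE_ASSIGNMENTS','AP_INVOICES_ALL','AP_LIABILITY_BALANCE')
AND    status = 'UNUSABLE';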

Solution#5: Gather Stats On Relevant Tables

In Live, the Gather Statistics concurrent program is scheduled to run every 15 days, so there is no need to do this additionally on a schedule. However, every time Solution#3/#4 is carried out, it is recommended to gather stats on the affected tables straight away.

exec fnd_stats.gather_table_stats('HR','PER_ALL_ASSIGNMENTS_F');
exec fnd_stats.gather_table_stats('HR','PER_ALL_PEOPLE_F');
exec fnd_stats.gather_table_stats('HR','PER_POSITION_EXTRA_INFO');
exec fnd_stats.gather_table_stats('HR','PAY_USER_ROWS_F');
exec fnd_stats.gather_table_stats('HR','PAY_USER_TABLES');
exec fnd_stats.gather_table_stats('AP','AP_INVOICES_ALL');
exec fnd_stats.gather_table_stats('HR','PAY_USER_COLUMNS');
exec fnd_stats.gather_table_stats('HR','PAY_USER_COLUMN_INSTANCES_F');
exec fnd_stats.gather_table_stats('APPLSYS','WF_LOCAL_ROLES');
exec fnd_stats.gather_table_stats('APPLSYS','WF_LOCAL_ROLES_TL');
exec fnd_stats.gather_table_stats('AP','AP_INV_APRVL_HIST_ALL');
exec fnd_stats.gather_table_stats('APPLSYS','WF_NOTIFICATIONS');
exec fnd_stats.gather_table_stats('APPLSYS','WF_LOCAL_USER_ROLES');
exec fnd_stats.gather_table_stats('APPLSYS','WF_USER_ROLE_ASSIGNMENTS');
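
To verify that the statistics were actually refreshed, a quick check of LAST_ANALYZED is enough; a minimal sketch for a few of the tables:

SELECT owner, table_name, num_rows, last_analyzed
FROM   dba_tables
WHERE  table_name IN ('PER_ALL_ASSIGNMENTS_F','WF_LOCAL_ROLES','AP_INVOICES_ALL')
ORDER  BY owner, table_name;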

And finally the result was good: from 50 seconds of approval time, we managed to take it down to 10 seconds.
There is further scope for improvement, and we can work towards bringing it below 5 seconds...

That will follow very soon; only the analysis part remains.....



