
About this blog

As with my website, my blogs espouse the philosophy that “not all performance problems are technical” and that, in most stable environments, user behaviour and usage patterns are the source of most correctable performance problems. So if you are into bits, bytes, locks, latches etc. this is not the place for you, but if you are after ways to genuinely enhance performance and to work with your systems and customers to achieve better, more predictable performance, read on.

Friday, November 19, 2010

Is the number of concurrent manager processes you have causing performance issues?

All too often I see sites with far too many concurrent manager processes, and the site wonders why it has intermittent performance issues. Remember, having more concurrent manager processes does not mean more throughput.

So, how many concurrent manager processes should you have?

Add up the number of standard and custom concurrent manager processes you have. If that value exceeds 2 times the number of CPUs (multi core adjusted) on the box, you most likely have too many manager processes. If you are experiencing intermittent performance issues, particularly around high processing times like financial month ends, too many manager processes is most likely one of the causes.
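
As a quick check, you can total the target processes across your enabled manager queues and compare the result against 2 times your CPU count. The following is a sketch only; it assumes the APPS account can query applsys.fnd_concurrent_queues, and it simply excludes the seeded internal (FNDICM) and conflict resolution (FNDCRM) managers, so adjust the filter to suit your site:

-- Rough total of manager processes to compare against 2 x CPUs (sketch)
SELECT SUM(max_processes) total_manager_processes
  FROM applsys.fnd_concurrent_queues
 WHERE enabled_flag = 'Y'
   AND concurrent_queue_name NOT IN ('FNDICM', 'FNDCRM');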

The following SQL will list your concurrent managers:

SELECT concurrent_queue_name,
       max_processes,
       running_processes,
       NVL(sleep_seconds, 0) sleep_seconds,
       cache_size,
       DECODE(enabled_flag, 'Y', 'Enabled', 'N', 'Disabled', 'Unknown') status
  FROM applsys.fnd_concurrent_queues
 ORDER BY DECODE(enabled_flag, 'Y', 1, 'N', 0, enabled_flag) DESC,
       max_processes DESC,
       DECODE(concurrent_queue_name,
              'FNDICM', 'AA',
              'FNDCRM', 'AB',
              'STANDARD', 'AC',
              concurrent_queue_name);

Example Output

A real world example of too many managers:

A site I reviewed had 54 standard manager processes on a 4 CPU box: 54 / 4 = 13.5 manager processes per CPU.

In this example there can be up to 54 concurrent requests running through the standard managers at any time, which will just plain flood the CPUs. Generally, running two (2) concurrent requests per CPU is sufficient to leave enough headroom for normal processing activities; any more than that and the risk of CPU flooding increases. And you would hope this site doesn’t run too many FSGs... as we all know the damage they can do...

Believe it or not I have even seen 108 standard managers on an 8 CPU box..... Hmmm.....

Flooding the CPUs with concurrent requests leaves very little CPU available for normal user requests. As a result, users tend to experience poor performance in their forms etc... It’s basic queuing theory... To make matters worse, these intermittent performance issues tend to occur around peak processing times, when the concurrent manager load is at its peak; exactly the time users are busiest and least able to tolerate performance issues.

If you have a “problem” with an excessive number of standard and custom concurrent manager processes, what you need to do is lower the number of processes. This is easier said than done, as once they exist the business is very reluctant to let them go. But it is worth persisting, as it will make a difference!

For more information refer to the paper I wrote:

http://www.piper-rx.com/pages/papers/cm101.pdf

Believe it or not, and much to my surprise, this is the most downloaded paper on my web site. Even though I wrote this paper in 2004, it still holds true today.... not much changes in OEBS, and for good reason; stability in accounting and business systems is what businesses want.

Saturday, November 6, 2010

Case Review - How did I manage to get 30 million rows in my fnd_logins table?

The site in question presented with over 30 million rows in the fnd_logins table, growing at a rate of approximately 25,000 records per day.

Based on the site’s application activity, the fnd_logins table should hold around 800,000 records (i.e. 32 days of history on-line at roughly 25,000 records per day). This number is expected to reduce further following a concurrent manager activity review.
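
If you want to see where your own site sits, a quick check along the following lines will show the current size, the oldest record and a rough daily growth rate for fnd_logins. This is a sketch only; it assumes access to applsys.fnd_logins and that start_time is populated:

-- Current size, oldest record and rough daily growth of fnd_logins (sketch)
SELECT COUNT(*) total_rows,
       TRUNC(MIN(start_time)) oldest_login,
       ROUND(COUNT(*) / GREATEST(SYSDATE - MIN(start_time), 1)) avg_rows_per_day
  FROM applsys.fnd_logins;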

The site was running the concurrent program Purge Signon Audit data (FNDSCPRG) daily as part of its normal maintenance program to purge the sign-on audit data, and was unaware of the high growth rate in the sign-on audit tables.

On review of the site’s scheduled requests it was found that Purge Signon Audit data was being run daily with the date argument set to 10-Oct-06; however, the “increment date parameters each run” check box had not been checked when the scheduled request was created. As such, the date parameter has not been incrementing with each run, so every run has purged only sign-on audit records dated prior to 10-Oct-06. As a result, any record added after that date has never been purged.

So, since 10-Oct-06 (over 1,200 days at the time it was reviewed) the program has been running but purging nothing.
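
A quick way to spot this situation is to look at the pending scheduled run of the purge program and its argument text; if the date argument stays static from run to run, the parameter is not incrementing. The following is a sketch only; it assumes the standard join between fnd_concurrent_requests and fnd_concurrent_programs:

-- Pending (scheduled) runs of Purge Signon Audit data and their arguments (sketch)
SELECT r.request_id,
       r.requested_start_date,
       r.argument_text
  FROM applsys.fnd_concurrent_requests r,
       applsys.fnd_concurrent_programs p
 WHERE r.concurrent_program_id  = p.concurrent_program_id
   AND r.program_application_id = p.application_id
   AND p.concurrent_program_name = 'FNDSCPRG'
   AND r.phase_code = 'P';   -- 'P' = pending, which includes scheduled requests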

Whilst there are several inherent performance issues, the biggest impact on the application occurs when the concurrent program runs. The concurrent program Purge Signon Audit data (FNDSCPRG) uses the un-indexed column start_time in its delete criteria. As such, the purge program will execute a full scan on each of the target tables in order to determine the rows to delete. So for this site, that means a daily full scan of a 30 million row table only to find no rows to delete.
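
A simple way to confirm the lack of an index is to check the data dictionary for any index that includes start_time on the sign-on audit tables. This is a sketch only; it assumes DBA dictionary access and that the tables are owned by APPLSYS:

-- Any indexes covering START_TIME on the sign-on audit tables? (sketch)
SELECT table_name, index_name, column_name
  FROM dba_ind_columns
 WHERE table_owner = 'APPLSYS'
   AND table_name IN ('FND_LOGINS',
                      'FND_LOGIN_RESPONSIBILITIES',
                      'FND_LOGIN_RESP_FORMS',
                      'FND_UNSUCCESSFUL_LOGINS')
   AND column_name = 'START_TIME';

If this returns no rows, each run of the purge program will full scan those tables.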

The case review shows how I would “fix” the issue, including deleting the unwanted rows and setting up a new Purge Signon Audit data scheduled program run.

The full case review can be found at -
http://www.piper-rx.com/pages/case_reviews/case_sign_on_audit.pdf