pepdwill Honored Contributor.

CacheManager Report- Max Cache Size Reached

Hello -

 

Had a question about the CacheManager report and whether something within it could be impacting performance.   

 

The line item below indicates that the max cache size has been getting reached for the User's Request Securities line.

 

User's Request Securities 64280    5887   0.92      0 / 0 / 2806 / 1485 -             0            5 / 50             -       2 hours  -           No 

 

 

Could this alone be causing delays for portlets and searches, which I know query the user's request security?

 

What is the recommended path to increase the cache size of this (if that is the recommendation)? 

 

 

Thanks-

Danny

 

 

pepdwill Honored Contributor.

Re: CacheManager Report- Max Cache Size Reached

Would it be recommended to update the cache parameters below (from cache.conf)?

 

cache.userrequest.title = User's Request Securities
cache.userrequest.maxSize = 50
cache.userrequest.maxIdleTime = 2 hours

 

 

AbrahamRB Absent Member.

Re: CacheManager Report- Max Cache Size Reached

Hello pepdwill,

 

I found the following information in the installation guide, which may help with your question:

 

DB_CACHE_SIZE

The DB_CACHE_SIZE parameter value specifies the size (in KB or MB) of the default buffer pool for buffers with the primary block size (the block size defined by the DB_BLOCK_SIZE parameter).

Recommended Setting

Specify a DB_CACHE_SIZE parameter value of at least 500 (expressed in MB).

 

Please let me know if the information is helpful.

 

Regards

 

 

Loc_Nguyen_PPM Outstanding Contributor.

Re: CacheManager Report- Max Cache Size Reached

Hello Danny,

 

I hope you are doing great, and happy holidays!

 

You can change the entity's maxSize and maxIdleTime, but we do not recommend that because it is not documented in our guide. Alternatively, if you want to flush the cache for "User's Request Securities", you can use the following command:

 

sh ./kRunCacheManager.sh  23
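
For what it's worth, a minimal sketch of how that flush could be wrapped in a small script. The path and the entity number are assumptions for your environment; running the script with no argument lists the entity numbers interactively, so verify that "User's Request Securities" really maps to 23 on your install before scripting it:

#!/bin/sh
# Flush the "User's Request Securities" entity cache without restarting the PPM Server.
# Assumptions: kRunCacheManager.sh lives in <PPM_HOME>/bin and 23 is the menu
# number for this entity on this installation.
cd <PPM_HOME>/bin
sh ./kRunCacheManager.sh 23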

 

 

Hope this helps,

“HP Support
If you find that this or any post resolves your issue, please be sure to mark it as an accepted solution.”
AbrahamRB Absent Member.

Re: CacheManager Report- Max Cache Size Reached

Hello pepdwill

 

As mentioned before, since we don't have documentation about updating the cache parameters you indicated, it is not our recommendation to do that.

 

Also, if you want to flush your cache, you can do it as mentioned in the installation guide:

 

kRunCacheManager.sh

Use the kRunCacheManager.sh script to manage your cache from the command line and without having to restart the PPM Server.

Run the script as follows:

sh ./kRunCacheManager.sh

Select the number for the corresponding entity cache (request types, validations, and so on) that you want to flush. Running this script on any one node clears out the cache on all nodes. You can script this to run after your database changes have been committed.

Best regards

 

 

 

pepdwill Honored Contributor.

Re: CacheManager Report- Max Cache Size Reached

Thanks all for the replies.

AbrahamRB Absent Member.

Re: CacheManager Report- Max Cache Size Reached

Hello pepdwill

 

Could you please mark our answer as a "Correct Answer" so that other members can benefit from it, and continue with the process of closing this thread? I apologize for the inconvenience, and thanks a lot.


I hope you have a nice day

Micro Focus Expert

Re: CacheManager Report- Max Cache Size Reached

Hi,

 

From my perspective (which seems to differ from that of HP Support, so please take it with a pinch of salt 🙂 ), I would strongly encourage you to tune the PPM cache according to your needs. If you hit the cache size limit, then increase the cache size, unless of course this would cause a memory shortage on your JVM. Also, unless you have a real reason to invalidate objects after 2 hours (hint: you likely don't), you can also increase the max-age value, as an object doesn't just turn stale after 2 hours by some unexplained magical effect (unless people modify it directly in the DB without going through PPM, but in that case what you need is a staleness check, not a max-age setting).
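
To make that concrete, here is a minimal sketch of what such a change could look like in cache.conf, reusing the parameter names Danny posted earlier. The new values are purely illustrative assumptions, not official recommendations, and you should watch JVM memory after any increase:

# User's Request Securities cache - illustrative values only, not a recommendation
cache.userrequest.title = User's Request Securities
cache.userrequest.maxSize = 200
cache.userrequest.maxIdleTime = 8 hours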

 

The default cache settings are, well, default settings, and they are meant to be tuned whenever one is trying to get the best performance of their PPM server under heavy load. 

 

It turns out I wrote a PPM Cache Tuning White Paper a few months ago as base documentation material to present the cache changes to be introduced in the next PPM version; I have attached it to this post. I think it's a good read for anyone who wants to know more about the PPM cache.

Please note that, as usual, any description of the content of a future PPM version is tentative only and is in no way a commitment at this point.

 

If you have any comments or questions on it, please let me know.

 

Kind Regards,

Etienne.

Erik Cole Acclaimed Contributor.

Re: CacheManager Report- Max Cache Size Reached

This is great information, Etienne.

Can you comment on the difference between maxAge and maxIdleTime?

"soft reference reclaimed" means it got swept out in a GC event?

 

Thanks!

Micro Focus Expert

Re: CacheManager Report- Max Cache Size Reached

Hi Erik,

 

maxAge = maximum time that an object can spend in the cache after being added there (independently of whether it's read or not from the cache).

maxIdleTime = maximum time that an object can spend in the cache without being read at least once from the cache.

 

So an object can be in the cache and be read from the cache 10 times per second, however it will be removed from the cache whenever it reaches maxAge (and will then have to be reloaded).
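
A tiny worked example of the difference, with purely illustrative values of maxAge = 8 hours and maxIdleTime = 2 hours:

09:00  object is loaded into the cache
Case 1: it is never read again          -> evicted around 11:00 (maxIdleTime reached)
Case 2: it is read many times per hour  -> the reads keep it from going idle, but it is still
        evicted around 17:00 (maxAge reached) and reloaded on the next request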

 

 

You're correct for the "soft reference reclaimed" definition. 

 

Every object in the "new" PPM cache will be automatically removed from the cache if the JVM runs short of memory. The only drawback to this great "safety valve" approach is that, unfortunately, any object in the cache is equally likely to get removed if a memory shortage happens... Ideally PPM should remove the least used objects from the cache first, but that's not yet the case.

 

Still, it's a nice relief to know that even if you are too generous with your cache size settings, this cannot cause memory shortage problems for your PPM server.

 

Thanks,
Etienne.

Erik Cole Acclaimed Contributor.

Re: CacheManager Report- Max Cache Size Reached

Yeah, we're running 4GB JVMs on our user nodes, but still with the default cache settings, so I've updated (read: drastically increased) the maxIdleTime parameter on several items. I also think the default "2 Days" on a lot of items explains why the system seems slower every Monday morning...nobody has hit anything since Friday and the cache has expired.

 

I'll keep an eye on memory usage and GC events, but I think these default settings came from back when server memory was measured in MB and not GB...

pepdwill Honored Contributor.

Re: CacheManager Report- Max Cache Size Reached

Erik -

 

Please provide an update once you see whether that maxIdleTime update improves things!

Erik Cole Acclaimed Contributor.

Re: CacheManager Report- Max Cache Size Reached

Hard to tell if there's an improvement...so much depends on the user traffic. Oddly, I'm seeing that virtually all flushes are of the Soft References Reclaimed variety. I'd be surprised if this means our 4GB JVMs are running out of memory and having to GC the cached stuff...

Micro Focus Expert

Re: CacheManager Report- Max Cache Size Reached

Well, technically speaking, the only thing this means is that the cache is taking up too much memory and that some of those cached objects have to be reclaimed in order to make room for "normal" non-cache objects.

 

This may happen if there's a peak load scenario (for example, many users loading huge work plans at the same time) and the demand for memory to store "normal" objects increases sharply, even for a short time.

 

Is the node you are monitoring also running background services, or is it only servicing web users? 

 

It might be interesting to track whether these soft-reference reclaims occur regularly throughout the day, or whether they only occur at certain moments of the day (or when some specific background services are running on the node).
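
If it helps, a minimal sketch of HotSpot (Java 8 era) GC-logging options you could add to the PPM server's JVM options for that kind of tracking; exactly where those options are configured depends on your installation, so treat the placement and the log path as assumptions:

# Append to the PPM server JVM options (location depends on your install)
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
# Hypothetical log path - adjust to your environment
-Xloggc:/var/log/ppm/gc.log

Correlating the timestamps in that log with the soft-reference reclaim counts in the CacheManager report should show whether the reclaims line up with GC pressure at specific times.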

 

I wouldn't increase the heap memory unless I had solid proof that it would prevent such soft-reference reclaims without resulting in lengthy blocking full GCs. "4GB ought to be enough for anybody."

 

Cheers,

Etienne.

 

Erik Cole Acclaimed Contributor.

Re: CacheManager Report- Max Cache Size Reached

I've got two user nodes and two services nodes. I'm pretty much ignoring the services nodes since I'm primarily interested in the user experience.

 

Each node usually reports around 15-20 open sessions at any given time, with about 5 or so active in the last few minutes, so not what I'd consider a heavy user load.

 

I was actually looking at JVM tuning options, maybe even reducing the size...I'm not a Java developer, so I'm operating mostly by trial and error and feeling my way forward...
