maf (Honored Contributor)

OMi GW Jboss runs out-of-memory

Hello support,

I've been posting here:

/t5/Operations-Manager-i/memory-allocations-by-JVM-to-OMi-Processes/m-p/6905064

regarding our problem with JBoss heap usage on OMi 10.01 IP6 (Windows).

To summarize: JBoss on the GW starts out healthy, with roughly one full GC per hour. Within about a month the heap fills up and a full GC runs every two minutes, at which point we restart the GW.
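(For reference, this GC cadence can be tracked by adding the standard HotSpot GC-logging flags to the JBoss JVM options - the log path below is just an example, not an OMi default:

  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:C:\temp\jboss_gc.log

The resulting log records a timestamp for each full GC along with heap occupancy before and after, which is how the one-per-hour versus every-two-minutes pattern shows up.)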

Today I noticed the following KM:

https://softwaresupport.hpe.com/km/KM02592694

and I'm wondering if it might be related.

I've tried to replicate this using the Event Perspective with View Explorer, looking for unusual JVM statistics on both of our test setups (OMi 10.12 and 10.01 IP6). So far nothing special is happening. I'm trying to pin this down to see whether GUI usage patterns have an effect, so we could limit the fallout before this is patched.

Can anybody elaborate here on the statement:

View explorer sends refresh requests even if the previous request has not been processed yet.

Thanks

Micro Focus Expert

Re: OMi GW Jboss runs out-of-memory

Hello Maf,

Can you try with IP7 and see if the situation improves?

Regards,

Rosen

Micro Focus Software Support
The views expressed in my contributions are my own and do not necessarily reflect the views and strategy of Micro Focus.
If you find this or any post resolves your issue, please be sure to mark it as an accepted solution.
maf (Honored Contributor)

Re: OMi GW Jboss runs out-of-memory

We cannot install IP7 right now. Is there any information available about the referenced KM?

Micro Focus Expert

Re: OMi GW Jboss runs out-of-memory

The KM you found (https://softwaresupport.hpe.com/km/KM02592694) relates to the View Explorer component and the auto-refresh introduced in 10.11.

Under normal circumstances, the status refresh occurs every 5 seconds for each individual View Explorer component loaded into the user's UI.

However, under certain circumstances, such as when the RTSM is busy, the refresh can be issued again before the previous status update has been answered, which can flood the RTSM with requests and cause issues.
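To illustrate the difference in plain Java (this is only a sketch, not OMi source code - requestViewStatus() is a hypothetical stand-in for the asynchronous status query View Explorer sends):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch only - not OMi source code.
public class RefreshSketch {

    static CompletableFuture<Void> requestViewStatus() {
        // imagine an async RTSM status query that can take much longer than 5 s when the RTSM is busy
        return CompletableFuture.completedFuture(null);
    }

    // What the KM describes: a refresh fires every 5 seconds whether or not the
    // previous reply has arrived, so a slow RTSM accumulates pending requests.
    static void fixedRateRefresh(ScheduledExecutorService timer) {
        timer.scheduleAtFixedRate(() -> requestViewStatus(), 0, 5, TimeUnit.SECONDS);
    }

    // The non-overlapping alternative: the next refresh is scheduled only after
    // the previous reply has been processed, so requests never stack up.
    static void chainedRefresh(ScheduledExecutorService timer) {
        requestViewStatus().thenRun(
                () -> timer.schedule(() -> chainedRefresh(timer), 5, TimeUnit.SECONDS));
    }

    public static void main(String[] args) {
        fixedRateRefresh(Executors.newSingleThreadScheduledExecutor());
    }
}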

If you believe the RTSM is not responding in time to requests from the UIs, please first try the JVM configurations for the DPS for a High RTSM Utilization scenario, as described in the Performance & Sizing Guide.
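As a purely illustrative example (the actual values depend on your deployment size and are listed in the guide - do not treat these numbers as a recommendation), the high-utilization settings amount to giving the DPS JVM a larger fixed heap along the lines of:

  -Xms4096m -Xmx4096m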

The KM indicates that this issue is currently being worked on for 10.11 / 10.12 but unfortunately we cannot give a date when an official HF will be released.

Micro Focus Support
If you find that this or any post resolves your issue, please be sure to mark it as an accepted solution.
If you liked it I would appreciate KUDOs. Thanks
Micro Focus Expert

Re: OMi GW Jboss runs out-of-memory

Regarding the original issue:

If you feel there is a true memory leak, I would first try the latest IP (if possible) in your test environment and see if you can replicate the issue there; that avoids the time and risk of testing in the live environment.

Either way, you will probably need to raise an official case to get the issue investigated.

The engineer may be able to talk you through obtaining memory dumps (you may already have some from crashed JVM processes - check for files with a .hprof extension). These would need to be analysed using a memory analyzer; Eclipse MAT is a good open-source tool (http://www.eclipse.org/mat/).
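If you would rather capture a dump on demand before one of your scheduled restarts instead of waiting for a crash, something along these lines works against the running JBoss process - the file path and PID placeholder are examples only:

  jmap -dump:live,format=b,file=C:\temp\jboss_heap.hprof <jboss_pid>

Alternatively, the JVM can be told to write a dump automatically when it runs out of memory:

  -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\temp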

If there is a true memory leak, these dump files can be used by a labs engineer to help identify the issue.

Micro Focus Support
If you find that this or any post resolves your issue, please be sure to mark it as an accepted solution.
If you liked it I would appreciate KUDOs. Thanks
Micro Focus Expert

Re: OMi GW Jboss runs out-of-memory

Hi Maf,

This is not something that has been encountered frequently.

I would recommend moving to OMi 10.12 IP1 to get rid of known issues you are much more likely to run into, and to take advantage of the many enhancements made since the 10.01 version you are running.

I would also recommend implementing the OMi Self-Monitoring pack in both test and production. It is a contributed and very valuable tool that is widely used across our user base. You can find it at our HPE Live Network (HPLN) location:

https://hpln.hpe.com/contentoffering/omi-server-self-monitoring

People who have used it are very impressed with what it has to offer: policies and metrics for many aspects such as event throughput and JVM statistics, plus alarms - all vital to “monitoring the monitor”!

 

Regards,

James

HPE Software Support

 

maf (Honored Contributor)

Re: OMi GW Jboss runs out-of-memory

Thanks John and James.

We will eventually migrate to a newer version, but we have to be hyper-conservative in a production environment.
We are aware of the self-monitoring policies on HPLN and have tried them on a 10.10 setup. They are nice, but in this case they would just expose the problem we are already aware of (that is, if we were to rework them for 10.01).

We don't have any JBoss dump files at the moment, as we restart before the storm gets out of control. Previous analysis of WDE dump files proved to be useful though, so thanks for the tip.

As for sizing - we have increased the heap allowances once again and the statistics seem to be holding for now.
The heap available to JBoss still shrinks gradually, but we can nearly live with bi-monthly restarts.
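(In case it is useful to anyone following this thread, one way to watch how much old-generation heap survives each full GC between restarts is jstat - the PID placeholder and the 60-second sampling interval below are just examples:

  jstat -gcutil <jboss_pid> 60000

The O column shows old-space occupancy in percent and FGC the running full-GC count.)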

Maf
