kjf

omi opr_gateway_queue backing up - only fix is to restart DPS

For the last 24 hours our DPS keeps falling behind on processing.  The opr_gateway_queue buffered message count steadily climbs and no new alerts are inserted into the event list.  This happens about every 2 hours.  Once the DPS is restarted, the queue clears and we are good for another 2 hours or so.

I had debug enabled on the opr-backend but did not see any silver bullet.  I do see messages indicating a potential "stalled" condition for a consumer (see the copied text below my signature).  I have a ticket open with MF, but I am not getting much traction at this point.  I do not think it is a flood, because we have the flood policy turned on.  I have archived some alerts that had extensive history records, but to no avail.

OMi 10.62 (no patches) running on Windows Server 2012

 

Any help is appreciated.

thank you.

-k

2019-04-25 13:50:55,206 [PipelineStep-6-RegisterEvents] WARN ProcessableObjectQueue.put(44) - Submitting Event to queue 'RegisterEvents->EventUpdateHandler' takes longer than 30 seconds. Consumer might be stalled.
2019-04-25 13:51:13,663 [PipelineStep-5-LogReceivedStatistics] WARN ProcessableObjectQueue.put(44) - Submitting Event to queue 'LogReceivedStatistics->RegisterEvents' takes longer than 30 seconds. Consumer might be stalled.
2019-04-25 13:51:38,742 [PipelineStep-4-SequenceEvents] WARN ProcessableObjectQueue.put(44) - Submitting Event to queue 'SequenceEvents->LogReceivedStatistics' takes longer than 30 seconds. Consumer might be stalled.
2019-04-25 13:51:57,888 [PipelineStep-3-InitEvents] WARN ProcessableObjectQueue.put(44) - Submitting Event to queue 'InitEvents->SequenceEvents' takes longer than 30 seconds. Consumer might be stalled.
2019-04-25 13:52:23,715 [PipelineStep-2-SameEventFilter] WARN ProcessableObjectQueue.put(44) - Submitting Event to queue 'SameEventFilter->InitEvents' takes longer than 30 seconds. Consumer might be stalled.
2019-04-25 13:52:41,315 [PipelineStep-1-KPIStatusChangeHandler] WARN ProcessableObjectQueue.put(44) - Submitting Event to queue 'KPIStatusChangeHandler->SameEventFilter' takes longer than 30 seconds. Consumer might be stalled.
2019-04-25 13:53:01,884 [Thread-58 (ActiveMQ-client-global-threads-1183245882)] WARN ProcessableObjectQueue.put(44) - Submitting Event to queue 'Entry->KPIStatusChangeHandler' takes longer than 30 seconds. Consumer might be stalled.

SahilGupta

Re: omi opr_gateway_queue backing up - only fix is to restart DPS

Hello @kjf ,

This issue occurs in many environments; there is no known solution for it yet.

A few workarounds for the time being:

1) Increase the heap memory sizes using the configuration wizard

2) Especially for the WDE process, increase the heap memory even more

3) Try restarting the OA agents on the GW and DPS instead of all the services (see the sketch below)
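
For item 3, a minimal sketch of the agent restart (run on both the GW and the DPS; this assumes the standard ovc tool that ships with the Operations Agent):

REM Check the current status of the agent components
ovc -status

REM Restart all agent components in place
ovc -restart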

 

Regards,
Sahil Gupta

Stijn_COL

Re: omi opr_gateway_queue backing up - only fix is to restart DPS

Hi,

We had the same issue.
A couple of things we discovered while troubleshooting:

  1. Long-running EPI scripts can block the event pipeline queue.
  2. Upgrading OMi 10.62 to 10.63 IP 2 resulted in a much more performant OMi environment with regard to event processing.

Kind regards

Stijn

gun339

Re: omi opr_gateway_queue backing up - only fix is to restart DPS

Hi kjf,

Try checking the JMS queue on the OMi gateway server by running opr-jmsUtils.bat and review the output.

Verify the buffered event count.

Also, this is a known issue and we have encountered the same in our environment.

You can try the steps below:

opr-support-utils -stop bus

Delete the following:

%TOPAZ_HOME%\bus\bindings
%TOPAZ_HOME%\bus\journal
%TOPAZ_HOME%\bus\large-messages

opr-support-utils -start bus
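
Putting those steps together as one sequence (a sketch only; paths assume a default Windows install, and moving the folders aside instead of deleting them lets you roll back if needed):

REM Stop the bus process
opr-support-utils -stop bus

REM Move the bus data folders aside; the bus recreates them on start
move "%TOPAZ_HOME%\bus\bindings" "%TOPAZ_HOME%\bus\bindings.bak"
move "%TOPAZ_HOME%\bus\journal" "%TOPAZ_HOME%\bus\journal.bak"
move "%TOPAZ_HOME%\bus\large-messages" "%TOPAZ_HOME%\bus\large-messages.bak"

REM Start the bus again
opr-support-utils -start bus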

You can also increase the heap size of a process by editing its .ini file at <OMI_Home>/bin/<processname>.ini.
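
For example, for the opr-backend process (a sketch only; the file name opr-backend.ini and the value below are assumptions — check the existing -Xmx line in your install first), the maximum heap is set by the -Xmx entry in <OMI_Home>\bin\opr-backend.ini:

-Xmx4096m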

Regards

Gun

kjf

Re: omi opr_gateway_queue backing up - only fix is to restart DPS

Thank you all for your answers.  I appreciate you taking the time to reply.

I have tracked down the source of the backlog; now I just need to figure out how to fix it permanently.  Prior to almost every backlog, I noticed entries in the opr-backend.log regarding archiving.  Upon further investigation I discovered that every archive file created on disk since the start of the backlog problem was corrupted; I cannot open the compressed files.

My "bandaid" for the problem is that I have disabled auto archiving for the time-being.  I think there may be a corrupt record in the database, but I am not sure.  I am having our DBAs look into the database to see if there were any issues. I will try manual archiving to see if that works.

Of course, support recommended a similar fix: upgrade to 10.63 P2.  We will definitely do that, but unfortunately it is not a fix we can implement immediately.  I also checked all the jvm_statistics files and did not see any that were low on memory.

Thanks again for all the replies and input!

-k
