karl2k1

Experiencing frequent memory shortage issues with containers

I've noticed that my C3400 connector appliances often run into memory issues that result in container restarts.

Right now I'm running an older OS and container version on this particular appliance; however, I have not found that updating the OS and container corrects the issue.

Currently I have the Windows Unified Connector set up on a C3400 connector appliance with OS v6.3.0.6386.0 and container v6.0.3.6664.0.

The connector pulls Windows security logs from 2 domain controllers, which results in about 300 EPS in and around 150 EPS out after filtering. We are not doing aggregation or anything special other than retaining raw logs on the connector.

This is the only connector in the container, and I have increased the container's JVM heap from the previous 256 MB to 1024 MB.

My issue is that the container still goes down multiple times a day. It will usually come back up on its own after about 5 minutes. However, I have also run into it hanging on startup, which requires me to go to the Admin -> Processes tab and restart the container. When I restart it, it says "Execution Failed", and I then have to highlight the container and start it manually. Events flow at that point, usually with a large cache because of a network bottleneck; eventually the cache clears and everything is fine for about 2-3 hours.
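For what it's worth, here is the rough arithmetic I've been using to sanity-check the cache behaviour after a restart. The drain rate and the assumption that only post-filter events get cached are guesses on my part, not measured values:

# Back-of-envelope for how much backlog one ~5 minute outage creates and how
# long it should take to drain afterwards. Assumes only post-filter events
# (the ~150 EPS that actually leave the connector) end up in the cache, and
# the sustained drain rate is a guess.

EPS_OUT = 150              # events/sec leaving the connector after filtering
OUTAGE_SECONDS = 5 * 60    # container is typically down about 5 minutes

backlog = EPS_OUT * OUTAGE_SECONDS
print(f"Events backed up during one outage: {backlog:,}")               # 45,000

DRAIN_EPS = 400            # assumed sustained send rate once it comes back up
net_drain = DRAIN_EPS - EPS_OUT
print(f"Minutes to clear the backlog: {backlog / net_drain / 60:.1f}")  # ~3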

I have a similar issue with another Windows Unified Connector on the same connector appliance that pulls security logs from 12 domain controllers (these DCs see less use), around 120 EPS in and 60 EPS out, and that container restarts about every hour. Same container version, memory, destination, etc.

Also, when I set up an additional destination to mirror events to my Logger and ESM, the containers restarted about every 15 minutes.

Is there something I'm missing? I can live with this if the container can always recover on its own. One huge issue is that the same thing happens to the syslog daemon connector, and that results in around 25,000 lost events each time it goes down.
(Attachment: container.JPG)

davidhawley1

Re: Experiencing frequent memory shortage issues with containers

The default cache for a SmartConnector is 1 GB. Try boosting the container from 1024 MB to just a little bit more for overhead and see what happens. It's a bit of a long shot, but it sounds like you have some overall memory to spare.
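Just to put that 1 GB default cache in perspective, here is a rough estimate of how much runway it gives you. The average cached event size is an assumption on my part; check the size of your own cache files for a real number:

# Rough estimate of how many events the default 1 GB connector cache can hold
# and how long an outage it can absorb. The 2 KB average cached-event size is
# an assumption; Windows security events are fairly large, so measure yours.

CACHE_BYTES = 1 * 1024**3          # default SmartConnector cache size: 1 GB
AVG_EVENT_BYTES = 2 * 1024         # assumed ~2 KB per cached event

events_in_cache = CACHE_BYTES // AVG_EVENT_BYTES
print(f"Approx. events the cache can hold: {events_in_cache:,}")        # ~524,288

EPS_OUT = 150                      # your post-filter output rate
hours = events_in_cache / EPS_OUT / 3600
print(f"Approx. hours of destination outage it absorbs: {hours:.1f}")   # ~1.0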

Next, compare your event start and end times. We have seen overloaded DCs delay sending events by up to 24 hours, and that could cause problems. Check to make sure there are no big delays between the DCs and the ConnApp. In other words, divide and conquer.
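If you want to quantify that, something like the quick script below will show the lag per DC from an event export. The file name, timestamp format, and column names (deviceAddress, deviceReceiptTime, agentReceiptTime) are just my assumptions about what your export looks like, so adjust them to match:

# Rough check for delayed DCs: export events to CSV with the device address,
# deviceReceiptTime and agentReceiptTime, then look at the lag per DC.

import csv
from collections import defaultdict
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"          # assumed timestamp format in the export
lags = defaultdict(list)

with open("events_export.csv", newline="") as f:   # hypothetical export file
    for row in csv.DictReader(f):
        device = row["deviceAddress"]
        produced = datetime.strptime(row["deviceReceiptTime"], FMT)
        received = datetime.strptime(row["agentReceiptTime"], FMT)
        lags[device].append((received - produced).total_seconds())

# A DC whose maximum lag runs into hours is the one to dig into first.
for device, values in sorted(lags.items()):
    print(f"{device}: avg lag {sum(values)/len(values):.0f}s, "
          f"max lag {max(values):.0f}s over {len(values)} events")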

Lastly, maybe it would be worth reinstalling the SmartConnector in another unused container if you have one. Sometimes a SmartConnector can somehow become corrupted; you might be able to see if this is the case in the logs. In some cases I have fixed problems by updating the container itself.

I know, these are pretty standard suggestions.

Good Luck, David

gruwellj

Re: Experiencing frequent memory shortage issues with containers

I had this same issue on a C5200 and fixed it by doing a container restore and rebuilding the connector.
