Experiencing frequent memory shortage issues for containers
I've noticed that my C3400 connector appliances often run into memory issues that result in container restarts.
Right now I'm running an older OS and container version on this particular appliance; however, updating the OS and container has not corrected the issue.
Currently I have a Windows Unified Connector set up on a C3400 connector appliance running OS v18.104.22.16886.0 and container v22.214.171.12464.0.
The connector pulls Windows Security logs from 2 domain controllers, which results in 300 EPS in and around 150 EPS out after filtering. We are not doing aggregation or anything special other than retaining raw logs on the connector.
This is the only connector in the container, and I have increased the container's JVM heap to 1024 MB from the previous 256 MB.
My issue is that the container still goes down multiple times a day. It will usually come back up on its own after about 5 minutes. However, I have also run into it hanging on startup, which requires me to go to the Admin -> Processes tab and restart the container. When I restart it, it reports "Execution Failed", which then requires me to highlight the container and start it manually. Events flow at that point, usually with a large cache because of a network bottleneck; eventually the cache clears and everything is fine for about 2-3 hours.
I have a similar issue with another Windows Unified Connector on the same connector appliance that is collecting Security logs from 12 domain controllers (these DCs see less use), around 120 EPS in and 60 EPS out, and it reboots every hour. Same container version, memory, destination, etc.
Also, when I set up an additional destination to mirror events to my Logger and ESM, the containers restarted about every 15 minutes.
Is there something I'm missing? I can live with this if the container can always recover on its own. One huge issue is that this also happens to the syslog daemon connector, and each outage costs around 25,000 lost events.
Re: Experiencing frequent memory shortage issues for containers
The default cache for a SmartConnector is 1 GB. Try boosting the container from 1024 MB to just a little bit more
for overhead and see what happens. It's a bit of a long shot, but it sounds like you have some overall memory to spare.
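To illustrate the heap bump, here is a minimal sketch using the standard Tanuki wrapper keys that SmartConnectors use; the path here is a stand-in (on a ConnApp the file normally lives under the container's current/user/agent directory), and the 1536 MB value is just an example ceiling:

```shell
# Illustrative copy of the wrapper config (real path differs on a ConnApp).
CONF=/tmp/agent.wrapper.conf
cat > "$CONF" <<'EOF'
wrapper.java.initmemory=256
wrapper.java.maxmemory=1024
EOF

# Raise the JVM heap ceiling from 1024 MB to 1536 MB for overhead.
sed -i 's/^wrapper.java.maxmemory=.*/wrapper.java.maxmemory=1536/' "$CONF"
grep maxmemory "$CONF"
```

After editing the real file, the container has to be restarted for the new heap setting to take effect.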
Next, compare your start and end times. We have seen overloaded DCs delay sending events
by up to 24 hours, which could cause problems. Check to make sure there are no big delays between the DCs and the ConnApp. In other words, divide and conquer.
Lastly, it might be worth reinstalling the SmartConnector in another unused container if you have one.
Sometimes a SmartConnector can become corrupted. You might be able to confirm whether this is the case in the logs.
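One quick way to tell heap exhaustion apart from corruption is to search the connector's log for OutOfMemoryError. This sketch uses a fabricated log excerpt so it is self-contained; on a ConnApp the real agent.log sits under the container's logs directory:

```shell
# Fabricated agent.log excerpt for illustration only.
cat > /tmp/agent.log <<'EOF'
[INFO ] Connector started
[FATAL] java.lang.OutOfMemoryError: Java heap space
EOF

# A nonzero count points at JVM heap sizing rather than a corrupted install.
grep -c "OutOfMemoryError" /tmp/agent.log
```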
In some cases I have fixed problems by updating the container itself.
I know, pretty standard suggestions.
Good Luck, David