ESM running slow
I will try to write down my problem quickly; if there are any questions I'll try to answer them. We are having real problems with the Oracle database because it is running slow. A "last hour" event query takes about a minute, "last 2 hours" takes 4 minutes, daily reports about 15 minutes, and so on. ESM 4.5 SP2 is installed on an HP DL360 G6 server (quad core, 12 GB RAM) with Red Hat x64. When I look at top in the terminal it shows only a small amount of free memory, while the graphical system monitor (like Task Manager) shows completely different statistics. Does anyone have an idea what the cause of the slowness could be? The performance of the server, or Oracle? I also tried running the SetDynamicSampling and GatherSystemStats scripts, but saw no improvement. Are there any tips and tricks that should be applied at install time, and how should the Oracle database be tuned?
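The gap between top and the graphical monitor is usually the Linux page cache: the kernel uses spare RAM for caching, so "free" looks tiny even when plenty of memory is reclaimable. A quick sketch of how to see the real picture from /proc/meminfo (on older RHEL kernels there is no MemAvailable field, so Buffers + Cached is the usual approximation of reclaimable memory):

```shell
# Print the memory figures that matter; MemFree alone is misleading because
# Buffers and Cached are page cache the kernel can free on demand.
awk '/^MemTotal:|^MemFree:|^Buffers:|^Cached:/ {printf "%-10s %6d MB\n", $1, $2/1024}' /proc/meminfo
```

If MemFree + Buffers + Cached is a healthy fraction of MemTotal, the box is not actually short of memory and the problem is likely elsewhere.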
Is this a fairly fresh install? A few months after ArcSight was installed here, we started to run into a ton of different, seemingly unrelated issues. It turned out the wrong template had been used, and on a server with 16 GB of RAM Oracle was configured to use only 2 GB. I would head down that path. I would also open a ticket with ArcSight support if you haven't yet.
Of note - when you say the daily reports are taking a long time, are they running against raw data or a trend? I would create a trend for those if you haven't already.
It is a fresh install, about 2 months old, and there are about 3,000,000 events per hour now that all connectors are up. I chose the standard template. Where do you see how much memory is being used, or how it is configured, as you mentioned? How did you fix the memory problem? Because this is only a demo lab, ESM is using internal SCSI hard drives; the DB and the Manager are installed on the same system.
3,000,000 events per hour? That works out to roughly 830 EPS (3,000,000 / 3,600 seconds).
That's quite a lot for a demo system.
So then tell us:
- how many disks?
- what is the storage layout?
- what is your raid level?
There are 4 disks (300 GB each): one is for the OS, the other 3 for the DB. No RAID is used because there isn't enough space; the database is already quite full. I know that this is not ideal and that the DB should run on a separate system, but this is the only way it could be done financially. Is there any way to optimize this?
I doubt you will get enough performance from 3 disks for 830 EPS.
The simple reason for your bad performance is most likely I/O throughput.
Monitor the I/O on the disks with OS tools and see how much time is spent in I/O wait. I assume you will see a lot of it.
Most probably your OS is idle while your storage is the bottleneck.
I think Til hit the nail on the head.
At that EPS you're looking at around 10K IOPS once the other activity is factored in. Without the IOPS boost a proper storage solution brings, the non-RAIDed disks (which also lack read-optimization queues and write caches) don't have a chance of keeping up.
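Rough numbers to make the gap concrete, assuming a common rule of thumb of ~150 random IOPS per 15K RPM SCSI spindle (the 10K IOPS demand figure is the estimate from above, not a measurement):

```shell
# Back-of-envelope: what 3 standalone spindles can deliver vs. the estimated need.
disks=3
iops_per_disk=150      # assumed random IOPS for one 15K rpm SCSI spindle
needed=10000           # rough demand estimate from the discussion above
supply=$((disks * iops_per_disk))
echo "available: ~${supply} IOPS, needed: ~${needed} IOPS"
echo "shortfall factor: ~$((needed / supply))x"   # prints: shortfall factor: ~22x
```

Even if the per-spindle figure is off by a factor of two, the storage is still an order of magnitude short of the estimated demand.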
If this is a lab system, I'd suggest reducing your EPS rate to about 100 EPS and seeing if performance improves.