Absent Member.

Logger HA using external loadbalancers

Hi all,

first the most important:

This thread should not be about built-in Logger HA, which in fact does not exist as of now!

But there are some approaches being tried in the field. One of them is placing external load balancers between connectors and Loggers.

Is there anyone out there who has experience building an HA environment for SAN-attached Loggers using load balancers and one common LUN in the SAN?

There is a Logger HA document from ArcSight that mentions this approach as one among several scenarios applicable when HA is a mandatory requirement.

I just want to discuss this scenario here. Maybe there could be other threads to discuss the other approaches (at least those that are feasible and worthwhile).

Thanks for your feedback,


2 Replies
Absent Member.

As a first answer, my two cents on this topic:

- Connectors and Loggers communicate over TCP port 443, sending events in the SmartMessage format

- Connectors can be limited to just one Logger destination

- That one IP/name is in fact a load balancer service/virtual IP

- The load balancer's backend pool is made up of 2 to n Logger instances

- One instance is the primary destination for the load-balancing logic

- Other instances are used depending on the load-balancing logic

- A minimal load-balancing logic could simply check the availability of the primary Logger's IP

- A better load-balancing logic would check the service itself by sending a real-life request to the Logger on port 443 and examining the syntax of the answer

In case of an invalid answer or missing probe answers, all data has to be sent to a failover backend instance.
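The probe-and-failover logic above can be sketched in Python. Everything here is a hypothetical illustration: the `pick_backend` helper, the hostnames, and the assumption that any well-formed HTTP response on port 443 means the Logger is up are mine, not ArcSight's documented behavior.

```python
import socket
import ssl


def pick_backend(backends, probe):
    """Return the first healthy backend; the primary is listed first,
    so it always wins when it is reachable."""
    for host in backends:
        if probe(host):          # probe() is injected: a real HTTPS check
            return host          # in production, a stub in tests
    return None                  # no backend reachable -> connectors cache


def https_probe(host, port=443, timeout=2.0):
    """Open a TLS connection to the Logger receiver port, issue a minimal
    request, and treat any syntactically valid HTTP response as 'up'."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE   # lab sketch; verify certs in production
    try:
        with socket.create_connection((host, port), timeout=timeout) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                tls.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
                return tls.recv(12).startswith(b"HTTP/")
    except OSError:
        return False
```

Injecting the probe keeps the selection logic testable without a live Logger; a real load balancer implements the same split between health check and backend selection.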

All of this is transparent to the connectors feeding events into the Loggers, so neither failover nor additional destinations have to be configured on the connectors.

Event caching should never occur as long as the load balancer's service/virtual IP is available.

The primary Logger serves all incoming events. In case of failover, the secondary Logger temporarily takes over. When the primary instance comes back up, events go through the primary instance again (designated master).
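The designated-master behaviour described above (primary always wins, secondary only serves during a primary outage) can be sketched as a tiny state machine; the class and backend names are illustrative only, not taken from any load balancer product.

```python
class DesignatedMaster:
    """Track which Logger backend is active, always failing back to the
    designated primary as soon as it is healthy again."""

    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary
        self.active = primary

    def on_probe(self, primary_up, secondary_up):
        """Update the active backend from the latest probe results."""
        if primary_up:
            self.active = self.primary      # fail back as soon as primary returns
        elif secondary_up:
            self.active = self.secondary    # temporary takeover
        else:
            self.active = None              # both down: connectors must cache
        return self.active
```

Note that this "primary preferred" policy is exactly what keeps a single writer on the SAN LUN in the scenario discussed below.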

Now a first differentiation:

If the Loggers all use the same SAN LUN:

- Events are written to the same storage, regardless of which logger is active

- Event loss could / should not occur

- Event duplicates could / should not occur

- Simultaneous write operations from both loggers to the only SAN LUN should not happen

- SAN is always in a well-defined and consistent state

- The designated-master principle applies
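As an illustration of the "only one writer on the shared LUN at a time" constraint (an illustration of the principle, not of Logger internals), an exclusive advisory file lock on the shared storage makes a second writer fail fast instead of corrupting the store:

```python
import fcntl
import os


def try_exclusive(path):
    """Try to take an exclusive, non-blocking advisory lock on a file
    that would live on the shared LUN; return the open fd on success,
    or None if another writer already holds the lock."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except BlockingIOError:
        os.close(fd)
        return None
```

Releasing the lock (closing the fd) is what lets the failover instance take over; the same discipline, enforced at whatever layer, is what keeps the SAN in a well-defined state.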

If the Loggers all use different SAN LUNs:

- Everything outlined for "Loggers with local storage" applies here as well

If the Loggers all use local storage:

- Events are written to local hard disks

- Event loss could / should not occur

- Event duplicates could / should not occur

- A split-brain problem arises, because events are only written to one instance (normally the primary; in case of failover, the secondary)

- The designated-master principle applies here as well

- All events are on the primary Logger's hard disks, except the events arriving during primary Logger outages

-> Consequences for reporting/searching

- No impact on searches when using the peer-Logger principle

- Impact on reporting, because "distributed reporting" is not available so far

- Two reports necessary

- Maybe a workaround using the secondary Logger's event export functionality (CEF syslog forwarding to some kind of "syncing agent" which feeds events from the failover Logger back to the primary instance)
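Such a hypothetical "syncing agent" would first have to parse the CEF lines exported by the failover Logger before re-feeding them to the primary. A minimal sketch of the header parsing (the seven pipe-separated header fields follow the ArcSight CEF specification; the helper name and field keys are mine):

```python
def parse_cef(line):
    """Split a CEF record into its seven header fields plus the extension
    string; escaped pipes (\\|) inside header fields are honoured."""
    if not line.startswith("CEF:"):
        raise ValueError("not a CEF record")
    parts = []
    buf, i, body = "", 0, line[4:]
    while i < len(body) and len(parts) < 7:
        ch = body[i]
        if ch == "\\" and i + 1 < len(body):
            buf += body[i + 1]; i += 2; continue   # unescape \| and \\
        if ch == "|":
            parts.append(buf); buf = ""; i += 1; continue
        buf += ch; i += 1
    keys = ["version", "vendor", "product", "device_version",
            "signature_id", "name", "severity"]
    record = dict(zip(keys, parts))
    record["extension"] = body[i:]   # key=value pairs, left unparsed here
    return record
```

A real agent would additionally parse the extension key/value pairs and re-emit each record over syslog to the primary, but the header split above is the part CEF makes non-trivial.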

Is there anything crucial I did not mention?

As an outcome of this, Logger HA using SAN storage and load balancers is, in theory, a reliable and feasible approach (not considering the costs for load balancers and SAN).

Looking forward to your two cents...



Hi Markus,

to add to your excellent observations:

One possible solution that would not involve load balancers could be VMware's Fault Tolerance (FT) feature. Taking into consideration that ArcSight Logger is now available as a software product (not only an appliance), one could think about deploying Logger on a virtual machine.

The whole idea of VMware FT is that the "original" virtual machine (VM) can have one or more "live" clones, which means that every transaction done on the original VM is also done on the FT VM. If the original VM fails at some point, the FT VM is put into a fully working state without any data loss. So, with proper VMware administration, it is quite simple to organize "real" HA for the VMs (not to be confused with "standard" VMware HA functionality). Just to mention that the virtual machines' data is located on the storage system.

In other words, those who might use the software version of ArcSight Logger would be clever to calculate whether a load balancer or a VMware FT VM is the better solution. Using virtualization technology, you would prepare one VM with the software version of ArcSight Logger, then create an FT VM as a live clone of the original Logger. All data is located on the storage system, and in case the base Logger VM fails, the FT VM is immediately put into the active state, so there would be no lost events. Talking about networking, the FT VM uses the same IP address as the base VM, and the whole handling of the different MAC addresses of the base and FT VMs is under the control of the virtualization layer, so it is transparent to the SmartConnectors, which send data to one IP address all the time, without interruption.

