Honored Contributor.

Design Arcsight solution on Azure


Hi,
I need to install a simple log management solution on a Microsoft Azure tenant.
The data flow is:
Log Server --> SmartConnector --> Logger

This is my first time on Azure, and I'm not an expert with it.

These are my doubts about the availability of the ArcSight solution:

  • Is a second SmartConnector necessary to keep receiving logs if the first SmartConnector goes down?
  • Do I need a secondary Logger that can collect logs when the first Logger is down?
  • Does it make sense to deploy two Loggers peered with each other?
  • Are the availability concerns around SmartConnectors and Loggers still valid in a cloud environment?

I would like to know whether there are factors to keep in mind while designing the solution
to ensure the availability of the ArcSight log management platform.

Thanks,

Lg


Accepted Solutions
Acclaimed Contributor.

I think @dkuehner pointed it out perfectly.

I would think of Azure as just another environment: there is no real difference between running the solution in the cloud and running it in your own hosted environment. The only difference, in my view, is how you calculate risk (in terms of uptime and so on, which is your current concern).

When calculating the risk and requirements of a hosted infrastructure, it is not uncommon to have, for example, a mirrored solution in which you set up components like ESM in HA, or two installations spread across multiple datacenters. But it all comes down to your uptime requirements.

When it comes to the cloud, your solution is most commonly already redundant: if one datacenter goes down, your virtual environment is automatically moved to another (depending on your contract with the cloud provider, of course). This covers some of the risk of hardware failure, but it does not cover software failures (the software hanging, crashing, etc.).

So to answer your questions, the following would be a good approach:

1. What are the uptime requirements you are looking for?

2. Does your cloud provider provide redundancy already?

3. If software like a connector goes down, is your log source able to cache?

4. If software like Logger goes down, does your SmartConnector have enough disk space and the correct settings to cache events until the Logger is back up?
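Point 4 can be turned into a rough sizing calculation. A minimal sketch, where the EPS, event size, and outage duration are example assumptions you would replace with your own figures:

```python
# Rough estimate of the disk space a SmartConnector needs to cache
# events while the downstream Logger is unavailable.

def cache_disk_gb(eps: float, avg_event_bytes: int, outage_hours: float) -> float:
    """Disk space (GB) needed to buffer eps events/sec for outage_hours."""
    total_bytes = eps * avg_event_bytes * outage_hours * 3600
    return total_bytes / 1e9

# Example assumptions: 1,000 EPS, ~800 bytes/event, 4-hour Logger outage.
needed = cache_disk_gb(eps=1_000, avg_event_bytes=800, outage_hours=4)
print(f"{needed:.2f} GB of connector cache")  # -> 11.52 GB of connector cache
```

Add a healthy safety margin on top of the result; event sizes vary a lot by source.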

 

A few key points:

We offer a syslog load balancer connector: it takes all incoming syslog traffic and spreads it over multiple connectors. If one connector goes down, the others continue to work, so in theory this can also be used as an HA solution for connectors.
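The idea behind that load balancer can be sketched generically. This is a toy illustration of the round-robin-with-failover concept, not the actual ArcSight load balancer connector (the hostnames are made up):

```python
from itertools import cycle

class SyslogBalancer:
    """Toy round-robin distributor that skips destinations marked down."""

    def __init__(self, destinations):
        self.destinations = list(destinations)
        self.down = set()
        self._rr = cycle(self.destinations)

    def mark_down(self, dest):
        self.down.add(dest)

    def mark_up(self, dest):
        self.down.discard(dest)

    def pick(self):
        # Try each destination at most once per pick.
        for _ in range(len(self.destinations)):
            dest = next(self._rr)
            if dest not in self.down:
                return dest
        raise RuntimeError("no healthy syslog destinations")

balancer = SyslogBalancer(["conn-a:514", "conn-b:514", "conn-c:514"])
balancer.mark_down("conn-b:514")   # simulate one connector failing
targets = [balancer.pick() for _ in range(4)]
print(targets)  # events keep flowing to conn-a and conn-c only
```

The real product also handles health checking and forwarding; the point here is only that losing one member does not stop the stream.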

Logger peering is only needed if your incoming EPS is too high or if your data retention requirement exceeds the maximum storage offered by a single Logger (16 TB). In most cases the connectors can cache long enough for your Logger to recover.
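Whether a single 16 TB Logger covers your retention requirement is straightforward arithmetic. A sketch, where the EPS, event size, and compression ratio are illustrative assumptions (Logger's actual compression depends on your data):

```python
def retention_days(capacity_tb: float, eps: float,
                   avg_event_bytes: int, compression_ratio: float) -> float:
    """Days of retention a Logger of capacity_tb can hold at a given EPS."""
    usable_bytes = capacity_tb * 1e12 * compression_ratio
    bytes_per_day = eps * avg_event_bytes * 86_400
    return usable_bytes / bytes_per_day

# Example: 16 TB Logger, 1,000 EPS, ~800 bytes/event, 10:1 compression.
days = retention_days(capacity_tb=16, eps=1_000,
                      avg_event_bytes=800, compression_ratio=10)
print(f"~{days:.0f} days of retention")
```

If the result comfortably exceeds your retention requirement, one Logger is enough and peering is unnecessary for capacity reasons.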

-----------------------------------------------------------------------------------------
All topics and replies are based on my personal opinion, viewpoint, and experience; they do not represent the viewpoints of Micro Focus.
All replies are best effort and cannot be taken as official support replies.
//Marius


2 Replies
Knowledge Partner

Hi,

 

the answer is "it depends". 😉

No, really, there are so many factors... in the end, Azure is the same as a VM, software, hardware, or appliance installation. It does not really matter.

- Do you need a second connector? Maybe. Is it a database connector? Then no. Is it a Windows connector? No, unless the log rolls over during the downtime... and so on. The only connector where it might make sense is a UDP syslog connector. Or just do not use UDP.

- You do not need a second Logger unless your EPS is high enough, or the connector's cache would run out during the downtime...

- Peering is good... if you need it due to EPS.

- The cloud is more or less the same. But I have never had any availability issues with SmartConnectors or Loggers over the last 8 years, so I am not sure what you mean...
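The UDP caveat in the first bullet is easy to demonstrate: a UDP sender gets no error when nothing is listening, while TCP fails immediately. A minimal sketch (it probes for a local port that nothing is bound to, and the syslog payload is made up):

```python
import socket

# Find a local port that nothing is listening on.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
dead_port = probe.getsockname()[1]
probe.close()

# UDP: sendto() "succeeds" even though the datagram is simply dropped.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = udp.sendto(b"<14>lost syslog event", ("127.0.0.1", dead_port))
udp.close()
print(f"UDP reported {sent} bytes sent, but nobody received them")

# TCP: the sender finds out right away that the receiver is gone.
try:
    tcp = socket.create_connection(("127.0.0.1", dead_port), timeout=2)
    tcp.close()
except OSError:
    print("TCP connection refused: the sender knows delivery failed")
```

This is exactly why a down UDP syslog connector means silently lost events, and why TCP syslog (or a load balancer in front) is the safer choice.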

