Commodore

F5 LTM for OMi

Does anyone have advice on setting up an F5 for OMi 10.62? The documentation is not correct for an F5. For instance, the OMi documentation specifies Destination Address Affinity, but the F5 documentation indicates that Destination Address Affinity is really intended for a pool of cache servers.

"Data Collector Load Balance method should be sticky"  What does this mean?   (Least Sessions seems better with the correct persistance profile).

Also, there is no monitor method for the BBC channel. This seems pretty silly, as it is one of the most critical paths to monitor. Ideally we would have an F5 monitor that duplicates:

bbcutil -ping https://localhost/om.hp.ov.opc.msgr/

Checking the BBC message receiver connection to the WDE.

 

I expect that most have set up their own monitoring script for BBC.
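For what it's worth, a minimal sketch of such a script, run from an OMi or Operations Agent host rather than on the BIG-IP itself (bbcutil is not available there), might look like the following. The gateway list and the "eServiceOK" match are assumptions; verify them against the output of a manual bbcutil -ping first.

#!/bin/sh
# Sketch of a BBC reachability check against each OBM gateway.
# Assumptions: bbcutil is on the PATH and its success output contains "eServiceOK".
GATEWAYS="gw1.example.com gw2.example.com"
RC=0
for gw in $GATEWAYS; do
    if bbcutil -ping "https://${gw}/om.hp.ov.opc.msgr/" | grep -q "eServiceOK"; then
        echo "OK   ${gw} message receiver reachable"
    else
        echo "FAIL ${gw} message receiver NOT reachable"
        RC=1
    fi
done
exit $RC

The exit code could then feed whatever alerting or external monitoring mechanism you already use.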

 

F5 Article: K2492 - Load balancing proxy servers

Destination address affinity/sticky persistence

You can optimize your proxy server array with destination address affinity (also called sticky persistence). Address affinity directs requests for a certain destination to the same proxy server, regardless of which client the request comes from.

This enhancement provides the maximum benefits when load balancing caching proxy servers. A caching proxy server intercepts web requests and returns a cached web page if it is available. In order to improve the efficiency of the cache on these proxies, it is necessary to send similar requests to the same proxy server repeatedly. You can use destination address affinity to cache a given web page on one proxy server instead of on every proxy server in an array. This saves the other proxies from having to duplicate the web page in their cache, wasting memory.
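For reference, destination-address persistence is configured on a BIG-IP roughly like this in tmsh (the profile and virtual server names and the timeout are placeholders, not taken from the OMi documentation):

# Illustrative only: substitute your own object names and timeout
create ltm persistence dest-addr omi_dst_affinity { timeout 1800 }
modify ltm virtual vs_omi_383 { persist replace-all-with { omi_dst_affinity } }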

7 Replies

Hello,

Were you able to configure OMi using F5 successfully?

Fleet Admiral

Hi @mdcomputerguy 

I know this is a very old posting, but I am going through our current F5 configuration to optimize it for OBM 2020.05 (soon 2020.10), and we have the same questions you had about whether "Destination Address Affinity" is actually correct, particularly since we use SSL passthrough at the load balancers (load balancing at layer 4), so the LB cannot maintain any session information either.

Did you get any good tips back then regarding how to configure this properly? I fear that the documentation is incorrect.

Cheers,
Frank

Commander

Hi All

We have been using the F5 as a LB for OBM, set up to the letter of the documentation, for a number of years now, and the results have been far from ideal. I was hoping that "Message Buffering" would become a thing of the past, but sadly not. When I check the TCP connections on port 383 for my gateways, I often see around 400 connections to gateway1 and 18 to gateway2, which doesn't seem very 'balanced' to me. Also, when the gateways fail (all too often), the LB check

http://<gateway>/ext/mod_mdrv_wrap.dll?type=test

still returns 'Web Data Entry is Up', so the F5 carries on directing requests to a gateway that has broken in some other way that this check does not detect.

Altogether this has been a disaster and provided no acceptable level of resilience at all.

I'd love to hear if anyone has successfully configured an F5 to load balance OBM gateways with actual balancing of load and a check that can actually detect whether a gateway is successfully processing events.

 

Micro Focus Expert

You probably want to check the latest documentation, as OBM added another health check URL.
https://docs.microfocus.com/itom/Operations_Bridge_Manager:2020.10/HaLbGw
It checks the health of the message receiver, service discovery, and bus services:

* Send String: GET /gtw/serviceStatus
* Receive String: RUNNING
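In tmsh that could look roughly like the sketch below; the monitor name, Host header, pool name, and interval/timeout values are assumptions, only the send and receive strings come from the documentation. Use ltm monitor https instead if the gateways are checked over SSL.

# Illustrative only: adjust names, timings and http/https to your setup
create ltm monitor http obm_gtw_servicestatus {
    send "GET /gtw/serviceStatus HTTP/1.1\r\nHost: obm-gw\r\nConnection: Close\r\n\r\n"
    recv "RUNNING"
    interval 10
    timeout 31
}
modify ltm pool pool_obm_gw { monitor obm_gtw_servicestatus }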

Fleet Admiral

Hi Asaf,

An interesting detail here is that the standard OBM 2020.05 documentation presents both the old URL and the new one (/gtw/serviceStatus) as two apparently equivalent alternatives. Although it briefly mentions what each of the two checks, it does not state that one is preferred over the other. I am in the process of replacing the old one with the new one nonetheless, partly because it is new and partly because the interactive upgrade guide for OBM 2020.05 only mentions the new one, not the old one.

A question regarding the use of destination address affinity: is it really true that we should use that instead of source address affinity? How exactly would that be explained?

Cheers,
Frank

Micro Focus Expert

I think the reason for destination affinity is that if the OA sends a large topology, you want the same GW to handle all of it; otherwise you may run into reconciliation issues.
I am validating this with R&D, but in the meantime please proceed with the documented approach.
Commodore

Hi Frank,  We didn't get a lot of details back on our request. I will dig through my notes, but I thought source address affinity was what we did: the same source address gets sent to the same gateway.

Destination address affinity doesn't seem to be relevant in this case.  I can't find any arguments that would support destination vs source.  
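Roughly, a source-address-affinity setup would look like this in tmsh (the profile and virtual server names and the timeout are placeholders):

# Illustrative only
create ltm persistence source-addr omi_src_affinity { timeout 1800 }
modify ltm virtual vs_omi_383 { persist replace-all-with { omi_src_affinity } }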

Wes
