F5 LTM for OMi
Anyone have advice on setting up an F5 for OMi 10.62? The documentation is not correct for an F5. For instance, the documentation specifies Destination Address Affinity, but the F5 documentation indicates that Destination Address Affinity is really meant for a pool of cache servers.
"Data Collector Load Balance method should be sticky" - what does this mean? (Least Sessions seems better, combined with the correct persistence profile.)
Also, there is no monitor method for the BBC channel. This seems pretty silly, as this is one of the most critical paths to monitor. Ideally we would have a monitor for the F5 that duplicates:
bbcutil -ping https://localhost/om.hp.ov.opc.msgr/
Checking the BBC message receiver connection to the WDE.
I expect that most have setup their own monitoring script for BBC.
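For what it's worth, here is a rough sketch of such a script. It assumes the usual F5 external-monitor convention ($1 is the member IP, possibly ::ffff:-prefixed, $2 is the port, and any output on stdout marks the member up) and that curl is available on the BIG-IP. The om.hp.ov.opc.msgr namespace is taken from the bbcutil command above; treating "any HTTP status" as healthy is an assumption you would need to verify against real bbcutil behaviour:

```shell
#!/bin/sh
# Hypothetical F5 external monitor approximating
#   bbcutil -ping https://<gateway>/om.hp.ov.opc.msgr/
# F5 invokes external monitors with $1 = member IP (often ::ffff:-prefixed)
# and $2 = port; printing anything to stdout marks the member UP, silence DOWN.

bbc_monitor() {
    node=$(printf '%s' "$1" | sed 's/^::ffff://')   # strip IPv6-mapped prefix
    port=${2:-383}
    # bbcutil is not installed on the F5, so approximate the ping with curl:
    # any real HTTP status (i.e. not curl's "000" failure code) means the
    # BBC port answered on that namespace.
    status=$(curl -sk -o /dev/null -w '%{http_code}' \
        --max-time 5 "https://${node}:${port}/om.hp.ov.opc.msgr/")
    if [ -n "$status" ] && [ "$status" != "000" ]; then
        echo "up"
    fi
}

bbc_monitor "$@"
```

Note that this only proves the port answers HTTPS on the BBC namespace; a gateway-side job running the real bbcutil -ping remains the more faithful check.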
F5 Article: K2492 - Load balancing proxy servers
You can optimize your proxy server array with destination address affinity (also called sticky persistence). Address affinity directs requests for a certain destination to the same proxy server, regardless of which client the request comes from.
This enhancement provides the maximum benefits when load balancing caching proxy servers. A caching proxy server intercepts web requests and returns a cached web page if it is available. In order to improve the efficiency of the cache on these proxies, it is necessary to send similar requests to the same proxy server repeatedly. You can use destination address affinity to cache a given web page on one proxy server instead of on every proxy server in an array. This saves the other proxies from having to duplicate the web page in their cache, wasting memory.
I know this is a very old posting, but I am going through our current F5 configuration to optimize it for OBM 2020.05 (soon 2020.10), and we are having the same questions that you had regarding whether "Destination Address Affinity" would actually be correct. Particularly considering that we use SSL-passthrough at the Load balancers (load balancing at layer 4), so the LB would not be able to maintain any session information either.
Did you get any good tips back then regarding how to configure this properly? I fear that the documentation is incorrect.
We have been using the F5 as a LB for OBM, set up to the word of the documentation, for a number of years now, and the results have been far from ideal. I was hoping that "Message Buffering" would become a thing of the past, but sadly not. When I check the TCP connections on 383 for my gateways, I often see 400 or so connections to gateway1 and 18 to gateway2. This doesn't seem very 'balanced' to me. Also, when the gateways fail (all too often), they never fail in such a way that the LB check returns anything other than 'Web Data Entry is Up', causing the F5 to carry on directing requests to a gateway that has broken in some other way that doesn't affect this check.
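For anyone wanting to reproduce that connection-count check, it can be done on each gateway host with a one-liner (this assumes iproute2's ss is available; run the same command on gateway1 and gateway2 and compare the totals):

```shell
# Count established TCP connections to the local BBC port 383.
# Run on each gateway and compare the numbers to judge the distribution.
ss -Htn state established '( sport = :383 )' | wc -l
```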
Altogether this has been a disaster and provided no acceptable level of resilience at all.
I'd love to hear if anyone has successfully configured an F5 to LB to OBM gateways that involves actual balancing of load and a check that can actually detect if the gateway is successfully processing events.
Check the health of the message receiver, service discovery, and bus services.
* Send String: GET /gtw/serviceStatus
* Receive String: RUNNING
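The monitor above can be sanity-checked from any admin host before trusting the F5's view of a gateway; the URL path and the RUNNING match mirror the Send/Receive strings above, while the curl flags, timeout, and hostname are my own assumptions:

```shell
# Hypothetical manual check mirroring the F5 monitor's Send/Receive strings.
check_gw() {
    # $1 = gateway base URL, e.g. https://gateway1.example.com (placeholder)
    body=$(curl -sk --max-time 5 "$1/gtw/serviceStatus")
    case "$body" in
        *RUNNING*) echo "OK"          ; return 0 ;;
        *)         echo "NOT healthy" ; return 1 ;;
    esac
}

# Usage (hostname is a placeholder):
# check_gw https://gateway1.example.com && echo "safe to enable in the pool"
```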
An interesting detail: the standard OBM 2020.05 documentation presents both the old URL and the new one (/gtw/serviceStatus) as two apparently equivalent alternatives. Although it briefly mentions what each of the two checks, it does not state that one is preferred over the other. I am in the process of replacing the old one with the new one nonetheless, partly because it is new and partly because the interactive upgrade guide for OBM 2020.05 only mentions that one, not the old one.
A question regarding the use of destination address affinity: is it really true that we should use that instead of source address affinity? How exactly would that be explained?
I am validating this with R&D, but in the meantime, please proceed with the documented approach.
Hi Frank, we didn't get a lot of details back on our request. I will dig through my notes, but I thought source address affinity was what we did: the same source address would get sent to the same gateway.
Destination address affinity doesn't seem to be relevant in this case. I can't find any arguments that would support destination vs source.