

OML 9.11.100 - Clustering / Creating a Hot Standby
I have 2 OML 9.10.100 management servers on RedHat Linux (HARDWARE BASED)
Not Clustered Via Hardware or OS
Not Clustered Via Clustering Software
Not Clustered Via HPOM install setup
Located at 2 Separate Locations
One is Prod - Primary Manager
One is Prod Failover
Each has their own back end Oracle Database that is replicated to multiple data centers for backup.
Each node in my environment has flexible management enabled to allow the nodes to be managed by either management server.
I have a shell script I created to do an opccfgdwn and opccfgupload between the two management servers in set intervals.
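For comparison, a minimal sketch of such a sync script (all paths, the spec file, and the remote-copy method are assumptions for illustration; note the upload tool is spelled opccfgupld on the management server):

```shell
#!/bin/sh
# Hedged sketch: periodically mirror configuration from the primary
# management server to the failover server. Run from cron on the primary.
# Paths and the download spec file below are assumptions.

DOWNLOAD_SPEC=/etc/opt/OV/share/conf/OpC/mgmt_sv/download.dat  # assumed spec file
STAGE_DIR=/var/tmp/om_cfg_sync
BACKUP_SRV=om-backup.example.com                               # hypothetical failover server

rm -rf "$STAGE_DIR" && mkdir -p "$STAGE_DIR"

# Download the configuration described by the spec file on the primary
/opt/OV/bin/OpC/opccfgdwn "$DOWNLOAD_SPEC" "$STAGE_DIR"

# Copy the download to the failover server and upload it there
# (-replace overwrites existing objects of the same name)
scp -r "$STAGE_DIR" "$BACKUP_SRV:/var/tmp/"
ssh "$BACKUP_SRV" "/opt/OV/bin/OpC/opccfgupld -replace /var/tmp/om_cfg_sync"
```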
I need my environment setup to fail over all the nodes.
I set this up on the management server via ovconfchg -ovrg server:
OPC_JGUI_BACKUP_SRV=<management server cname / virtual load-balanced IP>
all of my agents are 11.11.025 min with majority 90% at 11.13.007
With 11.11.025, the following ovconfchg options became available:
OPC_BACKUP_MGRS_FAILOVER_ONLY=TRUE
and
OPC_BACKUP_MGRS=<FQDN of management server 2>
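For reference, those agent-side variables live in the eaagt namespace (per the agent failover feature; verify against your agent version's docs) and could be set per node roughly like this, with the FQDN as a placeholder:

```shell
# On each managed node (agent >= 11.11.025); namespace eaagt is assumed
ovconfchg -ns eaagt -set OPC_BACKUP_MGRS ommgr2.example.com \
          -set OPC_BACKUP_MGRS_FAILOVER_ONLY TRUE

# Verify the setting took effect
ovconfget eaagt OPC_BACKUP_MGRS
```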
How do I get all of my nodes to automatically send messages to management server 2 if management server 1 is not available?
Any Advice or Thoughts?
Thanks.
-Brandon


Hello Brandon,
I think the best way to have all nodes report to the backup server immediately after the main server fails, without human intervention, is to put both management servers in a cluster. The nodes then report to a virtual node, so a failure of the main server is transparent to them; it becomes an internal issue of the cluster.
I hope this helps and I hope you get a better answer.
If you find that this or any post resolves your issue, please be sure to mark it as an accepted solution.
If you liked it I would appreciate KUDOs.


How would that work when we have
1 Remedy SPI license.
2 Management Servers.
I don't have both servers licensed to connect to Remedy for ticketing.
Also on the nodes
Primary Manager
Certificate server
[Sec.Core.Auth]
Manager
Manager_ID
all need to point at the FQDN of a management server, not at a cname/virtual IP that is load balanced or clustered.
We also have too big of an environment to consider virtualization.
@GTrejos7 wrote:Hello Brandon,
I think the best way to have all nodes reporting to the backup server immediately after the main server fails (and without human intervention) is to have both Management Servers in a cluster. This way the nodes will report to a virtual node and it will be transparent to them if the main server fails, that would be an internal issue of the cluster.
I hope this helps and I hope you get a better answer.

Hello Brandon,
Using the agent based failover (OPC_BACKUP_MGRS_FAILOVER_ONLY and OPC_BACKUP_MGRS) is one possibility. The agent will start sending messages to the backup server if the primary server is unreachable.
Another option is to have a normal backup server environment and switch all the nodes to the backup server using "opcragt -primmgr -all". That's probably what you are already using.
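That manual switch could be scripted on the backup server roughly like this (the install path is an assumption; adjust for your platform):

```shell
# Run on the backup management server after the primary goes down:
# make this server the primary manager for all managed nodes
/opt/OV/bin/OpC/opcragt -primmgr -all

# Optionally check agent communication on all nodes afterwards
/opt/OV/bin/OpC/opcragt -status -all
```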
Since you said that the primary and backup servers are in different locations, a cluster and a normal server pooling setup won't work (the servers need to be in the same subnet).
But, you could use server pooling with a load balancer. That avoids the same subnet limitation. The load balancer is purely used to switch the agents from one server to the other.
See the Server Pooling White Paper for more information.
Best regards,
Tobias

Hello Brandon,
> How would that work when we have
> 1 Remedy SPI license.
> 2 Management Servers.
I'm not familiar with the Remedy SPI licensing.
Please check with your HP Sales contact.
> Also on the nodes
> Primary Manager
> Certificate server
>
> [Sec.Core.Auth]
> Manager
> Manager_ID
>
> all need to be pointed at a FQDN of a management server and not of a cname/virtual IP that is load balanced or clustered.
With a cluster, you can use the cluster's virtual node name as MANAGER (the cluster appears to the managed nodes as one system).
But for server pooling and the like, you would usually continue to use the physical manager for MANAGER and just set OPC_PRIMARY_MGR in the eaagt namespace to the virtual node name.
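On a managed node, that split could look roughly like this (hostnames are placeholders; the eaagt namespace for OPC_PRIMARY_MGR follows the server pooling setup described above):

```shell
# MANAGER and MANAGER_ID in [sec.core.auth] keep pointing at the
# physical management server (certificate trust) -- left unchanged here.
# Only the agent's message target moves to the virtual name:
ovconfchg -ns eaagt -set OPC_PRIMARY_MGR om-virtual.example.com

# Verify
ovconfget eaagt OPC_PRIMARY_MGR
```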
Best regards,
Tobias