Setting Up GroupWise Clustering with OES2



A Forum reader recently asked:

"I want to set up new Domains and Post Offices on a new Linux cluster, using Novell Cluster Services on OES2. However, there is an existing (non-clustered) GroupWise system in place (running on NetWare 6.5). So, it's adding new domains and postoffices into an existing GroupWise system - it's just the clustering part that is new to me.

I need to set up a pair of Domains and a pair of Post Offices on each OES2 Linux box, and each Linux box needs to have its own unique IP address, and each Linux box needs to point to a shared partition. Okay - no problem.

Setting up GroupWise on the environment is, to say the least, challenging. Here is the info I have so far:



OES Cluster server:

the NSS volume is:

Of course, only ONE Linux server has access to it at any time... so installation is going to be interesting."

And here's the response from Morris Blackham ...


Here are some suggestions:

1. Your GRPW_CLUSTER_DATA_SERVER.colo.acme, I assume, is the Cluster Resource that was created when you created the NSS volume. It already has a secondary IP address assigned; the primary IP addresses belong to the physical nodes. The GroupWise agents (MTA/POA) should be configured to use that secondary address.

2. Let's assume you are creating a new GW system. My suggestion would be to have the cluster resource (nss vol) online to one of the nodes, create a directory and copy the GroupWise CD to it. Then run the GW install from this directory. Be sure to select the "installing to a cluster" option, or run the install script with a --cluster option.

Note: You will need to have the Linux version of ConsoleOne installed on the node before running the GroupWise install. You can get it from
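The copy-and-install step above can be sketched as follows. The volume name GWVOL, the mount point /media/nss/GWVOL, the software directory name, and the media mount point /mnt/cdrom are all assumptions - substitute your own names and paths:

```shell
# With the cluster resource online on this node, copy the GroupWise
# media onto the clustered NSS volume so it is reachable from any node
# that later hosts the resource.
mkdir -p /media/nss/GWVOL/grpwise/software
cp -r /mnt/cdrom/* /media/nss/GWVOL/grpwise/software/

# Run the install from the copy on the shared volume, choosing the
# "installing to a cluster" option in the menu (or, per the article,
# passing a --cluster option to the install script).
cd /media/nss/GWVOL/grpwise/software
./install --cluster
```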

3. Run through the install to create a Domain and Post Office. When you get to the agent configuration process, it will ask where your cluster resource mount point is. This is where the NSS volume is mounted on the physical node. The agent configuration will then point the log path to the clustered volume and put the grpwise start script there, so that it is available across all cluster nodes.
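To answer the mount-point question, you can check where the clustered volume is mounted on the active node. On OES Linux, NSS volumes normally mount under /media/nss/&lt;VOLUME_NAME&gt;; the volume name here is an example:

```shell
# With the cluster resource online on this node, confirm the mount point
# of the clustered NSS volume (e.g. /media/nss/GWVOL):
mount | grep -i nss
```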

4. When agents are installed and configured, test to see if they will run by manually starting the agents. To do this, go to /opt/novell/groupwise/agents/bin and run

 ./gwmta @ --show &

Do the same for the POA - they should run correctly.

Unload them and test the grpwise start script: /etc/init.d/grpwise start

Verify that they are running: /etc/init.d/grpwise status

Stop GroupWise: /etc/init.d/grpwise stop

5. When you are ready, you'll have to add the agent start commands to the cluster resource load/unload scripts using iManager. It's a good idea to start each one separately:

/etc/init.d/grpwise start <mta> or start <poa>

To find the agent object names, go to /etc/opt/novell/groupwise and cat/edit the gwha.conf file. Each configured agent will have a section header (such as [utah] or [provo.utah] if your domain and post office are called "utah" and "provo"). The start line for the POA is then /etc/init.d/grpwise start provo.utah. Also be aware this is case-sensitive, so match the case in the gwha.conf file.
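For reference, a gwha.conf with the "utah" domain and "provo" post office from the example might look like this. This is a sketch: the exact paths and values will match what the install wrote on your system, not what is shown here.

```
[utah]
server = /opt/novell/groupwise/agents/bin/gwmta
command = /etc/init.d/grpwise
startup = utah.mta
delay = 2
wait = 10

[provo.utah]
server = /opt/novell/groupwise/agents/bin/gwpoa
command = /etc/init.d/grpwise
startup = provo.poa
delay = 2
wait = 10
```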

So, the two lines in the cluster resource load/unload scripts would be:

exit_on_error /etc/init.d/grpwise start utah
exit_on_error /etc/init.d/grpwise start provo.utah


ignore_error /etc/init.d/grpwise stop utah
ignore_error /etc/init.d/grpwise stop provo.utah

6. Now to get the second node configured ...

Exit the GW agents if they are running, then migrate the cluster resource to the other node:

cluster migrate GRPW_CLUSTER_DATA_SERVER gwnode2.colo.acme

Go to the software directory and install the agent RPMs (do not run the configure option). Then at the main install screen, you should see the "Import cluster data" option. Select that, and it will create the gwha.conf file and do some other magic so the agents will run on this second node.
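The second-node steps can be sketched as follows, assuming the software copy was made to /media/nss/GWVOL/grpwise/software on the clustered volume as in step 2 (again, substitute your own resource, node, and path names):

```shell
# Migrate the cluster resource so the shared volume is mounted here:
cluster migrate GRPW_CLUSTER_DATA_SERVER gwnode2.colo.acme

# Install the agent RPMs from the copy on the shared volume, skipping
# the configure option, then choose "Import cluster data" from the
# main install menu to generate gwha.conf on this node.
cd /media/nss/GWVOL/grpwise/software
./install
```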

