Migrate and Upgrade - OS and GW

I am looking to migrate our four servers running the po/mta/gwia to new servers, along with reducing the provisioned space of the servers.

Current setup is virtualized on ESXi 6.5.

GroupWise 18.0.2 (build 131493)

A) four servers, each with a post office, one also has mta/gwia

OES11 SP2, SLES 11 SP3

B) three servers running WebAccess behind a load balancer

SLES 12 ... not worried about these; they will upgrade to 18.1.1

C) four servers running GroupWise Mobile 14.2.something

SLES 11 SP4 ... not worried about these; will build one new server and consolidate existing users



Build four new OES 2018 SP1 servers for the four PO/MTA/GWIA roles.

Reduce the post office volume from 500 GB to 300 GB on each server.

Currently, all post offices are around 200-215 GB (users limited to a 1.5 GB max mailbox size).

Upgrade to GW 18.1.1.



  1. Create a new 300 GB virtual disk and attach it to each existing server with a post office.
  2. dbcopy data from the existing volume to the new volume (initial pass, agents still running).
  3. Build new servers for each post office, minus the GroupWise volumes.
  4. Shut down GroupWise on the existing servers.
  5. dbcopy data from the existing volume to the new volume (final incremental pass).
  6. Copy PO/MTA/GWIA config files to the new volume.
  7. Shut down the old servers and detach the new volume from the old servers.
  8. Attach the new volume to the new servers.
  9. Change the IP address of the new servers.
  10. Install GW 18.1.1, not configured.
  11. Copy PO/MTA/GWIA config files to their locations.
  12. Start GroupWise on the MTA servers.
  13. Start GroupWise on the remaining PO servers.
  14. Remove the old servers from the tree gracefully (boot, change IP, remove eDirectory, shut down).
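The two dbcopy passes (steps 2 and 5) can be sketched as below. This is a dry-run sketch, not a tested procedure: the paths, post office name, and the incremental date are placeholders, and the exact dbcopy options (`-m`, `-i`) should be verified against the GroupWise documentation for your version. It only echoes the commands by default.

```shell
#!/bin/sh
# Sketch of the two-pass dbcopy migration (steps 2 and 5 above).
# All paths are placeholders for your environment.
RUN=${RUN:-echo}   # dry run by default; set RUN= to actually execute

OLD=/media/nss/GW/grpwise/po1      # post office on the existing 500 GB volume
NEW=/media/nss/GWNEW/grpwise/po1   # same post office on the new 300 GB volume

# Pass 1 (step 2): bulk copy in migration mode while the agents still run.
$RUN dbcopy -m "$OLD" "$NEW"

# Step 4: stop the agents, then pass 2 (step 5) picks up files changed
# since pass 1 (date below is a placeholder; check the -i format in the docs).
$RUN rcgrpwise stop
$RUN dbcopy -m -i 05-01-2019 "$OLD" "$NEW"
```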






  •  wrote:

    I am looking to migrate our four servers running the po/mta/gwia to new servers, along with reducing the provisioned space of the servers.

    Your task consists of three distinct activities:

    1. Build new servers.
    2. Migrate GroupWise to your new servers.
    3. Update GroupWise.


    Build new servers.

    I'm glad you chose this option as VMware doesn't support upgrades to new major releases of the OS. Furthermore, upgrading an existing system does not necessarily ensure all new features are implemented.

    You can save quite a bit of effort if you use the Transfer ID method whereby the new server takes on the identity of the old server. This way there is no need to remove the old server from the tree and anyone who accessed the old server will now access the new one without any configuration changes at their end.

    Have a look at this thread:
    Need some help migrating from OES 2 to OES 2018 SP1

    Migrate GroupWise to the new server.

    The GroupWise documentation makes the distinction that migration is a separate activity from upgrade and that both are required when a new version of GroupWise must be run on a new server.

    I wrote a Cool Solution article that discusses GroupWise migrations which you may find helpful:
    GroupWise Migrations – A Better Way

    Update GroupWise.

    Check the documentation for the details but in essence this is what is needed:

    • Download and install the new GroupWise software onto the new server.
    • Ensure the previous version's Domain and Post Office are mounted on the new server. That's all that is needed from your old system! 
    • Complete the Transfer ID, if that is the approach you choose.
    • Ensure the old server has been shut down and there is no possibility that the GroupWise agents can be accidentally started.
    • Do the GroupWise configuration and specify that you are upgrading an existing system.
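    A rough shell sketch of the bullets above, on the assumption that the Linux GroupWise 18.1.1 download unpacks to an install.sh script (verify the exact file names against the installation guide); the tarball name, hostnames, and paths are all placeholders, and the commands are only echoed by default:

```shell
#!/bin/sh
# Dry-run sketch of the update bullets above, run on the new server.
RUN=${RUN:-echo}   # set RUN= to actually execute

# Previous version's domain and post office, already mounted here.
DOMAIN=/media/nss/GW/grpwise/gwdom
PO=/media/nss/GW/grpwise/po1

# Unpack the GW 18.1.1 software and run its install script
# (placeholder tarball name; script name per the installation guide).
$RUN mkdir -p /opt/gw-install
$RUN tar -xzf /tmp/gw18.1.1_linux_multi.tar.gz -C /opt/gw-install
$RUN /opt/gw-install/install.sh

# Make sure the old server's agents can never start again, then configure
# the new system as an *upgrade* of the existing domain/post office.
$RUN ssh old-server 'rcgrpwise stop; chkconfig grpwise off'
```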
  • With Exchange we just spin up the new server and migrate the mailbox from one store/DB to the other. If we are running GW 18.0 and want to move to GW 18.1, is that possible? It seems like a stateless upgrade would be better than working all night or the weekend to get a migration done. We are running on VMware and have the SLES Linux version of GW.

    Has anyone tested keeping the GW data on an NFSv4.x or NFSv3 volume? I did read that many years ago, with GW 8 for Linux, you could corrupt your mail DB by trying to use NFS, but that is a decade or so in the past.

    It also seems sad that there is so much going on in Linux with clustered file systems while GW still seems stuck in the NetWare days. Not saying NetWare was bad: NetWare ACLs are superior to MS NTFS ACLs, and VAX clustering was better than anything we have today. History repeats itself, and in IT it just takes a very long time to repeat. LOL

  • You can do that, if you really think you have to, but that's just another step in making a trivially easy task (updating OES with GroupWise on it) needlessly complex.

    If I were tasked with that update, it would take at worst 2 hours, with a total downtime of 10 minutes at most.

    What nobody should *EVER* even think of is running the GroupWise databases on a remote filesystem, let alone NFS with its virtually non-existent locking.

    If simply updating the OS and GroupWise isn't an option for whatever reason: with VMware you simply set up a new server, detach the old GroupWise disk from the old server, attach it to the new one, reinstall the GW software, and point it at the data. Done.

    *Assuming* that, as this is OES, a real filesystem is in use (aka NSS): if you want new disks, you can first move the pools (online!) to another disk with a different size.
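    The online pool move could be sketched roughly as below, assuming NSS pools managed with NLVM on OES. The pool and device names are placeholders, and the exact `nlvm move` syntax varies by OES release, so verify it against `man nlvm` before use; the script only echoes the commands by default.

```shell
#!/bin/sh
# Dry-run sketch of an online NSS pool move to a new, smaller disk.
RUN=${RUN:-echo}   # set RUN= to actually execute

$RUN nlvm list devices        # find the new disk (e.g. sdc)
$RUN nlvm list pools          # confirm the pool name (e.g. GWPOOL)

# Start the move; the pool stays online while data is copied across.
# (Placeholder names and syntax -- check man nlvm for your release.)
$RUN nlvm move GWPOOL device=sdc

# Re-check the pool until the move reports complete.
$RUN nlvm list pool GWPOOL
```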




    Welcome to the Micro Focus forums.

    One of the nice features on this platform is the ability to mark one post in a thread as the solution, and we encourage everyone to make use of it. This makes it easy for anyone to zero in on the solution without having to read every post. But this doesn't work if new issues are raised in an existing thread!

    We have always had a policy of one issue per thread. Even if you have a similar issue, the correct procedure is to start a new thread and, if necessary, reference the existing thread.

    I can move your post and start a new thread for you. From there we can explore your issues.

    What would you like your subject to be?

  • Massimo,

    I would rather move users to a new server in a controlled manner than BAM, you're on a new version. If we can move them back, that is even better. I prefer not to work nights or weekends on upgrades. If it takes more time, that is fine, as it should not be disruptive.

    We do not have OES; we have SUSE, and that does not support upgrading online like Cisco Cat 6Ks or some of the big-iron systems. It seems many Linux distros do not even let you do an offline upgrade from version 6 to 7 because they change too much.

    Running on a remote file system in the NFSv2 days was just stupid. If we can't use any remote file system today, that is fine. NFSv3 and NFSv4 are used by VMware to host virtual servers as a clustered file system with no corruption; they handle file locking at the hypervisor level, and I am fine with that. Micro Focus would have to do something similar with GW if it were run against a clustered/shared file system, or use that vendor/distro's preferred way of file locking on the client and server. NFSv4 and above are very good (https://en.wikipedia.org/wiki/Network_File_System).

    I have read some of the upgrade guides, and I have seen mistakes made where config files are copied from version x to version y, so I am leery of that; I know many have done it, but I have been burned in the past.

    Thanks for the input and suggestions.