Best way to upgrade to GW 18 from GW 14

We currently have a GW 14.2.2 system running on a single SLES 11 SP4 / OES 2015 SP1 VM. The techs who ported us over from a NetWare environment back in 2014 left the PO and MTA data structures on an NSS volume, but the agents run as Linux agents.

We have a Novell Messenger 2.2 application running on the same server as GroupWise.

In addition, we have a Mobility server running on a separate SLES 11 SP4 VM.

I want to upgrade everything to GroupWise 2018 and Messenger 2018. My understanding is that having the PO databases on an NSS volume is not optimal. My question is: should I try to upgrade the server in place? If so, in what order? Do I upgrade SLES first and then GroupWise, or GroupWise and then SLES?

Or should I spin up a separate SLES 12 server without OES, install GroupWise 2018 on it and then migrate the various databases from the current GroupWise server to the new one?

Can I upgrade the Mobility server first? Can I do an in-place upgrade of it to SLES 12 and then upgrade Mobility to the latest version?

Thanks,

Dan

  • Dan,

    Not sure who told you that GroupWise on NSS is not optimal.  It actually works quite well.  You should have salvage and atime turned off prior to using the volumes.

    I don't run Messenger, so you need to check on that. But I recently upgraded my GroupWise servers: I did an in-place upgrade from OES 2015 SP1 to OES 2018 SP1 and then upgraded GroupWise to 18.2. That was a fairly smooth process. The OES upgrade is documented, but you might want to spin up a test server and run through the process to increase your comfort level before trying it on production. If the server is running a GWIA or WebAccess, these two KB docs might be helpful: https://support.microfocus.com/kb/doc.php?id=7023130 and https://support.microfocus.com/kb/doc.php?id=7012936
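
    A rough sketch of the GroupWise half of that from a shell, once the OES upgrade is done (the ISO name and mount point are made up for illustration; the GroupWise 18 Linux media ships an install.sh that drives the upgrade):

        # Check which GroupWise packages are currently installed
        rpm -qa | grep -i groupwise

        # Mount the GW 18 media (hypothetical ISO path) and run the installer
        mount -o loop /srv/iso/gw18.2_full_linux.iso /mnt/gw18
        cd /mnt/gw18 && ./install.sh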

    Then I set up a new SLES 12 server and installed Mobility on it. I find it easier to spin up a new Mobility server than to upgrade the old one, because there is minimal downtime: I just point my DNS at the new server and most devices switch seamlessly. I have had a couple of phones where the user had to delete and re-add their account. Mobility is a client, so you need to upgrade your domains and POs before upgrading Mobility.
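
    If you take the new-server route, a simple sanity check around the DNS cutover might look like this (the hostname is an example; yours will differ):

        # Before the cutover: see which address devices currently resolve
        dig +short mobility.example.com

        # After repointing the A record at the new GMS server, check again;
        # it should now return the new server's address
        dig +short mobility.example.com

    Lowering the record's TTL a day or two before the move helps the switch propagate quickly.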

    Hope that helps.

  • Hi Dan,

    First, check the Hardware and Operating System Requirements for GroupWise 18 in the documentation.

    While your GroupWise license allows you to use SLES, it does not include SLES support. Like Ken, my preference is to use OES, and if you are using OES there is no point in using a native Linux file system for your GroupWise data: go with NSS.

    OES 2018 SP1 is based on SLES 12 SP3, which is a major upgrade from the SLES 11 SP4 you are currently running. For that reason I would recommend creating a new OES 2018 SP1 server, installing GroupWise 18, and migrating your data.

  • Thanks Ken and Kevin. I guess I was misinformed about NSS and GroupWise. I remember hearing about turning off salvage; I will have to look up what 'atime' is.

    Kevin, if I spin up a new OES2018 server for GroupWise 2018, would I use dbcopy to copy the files from the old server to the new one while the old server is still running, then install the GW 2018 program files on the new server, then shut down GW on the old server, then bring GW 2018 up on the new server?  I would want to change the IP of the new server to be that of the old server so everything still pointed the same, right?

    Also, the licensing issue - Kevin, if I read between the lines, it looks like I could use SLES by itself but would not have any support from MF for the OS, or I could use OES2018 in which case I would have support from MF for the OS as well as GroupWise, correct?

    Ken, I do have GWIA and WebAccess running on the same server.  Thanks for the TIDs.

    Dan

  • See here: https://www.novell.com/documentation/open-enterprise-server-2018/stor_nss_lx/data/bfvixn2.html  You would want to use the /noatime option on the volume that holds your GroupWise data.
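
    For reference, a minimal sketch of turning those attributes off from the NSS console (the volume name GWVOL is hypothetical; verify the exact switches against the linked documentation):

        # At the nsscon prompt on the OES server:
        nss /NoAtime=GWVOL
        nss /NoSalvage=GWVOL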

    Last time I migrated to a new server, I added a secondary IP to the existing server and reconfigured all my GW components to use that address. Then, after completing the migration, all I had to do was add the same secondary IP to the destination server and start GW. That made it an easy process.
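
    At runtime, that secondary address is a one-liner on SLES (interface name and address are examples only):

        # On the existing server, add the GroupWise service address
        ip addr add 192.0.2.25/24 dev eth0

        # After the migration, bring up the same address on the destination server
        ip addr add 192.0.2.25/24 dev eth0

    Note that an address added this way does not survive a reboot; persist it in the interface configuration once you are happy with it.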

    I used rsync to copy my post offices. I did an initial sync while GW was running on the source server, then a second pass shortly before I was going to shut down GW. Then I shut GW down and did a final pass, which in my case only took a few minutes. I already had GW installed on the new server, so as soon as the final pass was done, I started up GW. I've not used dbcopy for this, but Kevin may have some thoughts to add.
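
    A sketch of that multi-pass approach (the paths, hostname, and service name are placeholders; adjust them for your PO location and GroupWise version):

        # Pass 1: bulk copy while GW is still running on the source
        rsync -av /media/nss/GWVOL/po1/ root@newserver:/media/nss/GWVOL/po1/

        # Pass 2: shortly before cutover, pick up recent changes
        rsync -av /media/nss/GWVOL/po1/ root@newserver:/media/nss/GWVOL/po1/

        # Final pass: stop the agents first, then sync the last deltas
        rcgrpwise stop
        rsync -av --delete /media/nss/GWVOL/po1/ root@newserver:/media/nss/GWVOL/po1/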


  • Dan wrote:

    Kevin, if I spin up a new OES2018 server for GroupWise 2018, would I use dbcopy to copy the files from the old server to the new one while the old server is still running, then install the GW 2018 program files on the new server, then shut down GW on the old server, then bring GW 2018 up on the new server?  I would want to change the IP of the new server to be that of the old server so everything still pointed the same, right?

    A GroupWise migration is not at all difficult, but there is a lot to consider. Some careful up-front planning will pay large dividends and simplify future migrations.

    Have a look at this article I wrote: GroupWise Migrations – A Better Way

    After you have read it feel free to ask any follow up questions here.

    Many of us are doing as Ken suggested and assigning a second IP address to the server specifically for GroupWise. That way, when GroupWise is migrated to a new server, its IP address moves over with it. In your case, since you will be decommissioning your existing server, you could use its IP address for GroupWise on the new server: transfer your GroupWise data, shut down the existing server, and only then begin your GroupWise configuration on the new server.

    In my article I recommend storing your GroupWise data (domain and post office) on its own (virtual) volume. Not only does that eliminate issues when transferring your GroupWise data to a new server, but it also allows you to optimize performance by using a dedicated adapter to attach it to your server.

    Also, the licensing issue - Kevin, if I read between the lines, it looks like I could use SLES by itself but would not have any support from MF for the OS, or I could use OES2018 in which case I would have support from MF for the OS as well as GroupWise, correct?

    That is correct.

  • I'll split the difference and add my vote for upgrading the existing server and installing a new one for GMS.

  • Especially for GMS I agree. There is a path to upgrade a GMS environment from SLES 11 to SLES 12, but you have to take care of many steps, and if one of them fails (e.g. upgrading the database) you have to start from the beginning.

    If you want to avoid grey hair (or losing hair), start with a new server: new SLES 12, new trusted application key, new GMS. Maybe you will also adjust your disks, since you have learned from history.

    Diethmar

  • Now a small remark about GroupWise.

    As you can see, it is a matter of conviction: more specialists, more opinions.

    The easiest way is to do an in-place upgrade and then review your settings. A migration from OES 2015 to a new OES 2018 server is the next option and not too hard to handle.

    However, in many cases I prefer to start with a new SLES 12 (or 15) server without OES, using a native Linux file system, so that I do not have to take care of OES updates or issues. NSS is a great file system, but GroupWise does not use its advantages, and no other OES features will be used (unless you need, for example, a second eDirectory server). This migration can be a little more work, but don't be afraid!

    So it is still a matter of conviction ...

    Diethmar


  • Diethmar wrote:

    NSS is a great file system, but GroupWise does not use its advantages.


    How can GroupWise not use features like online pool moves, mirroring, online pool expansion, industry-leading performance, and outstanding stability and reliability absolutely unmatched by any existing native Linux file system?

    Not to mention that customers who own OES anyway can use it for GroupWise for free, including support. Not so with native SLES: to get support for it, you have to buy additional licenses.

    Quite frankly, I have no understanding for customers who own OES licenses and still opt to use native SLES with inferior file systems for GW. There are really *ZERO* reasons to do that, ever. And I'm not even starting on clustering, where two nodes are totally free and beat the hell out of any competitive offering.

  • Thanks, guys, for all the info. I think I will build a new server, as the current one is a little funky. The first thing I tried was to install a new OES 2018.1 server on VMware. Right away I found that VMware has no concept of an OES server, so I chose SLES 12 as the guest OS type. Should I have chosen SLES 15?

    When I do the dbcopy, I was going to run it from a Windows 10 workstation, copying from the source Linux mail server (GroupWise databases on an NSS volume) to the new destination OES 2018.1 server and its NSS volume. I'll do the first copy while the old mail server is running, then do the final copy after I bring the system down. Do you see any problems with that?
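
    (For what it's worth, dbcopy is normally run on the Linux servers themselves rather than driven from a Windows workstation. From memory, the two-pass flow looks roughly like the sketch below; the flags, and especially the date format for the incremental pass, should be verified against the GroupWise utilities documentation, and the paths are placeholders.)

        # First pass while the post office is live (-m migration mode, -p post office)
        ./dbcopy -m -p /media/nss/GWVOL/po1 /mnt/newserver/gw/po1

        # Second pass after the agents are stopped; -i copies only files
        # changed since the given date
        ./dbcopy -m -p -i 04-01-2019 /media/nss/GWVOL/po1 /mnt/newserver/gw/po1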

    Because I have the GWIA and the WebAccess gateway on the same server, I will need to move the IP address from the old server to the new one to deal with firewall issues. Maybe I could just shut down the old server and then bind its IP address to the new server as a secondary IP address? How difficult is that to do?
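
    (Binding the old address as a secondary is straightforward. On SLES 12 it can also be made persistent in the interface's sysconfig file; the interface name and addresses below are examples only.)

        # /etc/sysconfig/network/ifcfg-eth0
        IPADDR='192.0.2.30/24'     # the new server's own address
        IPADDR_1='192.0.2.25/24'   # the old GroupWise server's address as a secondary
        LABEL_1='gw'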

    We also have Novell Messenger running on the same server as GroupWise. Is it wise to put this on a separate server from the GroupWise server? If so, does Messenger have something similar to ngwnameserver, so that if the Messenger server gets a new IP, a starting Messenger client will look up that well-known DNS name and be redirected to the new Messenger server?

    Thanks for all your help,

    Dan