Summary:
How to Set Up a Free VMWare ESXi 6.7.x or 7.x Production-Licensed Hypervisor Host with 2 Clustered Micro Focus OES2015 or OES2018 Linux VMWare Guest Nodes in a LAB Environment
This article describes how to set up a LAB environment with a free VMWare ESXi Hypervisor host license for clustering 2 VMWare guest nodes running Open Enterprise Server, also called OES. The scenario involves only one VMWare ESXi 6.7 Update 2 Hypervisor Host, which is used without a vCenter Server since there is only 1 ESXi Hypervisor Host to manage and there is no SAN. The local disks on the VMWare ESXi Hypervisor host are used in this scenario to create 2 shareable disks for clustering the 2 nodes.
Any fairly recent version of the VMWare ESXi Hypervisor Host will work with the steps in this article. The article assumes that you have installed and configured at least one ESXi Host with enough RAM and disk space to accommodate at least a 2-node OES cluster along with server VMs for other services such as DNS, NTP, patching, etc.
For LAB purposes, the ESXi Host's local disk will be used to create and configure shareable (clusterable) volumes to be shared between the 2 clustered OES VM guest nodes, instead of using a SAN, since most LABs do not have a SAN. This is configured on the VMWare ESXi Hypervisor host from a workstation running any recent version of Windows, connecting with PuTTY. Once connected, the esxcli command-line interface is used to configure settings.
The option to configure the VMWare ESXi Hypervisor host and VMs using a browser-based GUI on the Windows workstation is also described. Finally, once the ESXi host is prepared for clustering drives, details are provided on installing and configuring basic clustering of 2 Micro Focus Open Enterprise Servers running OES 2015. The same steps also apply to the latest version, OES 2018.
Additional guidance is provided through official documentation covering best practices, known issues, planning information, and the use of SANs in general, vSANs, or RDM disks. Links to official documentation on clustering OES on VMWare ESXi are included in the References section at the end of the document.
VMWare ESXi Hypervisor Host
In this LAB scenario, only one VMWare ESXi 6.7 Update 2 Hypervisor Host is used, without a vCenter Server, since there is only 1 Host to manage and there is no SAN. Any fairly recent version of the VMWare ESXi Hypervisor Host will work here, so if your VMWare ESXi Host version is not identical, this article should still be applicable. The VMWare ESXi Hypervisor Host is referred to as the ESXi Host throughout the rest of the document. The ESXi Host is configured with enough RAM and disk space to accommodate at least a 2-node cluster, along with other VMWare guest servers for DNS, NTP, patching, etc. Refer to the links at the end of the article for installing and configuring the VMWare ESXi Host. For LAB purposes, the ESXi Host's local disk will be used to create and configure shareable (clusterable) volumes to be shared between the 2 clustered OES VM guest nodes, instead of using a SAN.
Connecting Workstation
VMWare Workstation 15.5 is used on the front-end workstation as a GUI to connect to the VMWare ESXi Host, to start and stop VMs, to put the Host in maintenance mode, and to restart the Host as required. All of this and more can also be done from the free browser-based VMWare ESXi GUI if you do not have VMWare Workstation. This free browser-based VMWare ESXi GUI can configure all aspects of the VMWare ESXi host and guests and is used in this article to configure multi-writer disks for clustering VMWare guest nodes. The esxcli command line can also be used directly on the ESXi Host as another method of configuring it; the command line is accessible using PuTTY from the connecting workstation. More information is provided later in this article.
VMWare Guest Nodes for Clustering
Finally, the 2 VMWare guests running either OES2015 or OES2018, which are outlined in the prerequisites below, are in the same eDirectory tree but are not yet clustered. Keeping everything in one subnet keeps this scenario simple.
Steps to cluster the 2 OES nodes are provided last, after the ESXi Host has first been prepared to allow and use VMWare shareable drives.
Note: You cannot mix OES2015 nodes and OES2018 nodes in the same cluster. The steps for configuring the VMWare Host and clustered guests are, however, the same for either version of OES.
From the connecting workstation, open PuTTY to connect to and log in to the VMWare ESXi Host.
Next, allow multiple VMs to write simultaneously to the same VMFS-backed disk by enabling the multi-writer flag, as shown in example 1.1 below, at the esxcli command line on the ESXi Host.
esxcli system settings advanced set -i 1 -o /VMFS3/GBLAllowMW
Note: This allows multi-writer locks (GBLs). The GBLAllowMW config option is disabled by default on the ESXi host.
Execute the command below to check the value that was set, and confirm that the Int Value in the output is 1 (enabled) rather than 0 (disabled). Refer to example output 1.2 shown below.
1.2 - The Int Value line indicates whether the option is enabled (1) or disabled (0, the default)
esxcli system settings advanced list -o /VMFS3/GBLAllowMW
Path: /VMFS3/GBLAllowMW
Type: integer
Int Value: 1 =========== > Value set to 1 from default 0
Default Int Value: 0
Min Value: 0
Max Value: 1
String Value:
Default String Value:
Valid Characters:
Description: Allow multi-writer GBLs.
Note: Run esxcli system settings advanced set -i 0 -o /VMFS3/GBLAllowMW to disable this option on the ESXi Host if a SAN is used instead at some point in the future.
Note: Typically, in a production environment, you would enable the above config option on ALL the VMWare ESXi hosts that are part of the clustered environment for redundancy and availability, and the multi-writer settings would only apply to clustered SAN drives/volumes instead of shareable local ESXi host vmware drives/volumes. Refer to specific VMWare and OES details and documentation for clustering SANs on VMWare OES guests in VMWare ESXi in the “References” section at the end of the document.
Since we only have one ESXi host in this LAB scenario for testing in a free VMWare ESXi environment, the above setting only applies to that one VMWare ESXi host and its local disks.
Note: Make sure that the shared drives used by all the OES cluster node VMWare guests, including the SBD volume and the shared (clusterable) volumes, are created on the ESXi local VMFS file system and that they are thick provisioned, eagerly zeroed VMWare drives, not thin provisioned VMWare drives.
You can convert pre-existing thin VMWare drives to thick with Option 2 below, or you can create new thick VMWare drives with Option 1 below. We will create one shareable VMWare drive/volume for apps and data called SharedVol, and one shareable VMWare drive/volume for the SBD volume of the cluster.
In a browser, go to https://ipaddressofESXiserver, replacing ipaddressofESXiserver with the IP address of your ESXi Host; this takes you to the VMWare ESXi GUI. Log in with the proper credentials.
Select one of the 2 VMWare guest nodes. Shut down the VMWare guest node.
Click on the virtual machine in the inventory.
Click Edit Settings to display the Virtual Machine Properties dialog box.
You should see a screen similar to screenshot 2.3 below. Click on Add Hard Disk.
Select “New Standard Hard Disk” and the disk is added to the list of disks.
Click on the right arrow to the left of the new hard disk to expand the details of the disk. Modify the disk size, disk name, and location as appropriate for the SBD disk/volume first.
Modify the disk provisioning to be thick provisioned eagerly zeroed and note the controller and disk location. See screenshot 2.4 below.
Finally, set the Sharing option to multi-writer sharing as in the screenshot below. Note the Controller Location and, to the right of it, the SCSI controller and disk placement highlighted in yellow. That information will be required for step 3.2 below.
Note – Disk Mode above should be set to “independent persistent” according to best practices for OES. This is a mistake in the screenshot.
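As an alternative to the GUI steps in Option 1, the same thick, eagerly zeroed disks can be created directly from the ESXi command line with vmkfstools. This is only a sketch; the folder name (Sharedcluster), disk names, and sizes below are examples for this LAB and should be adjusted to your environment. Disks created this way still need to be attached to each OES node afterwards as an existing hard disk, with the independent persistent disk mode and multi-writer sharing described above.
# Create a folder on the local VMFS datastore to hold the shared cluster disks
mkdir /vmfs/volumes/datastore1/Sharedcluster
# Create a small eager-zeroed thick disk for the SBD (example size: 1 GB)
vmkfstools -c 1G -d eagerzeroedthick /vmfs/volumes/datastore1/Sharedcluster/SBD.vmdk
# Create a larger eager-zeroed thick disk for shared apps/data (example size: 50 GB)
vmkfstools -c 50G -d eagerzeroedthick /vmfs/volumes/datastore1/Sharedcluster/SharedVol.vmdk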
In a browser, go to https://ipaddressofESXiserver, replacing ipaddressofESXiserver with the IP address of your ESXi Host; this takes you to the VMWare ESXi GUI. Log in with the proper credentials.
Select one of the 2 OES nodes and shut down the VMWare guest node.
Right-click the virtual machine in the inventory.
Click Edit Settings to display the Virtual Machine Properties dialog box.
Click the Hardware tab and click the arrow to the left of the hard disk in the Hardware list to expand the details of the hard disk.
Note: The disk will either be thin or thick. If thin, the drive/volume will need to be converted to thick.
Click Cancel to exit from Virtual Machine Properties dialog box.
Note - If the drives/volumes to be shared are already all thick, continue to step 3 below. To convert thin VMWare drives/volumes to thick, eagerly zeroed drives, continue with step vii.
Browse to where the vm guest is stored and find the associated .vmdk file which is the drive/volume.
Right-click the .vmdk file, and click Inflate. The Inflate option converts the disk to thick provisioned.
You may be able to skip the step of reloading the .vmx file, since it was not necessary in this scenario after inflating the thin VMWare drives. However, VMWare recommends it in case of certain issues; reload the .vmx file only if required. For more information, see Reloading a vmx file without removing the virtual machine from inventory (1026043).
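If you prefer the command line, the Inflate and .vmx reload steps can also be done from a PuTTY session on the ESXi Host. This is a minimal sketch; the disk path below is a hypothetical example, and vmkfstools must be pointed at the descriptor .vmdk file, not the -flat file.
# Inflate a thin-provisioned disk to eager-zeroed thick (equivalent to the GUI Inflate option)
vmkfstools -j /vmfs/volumes/datastore1/OES2015Viv/OES2015Viv_1.vmdk
# If a .vmx reload is required (see KB 1026043): find the VM ID, then reload that VM
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/reload <vmid>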
Note: Create a new folder outside the OES VM folder structures for the shared drives:
You should create a new subfolder called "Sharedcluster" in the datastore1 VMFS file system, as in the steps for Option 1 above, and then browse to the shared SBD and data volumes one at a time using the Datastore browser. Right-click each .vmdk file created for clustering and select "Move", as in screenshot 2.6 below. Move the disk to the new Sharedcluster folder in the root of datastore1:
Note: It is highly recommended that the folder you create is not in the same folder structure as either of the 2 OES nodes. Create the “Sharedcluster” folder outside of the 2 OES VM folder structures.
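If you would rather move the shared disks from the command line than with the Datastore browser, vmkfstools can rename a virtual disk, relocating the descriptor and -flat files together when the new name points to a different folder on the datastore. A sketch under the assumption that the SBD disk currently sits in the OES2015Viv VM folder and that the Sharedcluster folder already exists; adjust names and paths to your own layout.
# Move (rename) the shared SBD disk into the Sharedcluster folder at the root of datastore1
vmkfstools -E /vmfs/volumes/datastore1/OES2015Viv/SBD.vmdk /vmfs/volumes/datastore1/Sharedcluster/SBD.vmdk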
If the Inflate option is grayed out or not available, it indicates that the virtual machine is not powered off or that the drive is not thin provisioned.
There should be no snapshots and the conversion is performed on the base disk.
Note: If the settings were not manually manipulated (only the GUI was used), there should be no need to correct the files, but it is still a good idea to verify them.
The VMWare ESXi Host stores the VMWare guest files on the ESXi Host VMFS file system, in a subfolder called datastore1 by default. There you will find folders matching the names of each of the VMWare guests, including the .vmx files you need to verify and possibly modify. The basic underlying OS on the VMWare ESXi Host is BusyBox, which provides very basic Linux commands such as ls, cat, cd, mkdir, etc., along with the esxcli command line consisting of various ESXi commands for configuring the VMWare ESXi Host.
From a PuTTY session on your connecting workstation, connect and log in to the ESXi Host. Enlarge the PuTTY session window by dragging its corners until it is about the size of a normal screen.
Type "cd /vmfs/volumes/datastore1" (or follow the screenshots below for other CLI methods to get to the VM guests' folders) and press Enter. You should see output similar to the following screenshots:
Type “ls” to get a directory listing of all your VMWare guests, which we’ll refer to as VMs going forward. You should see output similar to the blue output on the screen below:
Type "cd VMName" (VMName being case sensitive) to move into the folder of the OES VMWare guest node. Referring to the example above, "cd OES2015Viv" is used in this scenario since it is the first VM to be configured for shared, clusterable VMWare drives/volumes.
Type “ls” to see the list of files in the OES VM guest folder. You should see output similar to the following screenshot in 3.3:
Type "cat vmname.vmx" (replacing vmname with the name of the VM guest) to display the contents of the .vmx file. Find and note the placement of the following settings in the file, which are associated with the thick, eagerly zeroed drives set up at the beginning of this article to be shared for the cluster. There are 2 settings to verify, as seen below.
Note that a multi-writer statement should exist for each shared volume you created. If the statements do exist, verify that they correspond with the correct drive(s) and controller(s) placement noted in your VMWare drive settings on the ESXi Host.
In this scenario, we have created 2 shared VMWare disks/volumes: one for the SBD, which is required for clustering, and one for shared data/apps. Configuration example 3.4 below shows 2 shared drives; there will be only 2 multi-writer lines in the .vmx file in our scenario. Refer back to the notes made on the proper controller and disk placement of your ESXi Host disks, as outlined in section 2 above, and verify they are correct in the .vmx file for each OES cluster node.
disk.locking = "FALSE"
scsi1:0.sharing = "multi-writer"
scsi1:1.sharing = "multi-writer"
If the settings are missing or are not correct, either add or modify the settings using the vi editor in the next step and refer to the screenshot below for the 3 lines which should be in the .vmx file. See screenshot 3.5 below.
If the values were not properly added by the GUI process, you must manually add the settings to the .vmx file using the vi editor. Type "vi vmname.vmx" (replace vmname with the name of your VM guest) and press Enter. Press the "i" key to enter insert mode and start editing.
Move the cursor with the down arrow to the location shown in the screenshots above and add the settings manually. Type the settings exactly as they appear in the screenshot for your scenario; note that case sensitivity is important. Once finished, press the "Esc" key, then type ":wq" and press Enter to save and quit the vi editor.
Type "cat vmname.vmx" to list the .vmx file and verify it is correct. Once it is correct, you are ready to test the volumes to make sure they are shareable (clusterable), and if everything works, move on to creating the OES cluster.
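A quick way to check just the relevant lines instead of reading the whole file is to filter the .vmx output with grep; this simply narrows the display to the disk locking and sharing entries:
# Show only the disk-locking and multi-writer sharing lines from the node's .vmx file
grep -iE 'disk.locking|sharing' vmname.vmx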
In the browser-based ESXi GUI, highlight one of the OES VMWare cluster node guests and click the right-arrow (">") button on the penguin icon to start the VM. If the OES cluster node guest starts without errors, start the 2nd OES cluster node guest. See the following example of starting a VM in the ESXi GUI in screenshot 4.1:
If you receive an error while opening either VM, such as "Failed to open /vmfs/volumes/4f709747-0ec9e0a0-219b-001e0bccdf5c/test1/test1_1-flat.vmdk" or "Failed to lock the file (67) (0x2017)", or something similar, there is likely a syntax issue in the .vmx file, or one of the volumes is still not thick.
Try testing the 2nd OES VM node by highlighting it and clicking the right-arrow (">") button on the penguin icon to start the VM. If you receive the same error as above, shut down both VMs, re-verify that the VMWare drives are thick by reviewing the sections above, and, in the PuTTY session connected to the ESXi Host, review each of the 2 nodes' .vmx files to make sure they match the structure and syntax of the screenshots provided above.
If both OES VMs load successfully and there are no errors, you are ready to set up OES clustering with the shared drives on both nodes in Step 5 below.
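The same power-on test can also be run from the PuTTY session on the ESXi Host with vim-cmd, which is handy if the GUI is slow; <vmid> is the numeric ID reported by the first command:
# List all registered VMs with their IDs
vim-cmd vmsvc/getallvms
# Power on an OES node by its VM ID, then check its power state
vim-cmd vmsvc/power.on <vmid>
vim-cmd vmsvc/power.getstate <vmid>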
On the first OES VM node, make sure the OES 2015 or 2018 Add-On .ISO is connected to the VM by modifying the VM settings.
Right click on the OES VM node desktop and select “Open in Terminal” to open a terminal screen.
Increase the size of the terminal screen by pulling on the outer edges of the screen.
Note - The shared VMWare drives that were created need to be initialized in order to prepare them for clustering. Type “nssmu” and press enter.
Select “Devices” on the left and press “Enter”. See screenshot 5.1 below:
Select the first shareable drive to initialize and press the "F3" key; you should see a screen similar to the following. Note: Make sure you select the proper drive to initialize, as this is irreversible. See screenshot 5.2 below. Type "Y" for Yes to initialize.
Select the Partitioning scheme to be GPT by using the down arrow and then press “Enter” as in the screenshot 5.3:
You’ll see the newly initialized drive as in screenshot 5.4:
ix- Press F6 to make the drive shareable as in the screenshot below:
x- Repeat steps vi, vii, and viii above for the 2nd shareable VMWare drive created.
xi- Once you have completed initializing and making both drives shareable for clustering in OES, press the “ESC” key until you exit NSSMU.
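The same device initialization can be done from the OES terminal with the nlvm utility instead of the NSSMU menus. A minimal sketch, assuming the two shared VMWare disks appear on the node as sdb and sdc; verify the device names with the list command first, because initialization is irreversible:
# Show the devices NSS/NLVM can see and identify the two shared VMWare disks
nlvm list devices
# Initialize each shared disk with a GPT partitioning scheme and mark it shareable for clustering
nlvm init sdb format=gpt shared
nlvm init sdc format=gpt shared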
Open YaST and click on "Open Enterprise Server" in the left pane, followed by clicking on "OES Install and Configuration" in the right pane, as in screenshot 5.5 below:
In the Software Selection pane on the left, scroll down to find “Novell Cluster Services” and select it and then click on “Accept”. You should then see a screen such as the following in screenshot 5.6.
Scroll down the list until you see Novell Cluster Services and click on "Enable" beside Configuration, right underneath it. You should see output similar to screenshot 5.7 below.
Click on the "Novell Cluster Services (NCS)" header to configure clustering services. You will be prompted to log in as the admin of the eDirectory tree. Type in the distinguished admin account name and password.
Select "New Cluster" and type in the static IP address assigned to the cluster itself, not the IP address assigned to the VMWare guest node.
Select the drop-down to the right of "Select the device for the shared media". This is where you select the SBD shareable drive. Make sure to select the smaller of the 2 drives created, which is intended for the SBD. See screenshot 5.7 below.
Click “Next” on the Use OES Common Proxy User screen.
Click “Next” on the Start clustering services now screen.
Click "Next" on the Use the following configuration screen.
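Before moving on to iManager, you can confirm from the node's terminal that Novell Cluster Services started and that the node joined the new cluster, using the standard cluster commands available on OES cluster nodes:
# Show cluster membership and which nodes have joined
cluster view
# Show the state of the cluster resources (at this point typically only the Master_IP_Address_Resource)
cluster status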
Open a browser and launch “iManager”. Login to “iManager” and select “Clusters” in the left pane.
Click on “Add” in the right pane to add the new cluster node to the cluster.
Click on the “down arrow” to move into the Novell container and select the “Cluster” object.
The cluster is added to iManager.
Set up the shared storage pool/volume using the shared VMWare data drive created earlier for the cluster node. Click on "Storage" in the left pane.
Click on "Pools" under Storage in the left pane. Click on the magnifying glass to browse for the OES server node and select it to add it.
Type in the "Name" of the shared pool, e.g., SharedPool1. Click on "Next" to continue.
Type in a new “IP Address” for the new shared pool, an address different from the cluster node.
Click “Finish” to complete adding the shareable pool.
Finally, create the shared volume to be used by the pool. Click on “Volumes” under Storage in the left pane.
Click on “New” in the center pane to create the new volume.
Type the "Name" of the new shared data volume, e.g., SharedVol1, and click "Next".
Click to select the "SharedPool1" pool to associate it with the new shared volume and click "Next".
The OES Cluster configuration on VMWare ESXi Hypervisor is now complete!
To see and manage the new cluster, click on “My Clusters” under Cluster in the left pane. Click on the “Cluster object” in the right pane.
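As a final check from a terminal on either node, list the cluster resources and try migrating the new pool resource between the 2 nodes to confirm that both nodes really can access the shared VMWare disks. The resource name SharedPool1_SERVER reflects the typical default naming for a cluster-enabled pool and the node name is a placeholder; substitute the names from your own cluster:
# List the cluster resources and the node currently hosting each one
cluster resources
# Migrate the pool resource to the other node to prove shared access works, then re-check status
cluster migrate SharedPool1_SERVER <node2name>
cluster status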
References
https://kb.vmware.com/s/article/2107518
https://kb.vmware.com/s/article/2014832
Solved: Migrate VM's from XenServer - VMware Technology Network VMTN
How To Convert A Xen Virtual Machine To VMware (howtoforge.com)
https://giritharan.com/move-virtual-machines-from-xen-to-vmware/
https://kb.vmware.com/s/article/10051
https://www.vembu.com/blog/vmware-vsan-configuration-setup/
https://www.altaro.com/vmware/vsphere-vm-rdm/
https://www.novell.com/documentation/oes2015/pdfdoc/inst_oes_lx/inst_oes_lx.pdf#b9kmg9x
Cluster Services in a VMware environment documentation