How to Set Up a Free VMware ESXi 6.7.x or 7.x Production-Licensed Host with 2 Clustered OES VM Nodes


Scenario

VMware ESXi Host

In this LAB scenario, only one VMware ESXi 6.7 Update 2 Host is being used, without a vCenter Server, since there is only one Host to manage and there is no SAN. Any fairly recent version of VMware ESXi applies to this situation, so if your version is not the same, this article should still be applicable. The VMware ESXi Host will be referred to as the ESXi Host going forward in this document. The ESXi Host is configured with enough RAM and disk space to accommodate at least a 2-node cluster, along with other VMware guest servers for DNS, NTP, patching, etc. Refer to the links at the end of the article for installing and configuring a VMware ESXi Host. For LAB purposes, the ESXi Host's local disk will be used to create and configure clusterable volumes instead of a SAN.

Connecting Workstation

VMware Workstation 15.5 is being used on the front-end workstation as a GUI to connect to the VMware ESXi Host, to start and stop VMs, to put the Host in maintenance mode, and to restart the Host as required. All of this and more can also be done from the free browser-based VMware ESXi GUI if you don't have VMware Workstation. That free browser-based GUI can configure all aspects of the ESXi Host and its guests, and it will be used in this article to configure multi-writer disks for clustering VMware guest nodes. The esxcli command line, accessible with PuTTY from a connecting workstation, will also be used directly on the ESXi Host as another way of configuring it. More information is provided in this article below.
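
PuTTY requires SSH to be enabled on the ESXi Host first. As a minimal sketch, SSH can be toggled in the browser GUI under Host > Actions > Services, or enabled from the ESXi console shell as follows:

    # Enable and start the SSH service on the ESXi Host (run from the console shell)
    vim-cmd hostsvc/enable_ssh
    vim-cmd hostsvc/start_ssh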

VMware Guest Nodes for Clustering

Finally, the 2 VMware guests running either OES2015 or OES2018, which are outlined in the prerequisites below, are in the same eDirectory tree but are not yet clustered. Keeping everything in one subnet keeps things simple in this scenario.

Steps to cluster the 2 OES nodes are provided below, after first preparing the ESXi Host to allow and use VMware shareable drives.

 

Note: You cannot combine OES2015 nodes and OES2018 nodes in the same cluster. The steps for configuring the VMware Host and the clustered guests are, however, the same for either version of OES.


Prerequisites – It is assumed that you have the following already installed and configured:


  1. A free VMware ESXi production-level license (for LAB use only), either the standard free license or a license through VMUG (VMware User Group), applied to the VMware ESXi Host. Information on acquiring the free VMware ESXi Host license is included in the links at the end of this article.

 

  2. A dedicated and functional VMware ESXi Host with enough RAM and hard disk space to accommodate multiple VMs. Make sure your ESXi Host is updated to the latest version and patch level if possible to avoid known issues. Most versions of VMware ESXi should be applicable, although more recent versions provide the most stability and best performance. Links to documentation for installing VMware ESXi are provided at the end of this article.

 

  3. A workstation to connect to and manage the Host. In this scenario, a Windows 10 workstation is used to connect to the ESXi Host with VMware Workstation, but managing VMware VMs can also be done using the browser-based ESXi GUI. PuTTY is installed on the Windows 10 workstation to access the ESXi CLI on the Host.


  4. 2 or more VMware guests running either OES2015 or OES2018 in the same eDirectory tree. The OES2015 or OES2018 VMware guests need to be set up according to the guidelines and best practices set out in the references and documentation links at the end of this article. It is highly recommended that the VMware guest nodes be set up with DNS and NTP as per the guidelines, but if internal DNS is not feasible in this LAB scenario, you can use hosts files. NTP, however, is not optional; it can easily be set up on one of the OES nodes so that time is at least synchronized properly between the OES nodes, which is critical for a functioning OES cluster (see the sketch after this list).

 

  5. If possible, provide Internet access for the OES VMware guest nodes being clustered so that the latest patches and updates can be applied. This also makes it possible to use an NTP server already available on the Internet for keeping time synchronized between the OES nodes.

 

  6. A valid OES subscription for updating and patching, a license for the OES servers, and the OES2015 or OES2018 Add-On .ISO available on the OES VMware nodes for installing and configuring Micro Focus OES Clustering.
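
As a minimal sketch of the NTP arrangement mentioned in item 4, assuming the OES nodes are SLES-based and run ntpd, and that oesnode1.lab.local is a hypothetical name for the node acting as the time source:

    # /etc/ntp.conf on oesnode1 (the time source; the hostname is a placeholder)
    server pool.ntp.org iburst      # use an Internet pool if reachable
    server 127.127.1.0              # otherwise fall back to the local clock
    fudge  127.127.1.0 stratum 10

    # /etc/ntp.conf on the other OES node(s): synchronize from oesnode1
    server oesnode1.lab.local iburst

Restart the time service afterward (rcntp restart or systemctl restart ntpd, depending on the OES version) and verify synchronization with ntpq -p.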


1. Prepare the VMware ESXi Host for multi-writer mode on the local disks

 

  • From the connecting workstation, open PuTTY and log in to the VMware ESXi Host.
  • Next, allow the multi-writer locking mode (which relaxes the simultaneous-write protection normally provided by VMFS) by setting the GBLAllowMW flag with the following command at the ESXi Host command line.

esxcli system settings advanced set -i 1 -o /VMFS3/GBLAllowMW

Note: This enables the multi-writer lock. The GBLAllowMW config option is disabled by default on an ESXi host.

  • Execute the command below to check the value that was set and confirm that the Int Value is 1 (enabled). Example output is shown below; the Int Value line shows whether the option is enabled (1) or disabled (0, the default).
         
    esxcli system settings advanced list -o /VMFS3/GBLAllowMW

         Path: /VMFS3/GBLAllowMW
         Type: integer
         Int Value: 1             <=== value set to 1 from the default 0
         Default Int Value: 0
         Min Value: 0
         Max Value: 1
         String Value:
         Default String Value:
         Valid Characters:
         Description: Allow multi-writer GBLs.

 Note: Run esxcli system settings advanced set -i 0 -o /VMFS3/GBLAllowMW to disable this option on the local ESXi Host disks if a SAN is used instead at some point in the future.
 
 

Note: Typically, in a production environment, you would enable the above config option on ALL the VMware ESXi hosts that are part of the clustered environment, for redundancy and availability, and the multi-writer settings would apply only to clustered SAN drives/volumes rather than to shareable local ESXi host VMware drives/volumes.

Since we only have one ESXi host in this LAB scenario for testing in a free VMware ESXi environment, the above setting applies only to the one VMware ESXi host and its local disks.
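
If more ESXi hosts are added later, the same setting could be applied to each of them over SSH; a minimal sketch, where the host names are placeholders for your environment:

    # Apply and verify the multi-writer setting on each ESXi host
    for HOST in esxi01.lab.local esxi02.lab.local; do
      ssh root@"$HOST" "esxcli system settings advanced set -i 1 -o /VMFS3/GBLAllowMW"
      ssh root@"$HOST" "esxcli system settings advanced list -o /VMFS3/GBLAllowMW | grep 'Int Value'"
    done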


2. Create thick, eagerly zeroed drives for the shared (clusterable) SBD and other data/app volumes

 

Note: Make sure that the shared drives used by all the OES cluster node VMware guests, including the SBD volume and the shared (clusterable) volumes, are created on the ESXi local VMFS file system, and that they are thick, eagerly zeroed VMware drives, not thin-provisioned VMware drives.

 

You can create new thick VMware drives with Option 1 below, or convert pre-existing thin VMware drives to thick with Option 2 below. We will create one shareable VMware drive/volume for apps and data called SharedVol, and one shareable VMware drive/volume for the cluster's Split Brain Detector (SBD).


Option 1 – Create the thick, eagerly zeroed drives/volumes set for multi-writer sharing:

 

  • From a browser, go to https://ipaddressofESXiserver (replacing ipaddressofESXiserver with the IP address of your ESXi Host) to reach the VMware ESXi GUI. Log in with the proper credentials.
  • Select one of the 2 VMware guest nodes and shut it down.
  • Click on the virtual machine in the inventory.
  • Click Edit Settings to display the Virtual Machine Properties dialog box.
  • You should see a screen similar to the one below. Click Add hard disk.


  • Select “New standard hard disk” and the disk is added to the list of disks.
  • Click the arrow to the left of the new hard disk to expand its details. Change the disk provisioning to “Thick provisioned, eagerly zeroed” and note the controller and disk location. See the next screenshot.
  • Also set the Sharing option to “Multi-writer sharing” as in the screenshot below. Note the Controller location and, to the right of it, the SCSI controller and disk placement; that information will be required for section 3.2 below. A command-line alternative for creating these disks is sketched after this list.
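
If you prefer the ESXi shell, vmkfstools can create the same kind of disk. A minimal sketch, where the datastore name, VM folder, disk name, and 10 GB size are all placeholders for your environment:

    # Create a 10 GB thick, eagerly zeroed virtual disk in the VM's folder
    vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/OES2015Viv/SharedVol.vmdk

A disk created this way still needs to be attached to each VM (Edit Settings > Add hard disk > Existing hard disk) with its Sharing option set to multi-writer.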


Option 2 – Convert existing drives/volumes to thick-provisioned, multi-writer sharing volumes using the following procedure:

 

  • From a browser, go to https://ipaddressofESXiserver (replacing ipaddressofESXiserver with the IP address of your ESXi Host) to reach the VMware ESXi GUI. Log in with the proper credentials.
  • Select one of the 2 OES nodes and shut down the VMware guest node.
  • Right-click the virtual machine in the inventory.
  • Click Edit Settings to display the Virtual Machine Properties dialog box.
  • Click the Hardware tab, then click the arrow to the left of the hard disk in the Hardware list to expand the details of the hard disk.
    Note: The disk will be either thin or thick. If thin, the drive/volume will need to be converted to thick.
  • Click Cancel to exit the Virtual Machine Properties dialog box.

 

Note: If the drives/volumes to be shared are already all thick, continue to section 3 below. To convert thin VMware drives/volumes to thick, eagerly zeroed drives, continue with the steps that follow.


  • Click Storage in the left-hand pane and then click Datastore browser, as in the screenshot below:
  • Browse to where the VM guest is stored and find the associated .vmdk file, which is the drive/volume.
  • Right-click the .vmdk file and click Inflate. The Inflate option converts the disk to thick provisioned. A command-line alternative is sketched after the notes below.
  • You may be able to skip the step of reloading the .vmx file, since it was not necessary in this scenario after inflating the thin VMware drives. VMware does, however, recommend it in case of certain issues; reload the .vmx file only if required. For more information, see Reloading a vmx file without removing the virtual machine from inventory (1026043).

Notes:

  • If the Inflate option is grayed out or unavailable, the virtual machine is not powered off or the disk is not thin provisioned.
  • There must be no snapshots; the conversion is performed on the base disk.
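
The same conversion can be done from the ESXi shell with vmkfstools. A minimal sketch, where the datastore, VM folder, and .vmdk names are placeholders (the VM must be powered off and have no snapshots):

    # Inflate a thin disk to thick, eagerly zeroed
    vmkfstools --inflatedisk /vmfs/volumes/datastore1/OES2015Viv/OES2015Viv_1.vmdk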

 

3. Check that the following settings are present in each of the appropriate VMware guest node .vmx files

 

The VMware ESXi Host stores the VMware guest files on its VMFS file system, in a subfolder called datastore1 by default. There you will find folders matching the names of each of the VMware guests, including the .vmx files we need to verify and possibly modify. The shell on VMware ESXi is BusyBox-based and provides basic Linux commands such as ls, cat, cd, mkdir, etc., along with the esxcli command line, consisting of various ESXi commands for configuring the VMware ESXi Host.

 

  • From a PuTTY session on your connecting workstation, connect and log in to the ESXi Host. Enlarge the PuTTY window if needed so the output is easy to read.
  • Type “cd /vmfs/volumes/datastore1” and press Enter, or follow the screenshots below for other ways to get to the VMs’ folder. You should see output similar to the following screenshots:
  • Type “ls” to get a directory listing of all your VMware guests, which we’ll refer to as VMs going forward. You should see output similar to the screenshot below:

3.1 – VMware ESXi Host command line: how to get to the VMs folder on the VMFS file system:
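
As a representative stand-in for the screenshot, one way to reach the VMs folder looks like this (the folder names in the example output are placeholders for your environment):

    cd /vmfs/volumes/datastore1
    ls
    # Example output: one folder per VMware guest
    # OES2015Viv    OES2015Node2    DNS-NTP01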


  • Type “cd VMName” (VMName being case sensitive) to move into the folder of the OES VMware guest node. Referring to the example above, “cd OES2015Viv” will be used in this scenario, since it is the first VM to be configured for shared, clusterable VMware drives/volumes.
  • Type “ls” to see the list of files in the OES VM guest folder. You should see output similar to the screenshot in 3.2:

 

3.2 – OES VMware VM cluster node 1 folder listing, including the contents of the .vmx file where the volume-sharing settings for clustering are stored in the example below.

 

  • Find and note the placement of the following settings in the file, which are associated with the thick, eagerly zeroed drives set up at the beginning of this article to be shared by the cluster. There are two kinds of settings: the disk.locking entry and the per-disk sharing entries. A “multi-writer” line will exist for each shared volume you need to create. In this scenario, we have created 2 shared VMware disks/volumes: one for the SBD, which is required for clustering, and one for shared data/apps. Note the proper controller and disk placement as outlined in section 2 above.

 

disk.locking = "FALSE"

scsi1:0.sharing = "multi-writer"
scsi1:1.sharing = "multi-writer"
scsi1:2.sharing = "multi-writer"
scsi1:3.sharing = "multi-writer"

 

  • If the settings are missing or incorrect, add or modify them using the vi editor in the next step. See the example output in screenshot 3.3.


3.3 – VMware VM guest .vmx file settings to verify, which are necessary to set all the volumes to shared mode in preparation for clustering the OES VMware guest nodes. See the highlighted sections to look for; items 2 and 3 are the two shared volumes/drives that will be used and shared by both OES clustered nodes:


  • If the values were not properly added by the GUI process, you must manually add the settings to the .vmx file using the vi editor. Type “vi vmname.vmx” (vmname being the name of your VM guest) and press Enter. Press the “i” key to enter insert mode and start editing.
  • Move to the right place, as in the screenshots above, using the down arrow key, and type the settings as they appear in the screenshot for your scenario. Once finished, press the “Esc” key, then type “:wq” and press Enter to save and exit vi.
  • Type “cat vmname.vmx” to list the .vmx file and verify that it is correct; a quicker filtered check is sketched below. Once the file is correct, you are ready to test the volumes to make sure they are shareable (clusterable), and if that works, to move on to creating the OES cluster.
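
Rather than scanning the whole file with cat, you can filter for just the locking and sharing lines. A minimal sketch, assuming the VM is named OES2015Viv:

    # Show only the disk locking and sharing entries from the .vmx file
    grep -iE 'disk\.locking|\.sharing' OES2015Viv.vmx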

 

4. Test that the shared volumes created for the cluster are shareable

 

  • In the browser-based ESXi GUI, highlight one of the OES VMware cluster node guests and click the green power-on arrow (shown over the VM's penguin console icon) to start the VM. If the OES cluster node guest starts without errors, start the 2nd OES cluster node guest. See the example of starting a VM in the ESXi GUI in screenshot 4.1. A command-line alternative is sketched at the end of this section.

 

4.1 – VMware ESXi browser-based GUI from the connecting workstation:


  • If you receive an error while opening either VM, such as “Failed to open '/vmfs/volumes/4f709747-0ec9e0a0-219b-001e0bccdf5c/test1/test1_1-flat.vmdk'” or “Failed to lock the file (67) (0x2017)”, or something similar, there is likely a syntax issue in the .vmx file, or one of the volumes is still not thick. Shut down both VMs, re-verify that the VMware drives are thick by reviewing the sections above, and, in the PuTTY session connected to the ESXi Host, review each of the 2 nodes’ .vmx files to make sure they match the structure and syntax of the screenshots provided above.

 

  • If both OES VMs load successfully, you are ready to set up OES Clustering on both nodes in the next section.
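
As a command-line alternative to the GUI power-on, the ESXi shell's vim-cmd can start the nodes; a minimal sketch (the Vmid values returned by getallvms will differ in your environment):

    # List registered VMs and note the Vmid of each OES node
    vim-cmd vmsvc/getallvms

    # Power on a node by its Vmid (1 is just an example), then check its state
    vim-cmd vmsvc/power.on 1
    vim-cmd vmsvc/power.getstate 1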

 

5. Installing Micro Focus OES Clustering on the first node in the cluster


References

https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.esxi.install.doc/GUID-B2F01BF5-078A-4C7E-B505-5DFFED0B8C38.html

https://kb.vmware.com/s/article/2107518 

https://kb.vmware.com/s/article/1034165

 

https://kb.vmware.com/s/article/2014832

 

https://communities.vmware.com/t5/Virtualizing-Oracle-Discussions/Oracle-2nd-node-fails-to-start-in-cluster-due-to-locking/m-p/1702355

 

https://www.novell.com/documentation/open-enterprise-server-2018/pdfdoc/clus_vmware_lx/clus_vmware_lx.pdf

 

https://www.novell.com/documentation/oes2015/pdfdoc/inst_oes_lx/inst_oes_lx.pdf#b9kmg9x

 

https://kb.vmware.com/s/article/10051

