VMware ESXi Host
In this LAB scenario, only one VMware ESXi 6.7 Update 2 Host is used, without a vCenter Server, since there is only one Host to manage and there is no SAN. Any fairly recent version of VMware ESXi applies to this situation, so if your ESXi version is not the same, this article should still be applicable. The VMware ESXi Host is referred to as the ESXi Host for the remainder of the document. The ESXi Host is configured with enough RAM and disk space to accommodate at least a 2-node cluster, along with other VMware guest servers for DNS, NTP, patching, etc. Refer to the links at the end of the article for installing and configuring a VMware ESXi Host. For LAB purposes, the ESXi Host's local disk will be used to create and configure clusterable volumes instead of a SAN.
VMware Workstation 15.5 is used on the front-end workstation as a GUI to connect to the ESXi Host: to open and close VMs, put the Host in maintenance mode, and restart the Host as required. All of this and more can also be done from the free browser-based VMware ESXi GUI if you don't have VMware Workstation. This free browser-based GUI can configure all aspects of the VMware ESXi host and its guests, and it is used in this article to configure multi-writer disks for clustering VMware guest nodes. The esxcli command line, accessible from a connecting workstation using PuTTY over SSH, can and will also be used directly on the ESXi Host as another method of configuring it. More information is provided below.
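Note: If SSH is not already enabled on the ESXi Host, it can be turned on from the browser GUI's services settings, or from the Host's local console shell with vim-cmd. A minimal sketch, assuming local console access:
# Enable the SSH policy and start the SSH service on the ESXi Host
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh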
VMware Guest Nodes for Clustering
Finally, the 2 VMware guests running either OES2015 or OES2018, which are outlined in the pre-requisites below, are in the same eDirectory tree but are not yet clustered. Keeping everything in one subnet keeps things simple in this scenario.
Steps to cluster the 2 OES nodes are provided below, after first preparing the ESXi Host to allow and use VMware shareable drives.
Note: You cannot combine OES2015 nodes and OES2018 nodes in the same cluster. The steps for configuring the VMware Host and clustered guests are, however, the same for either version of OES.
Pre-requisites – It is assumed that you have the following already installed and configured:
A dedicated and functional VMware ESXi Host with enough RAM and hard disk space to accommodate multiple VMs. If possible, make sure your ESXi host is updated to the latest version and patch level to avoid known issues. Most versions of VMware ESXi should be applicable, although more recent versions provide the most stability and best performance. Links to documentation for installing VMware ESXi are provided at the end of this article.
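To confirm exactly which ESXi version and build the Host is running, you can query it from an SSH session. A minimal check:
# Report the ESXi product, version, build and update level
esxcli system version get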
esxcli system settings advanced set -i 1 -o /VMFS3/GBLAllowMW
Note: This enables the multi-writer lock. The GBLAllowMW config option is disabled by default on the ESXi host.
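To verify the change, list the option and check its current Int Value; the output shown below comes from this command:
# Display the current state of the /VMFS3/GBLAllowMW advanced setting
esxcli system settings advanced list -o /VMFS3/GBLAllowMW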
Int Value: 1 <=== value set to 1 from the default of 0
Default Int Value: 0
Min Value: 0
Max Value: 1
Default String Value:
Description: Allow multi-writer GBLs.
Note: Run esxcli system settings advanced set -i 0 -o /VMFS3/GBLAllowMW to disable this option for the local ESXi Host disks if a SAN is used instead at some point in the future.
Note: Typically, in a production environment, you would enable the above config option on ALL the VMware ESXi hosts that are part of the clustered environment for redundancy and availability, and the multi-writer settings would apply only to clustered SAN drives/volumes rather than to shareable local ESXi host VMware drives/volumes.
Since we only have one ESXi host in this LAB scenario, for testing in a free VMware ESXi environment, the above setting applies only to that one VMware ESXi host and its local disks.
Note – Make sure that the shared drives used by all the OES cluster node VMware guests, including the SBD volume and the shared (clusterable) volumes, are created on the ESXi Host's local VMFS file system, and that they are thick, eagerly zeroed VMware drives, not thin-provisioned VMware drives.
You can convert pre-existing thin VMware drives to thick with Option 2 below, or you can create new thick VMware drives with Option 1 below. We will prepare one shareable VMware drive/volume for apps and data, called SharedVol, and one shareable VMware drive/volume for the cluster's Split Brain Detector (SBD).
Option 1 – To create thick, eagerly zeroed drives/volumes set for multi-writer sharing:
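If you prefer the command line to the GUI for this step, a new eager-zeroed thick disk can also be created with vmkfstools. A minimal sketch, assuming a hypothetical 20 GB SharedVol disk and a 1 GB SBD disk on datastore1 (adjust names, sizes and paths to your environment):
# Create folders for the shared disks on the local VMFS datastore
mkdir /vmfs/volumes/datastore1/SharedVol
mkdir /vmfs/volumes/datastore1/SBD
# Create the eager-zeroed thick disks for the data volume and the cluster SBD
vmkfstools -c 20G -d eagerzeroedthick /vmfs/volumes/datastore1/SharedVol/SharedVol.vmdk
vmkfstools -c 1G -d eagerzeroedthick /vmfs/volumes/datastore1/SBD/SBD.vmdk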
Option 2 – Convert existing thin drives/volumes to thick-provisioned, multi-writer sharing volumes using the following procedure:
Note – If the drives/volumes to be shared are currently all thick, continue to step 3 below. To continue converting thin VMware drives/volumes to thick, eagerly zeroed drives, continue with step vii, or see the command-line sketch below.
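As a command-line alternative to the GUI conversion steps, vmkfstools can inflate an existing thin disk to eager-zeroed thick in place. A minimal sketch, assuming the hypothetical SharedVol.vmdk path from Option 1; the VM owning the disk must be powered off first:
# Inflate a thin-provisioned disk to eager-zeroed thick
vmkfstools -j /vmfs/volumes/datastore1/SharedVol/SharedVol.vmdk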
The VMware ESXi Host stores the VMware guest files on its VMFS file system, in a datastore named datastore1 by default. There you will find folders matching the names of each of the VMware guests, including the .vmx files we need to verify and possibly modify. The console shell on VMware ESXi is based on BusyBox, which provides very basic Linux commands such as ls, cat, cd, mkdir, etc., along with the esxcli command line consisting of various ESXi commands for configuring the VMware ESXi Host.
3.1 – VMFS on the VMware ESXi Host command line. How to get to the VMs folder:
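For example, from a PuTTY (SSH) session on the ESXi Host, assuming the default datastore name of datastore1:
# Change into the default datastore and list the VM folders
cd /vmfs/volumes/datastore1
ls -l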
3.2 – OES VMware VM Cluster Node 1 folder listing and the contents of the .vmx file, where the settings for sharing volumes for clustering are stored, shown in the example below.
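One quick way to pull just the locking and sharing entries out of a node's .vmx file (the OESNode1 folder and file names are hypothetical; substitute your own VM's names):
# Filter the VM configuration for the disk locking and multi-writer entries
cd /vmfs/volumes/datastore1/OESNode1
grep -i -e locking -e sharing OESNode1.vmx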
disk.locking = "FALSE"
scsi1:0.sharing = "multi-writer"
scsi1:1.sharing = "multi-writer"
scsi1:2.sharing = "multi-writer"
scsi1:3.sharing = "multi-writer"
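Here scsi1:0 through scsi1:3 are the virtual disks attached to the VM's second virtual SCSI controller (scsi1). If you add or change these entries by hand, do so with the VM powered off and then have the ESXi Host re-read the file. A sketch, again using the hypothetical OESNode1 VM:
# Find the VM's numeric id, then reload its configuration after editing the .vmx
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/reload <vmid>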
3.3 – VMware VM guest .vmx file settings to verify, which are necessary to set all of the volumes to shared mode in preparation for clustering the OES VMware guest nodes. See the highlighted sections to look for; items 2 and 3 apply to the two shared volumes/drives that will be used and shared by both OES clustered nodes:
4.1 – VMware ESXi browser-based GUI from a connected workstation: