How to Set Up a Free VMware ESXi 6.7.x or 7.x Production Licensed Host with 2 Clustered OES VM Nodes (2859577)


Summary:

 

How to set up a free VMware ESXi 6.7.x or 7.x production licensed Hypervisor Host with 2 clustered Micro Focus OES 2015 or OES 2018 Linux VMware guest nodes in a LAB environment

Executive Summary and Purpose

 

This article describes how to set up a LAB environment with a free VMware ESXi Hypervisor host license for clustering 2 VMware guest nodes running Open Enterprise Server, also called OES. The scenario uses a single VMware ESXi 6.7 Update 2 Hypervisor Host without a vCenter host, since there is only one ESXi host to manage and there is no SAN. The local disks on the ESXi host are used in this scenario to create 2 shareable disks for clustering the 2 nodes.

 

Any fairly recent version of the VMware ESXi Hypervisor will work with the steps in this article. The article assumes that you have installed and configured at least one ESXi host with enough RAM and disk space to accommodate at least a 2-node OES cluster, along with server VMs for other services such as DNS, NTP, and patching.

 

For LAB purposes, the ESXi host’s local disk is used to create and configure shareable (clusterable) volumes to be shared between the 2 clustered OES VM guest nodes, instead of using a SAN, since most LABs do not have one. The ESXi host is configured from a workstation running any recent version of Windows, connecting with PuTTY. Once connected, the esxcli command-line interface is used to configure settings.

 

The option to configure the VMware ESXi host and VMs using a browser-based GUI on the Windows workstation is also described. Finally, once the ESXi host is prepared for clustering drives, details are provided on installing and configuring basic clustering of 2 Micro Focus Open Enterprise Servers on version 2015. The same steps apply to the latest version, OES 2018.

 

Additional guidance is provided in the form of official documentation containing best practices, known issues, planning information, and background on the use of SANs in general, vSANs, and RDM disks. Links to official documentation on clustering OES on VMware ESXi are included in the References section at the end of the document.

 


 

Environment Setup

 

VMWare ESXi Hypervisor Host

In this LAB scenario, only one VMware ESXi 6.7 Update 2 Hypervisor Host is used, without a vCenter host, since there is only one host to manage and there is no SAN. Any fairly recent version of VMware ESXi applies, so if your ESXi version is not identical, this article should still be applicable. The VMware ESXi Hypervisor Host is referred to as the ESXi Host for the rest of the document. The ESXi Host is configured with enough RAM and disk space to accommodate at least a 2-node cluster, along with other VMware guest servers for DNS, NTP, patching, and so on. Refer to the links at the end of the article for installing and configuring the ESXi Host. For LAB purposes, the ESXi Host’s local disk is used to create and configure shareable (clusterable) volumes to be shared between the 2 clustered OES VM guest nodes, instead of using a SAN.

Connecting Workstation

VMware Workstation 15.5 is used on the front-end workstation as a GUI to connect to the ESXi Host, start and stop VMs, put the host in maintenance mode, and restart the host as required. All of this and more can also be done from the free browser-based VMware ESXi GUI if you do not have VMware Workstation. The browser-based GUI can configure all aspects of the ESXi host and guests, and is used in this article to configure multi-writer disks for clustering the VMware guest nodes. The esxcli command line, accessible with PuTTY from the connecting workstation, is also used directly on the ESXi Host as another method of configuration. More details are provided below.

VMWare Guest Nodes for Clustering

Finally, the 2 VMware guests running either OES 2015 or OES 2018, outlined in the prerequisites below, are in the same eDirectory tree but are not yet clustered. Keeping everything in one subnet keeps this scenario simple.

Steps to cluster the 2 OES nodes are provided last after first preparing the ESXi Host to allow and use VMWare shareable drives.

 

Note: You cannot combine both OES2015 nodes and OES2018 nodes in the same cluster. The steps for configuring the VMWare Host and clustered guests are however the same for either version of OES.

 

Pre-Requisites

 

It is assumed that you have the following already installed and configured:

 

  • Basic knowledge of VMWare and OES is expected. This article is considered intermediate to advanced level.

 

  • Free VMware ESXi production-level license (for LAB use only), either the standard free license or a license through VMUG (VMware Users Group), applied to the ESXi Host. Information on acquiring the free ESXi Host license is included in the links at the end of this article.

 

  • A dedicated and functional VMware ESXi Host with enough RAM and hard disk space to accommodate multiple VMs. Make sure your ESXi host is updated to the latest version and patch level if possible to avoid known issues. Most versions of ESXi should work, although more recent versions provide the most stability and best performance. Links to documentation for installing VMware ESXi are provided at the end of this article.

 

  • A workstation to connect to and manage the host. In this scenario, a Windows 10 workstation connects to the ESXi Host with VMware Workstation, but managing the VMs can also be done using the browser-based ESXi GUI. PuTTY is installed and required on the workstation being used to access the ESXi host’s command line.

 

  • 2 or more VMware guests running either OES 2015 or OES 2018 in the same eDirectory tree. The OES guests need to be set up according to the guidelines and best practices in the references and documentation links at the end of this article. It is highly recommended that the guest nodes be set up with DNS and NTP per the guidelines; if internal DNS is not feasible in this LAB scenario, you can use hosts files. NTP, however, is not optional, but it can easily be set up on one of the OES nodes so that time is synchronized properly between the OES nodes in particular, which is critical for a properly functioning OES cluster.

 

 

  • It is recommended to provide Internet access for the OES guest nodes being clustered, so they can apply the latest patches and updates. This also makes it possible to use an NTP server already available on the Internet for keeping time synchronized between the OES nodes. It is also a good idea to have the latest VMware Tools installed on each of the OES guest nodes for the best performance, stability, and screen sizes. Notes for installing VMware Tools on OES are provided at the end of the article.

 

  • A valid OES subscription for updating, patching, and licensing the OES servers, and finally the OES 2015 or OES 2018 Add-On .ISOs, used on the OES nodes for installing and configuring Micro Focus OES Clustering.
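Since time synchronization is called out above as critical, here is a minimal sketch of an /etc/ntp.conf for the OES node acting as the LAB’s time source, assuming Internet access to public pool servers. The server names and the subnet are example values only:

```
# /etc/ntp.conf (sketch) on the first OES node - example values only
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
# Allow the rest of the LAB subnet to query this node (example subnet)
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Fall back to the local clock if the Internet is unreachable
server 127.127.1.0
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift/ntp.drift
```

The other OES node would then point its own server line at this node’s address.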

 

Step 1 - Prepare the VMware ESXi Host for multi-writer mode on the local disks

 

Set multi-writer setting first:

 

From the connecting workstation, open PuTTY and log in to the VMware ESXi Host.

Next, enable the simultaneous write protection provided by VMFS using the multi-writer flag, with the esxcli command shown in example 1.1.

1.1 - esxcli command to allow sharing VM drives on the ESXi host’s local VMFS file system

 

esxcli system settings advanced set -i 1 -o /VMFS3/GBLAllowMW

Note: This enables the multi-writer lock. The GBLAllowMW config option is disabled by default on the ESXi host.

 

Next, verify the setting is correct:

 

Execute the command below to check the value, and confirm that the Int Value is set to 1 (enabled) rather than 0 (disabled). Refer to example output 1.2 shown below.

 

1.2 - Example output; the Int Value line indicates whether the option is enabled (1) or disabled (0, the default)
     
esxcli system settings advanced list -o /VMFS3/GBLAllowMW


         Path: /VMFS3/GBLAllowMW
         Type: integer
         Int Value: 1             =========== > Value set to 1 from default 0
         Default Int Value: 0
         Min Value: 0
         Max Value: 1
         String Value:
         Default String Value:
         Valid Characters:
         Description: Allow multi-writer GBLs.



Note: Run esxcli system settings advanced set -i 0 -o /VMFS3/GBLAllowMW to disable this option on the local ESXi Host disks if a SAN is used instead at some point in the future.
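To check the flag non-interactively, the Int Value line can be parsed out of the command’s output. The sketch below runs against the sample output from 1.2; on a real host, capture the output of esxcli system settings advanced list -o /VMFS3/GBLAllowMW instead of the sample text:

```shell
# Parse "Int Value" from the esxcli output; the sample text below stands in
# for the real command's output on the ESXi host.
OUT='   Path: /VMFS3/GBLAllowMW
   Type: integer
   Int Value: 1
   Default Int Value: 0'
MW=$(printf '%s\n' "$OUT" | awk -F': *' '/^[[:space:]]*Int Value:/ {print $2; exit}')
if [ "$MW" = "1" ]; then
  echo "multi-writer: enabled"
else
  echo "multi-writer: disabled"
fi
```

The same pipeline works unchanged when fed from the live command.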
 
Note: Typically, in a production environment, you would enable the above config option on ALL the VMware ESXi hosts that are part of the clustered environment for redundancy and availability, and the multi-writer settings would apply only to clustered SAN drives/volumes rather than shareable local ESXi host drives/volumes. Refer to the VMware and OES documentation for clustering SANs with OES guests on VMware ESXi in the “References” section at the end of the document.

 

Since we only have one ESXi host in this LAB scenario for testing in a free VMware ESXi environment, the above setting applies only to that one host and its local disks.

 

Step 2 - Create a separate sub folder to store the shared VMware drives, then either convert existing drives to thick eagerly zeroed drives for the shared SBD and data/app volumes, or create brand new thick eagerly zeroed drives for this purpose

 

Note: Make sure that the shared drives used by all the OES cluster node VMware guests, including the SBD volume and the shareable (clusterable) volumes, are created on the ESXi local VMFS file system, and that they are thick eagerly zeroed VMware drives, not thin provisioned ones.

 

You can create new thick VMware drives with Option 1 below, or convert pre-existing thin VMware drives to thick with Option 2 below. We will prepare one shareable VMware drive/volume for apps and data called SharedVol, and one shareable VMware drive/volume for the cluster’s SBD.
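As an alternative to the GUI, the same thick eagerly zeroed drives could be created from the ESXi shell with vmkfstools. This is a dry-run sketch only: the folder name, file names, and sizes are example values, and it prints the commands rather than executing them. Set DRY_RUN= on a real host to run them:

```shell
#!/bin/sh
# Dry-run sketch: create the two shared disks as eagerly zeroed thick.
# All paths and sizes are examples; run on the ESXi host shell to use.
DRY_RUN=echo                                   # set DRY_RUN= to execute
DIR=/vmfs/volumes/datastore1/SharedCluster     # example shared folder
$DRY_RUN mkdir -p "$DIR"
$DRY_RUN vmkfstools -c 1g  -d eagerzeroedthick "$DIR/sbd.vmdk"        # SBD
$DRY_RUN vmkfstools -c 20g -d eagerzeroedthick "$DIR/sharedvol.vmdk"  # data
```

Disks created this way still need to be attached to each node and set to multi-writer sharing, as in the GUI steps below.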

 

 

Option 1 – To create the thick eagerly zeroed drives/volumes set for multi-writer sharing:

 

From a browser, go to https://ipaddressofESXiserver (replacing ipaddressofESXiserver with the IP address of your ESXi Host) to reach the VMware ESXi GUI. Log in with the proper credentials.

  • Create the separate sub folder to store the shareable drives to be clustered with OES. Refer to screenshot 2.1 below to get started:
2.1 - Click on Datastore in the left pane and open the Datastore browser, as in the screenshot below:

1.png

 

2.2 – Click on Create directory, type “SharedCluster” for the directory name, and click Create directory to complete the creation

2.png

 

Select one of the 2 VMware guest nodes and shut it down.

Click on the virtual machine in the inventory.

Click Edit Settings to display the Virtual Machine Properties dialog box.

You should see a screen similar to screenshot 2.3 below. Click on Add Hard Disk.

 
2.3 - Click on “Add hard disk”

3.png

 

Select “New Standard Hard Disk” and the disk is added to the list of disks.

Click the right arrow to the left of the new hard disk to expand its details. Modify the disk size, disk name, and location as appropriate, starting with the SBD disk/volume.

Modify the disk provisioning to be thick provisioned, eagerly zeroed, and note the controller and disk location. See screenshot 2.4 below.

Finally, set the Sharing option to multi-writer sharing, as in the screenshot below. Note the Controller Location and, to its right, the SCSI controller and disk placement. That information will be required for Step 3 below.

2.4- Modify the disk to be “Thick provisioned, eagerly zeroed”

4.png

 

Note – Disk Mode above should be set to “Independent - persistent” according to best practices for OES; it is shown incorrectly in the screenshot.
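For reference, the GUI choices above end up as entries like the following in the node’s .vmx file. This is a sketch only; the disk file name and the scsi1:0 controller/slot are example values that will differ per host:

```
scsi1:0.deviceType = "scsi-hardDisk"
scsi1:0.fileName = "/vmfs/volumes/datastore1/SharedCluster/sbd.vmdk"
scsi1:0.mode = "independent-persistent"
scsi1:0.sharing = "multi-writer"
scsi1:0.present = "TRUE"
```

These are the same settings that are verified by hand in Step 3.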

Option 2 – Convert existing drives/volumes to thick provisioned multi-writer sharing volumes using the following procedure

 

From a browser, go to https://ipaddressofESXiserver (replacing ipaddressofESXiserver with the IP address of your ESXi Host) to reach the VMware ESXi GUI. Log in with the proper credentials.

Select one of the 2 OES nodes and shut down the VMware guest node.

Right-click the virtual machine in the inventory.

Click Edit Settings to display the Virtual Machine Properties dialog box.

Click the Hardware tab, and click the arrow to the left of the hard disk in the Hardware list to expand its details.
Note: The disk will be either thin or thick. If thin, the drive/volume will need to be converted to thick.

Click Cancel to exit from Virtual Machine Properties dialog box.

 

Note - If the drives/volumes to be shared are already all thick, continue to Step 3 below. To convert thin VMware drives/volumes to thick eagerly zeroed, continue with the steps below.

 

2.5 – Click on “Storage” and “datastore1” in the left pane, and “Datastore browser” in the center pane

5.png

 

 

Browse to where the VM guest is stored and find the associated .vmdk file, which is the drive/volume.

Right-click the .vmdk file and click Inflate. The Inflate option converts the disk to thick provisioned.

You may be able to skip reloading the .vmx file, since it was not necessary in this scenario after inflating the thin VMware drives; however, VMware recommends it in case of certain issues. Reload the .vmx file only if required. For more information, see Reloading a vmx file without removing the virtual machine from inventory (1026043).
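The same Inflate operation, plus the optional .vmx reload from KB 1026043, can also be done from the ESXi shell. A dry-run sketch follows; the .vmdk path and VM id are example values, and setting DRY_RUN= on a real host would execute the commands instead of printing them:

```shell
#!/bin/sh
# Dry-run sketch: inflate a thin disk to thick and reload the owning VM.
DRY_RUN=echo                                            # set DRY_RUN= to execute
VMDK=/vmfs/volumes/datastore1/SharedCluster/sbd.vmdk    # example path
$DRY_RUN vmkfstools -j "$VMDK"      # Inflate: thin -> thick (eagerly zeroed)
$DRY_RUN vim-cmd vmsvc/getallvms    # list VM ids to find the node's id
$DRY_RUN vim-cmd vmsvc/reload 1     # reload the .vmx for VM id 1 (example)
```

As with the GUI, the VM must be powered off and the disk must have no snapshots.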

Note: Create a new folder outside the OES VM folder structures for the shared drives:

Create a new sub folder called “SharedCluster” in the datastore1 VMFS file system, as in the steps for Option 1 above. Then browse to the shared SBD and data volumes one at a time using the Datastore browser, right-click each .vmdk file (the disk created for clustering), and select “Move”, as in screenshot 2.6 below. Move each disk to the new SharedCluster folder in the root of datastore1:

 

Note: It is highly recommended that the folder you create is not inside the folder structure of either of the 2 OES nodes. Create the “SharedCluster” folder outside of both OES VM folder structures.

 

 

2.6 – Move any existing volumes you have converted to thick eagerly zeroed drives to the SharedCluster folder you just created

6.png

 

 

 

 

If the Inflate option is grayed out or unavailable, the virtual machine is not powered off or the drive is not thin provisioned.

There should be no snapshots; the conversion is performed on the base disk.

 

Step 3 – Verify the .vmx file settings for each OES cluster node guest and, if required, correct the settings added by the GUI in Step 2 in each node’s .vmx file

 

Note: If the settings were manipulated only with the GUI, there should be no need to correct the files, but it is still a good idea to verify them.

The VMware ESXi Host stores the VMware guest files on its VMFS file system, in a sub folder called datastore1 by default. There you will find folders matching the names of each of the VMware guests, including the .vmx files you need to verify and possibly modify. The basic underlying OS on the ESXi Host is BusyBox, which provides very basic Linux commands such as ls, cat, cd, and mkdir, alongside the esxcli command line used for configuring the ESXi Host.

 

From a PuTTY session on your connecting workstation, connect and log in to the ESXi Host. Enlarge the PuTTY window by dragging its corners until it is about the size of a normal screen.

Type “cd /vmfs/volumes/datastore1” and press Enter (or follow the screenshots below for other CLI methods to reach the VM guests’ folders). You should see output similar to the following screenshots:

Type “ls” to get a directory listing of all your VMware guests, which we’ll refer to as VMs going forward. You should see output similar to the screenshot below:

 

 

3.1 – vmfs VMWare ESXi Host command line. How to get to the VMs folder:
7.png

 

3.2 – ls contents of datastore1 folder. Seeing list of VMWare VM guests

8.png

 

 

Type “cd VMName” (VMName is case sensitive) to move into the folder of the OES VMware guest node. Referring to the example above, “cd OES2015Viv” is used in this scenario, since it is the first VM to be configured for shared clusterable VMware drives/volumes.

Type “ls” to see the list of files in the OES VM guest folder. You should see output similar to the following screenshot in 3.3:

 

3.3- OES VMWare VM Cluster Node 1 file folder listing and contents of the .vmx file where settings for sharing volumes with clustering are stored in the example below.

9.png

 

Find and note the placement of the following settings in the file, which are associated with the thick eagerly zeroed drives set up at the beginning of this article to be shared by the cluster. There are 2 settings to verify, as seen below.

 

 

Note that a multi-writer statement must exist for each shared volume you create. If the statements do exist, verify that they correspond to the correct drive(s) and controller(s) placement in your VMware drive settings on the ESXi Host.

 

In this scenario, we have created 2 shared VMware disks/volumes: one for the SBD, which is required for clustering, and one for shared data/apps. Example 3.4 below shows the configuration for 2 shared drives, so there will be only 2 multi-writer lines in the .vmx file in our scenario. Refer back to the notes made on the controller and disk placement of your own ESXi Host disks, as outlined in Step 2 above, and verify they are correct in the .vmx file for each OES cluster node.

 

3.4 – Example of the 2 settings required in the .vmx file for each OES VM cluster node – use the controller/disk placement values appropriate to your host

 

disk.locking = "FALSE"

scsi1:0.sharing = "multi-writer"
scsi1:1.sharing = "multi-writer"

If the settings are missing or incorrect, add or modify them using the vi editor in the next step. See screenshot 3.5 below for the 3 lines which should be in the .vmx file.

 

3.5 – VMware VM guest .vmx file – 2 settings (3 lines) to verify, which are necessary to set all the volumes to shared mode in preparation for clustering with the OES guest nodes. Lines 2 and 3 cover the two shared volumes/drives used by both clustered OES nodes.

10.png

 

 

            

If the values were not properly added by the GUI process, you must manually add the settings to the .vmx file using the vi editor. Type “vi vmname.vmx” (replace vmname with the name of your VM guest) and press Enter. Press the “i” key to start editing.

Move the cursor with the arrow keys to the appropriate place, as in the screenshots above, and add the settings manually, typing them as they appear in the screenshot for your scenario. Note that case sensitivity is important. Once finished, press the “Esc” key, then type “:wq” and press Enter to save and quit the vi editor.

Type “cat vmname.vmx” to list the .vmx file and verify it is correct. Once correct, you are ready to test that the volumes are shareable (clusterable), and if everything works, move on to creating the OES cluster.
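The manual check of the .vmx file can also be scripted. The sketch below greps a sample of the expected content; on the host you would replace the sample with the output of cat vmname.vmx. The scsi1:0/scsi1:1 placements are this scenario’s example values:

```shell
# Count the multi-writer lines and check disk.locking in a .vmx sample.
VMX='disk.locking = "FALSE"
scsi1:0.sharing = "multi-writer"
scsi1:1.sharing = "multi-writer"'
COUNT=$(printf '%s\n' "$VMX" | grep -c 'sharing = "multi-writer"')
if printf '%s\n' "$VMX" | grep -q '^disk\.locking = "FALSE"'; then
  LOCKING=ok
else
  LOCKING=missing
fi
echo "multi-writer lines: $COUNT, disk.locking: $LOCKING"
```

With 2 shared disks, the expected result is 2 multi-writer lines and disk.locking present.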

Step 4 - Testing to make sure the shared volumes created for the cluster are shareable

 

In the browser-based ESXi GUI, highlight one of the OES cluster node guests and click the right arrow button “>” on the penguin icon to start the VM. If the OES cluster node guest starts without errors, start the 2nd one. See screenshot 4.1 for an example of starting the VM in the ESXi GUI:

 

4.1 – VMWare ESXi browser based GUI from connected workstation:

11.png

 

If you receive an error while opening either VM, such as “Failed to open '/vmfs/volumes/4f709747-0ec9e0a0-219b-001e0bccdf5c/test1/test1_1-flat.vmdk'” or “Failed to lock the file (67) (0x2017).”, there is likely a syntax issue in the .vmx file, or one of the volumes is still not thick.
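When the lock error appears, the node’s vmware.log (in the VM’s folder on datastore1) usually records it. A sketch of the check follows, run here against a sample log line rather than a real log file:

```shell
# Search log text for the Step 4 lock error; the sample line below stands in
# for the real vmware.log contents on the ESXi host.
LOG='2023-01-01T00:00:00Z vmx| Failed to lock the file (67) (0x2017).'
HITS=$(printf '%s\n' "$LOG" | grep -c 'Failed to lock the file')
if [ "$HITS" -gt 0 ]; then
  echo "lock error found: re-check .vmx sharing lines and disk provisioning"
else
  echo "no lock errors logged"
fi
```

On the host, feed the pipeline with cat of the VM’s vmware.log instead of the sample.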

 

Try the 2nd OES VM node by highlighting it and clicking the right arrow button “>” on the penguin icon to start the VM. If you receive the same error as above, shut down both VMs, re-verify that the VMware drives are thick by reviewing the sections above, and, in the PuTTY session connected to the ESXi Host, review both nodes’ .vmx files to make sure they match the structure and syntax of the screenshots provided above.

If both OES VMs load successfully with no errors, you are ready to set up OES clustering with the shared drives on both nodes in Step 5 below.

 

 

Step 5 - Installing Micro Focus OES Clustering on the first OES VM node in the cluster

 

Part A – Initialize the VMWare shareable drives created earlier using NSSMU:

 

On the first OES VM node, make sure the OES 2015 or 2018 Add-On .ISO is connected to the VM by modifying the VM settings.

Right click on the OES VM node desktop and select “Open in Terminal” to open a terminal screen.

Increase the size of the terminal screen by pulling on the outer edges of the screen.

 

Note - The shared VMware drives that were created need to be initialized to prepare them for clustering. Type “nssmu” and press Enter.

 

Select “Devices” on the left and press “Enter”. See screenshot 5.1 below:

 

5.1 – Type “nssmu” and select “Devices” on the left of the screen to see all vmware drives created.

12.png

 

Select the first shareable drive to initialize and press the “F3” key; you should see a screen similar to screenshot 5.2 below. Note: Make sure you select the proper drive to initialize; this is irreversible. Type “Y” for Yes to initialize.

5.2 – Press the “F3” key to initialize the shareable drive. Type “y” for Yes to initialize.

13.png

 

Select the partitioning scheme to be GPT by using the down arrow, then press “Enter”, as in screenshot 5.3:

5.3 – Down arrow to select “GPT” partitioning scheme and press “Enter”

14.png

 

You’ll see the newly initialized drive as in screenshot 5.4:

5.4 – Results of initializing the drive in GPT partitioning scheme

15.png

 

 

Press the “F6” key to make the drive shareable, as in the screenshot below:

16.png

 

Repeat the initialize, partition, and shareable steps above for the 2nd shareable VMware drive created.

 

Once you have finished initializing both drives and making them shareable for clustering in OES, press the “Esc” key until you exit NSSMU.

 

Part B – Install Clustering for Open Enterprise Server (either 2015 or 2018)

 

Open YaST and click on “Open Enterprise Server” in the left pane, followed by clicking on “OES Install and Configuration” in the right pane, as in screenshot 5.5 below:

 

5.5 – Select “Open Enterprise Server” in the left pane, and “OES Install and Configuration” in center pane

17.png

 

In the Software Selection pane on the left, scroll down to find “Novell Cluster Services”, select it, and then click on “Accept”. You should then see a screen such as screenshot 5.6 below.

5.6 – NCS clustering is being installed on the OES VMWare node

 

18.png

 

Scroll down the list until you see Novell Cluster Services, and click on “Enable” beside Configuration, right underneath it. You should see output similar to screenshot 5.7 below.

5.7 – Configuration of Novell Cluster Services has been enabled.

 19.png

 

  

Click on the “Novell Cluster Services (NCS)” header to configure clustering services. You will be prompted to log in as the admin of the eDirectory tree. Type in the distinguished admin account name and password.

 

Select “New Cluster” and type in the static IP address assigned to the cluster, not the IP address assigned to the VMware guest.

Select the drop-down to the right of “Select the device for the shared media”. This is where you select the SBD shareable drive. Make sure to select the smaller of the 2 drives created, which is for the SBD. See screenshot 5.8 below.

 

5.8 Select New Cluster, input the cluster IP address, and select the SBD drive

 20.png

 

Click “Next” on the Use OES Common Proxy User screen.

Click “Next” on the Start clustering services now screen.

Click “Next” on the Use the following configuration screen.

 

5.9 You should see a screen similar to the following.

 21.png

 

5.10 Successful completion of the OES cluster configuration should look like the following. Click “Finish” to end the configuration wizard.

22.png

 

  

Part C – Configure Clustering for Open Enterprise Server (either 2015 or 2018) using iManager

 

Open a browser and launch “iManager”. Login to “iManager” and select “Clusters” in the left pane.

 

5.11 Click on “My Clusters” in the left pane to see the new cluster node in the right pane.

 

23.png

 

 

Click on “Add” in the right pane to add the new cluster node to the cluster.

Click on the “down arrow” to move into the Novell container and select the “Cluster” object.

The cluster is added to iManager.

 

 

5.12 – Click on the “Cluster” object to manage the OES cluster.

 

 24.png

 

Set up the shared storage pool/volume using the shared VMware data drive created earlier for the cluster. Click on “Storage” in the left pane.

5.13 You should see a screen similar to the following.

25.png

 

Click on “Pools” under Storage in the left pane. Click on the magnifying glass to browse for the OES server node and select it to add it.

5.14 Click on “New” to add the shared vmware data drive created earlier:

26.png

 

 

Type in the “Name” of the shared pool, e.g. SharedPool1. Click on “Next” to continue.

5.15 Select “Cluster Enable on Creation”, and check the box to the left of the shared drive to be used:

28.png

 

 

Type in a new “IP Address” for the new shared pool, an address different from the cluster’s.

Click “Finish” to complete adding the shareable pool.

 

5.16 You should see a screen similar to the following:

29.png

 

Finally, create the shared volume to be used by the pool. Click on “Volumes” under Storage in the left pane.

Click on “New” in the center pane to create the new volume.

Type the “Name” of the new shared data volume, e.g. SharedVol1, and click “Next”.

Click to select the “SharedPool1” pool to associate it with the new shared volume and click “Next”.

 

5.17 You should see a screen similar to the following 2 screenshots.

 30.png

 

 

The OES Cluster configuration on VMWare ESXi Hypervisor is now complete!

 

To see and manage the new cluster, click on “My Clusters” under Cluster in the left pane. Click on the “Cluster object” in the right pane.

 

5.18 You should see a screen similar to the following:
31.png

 

 
You are now ready to add the 2nd OES VMware guest node to the OES cluster.
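Alongside iManager, the cluster state can also be checked from a terminal on either node with the NCS command line. A dry-run sketch follows; set DRY_RUN= and run as root on an OES node to actually execute the commands:

```shell
#!/bin/sh
# Dry-run sketch of NCS status commands on an OES cluster node.
DRY_RUN=echo                 # set DRY_RUN= to execute on a real node
$DRY_RUN cluster view        # this node's view of cluster membership
$DRY_RUN cluster status      # state of each cluster resource
$DRY_RUN cluster resources   # list the configured cluster resources
```

Running these after adding the 2nd node is a quick way to confirm both nodes have joined.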
 

 

 

References

 

https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.esxi.install.doc/GUID-B2F01BF5-078A-4C7E-B505-5DFFED0B8C38.html

https://kb.vmware.com/s/article/2107518

https://kb.vmware.com/s/article/2014832

 

https://communities.vmware.com/t5/Virtualizing-Oracle-Discussions/Oracle-2nd-node-fails-to-start-in-cluster-due-to-locking/m-p/1702355

Solved: Migrate VM's from XenServer - VMware Technology Network VMTN

How To Convert A Xen Virtual Machine To VMware (howtoforge.com)

https://giritharan.com/move-virtual-machines-from-xen-to-vmware/

 

https://kb.vmware.com/s/article/10051

 

https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vm_admin.doc/GUID-4236E44E-E11F-4EDD-8CC0-12BA664BB811.html

 

https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.virtualsan.doc/GUID-177C1CF9-EB3F-46C2-BE53-670BF864DC25.html

 

https://www.vembu.com/blog/vmware-vsan-configuration-setup/

 

https://www.altaro.com/vmware/vsphere-vm-rdm/

 

https://www.novell.com/documentation/open-enterprise-server-2018/pdfdoc/clus_vmware_lx/clus_vmware_lx.pdf

 

https://www.novell.com/documentation/oes2015/pdfdoc/inst_oes_lx/inst_oes_lx.pdf#b9kmg9x

 

Cluster Services in a VMware environment documentation

 
