
How to migrate Windows cluster w2012r2 with RDM disks

Does anybody have experience with migrating a Windows Server 2012 R2 cluster, and what steps should we follow? We also have some RDM cluster disks.


  • Apologies for the delay in responding. PlateSpin supports the migration of Windows failover clusters. PlateSpin is cluster aware to the extent that it will stop you from changing the hostname and IP address of the new cluster, as doing so would break the cluster. The migration approach depends on what you want to do with the shared disks. There are essentially three options:

    1. Migrate the cluster, converting the shared volumes to hypervisor native storage such as VMDK

    2. Connect the new server to the same storage as the old one

    3. Use PlateSpin to copy the shared volumes to new RDMs

    I assume it's #3 you are interested in, which is performed using the semi-automated migration process, similar to migrating to a physical server. You must provision the new virtual machine on the target VMware platform yourself and attach the target RDMs to match the active node. Make sure that the SCSI ID of each disk matches and that each target disk is slightly larger than the corresponding disk on the source; 50 MB is enough.
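    The sizing rule above can be sketched as a quick sanity check. The disk sizes and SCSI IDs below are hypothetical examples; the 50 MB headroom figure comes from the guidance above:

    ```python
    # Sketch: check that each target RDM keeps the source SCSI ID and is
    # at least 50 MB larger than the corresponding source disk.
    # All disk data here is hypothetical, for illustration only.

    MIN_HEADROOM = 50 * 1024 * 1024  # 50 MB, per the guidance above

    # Map SCSI "controller:unit" ID -> disk size in bytes.
    source_disks = {
        "1:0": 107_374_182_400,  # e.g. volume E, 100 GB
        "1:1": 214_748_364_800,  # e.g. volume F, 200 GB
    }
    target_disks = {
        "1:0": 107_374_182_400 + MIN_HEADROOM,
        "1:1": 214_748_364_800 + MIN_HEADROOM,
    }

    def validate(source, target):
        """Return a list of problems; an empty list means the layout looks OK."""
        problems = []
        for scsi_id, src_size in source.items():
            if scsi_id not in target:
                problems.append(f"no target disk at SCSI ID {scsi_id}")
            elif target[scsi_id] < src_size + MIN_HEADROOM:
                problems.append(f"target disk at {scsi_id} needs at least 50 MB headroom")
        return problems

    print(validate(source_disks, target_disks))  # [] means no problems found
    ```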

    Before starting the migration, ensure that the cluster cannot fail over by pausing or evicting the passive nodes. You should allow for downtime while the active node is migrated.

    Now boot the target VM from the PlateSpin X2P ISO image, which you can download, and register it with the PlateSpin server. Discover the host name/IP of the active node, not that of the cluster virtual node, then configure the migration in the Migrate Client.

    Although it is possible to perform a full replication while the cluster remains active and then follow it with an incremental replication to sync only the changes, the delta sync could take just as long as the full replication. It may therefore be less disruptive to take the application down before you start and do the migration in a single step.

    At the end of the migration you will have a functional one-node cluster. If you also wish to migrate the passive nodes, you can treat them as regular non-clustered machines. PlateSpin will migrate their local volumes, and post-migration you can reconnect the shared volumes and re-join them to the cluster.

    There is plenty of information on clustering in the user guide, and I highly recommend reading it. I hope this helps.

    Regards, Alain

  • Thanks for the detailed info. I have a question about your statement "Make sure that the SCSI ID of each disk matches". Do you mean that the SCSI IDs of the RDM disks on the source VM must match those of the new RDM disks attached to the target VM?

  • I have not personally tested what happens if the SCSI controller ID of a shared disk changes during a cluster migration, but I am reliably told that they should remain the same. For example, the source server has volume E connected to SCSI controller 1:0 and volume F connected to 1:1. When you create the target active node VM, you should connect the RDM that will host volume E to SCSI controller 1:0 and F to 1:1, to match the source configuration.
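    The mapping in the example above can be written out as a simple check. The volume letters and controller:unit IDs are taken from the example; the code is just an illustrative sketch:

    ```python
    # Source configuration from the example: volume E on SCSI 1:0, F on 1:1.
    source_layout = {"E": "1:0", "F": "1:1"}
    # The target RDMs should be attached at the same controller:unit IDs.
    target_layout = {"E": "1:0", "F": "1:1"}

    def find_mismatches(source, target):
        """Return {volume: (source_id, target_id)} for any volume whose
        target SCSI ID differs from (or is missing relative to) the source."""
        return {
            vol: (scsi_id, target.get(vol))
            for vol, scsi_id in source.items()
            if target.get(vol) != scsi_id
        }

    print(find_mismatches(source_layout, target_layout))  # {} means the layouts match
    ```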

  • I just tested this in my lab, deliberately changing the order of the RDMs on the target to force each one to have a different SCSI controller ID from the corresponding disk on the source. The cluster was still working at the end of the migration, so it looks like the SCSI controller IDs don't actually have to match.

    Regards, Alain

  • Glad you were able to get the desired outcome.

    There is one more way the cluster/RDM disks can be mapped. I have tested this in my lab.

    As part of the job configuration, let PlateSpin create the target VM with VMDKs. On completion, those VMDKs can be replaced with the actual RDM disks by editing the VM properties.

    Run the job, and the data will now be copied to the RDM LUNs.