
Move to new iSCSI target

hello,

(2-node SLES 10 SP3 / OES2 SP3 cluster - connected to an iSCSI target with a volume)

our infrastructure department gave us a new iSCSI target to which we have to move our clustered volume.
The old target will become unavailable soon.

Any ideas on how to "migrate" the volume to this new iSCSI target? Is there documentation available on this?

thx.
hugo
  • 0
    There are a few things I can think of:

    1) If your SAN supports copying disks, you can clone/copy the existing iSCSI disk/LUN/volume, then offline the cluster resource of the EXISTING item and online the new one (you'll have to change some IP settings, I imagine). Or you may be able to re-use the old IP of the old iSCSI LUN and assign it to the copied one.

    2) You can probably use miggui to migrate the data from one to the other, although I imagine this will require changing the cluster resource names/volume in the process, which may be undesirable (i.e. if the volume hosts HOME directories, for example, or if you have items that use the UNC path).

    3) Some combination of a DST setup. You shadow the current iSCSI disk, basically using the "2nd" scenario described in the cluster docs: create a secondary volume and shadow all the data over to that shadow volume, which lives on the new SAN. Then offline the resource, break the shadow, create the new primary on the new iSCSI target, and re-link the shadow/DST volume to it. You can keep the same volume name for the primary in that case (I've done that before, just not with iSCSI).
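    A rough sketch of option 1 as shell commands, dry-run by default. The resource name, portal IP, and IQN below are placeholders (not from the thread), and the actual clone step is done with the SAN vendor's own tools:

```shell
#!/bin/sh
# Sketch of option 1: swap a cluster resource over to a SAN-side clone.
# Dry run by default -- every command is printed, not executed.
# Set RUN="" to execute for real. All names/IPs below are placeholders.
RUN=${RUN:-echo}

OLD_RES="HOMEPOOL_SERVER"                      # assumed cluster resource name
PORTAL="10.0.0.60"                             # assumed portal of the new target
TARGET="iqn.2011-11.com.example:clonedlun"     # assumed IQN carrying the clone

# Take the resource offline before the SAN copies the LUN
$RUN cluster offline "$OLD_RES"

# (Clone the LUN here with the SAN's own management tools -- vendor-specific.)

# Discover and log in to the new target so the node sees the cloned disk
$RUN iscsiadm -m discovery -t sendtargets -p "$PORTAL"
$RUN iscsiadm -m node -T "$TARGET" -p "$PORTAL" --login

# Bring the resource back online on the cloned disk
$RUN cluster online "$OLD_RES"
```

    Leaving RUN set to echo lets you review the exact command sequence before running anything for real.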
  • 0   in reply to 
    On 03.11.2011 20:26, kjhurni wrote:
    > <snip>

    4) Not to forget the good old mirroring trick... ;)

    CU,
    --
    Massimo Rosen
    Novell Knowledge Partner
    No emails please!
    http://www.cfc-it.de
  • 0 in reply to   
    Thanks Massimo!

    I've not worked with iSCSI at all so I wasn't sure if there was different "magic" that could be used.

    Hugo, Massimo is referring to (I believe) the NSS mirroring feature, where you mirror at the partition level. Then you can break the mirror and use the new one. THAT is documented in the OES2 docs somewhere, and they even cover a scenario for data migration.

    Personally I'd go the SAN route if you can (i.e. if your SAN supports cloning/copying iSCSI disks), but I'm not knowledgeable enough to know what you'd have to change in the iSCSI settings.
  • 0 in reply to 
    thx for the info,

    what about this approach:

    1. Connect to the new iSCSI target
    2. Create a new pool and volume on this new device
    3. Migrate all data from oldvol to newvol (with miggui)
    4. Deactivate the old resource
    5. Rename newpool and newvol to the old names
    6. Configure the old IP and old virtual server name

    Any problems with this?
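    A sketch of the scriptable parts of those steps with open-iscsi and NCS commands, dry-run by default. The portal IP, target IQN, and resource name are placeholders; steps 2, 3, 5 and 6 are done interactively in nssmu/miggui/iManager, so they appear only as comments:

```shell
#!/bin/sh
# Dry run by default: commands are printed, not executed (set RUN="" to run).
# Portal IP, target IQN, and resource name below are placeholders.
RUN=${RUN:-echo}

PORTAL="192.168.10.50"                        # assumed new target portal
TARGET="iqn.2011-11.com.example:newtarget"    # assumed new target IQN
OLD_RES="OLDPOOL_SERVER"                      # assumed existing resource name

# Step 1: connect the cluster node to the new iSCSI target (open-iscsi)
$RUN iscsiadm -m discovery -t sendtargets -p "$PORTAL"
$RUN iscsiadm -m node -T "$TARGET" -p "$PORTAL" --login

# Steps 2-3: create the new pool/volume (nssmu or iManager) and migrate
# the data (miggui) -- both interactive, not scripted here.

# Step 4: deactivate the old resource once the data is across
$RUN cluster offline "$OLD_RES"

# Steps 5-6: rename the pool/volume and re-point the IP / virtual server
# name in iManager, then bring the renamed resource online:
$RUN cluster online "$OLD_RES"
```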
  • 0 in reply to 
    Theoretically it should work, although if you have lots of data it can take a long time. (I had several 1.5 TB volumes to migrate, so I skipped miggui altogether in favor of the SAN utility.) However, you cannot necessarily change your virtual server names.

    For example, our cluster was originally created with NW 6.5 SP1, so back then we used ConsoleOne where you COULD name your actual virtual objects. So we had a CLUSTERED VOLUME named (in eDir) of:
    CS1-HOME1

    However, if you delete and recreate it with OES2, you have to use iManager, which will automatically name the object CLUSTERNAME_POOLNAME.

    So when we created a NEW data volume, and cluster-enabled it, we were FORCED to use:
    NCS1_HOME1
    (that's the name of the cluster volume) and thus had to update all the eDirectory (user objects) and anything ZENworks related as well.

    Now, if your existing objects were already created with iManager, and not "old" like our stuff was, I think you'll be okay.

    But can I ask if you're using the same SAN, just needing to change the iSCSI IP?

    If so why not use the SAN to copy/clone the disk to a new iSCSI with a new IP, offline the old one and online the new one? That would seem to be much faster, IMO.

    Plus then you don't have to change all the eDir names, etc.
  • 0 in reply to 
    hi,

    yes it is a new SAN (they told me) and all our objects were created with iManager.

    So I guess there won't be a problem with my approach... I hope :)
  • 0 in reply to 
    I'd probably test it on the smallest volume you have to get a good time estimate. The timing won't depend on the amount of data so much as the type of data. In other words, a small volume (say 200 GB) with millions of tiny files will take significantly longer than a 200 GB volume with very large files. You could also use Massimo's idea of mirroring the partitions to decrease downtime, since the mirroring can happen in the background and, once it's finished, the data stays in sync.

    Whereas with miggui, you'll either have to run it twice (once for the initial data transfer, then kick everyone off the server and do a "Sync" on the second data transfer), or kick everyone off during the first and final transfer.

    Files that are in use don't get copied with miggui.
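    The two-pass pattern described above can be illustrated with rsync (miggui itself is a GUI, so this is only an analogy; note that plain rsync does NOT carry NSS trustee assignments, which is exactly why miggui/mirroring are the recommended tools here). Mount points are placeholders, dry-run by default:

```shell
#!/bin/sh
# Illustration of the two-pass migrate-then-sync pattern using rsync.
# NOTE: rsync does not preserve NSS trustees/metadata; this only shows
# the pattern miggui follows. Dry run by default (set RUN="" to execute).
RUN=${RUN:-echo}

SRC="/media/nss/OLDVOL/"    # placeholder source mount point
DST="/media/nss/NEWVOL/"    # placeholder destination mount point

# Pass 1: bulk copy while users are still connected (open files are skipped)
$RUN rsync -a "$SRC" "$DST"

# ...kick everyone off / offline the resource, then...

# Pass 2: short final sync of anything that changed during pass 1
$RUN rsync -a --delete "$SRC" "$DST"
```

    The second pass is short because it only moves the deltas, which is what keeps the user-facing downtime window small.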