
Setting Up a Dynamic Storage Technology (DST) Shadow Volume and Policy in a Novell Cluster Services Environment


Table of Contents



Aim of this AppNote

Requirements and Assumptions

Details of setup used for this AppNote

Configuration Steps

1. Create DST Shadow Volume

     1.1 Create Two Pools, POOL1 with primary volume, PRIMARY and POOL2 with secondary volume, SECONDARY

     1.2 Unbind secondary NSS volume, SECONDARY from NCP

     1.3 Make SECONDARY the secondary volume of PRIMARY

2. Verify that NCP client can access the shadow volume

3. Configure DST shadow volume to work on Cluster environment

     3.1 Configure DST Policies

          3.1.1 Configure same DST global policies in all the nodes of the cluster, where the cluster resources will fail over or migrate to.

          3.1.2 Configure DST policy for the particular volume on the first node of the cluster

     3.2 Copy the shadow volume configurations from 1st node's files /etc/opt/novell/ncpserv.conf and /etc/opt/novell/ncp2nss.conf to all other nodes of the cluster.

     3.3 Configure the load and unload script for the shadow volume

          3.3.1 Offline the pools, POOL1 and POOL2, which have the primary and secondary volumes

          3.3.2 Copy the load and unload scripts for the pool, POOL2 which has secondary volume SECONDARY

          3.3.3 Modify the load and unload scripts of the pool, POOL1 that has primary volume, PRIMARY, to be used for the shadow volume

          3.3.4 Bring the Primary pool, POOL1 online using cluster manager page in iManager. Secondary pool, POOL2 should remain offline

4. Cluster migrate the pool from current node, wgp-dt83 to another node, wgp-dt84

5. Execute the DST policy in the current node, wgp-dt84 to verify that policy still works



Aim of this AppNote


This AppNote provides a step-by-step approach to configuring a shadow volume in a Novell Cluster environment, especially when the primary and secondary volumes originally come from different cluster pools or resources. It shows how a shadow volume is set up using two NSS volumes, how to access the shadow volume from an NCP client, how a Dynamic Storage Technology (DST) policy is created and used in the cluster environment, and what modifications are needed to make the shadow volume work in a cluster environment. It also highlights the most important tips and points wherever necessary.



Requirements and Assumptions



  1. One cluster with minimum of two nodes.

  2. Both of the nodes in the cluster must be running OES 2 Linux.

  3. Both of the volumes, primary and secondary, must each reside on a partition shared by all of the cluster nodes.

  4. Novell iManager and NRM are installed and running for managing the cluster resources, NCP volumes and DST.

  5. A Windows or Linux Novell Client is installed on a workstation to verify shadow volume functionality and file/folder browsing.

  6. The words "shadow" and "secondary" are used interchangeably for the secondary storage area.

  7. Sometimes the primary volume is referred to as "shadowed" when it is the primary volume in a DST shadow volume.



Details of Setup used for this AppNote


For this AppNote, I used a cluster "IDM361". It has the IP address of 164.99.103.196.



This cluster has two nodes. The 1st node is wgp-dt83 (164.99.103.83) and the 2nd node is wgp-dt84 (164.99.103.84).



Configuration Steps


Assuming that all of the requirements are met, let us proceed with the shadow volume configuration as follows.



Documentation Notes: Unless otherwise mentioned, the NRM and iManager of wgp-dt83, the first node of the cluster where both pools (POOL1, POOL2) are initially running, are used in this AppNote.



1. Create DST Shadow Volume


1.1. Create two pools. POOL1 with primary volume, PRIMARY, and POOL2 with secondary volume, SECONDARY.



Let us create two pools on the shared partition: POOL1 with volume PRIMARY, and POOL2 with volume SECONDARY. Here, PRIMARY will be the primary volume and SECONDARY will be the secondary volume of the shadow volume (primary and secondary volume pair).



Note: As mentioned before, we are focusing on setting up a DST shadow volume especially when primary and secondary volumes are originally in different pools. Otherwise we can create two volumes from the same pool.



To do this, follow the steps below:




  1. Log in to one of the nodes as root and type "nssmu" at the console. In this case, log in to wgp-dt83.

  2. Select the Pools from the Main menu and press Enter.

  3. Press the Insert key and type in the name of the pool as POOL1 and press Enter.

  4. Select the shared device from the device list, fill in the size of the pool, and assign the IP address 164.99.103.198, leaving the other parameters at their defaults.

  5. Select Apply using up/down arrow and press Enter. This completes the POOL1 creation.

  6. Now press the ESC key to get back to the Main menu and select Volumes and press Enter.

  7. Press the Insert key, give the name of the volume as PRIMARY, press Enter, select "(N)o" when asked about encryption, and select POOL1 from the pool list. This is the volume I plan to use as the primary volume of the shadow volume (primary and secondary volume pair).

  8. Repeat steps 1 through 7 to create the second pool, POOL2, with the IP address 164.99.103.199 and the secondary volume, SECONDARY.

  9. Press Esc multiple times to exit nssmu (NSS Management Utility)

  10. Start the shadow file system by typing the following commands in the terminal console.
    (Note: ShadowFS provides a virtual view of the shadow volume, and allows users to access the primary storage area by using the CIFS/Samba protocol.)

    cd /opt/novell/ncpserv/sbin
    modprobe fuse
    ./shadowfs



Important Note: Here, I am creating new NSS volumes for the primary and secondary volumes. If you want to use an existing volume that already has trustees as the secondary volume, and preserve those trustees, you need to copy them to the primary volume. If you do not, all of the old trustees will be overwritten with the trustees of the primary volume and lost. To preserve the existing trustee information, do the following:


  1. In NRM, log in as the root user.

  2. Select Manage NCP Services > Manage Shares to see the "NCP Shares" page, where a list of active volumes is displayed.

  3. Dismount both the NSS volumes from NCP Server that you want to use as the primary volume and secondary volume by selecting the Unmount button next to each volume.

  4. Open a terminal console as the root user, then copy the trustee file ".trustee_database.xml" from the secondary volume location to the primary volume location as follows:

    cp /media/nss/secondary_volumename/._NETWARE/.trustee_database.xml \
       /media/nss/primary_volumename/._NETWARE/.trustee_database.xml

  5. Now mount both the volumes for NCP Server by selecting the Mount button next to each volume in "NCP Shares" page.

  6. At the terminal console prompt, enter "ncpcon nss sync=primary_volumename" to synchronize the NSS trustee information with NCP Server, then continue with step 1.2 below.
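
The trustee-preservation copy above can be sketched as a small shell helper. This is an illustrative sketch, not part of the original procedure: the function name and the base-path parameter are mine, while the /media/nss default and the ._NETWARE/.trustee_database.xml location come from the steps above.

```shell
#!/bin/bash
# Hypothetical helper sketching the trustee-preservation copy described above.
# The base directory defaults to /media/nss as in this AppNote, but can be
# pointed at a scratch directory for testing.
copy_trustee_db() {
    local secondary="$1" primary="$2" base="${3:-/media/nss}"
    local src="$base/$secondary/._NETWARE/.trustee_database.xml"
    local dst="$base/$primary/._NETWARE/.trustee_database.xml"
    if [ ! -f "$src" ]; then
        echo "trustee database not found: $src" >&2
        return 1
    fi
    mkdir -p "$(dirname "$dst")"   # ensure the primary ._NETWARE dir exists
    cp "$src" "$dst"
}
```

Remember that, per the steps above, both volumes must be dismounted from NCP Server before the copy, and "ncpcon nss sync" run after remounting.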




1.2 Unbind the secondary NSS volume, SECONDARY from NCP



The NCP/NSS Bindings parameter is enabled by default for all NSS volumes, making every volume NCP accessible. We need to make sure that the secondary volume is not automatically mounted by NCP Server on system restart. This can be done as follows:




  1. Login to NRM and click on "Manage NCP Services" and then "Manage Shares" to open the "NCP Shares" page.



  2. Click on the "NCP/NSS Bindings" button to open up the "NCP/NSS Bindings" page.



  3. Select "No" under "NCP Accessible" for SECONDARY and click "Save Selection" and then click on the "Share management" button to come back to the "NCP Shares" page.


  4. Now verify that the SECONDARY volume is not seen in the "NCP Shares" page.


    This completes unbinding the SECONDARY volume from NCP.



1.3 Make SECONDARY the secondary volume of PRIMARY



  1. In the NRM of wgp-dt83, click on "View File system" then select the "Dynamic Storage Technology Options".



  2. Click on "Add Shadow" against the primary volume, PRIMARY to open up the "PRIMARY share Information" page.



  3. Click on "Add Shadow volume" from the bottom of the page.



  4. Type in the path of the secondary volume "/media/nss/SECONDARY" and then click on the "Create" button and get the PRIMARY Share Information page.





Verify that you can see the shadow volume path /media/nss/SECONDARY as above.
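
The shadow volume configuration can also be verified from the server console. A minimal sketch, relying on the fact (shown in section 3.2 below) that the setup writes a SHADOW_VOLUME line to /etc/opt/novell/ncpserv.conf and an EXCLUDE_VOLUME line to /etc/opt/novell/ncp2nss.conf; the function name is mine, and the file paths are parameterized so the check can be exercised against copies of the files.

```shell
#!/bin/bash
# Sketch: verify that the DST shadow volume configuration landed in the two
# NCP configuration files. Paths default to the OES locations used in this
# AppNote but can be overridden for testing.
verify_shadow_config() {
    local ncpserv="${1:-/etc/opt/novell/ncpserv.conf}"
    local ncp2nss="${2:-/etc/opt/novell/ncp2nss.conf}"
    grep -q "^SHADOW_VOLUME PRIMARY /media/nss/SECONDARY" "$ncpserv" &&
    grep -q "^EXCLUDE_VOLUME SECONDARY" "$ncp2nss"
}
```

The function returns success only when both lines are present, so it can be used directly in a script's `if` test.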



2. Verify that the NCP client can access the shadow volume and can see all the files in the primary volume as well as the secondary volume.




  1. Map the PRIMARY volume using the NCP client as follows. Here I have used the NCP client for Windows, with the admin user and the IP address of POOL1.



  2. Click "Map"

  3. Provide the admin password, eDirectory tree name, and admin context, then click OK.



    Then we get the following.





  4. Here we can see all the files in the primary and secondary volumes. So we conclude that the DST shadow volume is working.



3. Configure DST shadow volume to work on Cluster environment


3.1. Configure DST Policies.


3.1.1. Configure same DST global policies in all the nodes of the cluster, where the cluster resources will fail over or migrate to.



Make sure that the same global policies are configured on each node where you want to fail over the cluster resource.




  1. Login to the NRM of the server (https://<IPAddressOfTheServer>:8009).

    Screenshot follows for first node wgp-dt83 of the cluster IDM361.

  2. Click on "Manage NCP Services" in the left panel.

  3. Then click on "Manage Servers" under "Manage NCP Services" to get the "Server Parameter Information" page.



  4. Note the values of following parameters from the "Server Parameter Information" page.

    REPLICATE_PRIMARY_TREE_TO_SHADOW
    SHIFT_MODIFIED_SHADOW_FILE
    SHIFT_ACCESSED_SHADOW_FILE
    SHIFT_DAYS_SINCE_LAST_ACCESS
    DUPLICATE_SHADOW_FILE_ACTION
    DUPLICATE_SHADOW_FILE_BROADCAST



  5. Repeat steps 1 through 4 for all other nodes of the cluster and verify that all of the nodes have the same values for these parameters.
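
Comparing the six parameters node by node in NRM is tedious, so a console sketch may help. This is a hedged assumption: I am assuming the global DST parameters are persisted as "NAME value" lines in /etc/opt/novell/ncpserv.conf (verify the location on your own system); the function name is mine.

```shell
#!/bin/bash
# Sketch: print the six DST global parameters from an ncpserv.conf-style file
# so the output can be diffed across cluster nodes. Assumes each parameter is
# stored as a "NAME value" line; verify this on your own OES system.
dump_dst_globals() {
    local conf="${1:-/etc/opt/novell/ncpserv.conf}"
    local p
    for p in REPLICATE_PRIMARY_TREE_TO_SHADOW SHIFT_MODIFIED_SHADOW_FILE \
             SHIFT_ACCESSED_SHADOW_FILE SHIFT_DAYS_SINCE_LAST_ACCESS \
             DUPLICATE_SHADOW_FILE_ACTION DUPLICATE_SHADOW_FILE_BROADCAST; do
        # Print the configured line, or a placeholder if the directive is absent.
        grep "^$p" "$conf" || echo "$p <not set>"
    done
}
```

Running this on each node (for example via ssh) and diffing the outputs quickly shows whether the nodes agree.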



3.1.2. Configure DST policy for the particular volume on the first node of the cluster.




  1. Login to the NRM of wgp-dt83 by typing https://<IP address>:8009.

  2. Click View File System and then Dynamic Storage Technology Options.

  3. Make sure you are using the NRM of wgp-dt83, the node where POOL1 is currently running.

  4. On the "Dynamic Storage Technology Options" page, click the "Create a new policy" button to open the "Create New Shadow Volume Policy" page.

  5. Customize the policy as needed. Here, I have created a simple policy to move all the .txt files from the Primary volume to the Secondary volume. To do this, fill in the following parameters on the "Create New Shadow Volume Policy" page, leaving the other parameters/fields at their default values (screenshot follows).

    Description (required): PRIMARY Policy

    Volume Selection: PRIMARY

    Search Pattern: *.txt*






  6. Click Submit to complete the policy creation and get back to the Dynamic Storage Technology Options page.


  7. Now we see the DST policy, PRIMARY Policy, under "Dynamic Storage Technology Policies" with a Last Executed status of "Not executed". Let us now check whether this policy works. The file structure in the primary and secondary volumes before the policy is executed is as follows: in the PRIMARY volume I have three .txt files, and in SECONDARY I have three .odt files.



  8. Now, execute the policy and check if the policy is working or not. To do this, click on the policy name "PRIMARY policy" under "Dynamic Storage Technology Policies" in the "Dynamic Storage Technology Options" page and select the "Execute Now" option under the Frequency tab.

    Screenshot follows:









  9. Now click "Submit" to execute the policy.



    Now we see that the Last Executed status is updated with the time and number of files moved. Remember, by default the Volume Operation is "Move selected files from primary area to shadow area" in the policy configuration.



  10. Verify that all the .txt files in the Primary volume have moved to the Secondary volume by logging in to the server terminal.



  11. Now, verify that the NCP client can still see all the files. There should be no impact on the client's view; the client does not even know where the files physically reside.


As of now, we have configured the DST shadow volume and policy and verified DST functionality with the NCP client. However, the resources are not yet ready for migration or failover from one node to another in this cluster environment. NCP and DST go hand in hand, but they are not cluster aware, meaning both need to be installed on every node where a resource that contains shadow volumes will migrate or fail over. So we need to ensure certain things: the primary and secondary volumes fail over together, the shadow volume configuration is consistent across all the nodes of the cluster, a separate policy is created for each shadow volume, and each shadow volume policy fails over along with the volume. These steps are shown below.




3.2 Copy the shadow volume configurations from the 1st node's files /etc/opt/novell/ncpserv.conf and /etc/opt/novell/ncp2nss.conf to all other nodes of the cluster.



Because of the shadow volume configuration on the first node, wgp-dt83, the line "SHADOW_VOLUME PRIMARY /media/nss/SECONDARY" is added to the /etc/opt/novell/ncpserv.conf file and the line "EXCLUDE_VOLUME SECONDARY" is added to /etc/opt/novell/ncp2nss.conf. Copy these lines to the respective configuration files of other nodes of the same cluster. Here wgp-dt84 is the other node of the cluster IDM361. So in wgp-dt84, I have added/copied the line "SHADOW_VOLUME PRIMARY /media/nss/SECONDARY" in the /etc/opt/novell/ncpserv.conf file and the line "EXCLUDE_VOLUME SECONDARY" in /etc/opt/novell/ncp2nss.conf.



So, all the nodes/servers of the cluster will have the line "SHADOW_VOLUME PRIMARY /media/nss/SECONDARY" in ncpserv.conf and "EXCLUDE_VOLUME SECONDARY" in the ncp2nss.conf file.







In wgp-dt84:
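
The manual copy described above can also be scripted. A hedged sketch: it appends the two lines only if they are not already present, so it is safe to run repeatedly on each remaining node. The helper names are mine, and distributing the script to the other nodes (for example via ssh) is left to the reader.

```shell
#!/bin/bash
# Sketch: add the shadow volume lines to a node's NCP configuration files,
# skipping lines that are already present (idempotent). Paths can be
# overridden so the logic can be tested against scratch copies of the files.
add_line_once() {
    local line="$1" file="$2"
    # -x matches the whole line, -F treats it as a fixed string.
    grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

sync_shadow_config() {
    local ncpserv="${1:-/etc/opt/novell/ncpserv.conf}"
    local ncp2nss="${2:-/etc/opt/novell/ncp2nss.conf}"
    add_line_once "SHADOW_VOLUME PRIMARY /media/nss/SECONDARY" "$ncpserv"
    add_line_once "EXCLUDE_VOLUME SECONDARY" "$ncp2nss"
}
```

Because the append is conditional, running the script twice leaves exactly one copy of each line in each file.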







3.3 Configure the load and unload script for the shadow volume



As mentioned before, we need to make sure that the primary and secondary volumes fail over together when we deal with a shadow volume in a cluster environment. If the primary and secondary volumes have belonged to the same cluster resource from the moment the volumes were created, few modifications are needed. However, if the volumes come from two different pools, several modifications are required; this AppNote shows how to make them. Continue with the next step.



3.3.1 Offline the pools, POOL1 and POOL2, which have the primary and secondary volumes




  1. Login to iManager and click on Clusters, then "Cluster Manager" to bring up the "Cluster Manager" page.

  2. Browse the cluster object using the object browser and select the cluster object. Here the name of the cluster is IDM361. Then we get the "Cluster Manager" page updated with the resources.

    screenshot below:


  3. Check the check boxes against POOL1_SERVER and POOL2_SERVER and click "Offline".





3.3.2 Copy the load and unload scripts for the pool, POOL2 which has secondary volume SECONDARY




  1. In the "Cluster Manager" page, click on the pool name, POOL2_SERVER to bring up the "Cluster Pool Properties" page. Click on the "Scripts" tab.



  2. Copy the load and unload scripts to a temporary text file for reference. I have copied the load and unload scripts as shown below.


POOL2 Load script:

#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
exit_on_error nss /poolact=POOL2
exit_on_error ncpcon mount SECONDARY=253
exit_on_error add_secondary_ipaddress 164.99.103.199
exit_on_error ncpcon bind --ncpservername=IDM361_POOL2_SERVER --ipaddress=164.99.103.199
exit 0

POOL2 Unload script:

#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
ignore_error ncpcon unbind --ncpservername=IDM361_POOL2_SERVER --ipaddress=164.99.103.199
ignore_error del_secondary_ipaddress 164.99.103.199
ignore_error nss /pooldeact=POOL2
exit 0



In the above scripts, the POOL2-specific lines will be carried over into the load and unload scripts of POOL1, the pool that has the primary volume, PRIMARY: the pool activation and deactivation commands are kept active there, while the mount, secondary IP address, and bind/unbind commands are commented out. The basic idea is to control both the primary and secondary volumes using the load and unload scripts of a single cluster pool resource.


3.3.3 Modify the load and unload scripts of the pool, POOL1 that has primary volume, PRIMARY, to be used for the shadow volume.



Note: Make sure you do not delete anything from primary pool’s Load and unload scripts. The only thing you need to do is to add the lines that are specific to the secondary pool and comment out (for future use) the lines that are not necessary.




  1. Modify the load script: In the "Cluster Manager" page, click on the pool name, POOL1_SERVER to bring up the "Cluster Pool Properties" page. Then click on the "Scripts" tab to view/edit the load script.

    The original Load script for primary pool, POOL1 is as follows:



    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    exit_on_error nss /poolact=POOL1
    exit_on_error ncpcon mount PRIMARY=254
    exit_on_error add_secondary_ipaddress 164.99.103.198
    exit_on_error ncpcon bind --ncpservername=IDM361_POOL1_SERVER --ipaddress=164.99.103.198
    exit 0



    Now it is modified as follows:



    Modified/combined Load script for primary pool, POOL1:




    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    exit_on_error nss /poolact=POOL2
    sleep 10

    exit_on_error nss /poolact=POOL1
    sleep 10
    #exit_on_error ncpcon mount PRIMARY=254
    #exit_on_error ncpcon mount SECONDARY=253
    exit_on_error ncpcon mount PRIMARY=254,shadowvolume=SECONDARY
    exit_on_error add_secondary_ipaddress 164.99.103.198
    #exit_on_error add_secondary_ipaddress 164.99.103.199
    exit_on_error ncpcon bind --ncpservername=IDM361_POOL1_SERVER --ipaddress=164.99.103.198
    #exit_on_error ncpcon bind --ncpservername=IDM361_POOL2_SERVER --ipaddress=164.99.103.199
    exit 0



    Notes on load script:




    • The secondary pool, POOL2, is activated well before the primary pool, POOL1.

    • A sleep of 10 seconds is added after each pool activation.

    • The ncpcon mount commands for the individual primary and secondary volumes are commented out.

    • A mount command for the clustered shadow volume is added (exit_on_error ncpcon mount PRIMARY=254,shadowvolume=SECONDARY).

    • All the POOL2-specific lines are added but commented out (#). This is not strictly necessary, though.




  2. Modify the Unload script:

    The original Unload script for the primary pool, POOL1:



    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    ignore_error ncpcon unbind --ncpservername=IDM361_POOL1_SERVER --ipaddress=164.99.103.198
    ignore_error del_secondary_ipaddress 164.99.103.198
    ignore_error nss /pooldeact=POOL1
    exit 0



    Now it is modified as follows:


    Modified/combined Unload script for the primary pool, POOL1



    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs
    ignore_error ncpcon unbind --ncpservername=IDM361_POOL1_SERVER --ipaddress=164.99.103.198
    #ignore_error ncpcon unbind --ncpservername=IDM361_POOL2_SERVER --ipaddress=164.99.103.199
    ignore_error del_secondary_ipaddress 164.99.103.198
    #ignore_error del_secondary_ipaddress 164.99.103.199
    ignore_error nss /pooldeact=POOL1
    ignore_error nss /pooldeact=POOL2
    exit 0



    Screenshots for the Load and Unload scripts for POOL1 are shown below.



    Screenshot for the Load Script:





    Screenshot for the Unload Script:







3.3.4 Bring the Primary pool, POOL1 online using the cluster manager page in iManager. Secondary pool, POOL2 should remain offline.




  1. In iManager, click on Clusters then Cluster Manager and select the cluster object using the object browser to bring up the "Cluster Manager" page of that cluster.



  2. Check the check box to select POOL1_SERVER and click Online to bring it up.



  3. Select the server where we want to run this pool resource and click OK.



    I have selected wgp-dt83.





    Cool! The POOL1_SERVER is running now in wgp-dt83. Note that the POOL2_SERVER remains offline.

  4. Now, let us make sure that the NCP client can still access the PRIMARY volume and can see all the files from both the PRIMARY and SECONDARY volumes. To do this, start the Novell Client.



  5. Type in 164.99.103.198/PRIMARY (that is, <IP address of POOL1>/<volume name>) and the admin user, and click the Map button.



    Then we get the next page.



  6. Provide the password, eDirectory tree name, and admin context, and click OK.



  7. Cool! The NCP client can see both the primary and secondary files together, so our load script is working. Similarly, we can offline the pool and verify that the NCP client can no longer access it, then bring it online again and verify that access is restored. This way we can check whether the load and unload scripts are working.



4. Cluster migrate the pool from current node, wgp-dt83 to another node, wgp-dt84 and verify that DST policy for the particular volume is also migrated.




  1. Just before the migration, there is no policy on wgp-dt84. To see this, login to the NRM of wgp-dt84 and click on "View File System" and then "Dynamic Storage Technology Options" to open the "Dynamic Storage Technology Options" page (screenshot follows). As shown below, we do not see any DST policy named "PRIMARY Policy".



  2. Now migrate the POOL1_SERVER from wgp-dt83 to wgp-dt84. To do this, login to iManager and click on Clusters then Cluster Manager and select the cluster object using the object browser to bring up the "Cluster Manager" page of that cluster.



  3. Check the checkbox against POOL1_SERVER to select and click Migrate.



  4. Select the server wgp-dt84 and click OK to start the migration and come back to the Cluster Manager page.


  5. Now the pool is running in wgp-dt84. Login to the NRM of wgp-dt84 and verify that the policy is migrated.



  6. Cool! The policy migrated along with the pool. Remember, we did not see any policy on wgp-dt84, as verified in step 1.

  7. Verify that the NCP client can still access the same primary volume, PRIMARY and all the files from PRIMARY and SECONDARY volumes are seen. (Follow step 2)
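
The iManager actions used in sections 3.3 and 4 also have command-line equivalents via the NCS `cluster` utility on an OES node. The sketch below is deliberately a dry run: the helper (its name is mine) only prints the command sequence for review; on a live cluster node you would run the printed commands themselves.

```shell
#!/bin/bash
# Dry-run sketch: print the NCS command-line equivalents of the iManager
# actions used in this AppNote (offline a resource, online it on a node,
# migrate it, and check status). Run the printed commands on a live node.
print_ncs_commands() {
    local resource="$1" node="$2"
    echo "cluster offline $resource"
    echo "cluster online $resource $node"
    echo "cluster migrate $resource $node"
    echo "cluster status"
}

print_ncs_commands POOL1_SERVER wgp-dt84
```

For this AppNote's migration step, the relevant printed command is "cluster migrate POOL1_SERVER wgp-dt84".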



5. Execute the DST policy in the current node, wgp-dt84 to verify that policy still works and also verify that NCP client can still access the files and have the same view as in step 2.



  1. Login to the NRM of the other node, wgp-dt84, to which the pool was migrated.

  2. Execute the DST policy "PRIMARY Policy". This time, let us move all the .txt files from secondary to primary, as we already moved all the .txt files from primary to secondary in step 3.1.2 (viii, ix, x). To do this, click on the policy name, select the "Execute now" option in the Frequency section, and set "Volume Operations:" to "Move selected files from shadow area to primary area".


    Screenshot follows.


    Below is the file structure for the Primary and Secondary volumes on wgp-dt84 before the policy is executed. Note that all the files, including the .txt files, are in the SECONDARY volume.






  3. Click Submit to execute the policy right now.


    Here you see that Last Executed is updated with the time, and Total Files Moved is also updated.



    Cool! DST policy is still working as expected.

  4. Verify the file structure after the policy is executed.



  5. After the policy is executed, we see that all the .txt files have moved from the SECONDARY volume to the PRIMARY volume.

  6. Also verify that the NCP client can still access the PRIMARY volume and see all the files located in both PRIMARY and SECONDARY volume. (Follow step 5).
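
The before/after file checks in steps 4 and 5 can be done from the server terminal. A small sketch, assuming the volumes are mounted at /media/nss as elsewhere in this AppNote; the function name is mine, and the directory argument lets the check be exercised against any path.

```shell
#!/bin/bash
# Sketch: count files matching a pattern under a volume mount point, e.g. to
# confirm that the .txt files have moved between /media/nss/PRIMARY and
# /media/nss/SECONDARY after the policy runs.
count_matching_files() {
    local dir="$1" pattern="$2"
    find "$dir" -type f -name "$pattern" | wc -l
}
```

For example, after the policy executes, count_matching_files /media/nss/SECONDARY '*.txt' should report 0, with the three .txt files counted under /media/nss/PRIMARY instead.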



This confirms that the policy still works after cluster migration and that there is no impact on the DST shadow volume functionality or the NCP client's view.


