
Setting Up a Business Continuity Cluster (BCC) in a Single eDirectory Tree Using OES 2 Linux Servers


Table of Contents



Aim of this AppNote

BCC overview

Set Up Details

Installation Steps:

     1. Install NCS

     2. Install BCC in all the nodes of all the clusters

     3. Install IDM and create/configure bcc drivers

     4. Configure the clusters for BCC

     5. Verify that your BCC is working by doing BCC migration

     6. Add third cluster into the BCC



Aim of this AppNote


This AppNote provides all the steps required to set up a BCC using a demo environment. It also demonstrates how to migrate a resource from one cluster to another within the BCC, and how to add a new cluster to an existing BCC setup.
However, configuring the mirrored storage between the clusters is not covered, as it is vendor specific and left to the user. Instead, we create an iSCSI target that is shared with (visible to) all the clusters, which serves our requirement.



BCC overview



Novell® Business Continuity Clustering (BCC) offers corporations the ability to maintain mission critical (24x7x365) data and application services to their users while still being able to perform maintenance and upgrades on their systems.



Novell BCC is a cluster of Novell Cluster Services (NCS) clusters in which cluster maintenance and synchronization are automated, allowing an entire site to fan out failover to multiple other sites. BCC uses eDirectory and policy-based management of the resources and storage systems.



Novell BCC software provides the following advantages:




  • Integrates with SAN hardware devices to automate the failover process using standards based mechanisms such as SMI-S.

  • Utilizes Novell Identity Manager technology to automatically synchronize and transfer cluster related eDirectory objects from one cluster to another.

  • Provides the capability to fail over as few as one cluster resource, or as many as all cluster resources.



Setting up a BCC involves the following steps:




  • Configuring NCS

  • Configuring Mirrored Storage

  • Installing the BCC 1.2 Beta 3 Software on Every Node in Each Cluster

  • Installing and configuring Identity Manager 3.6 on One Node of Each Cluster

  • Synchronizing the BCC IDM Drivers (for new cluster in BCC)

  • Configuring the Clusters for BCC

  • BCC-Enabling Cluster Resources



Each step, except Configuring Mirrored Storage, is described below in detail. For this demo, I am not using mirrored storage as such; refer to the Set Up Details section below for more detail.



Make sure that you read the section “Set Up Details” before you start the installation.



Requirements for BCC 1.2 Beta Test Environments:


Make sure that the OES 2 servers you are planning to use for the BCC setup meet the following requirements:



  • SUSE® Linux Enterprise Server (SLES) 10 SP2 (shipping version)

  • OES 2 SP1 Beta 2 Linux with the options necessary for Novell Cluster Services installed.



Set Up Details



eDirectory Structure:



The eDirectory structure plays an important role in the proper functioning of BCC. If you do not follow the points below, you may run into unpredictable behavior.







Fig. eDirectory structure for demo BCC set-up





  • Make sure that each cluster resides in its own OU, and that each OU resides in a different eDirectory partition.

  • As a best practice, put all the server objects, the cluster object, the driver objects, and the Landing Zone that belong to a single cluster into a single eDirectory partition, as shown above.



The eDirectory structure used for our BCC setup is shown above. It has three OUs,
cluster1, cluster2, and cluster3, for the three clusters, and each of them is a separate eDirectory partition. The first partition, cluster1, holds everything created for cluster1: the cluster1 landing zone (cluster1LandingZone), the server objects (wgp-dt82, wgp-dt83) for the two nodes of cluster1, the cluster object (cluster1), and cluster1's BCC IDM driver set (cluster1Drivers). The same applies to cluster2's partition, cluster2, and cluster3's partition, cluster3.

IDM (Identity Manager ) and its requirements:



BCC 1.2 requires IDM 3.6 or later to run on one node in each of the clusters that belong to the BCC in order to properly synchronize and manage your BCC.



Make sure that the node where IDM will be installed holds a full eDirectory replica with at least read/write access to all eDirectory objects that will be synchronized between clusters.




  • Make sure that at least one of the nodes in each cluster runs the 32-bit OES 2 SP1 Linux OS, for the IDM installation.

  • Make sure that the IDM node has eDirectory Read/Write access to the corresponding partition. In the above diagram, wgp-dt82 is 32-bit and has Read/Write access to the partition cluster1; similarly, wgp-dt84 and wgp-dt89 have Read/Write access to the cluster2 and cluster3 partitions, respectively.



Components Locations:



The above diagram also shows the BCC component locations, that is, which software is installed on which server. For example, wgp-dt82 (32-bit) NCS, BCC, IDM means that NCS, BCC, and IDM are installed on this server, and wgp-dt83 NCS, BCC means that NCS and BCC are installed and no IDM. The same notation applies to the other servers/nodes. From here on, let us refer to the nodes where IDM will be installed (wgp-dt82, wgp-dt84, wgp-dt89) as the “IDM nodes”.



Servers set up:



As mentioned above, this setup has six OES 2 SP1 Linux servers: five of them (wgp-dt82, wgp-dt83, wgp-dt84, wgp-dt86, wgp-dt89) are grouped to form the three clusters cluster1, cluster2, and cluster3, and the sixth (wgp-dt81) serves as the iSCSI target. Below is how our BCC setup looks.







cluster1: wgp-dt82, wgp-dt83

cluster2: wgp-dt84, wgp-dt86

cluster3: wgp-dt89

iSCSI target: wgp-dt81



Mirror storage set up:



Use whatever method is available to implement the mirrored storage between the clusters. The BCC 1.2 product does not perform data mirroring. You must separately configure either SAN-based mirroring or host-based mirroring.



Choosing and configuring the mirror storage is left to the user as it is vendor specific.



However, for this demo setup we will not be doing mirroring as such. Instead, we create an iSCSI target that is shared with (visible to) all the clusters. Hence, any modification/addition/deletion done on this shared device will be seen and reflected in all the clusters of the BCC. This removes the need to mirror the storage among the clusters. We will use the server wgp-dt81 as our iSCSI target server.



This iSCSI target server has 4 raw (unformatted) partitions. Three partitions of 2 GB each are used for NCS cluster-specific data, and one partition of 30 GB is used as common storage, shared with and visible to all the clusters. These partitions are exported as iSCSI targets with the iSCSI identifiers mentioned below, for easy reference and identification. From here on, these partitions will be referred to by their corresponding iSCSI identifiers. For example, the “cluster1sbd” partition means the partition created only for cluster1 and its related data, including the SBD partition.




  1. cluster1sbd: Used only by cluster1; the SBD partition for cluster1 is created on it. It is marked cluster sharable and connected to by all the nodes of cluster1.

  2. cluster2sbd: Used only by cluster2; the SBD partition for cluster2 is created on it. It is marked cluster sharable and connected to by all the nodes of cluster2.

  3. cluster3sbd: Used only by cluster3; the SBD partition for cluster3 is created on it. It is marked cluster sharable and connected to by all the nodes of cluster3.

  4. SharedDevice: The common storage shared with and visible to all the clusters. All the nodes of all the clusters connect to it, and it is cluster sharable.



Network details:



BCC is meant for clusters that are geographically separated, across a WAN. However, for this demo setup we will use servers on a single LAN and a single subnet, as shown in the above diagram.



Installation Steps:



Keeping all the notes mentioned above in mind, let us start with the installation. The first step of the BCC installation is to install Novell Cluster Services (NCS).



1. Install NCS



1.1. Prepare for iSCSI target server –Create partitions and export them as iSCSI target.



This section will differ if you use another method to implement the mirrored storage. Follow this process only if you use the same method I am using here to set up BCC.



Let us create the 4 partitions on the iSCSI target server, wgp-dt81, as mentioned in the Set Up Details above. Below are the steps to do this.



Steps:




  1. Log in, click Computer, then YaST2, search for Partitioner and click the Partitioner icon; or type yast2 disk in a terminal.

  2. Click Yes for the Pop up Warning message and get the Expert Partitioner wizard.

  3. Click on Create button.

  4. Select the Device where partitions need to be created and click OK.

  5. Select the Primary Partition from Partition Type and click OK.

  6. Select Do not format and enter the size in the box named End (2GB entered here), then click OK.

  7. The partition you have just created now appears on the next page; it is highlighted in the clip below.

  8. To create the next partition, click the Create button and repeat the process until you have the required number of partitions. After creating all four partitions, my screen looks like the one shown below.



  9. Click to view.



    Note that /dev/sda3, /dev/sda4, and /dev/sdb1 (2GB each) and /dev/sdb2 (30GB) are created.



  10. Now click Apply to confirm the partition creation.

  11. Click Apply on the pop up message to complete the task.

  12. To exit the Expert Partitioner wizard click the Quit button.
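If you prefer the command line to the Expert Partitioner, roughly the same layout can be sketched with parted. This is an illustrative, untested sketch: the device names (/dev/sda, /dev/sdb) and start/end offsets are assumptions for this demo, so adjust them to your own disks before running anything like this, as mkpart changes the partition table.

```shell
# Three 2 GB partitions and one 30 GB partition, non-interactively.
# Offsets assume /dev/sda already has two partitions ending near 10 GB
# and /dev/sdb is empty; check with "parted print" first.
parted -s /dev/sda mkpart primary 10GB 12GB   # becomes /dev/sda3
parted -s /dev/sda mkpart primary 12GB 14GB   # becomes /dev/sda4
parted -s /dev/sdb mkpart primary 0%   2GB    # becomes /dev/sdb1
parted -s /dev/sdb mkpart primary 2GB  32GB   # becomes /dev/sdb2

# Review the resulting layout.
parted -s /dev/sda print
parted -s /dev/sdb print
```

The partitions are left unformatted, matching the "Do not format" choice in the YaST steps above.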



Once you are done with the partitions, export them as iSCSI targets with the required identifier names as explained in Set Up Details, so that other servers (the initiators, in iSCSI terminology) can connect to them and use them as shared storage.



Below are the steps to do this.



Steps:




  1. Type yast2 iscsi-server in the console terminal of same iSCSI target server, wgp-dt81.

  2. Click Continue on the Pop Up message on the Initializing ISCSI Target Configuration page. This will take us to iSCSI Target Overview page.

  3. Select “When Booting” under Service Start and click the “Targets” tab on the iSCSI Target Overview page.

  4. If a target name is already present, select it, click the Delete button, and click Continue on the pop-up message to confirm the deletion.

  5. Click the Add button on the same iSCSI Target Overview page.

  6. Modify the Identifier field with an appropriate name for easy identification. Set the Identifier to “cluster1sbd” as explained above, and then click Add.

  7. Click the Browse button, select the 2GB partition (in my case sda3), click Open, and then OK. You will then get the page below.


  8. Click to view.


  9. Click the Next button. This will take you to Modify iSCSI Target page.

  10. Click Next again if authentication is not used; if it is, the authentication parameters need to be configured here. Our BCC setup does not require authentication, so leave it disabled (the default) and click Next.

  11. Click the Add button again and repeat the same process until all the partitions are listed as iSCSI targets with the unique identifiers you have given. After adding all the partitions, our page looks like this.


  12. Click to view.


  13. Now click Finish.

  14. Click Yes on the Restart the iscsitarget service? pop-up message on the Saving iSCSI Target Configuration page.
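Behind the YaST wizard, the iSCSI Enterprise Target daemon reads its targets from /etc/ietd.conf. A fragment equivalent to the four targets above might look like the sketch below; the IQN prefix and device paths are assumptions for this demo, and yours will differ:

```
Target iqn.2008-07.com.example:cluster1sbd
    Lun 0 Path=/dev/sda3,Type=fileio
Target iqn.2008-07.com.example:cluster2sbd
    Lun 0 Path=/dev/sda4,Type=fileio
Target iqn.2008-07.com.example:cluster3sbd
    Lun 0 Path=/dev/sdb1,Type=fileio
Target iqn.2008-07.com.example:sharedDevice
    Lun 0 Path=/dev/sdb2,Type=fileio
```

Inspecting this file is a quick way to confirm what the wizard actually saved before restarting the iscsitarget service.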



This completes the preparation of the iSCSI target. The iSCSI target server, wgp-dt81, is now ready to accept iSCSI connections from the initiators, the servers (wgp-dt82, wgp-dt83, wgp-dt84, wgp-dt86, wgp-dt89) that will form the clusters.



1.2. Prepare the servers that will be part of a cluster to connect to the corresponding iSCSI targets



After the target preparation, all the servers that will be part of the clusters need to establish iSCSI connections to the iSCSI target. While making the iSCSI connections, make sure that all the servers connect to the right iSCSI targets, as listed below.




  • The servers wgp-dt82 and wgp-dt83, which will be members of cluster1, should connect to two targets: 1. the target with identifier cluster1sbd, and 2. the target with identifier sharedDevice.

  • Similarly, the servers wgp-dt84 and wgp-dt86, which will be members of cluster2, should connect to two targets: 1. the target with identifier cluster2sbd, and 2. the target with identifier sharedDevice. The same applies to the server wgp-dt89, which will be a member of cluster3.



So, let us first go ahead with the initiator configuration for the servers wgp-dt82 and wgp-dt83, which will form cluster1. The same process can be repeated for the servers of the other clusters, making sure they connect to the right targets as explained above.



Lets start with wgp-dt82. Listed below are the steps to do this.



Steps:




  1. Type yast2 iscsi-client in the terminal of the server, wgp-dt82. This will take you to Initializing iSCSI Initiator Configuration page.

  2. Click Continue on the pop up message to continue installation of open-iscsi package. This will take you to iSCSI Initiator Overview page

  3. Select “When Booting” under Service Start and click the “Connected Targets” tab on the iSCSI Initiator Overview page.

  4. Click on the Add button to bring up the iSCSI Initiator Discovery page.

  5. Type the IP of the iSCSI target wgp-dt81 (164.99.103.81) in the IP Address field and click Next. (If you cannot see the exported iSCSI targets on the iSCSI Initiator Discovery page after clicking Next, check the following important note.)



  6. Click to view.



    Important Note: Even after the iSCSI target configuration, if you do not see the above page at all, the firewall settings of the iSCSI target server could be the culprit. A quick check is to disable the firewall on the iSCSI target server. If it works with the firewall disabled, re-enable the firewall and make sure that the “iSCSI Target” service is allowed through it.

    You can do this as follows: log in to the target server as root, bring up the firewall configuration wizard by typing “yast2 firewall”, click Allowed Services, select iSCSI Target from the Service to Allow drop-down menu, click Add, click Next, and click Accept to finish the configuration.


  7. Select the iSCSI target with identifier cluster1sbd, click Connect, and then Next. Verify that the Connected state is now True.

  8. Select the second iSCSI target, with identifier sharedDevice, click Connect, and then Next. Verify that its Connected state is also True.

  9. At this point both targets are connected, and our iSCSI Initiator Discovery page looks as shown below.



  10. Click to view.



  11. At this point wgp-dt82, the first server of cluster1, has connected to all the required iSCSI targets; we do not need to connect to any more. So let's continue with the remaining iSCSI configuration. Click Next. This will take you to the iSCSI Initiator Discovery page with the “Connected Targets” tab selected.

  12. Select the targets one by one and click on Toggle Start-Up button to toggle the Start-Up mode to automatic.

  13. Click Finish to exit the iSCSI Initiator Discovery wizard and complete the Initiator set up for wgp-dt82.

  14. Now verify that all the connected disks are listed by the lsscsi command in the server terminal.


  15. Click to view.




This completes the iSCSI connections to the corresponding iSCSI targets for the wgp-dt82 server, which will be part of cluster1.
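For reference, the same discovery and login can also be done from the command line with open-iscsi's iscsiadm tool. This is a rough sketch only: the target IP is the one from this demo, and the IQNs are assumptions; copy the real ones from the discovery output before logging in.

```shell
# Discover the targets exported by wgp-dt81 (IP is this demo's; adjust).
iscsiadm -m discovery -t sendtargets -p 164.99.103.81

# Log in to the two targets that cluster1 nodes need. The IQNs below are
# placeholders; use the names printed by the discovery step.
iscsiadm -m node -T iqn.2008-07.com.example:cluster1sbd -p 164.99.103.81 --login
iscsiadm -m node -T iqn.2008-07.com.example:sharedDevice -p 164.99.103.81 --login

# Make the session persistent across reboots (the command-line equivalent
# of toggling Start-Up mode to "automatic" in YaST).
iscsiadm -m node -T iqn.2008-07.com.example:cluster1sbd -p 164.99.103.81 \
    --op update -n node.startup -v automatic

# Verify that the new disks are visible.
lsscsi
```

This is handy when repeating step 1.2 on many nodes, since the same commands can be scripted rather than clicked through.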



Repeat step 1.2 for the server wgp-dt83, which will be the other member of the same cluster, cluster1.



Repeat step 1.2 for all the other servers that will be members of the other clusters, cluster2 and cluster3. While making the iSCSI connections, make sure that all the servers connect to the right iSCSI targets as mentioned above.



This completes the initiator and target configuration. All the servers are now ready for setting up the Novell Cluster Services (NCS) clusters. Let us go ahead with the NCS configuration.



1.3. Configure NCS using Yast2


1.3.1. Initialize the devices that correspond to the connected iSCSI targets and make them cluster sharable



Once the iSCSI targets are connected, they are visible as devices in nssmu. They need to be initialized and made cluster sharable before they can be used for a cluster. Make sure that each device is initialized only once, from a single server, not from multiple servers connected to it.



Let’s initialize the devices that correspond to the iSCSI targets with identifiers cluster1sbd and sharedDevice, and make them cluster sharable, from wgp-dt82. The change is reflected on all the servers connected to these targets, so it does not need to be repeated on the other servers.



Similarly, initialize the devices that correspond to the iSCSI targets with identifiers cluster2sbd and cluster3sbd from wgp-dt84 and wgp-dt89, respectively.



Let us start with the devices that correspond to cluster1sbd and sharedDevice, from wgp-dt82. Given below are the steps to do this.



Steps:




  1. Log in to wgp-dt82 as root and invoke the NSS Management Utility by typing nssmu.

  2. Select Devices from the Main Menu and press Enter key.

  3. Select the device that corresponds to the iSCSI target with identifier cluster1sbd (2GB) from the list of devices using the up/down arrows, press F3 to initialize the device, and type Y when the confirmation message pops up. Then press F6 to make the device cluster sharable.

  4. Select the second device, which corresponds to the iSCSI target with identifier sharedDevice (30GB), press F3, confirm the initialization with OK, and then press F6.

  5. To exit the NSS utility, press Esc several times.



At this point we are done with initializing and cluster-sharing the devices that correspond to the iSCSI targets cluster1sbd and sharedDevice. We are now ready for cluster setup and configuration, so let us go ahead with it.



1.3.2. Setting up cluster- the NCS configuration.



Let us start with wgp-dt82, which is going to be the first member of cluster1. It is assumed that the NCS packages are already installed but not yet configured. Let us do the configuration now. Given below are the steps, along with screenshots.



Steps:




  1. Login to wgp-dt82 and type “yast2 ncs” on the terminal to launch NCS configuration wizard.


  2. Click to view.




    Click to view.



  3. Press Continue on the pop up message.


  4. Click to view.


  5. Enter the Admin password and click OK to proceed.


  6. Click to view.


  7. Select New Cluster, check both Directory Server Address boxes, enter the FDN of the cluster (make sure to enter the correct context, conforming to the requirements for the eDirectory partitions mentioned in the eDirectory structure section), the IP address of the cluster, and the storage device with shared media, sdc, the device that corresponds to the iSCSI target with identifier cluster1sbd (this is the device we noted down during step 1.3.1), then click Next.


  8. Click to view.


  9. Click on Finish button to finish the configuration and exit the wizard


  10. Verify that the cluster is running on this server/node, wgp-dt82, the first node of the cluster cluster1:



Click to view.


Cool, we are done with the first node of cluster1. Let us make the second server, wgp-dt83, join this cluster as the second member. To do this, follow the steps below.




  1. Launch the NCS configuration by typing yast2 ncs in the second server wgp-dt83’s terminal. This takes you to the Initializing NCS Configuration page.

  2. Click Continue on the pop up message in Initializing NCS configuration page to continue the configuration. This takes you to Novell Cluster Services Configuration page.

  3. Enter the admin password and click OK to continue. This takes you to the below page.


  4. Click to view.


  5. Select Existing Clusters, check both Directory Server Address boxes, enter the FDN of cluster1, and then click Next to proceed.

  6. Click Finish to save the configurations and setting and exit the NCS configuration wizard.

  7. Now verify that the cluster is running on this server and that it is the second node of cluster1.


  8. Click to view.




Cool, we are done with the setup and configuration of the first cluster, cluster1, and it is running fine.
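The cluster state can also be checked from any node with the NCS command-line tools, as a quick sketch; the exact output format varies by version, and the rcnovell-ncs script name is an assumption based on a standard OES 2 install:

```shell
# Show the nodes currently joined to the cluster and which one is master.
cluster view

# Show the cluster resources and their current states.
cluster status

# Check the NCS service itself via its init script (name assumed for OES 2).
rcnovell-ncs status
```

Running these on wgp-dt82 and wgp-dt83 is a fast way to confirm both nodes joined cluster1 before moving on.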



1.3.3. Follow the same steps, 1.3.1 (initialization) and 1.3.2 (NCS configuration), to set up and configure the remaining clusters, cluster2 and cluster3.



At this point all our clusters, cluster1, cluster2, cluster3 are configured and running. So we are ready to proceed with the BCC Installation.



2. Install BCC in all the nodes of all the clusters



As mentioned in the “Set Up Details” notes on eDirectory, IDM, and BCC component locations, like NCS, BCC needs to be installed on all the nodes of the clusters that are part of the BCC. So let us now start by installing it on one server, wgp-dt82. For the other servers, we can repeat the same process.



2.1. Add the BCC admin user(s), bccadmin (the name can be anything). This is optional: if you wish to use a separate user to administer the BCC, go ahead with this step; otherwise skip it, as the eDirectory admin can also manage BCC.




  1. In iManager click on Users and then Create Users.


  2. Click to view.


  3. Fill in the required values and click on OK to create the user.

  4. Click on OK to complete the task



2.2. Create the BCC group bccgroup, all lower case (the name is hard-coded as of BCC 1.2)



Given below are the steps to do this.


Steps:




  1. In iManager, click on Groups then Create Group.


  2. Click to view.


  3. Fill in the values in the required fields. Make sure to select the proper context, wherever applicable.

  4. Click on OK to create the group. This will take you to the completion page.

  5. Click OK to exit the wizard.



2.3. Now add the BCC admin user(s) bccadmin, as members of the group-bccgroup.



Given below are the steps to do this.



Steps:




  1. In iManager click on Groups then Modify Groups.

  2. Click on object selector button.

  3. From the object selector page, browse for the group and select the group by clicking on the group name, bccgroup. This will close the Object selector pop-up window.

  4. Then click OK to modify the object.

  5. Click on the Members tab then click on object selector button to bring up the object selector window.

  6. Select the user, bccadmin from the Object selector browser and click on OK to complete the selection. This will close the object selector pop-up window.

  7. Click on Apply to save the changes and then OK to complete the task.


  8. Click to view.




2.4. LUM-enable the group bccgroup and include all the workstations of all the clusters


Given below are the steps to do this.



Steps:




  1. In iManager, click on Linux User Management then Enable Groups for Linux.

  2. Click on object selector button to bring up the object selector browser on Step 1 of 2: Select groups page.

  3. Select the group, bccgroup and click on OK to complete the selection. This will close the object selector browser.

  4. Click on Next>> button on Step 1 of 2: Select groups page.

  5. Click on Next>> button on Step 1a of 2: Confirm Selected Groups. This will take you to Step 2 of 2: Select Workstations page.

  6. Click on the Object selector button to select the servers to bring up Object Selector browser.

  7. Browse and select all the servers (nodes) of all the clusters and click OK to complete the selection. This will close the Object Selector browser and take you back to the Step 2 of 2: Select Workstations page.


  8. Click to view.



  9. Click on Next>>. This will take you to Summary page.

  10. Click on Finish on the Summary page to complete the task.

  11. Click on OK on Complete: Success page to exit the page.



2.5. Add the BCC admin user(s), bccadmin, as a trustee of all the cluster objects. This is not required for the eDirectory tree admin.

Given below are the steps to do this.



Steps:




  1. In iManager, click on Rights then Modify Trustees.

  2. Click on object selector button

  3. Browse and select the cluster object ,cluster1 by clicking on the object name.


  4. Click to view.


  5. Click on OK

  6. Click on Add Trustee button to bring up the object selector browser.

  7. Select the user bccadmin from the object selector browser and click OK to complete the selection.

  8. Now click on Assigned Rights link

  9. Modify the Assigned Rights as per your requirements and click the Done button. Let us give bccadmin full access.

  10. Then click on the Apply button to save the changes

  11. Click OK on the Complete: Modify Trustee successful page to exit.



2.6. Repeat the same process (step 2.5: Add the BCC admin user(s) as trustee ….) for all the other cluster objects, cluster2 and cluster3



2.7. Add the BCC admin user(s), bccadmin, to the ncsgroup by editing the /etc/group file.



Below is how it can be done. Log in to the server as root, open the /etc/group file, and find either of the following lines:



ncsgroup:!:107:

or

ncsgroup:!:107:bccd



The file should contain one of the above lines, but not both.



Depending on which line you find, edit the line to read as follows:


ncsgroup:!:107:bccadmin,<other users separated by comma if any>

or

ncsgroup:!:107:bccd,bccadmin,<other users separated by comma if any>



Replace bccadmin with the BCC Administrator user you created.


Notice the group ID number of the ncsgroup. In this example, the number 107 is used. This number can be different for each cluster node.



Let us start with wgp-dt82, the first node of cluster1, and put bccadmin and admin in the ncsgroup. Below is how it can be done.




Click to view.



Save the /etc/group file.



Execute id <bcc admin user name> and verify that ncsgroup appears as a secondary group of the BCC Administrator user(s).
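The /etc/group edit above can also be scripted. The sketch below runs against a scratch copy so nothing real is touched; bccadmin and the sample group line are the example values from this section, and on a real node you would run the sed against /etc/group as root (remembering that the GID may not be 107):

```shell
# Work on a scratch copy of the group file for this demo.
groupfile=$(mktemp)
printf 'ncsgroup:!:107:bccd\n' > "$groupfile"

# Append bccadmin to the ncsgroup member list. The first substitution
# handles a non-empty member list (appends ",bccadmin"); if it matched,
# "t" skips the second, which handles an empty member list.
sed -i 's/^\(ncsgroup:[^:]*:[^:]*:\)\(.\+\)$/\1\2,bccadmin/; t; s/^\(ncsgroup:[^:]*:[^:]*:\)$/\1bccadmin/' "$groupfile"

cat "$groupfile"   # -> ncsgroup:!:107:bccd,bccadmin
rm -f "$groupfile"
```

Because the two substitutions are mutually exclusive, the same one-liner works for both variants of the ncsgroup line shown above, which makes it convenient to repeat on every node in step 2.8.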




Click to view.



2.8. Repeat step 2.7 in all the other servers of all clusters.



2.9. Download the BCC software and install the packages



Now let us download the BCC software in all the servers. Let us start with wgp-dt82.




  1. Login to wgp-dt82 as root user and open a terminal and type yast2 add-on.

  2. Select the method to download the packages (RPMs) on the Add-on Product Media page. We used HTTP, so select HTTP and click Next. This takes you to the next page.



  3. Click to view.



  4. Type the server name and location of the corresponding packages. This brings up the License Agreement page.

  5. Select Yes I Agree to the License Agreement and click Next. This will take you to the next page.


  6. Click to view.


  7. Click the Filter drop-down list and select Patterns from the menu.


  8. Click to view.


  9. Now Novell Business Continuity Cluster will be shown under the Additional Software section. Check that checkbox to select all the RPMs (novell-business-continuity-cluster.rpm, yast2-novell-bcc.rpm, and novell-business-continuity-cluster-idm.rpm).

    Note: novell-business-continuity-cluster-idm.rpm is mandatory for the node where IDM will be installed and optional for the other, non-IDM nodes, so I select all the RPMs on all nodes to keep things simple.






  10. Click to view.



  11. Click on Accept button to install the packages.

  12. Click No on the Install or remove more packages? pop-up message to complete and exit the package installation.



2.10. Configure BCC in all the nodes of the clusters



Now we can start BCC configuration. Let us start with wgp-dt82 of cluster1.




  1. Login to the server, wgp-dt82 as root and type yast2 novell-bcc in the server terminal to launch BCC configuration.


  2. Click to view.




    Click to view.



  3. Click Continue if prompted to configure LDAP.



  4. Click to view.



  5. Specify the eDirectory admin password and click OK.


  6. Click to view.



  7. Select/check both Directory Server Address boxes and click Next. This will take you to the Novell Business Continuity Cluster (BCC) Configuration Summary page.

    Note: The already existing cluster will have been entered for you in the Existing Cluster DN field, so we do not need to modify anything. However, make sure that the “Start Business Continuity Cluster Service Now” option is checked so that the BCC software starts once the installation is completed.




  8. Click Next to install BCC and then click Finish to save and complete the BCC configuration and exit the wizard.

  9. Verify that the BCC software is running on the server now.


  10. Click to view.
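One way to check from the terminal that the BCC components are up is via the service init scripts; the script names here are assumptions based on a standard OES 2 layout, so verify them against your install:

```shell
# BCC administration service (name assumed; installed by the BCC rpms).
rcnovell-bcc status

# The underlying NCS cluster should also still be running.
rcnovell-ncs status
cluster view
```

If either service reports as stopped, start it before proceeding to the IDM installation in step 3.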





2.11. Repeat steps 2.9 to 2.10 in all the nodes of all the clusters.



3. Install IDM and create/configure bcc drivers



3.1. Install IDM on one node (a 32-bit server) in each cluster



We need to install IDM on all the IDM nodes (wgp-dt82,wgp-dt84,wgp-dt89).



To start, let us install IDM for cluster1 on wgp-dt82.



Steps are given below:




  1. Log in to wgp-dt82, open a terminal, and start the provided installation script, install.bin, by loop-mounting the ISO as shown below.

    Note: In the below screen, the downloaded IDM 3.6 ISO is “Identity_Manager_3_6_Linux.iso”.





  2. Click to view.




    Click to view.



  3. Select the language and Click OK. This will take you to License Agreement page:

  4. Select I Accept the terms of the License agreement and click Next on License Agreement page. This will take you to the Select Components page.

  5. On the Select Components page, just click Next to go with the default option; the default is enough.

  6. Click OK on the Identity Manager Activation Notice message to install with the 90-day trial period; it can be activated later. This will take you to the Authentication page.

  7. On Authentication page specify the eDirectory admin DN and password and click on Next to get Pre-Installation Summary page.


  8. Click to view.


  9. Click Install on the Pre-Installation Summary page and wait until it completes and you get the Install Complete page. This takes some time.

  10. Click on Done on the Install Complete page to complete the installation and exit the wizard. Note that this page prompts to restart the application server.

  11. Restart the application server (Tomcat) as prompted on the Install Complete page. To do this, log in as root to the server where IDM is installed and restart Tomcat as shown below.



  12. Click to view.


  13. Now verify that the IDM plug-ins are installed in iManager on the same server. The plug-ins are shown in the red box in the clip below, for reference.



  14. Click to view.





3.2. Repeat step 3.1 for the other two IDM nodes of the other two clusters cluster2 (wgp-dt84) and cluster3 (wgp-dt89)



3.3. Configure/Create BCC IDM drivers



Let us first configure BCC for two clusters (cluster1 and cluster2) and then join the third cluster, cluster3. This is because BCC driver creation and configuration require some special care, especially when dealing with three or more clusters. So let us set the third cluster's BCC IDM driver configuration aside for a while; we will come back to it in step 6.



3.3.1. Create/Configure BCC driver for cluster1 to sync with cluster2



Log in to the iManager of one of the IDM nodes where the IDM plug-ins are installed. Here we use wgp-dt82’s iManager.




  1. In iManager, click Identity Manager > Identity Manager Overview.

  2. Click Driver Sets > New.

  3. Specify the driver set name (cluster1Drivers) and click the object selector button to browse and select the context for this driver set. Make sure to select the correct context as mentioned in the “Set Up Details” section at the beginning of this document.

  4. Uncheck the “Create a new partition on this driver set” option, click OK on the pop-up message, then click OK to complete the driver set creation and reach the Driver Set Overview page.

  5. On the Driver Set Overview page, click Drivers.

  6. Click Add Driver on the pop-up menu.

  7. Verify that the driver set you just created is selected in the “In an existing driver set” text box (it should be selected automatically), then click Next.

  8. Click the object selector button and select the DN of the server in this cluster that has IDM 3.6 installed on it. For this setup it is wgp-dt82, so select this server.

  9. Click Next.

  10. Open the Show drop-down menu and select All Configurations.

  11. Select the BCCClusterResourceSynchronization.xml file in the Configurations drop-down menu.

  12. Click Next.

  13. On the next page of the Import Configuration wizard, provide the following values:

    Driver name: cluster1toCluster2BCCdriver
    Any unique name can be used; this is the name chosen here.

    Name of SSL Certificate: SSL CertificateDNS
    To see this certificate, click View Objects, then click the Organization object; the certificate object with this name appears in the right pane.

    DNS name of other IDM node: 164.99.103.84
    This is the IP address of the IDM server this driver will synchronize with: the IDM node wgp-dt84 of cluster2 in our BCC setup.

    Port number for this driver: 2002
    If your business continuity cluster consists of three or four clusters, you must specify a unique port number for each driver pair. The default port number is 2002, which is used here.

    Full Distinguished Name (DN) of the cluster this driver services: cluster1.cluster1.bcc
    Specify it, or browse using the object selector button and select the current cluster, cluster1.

    Fully Distinguished Name (DN) of the landing zone container: cluster1LandingZone.cluster1.bcc
    This is the container where the cluster pool and volume objects from the other cluster are placed when they are synchronized to this cluster. The NCP server objects for the virtual server of a BCC-enabled resource are also placed in the landing zone. The container cluster1LandingZone was created earlier (refer to the eDirectory structure) and is selected as the landing zone for cluster1.

  14. Click Next. The wizard starts importing the configuration; this takes a few minutes, so wait and do nothing until the next page comes up.

  15. Click Define Security Equivalences to bring up the Security Equals wizard.

  16. Click Add.

  17. Browse and select the desired User object(s), then click OK. Here the users bccadmin and admin (the eDirectory admin) are selected.

  18. Click Apply and then OK to come back to the Import Configuration page.

  19. Click Next.

  20. Click Finish to complete the configuration and exit the IDM configuration wizard; this returns you to the Driver Set Overview page.





3.3.2. Create/Configure BCC driver for cluster2 to sync with cluster1




  1. In iManager, click Identity Manager > Identity Manager Overview.

  2. Click New below Driver Sets.

  3. In the Create Driver Set wizard, specify the driver set name as cluster2Drivers, specify the context, and deselect (disable) the Create a new partition on this driver set option, then click OK. This takes you to the Driver Set Overview page.

  4. On the Driver Set Overview page, click Drivers under Overview.

  5. Click Add Driver on the pop-up menu. This takes you to the Import Configuration page.

  6. Verify that the driver set you just created, i.e., cluster2Drivers, is specified in the “In an existing driver set” text box, then click Next.

  7. Specify the DN of the server wgp-dt84, the server in cluster2 that has IDM 3.6 installed on it, then click Next.

  8. Open the Show drop-down menu, select All Configurations, then select the BCCClusterResourceSynchronization.xml file in the Configurations drop-down menu and click Next.

  9. On the next page of the Import Configuration wizard, provide the following values:

    Driver name: cluster2toCluster1BCCdriver

    Name of SSL Certificate: SSL CertificateDNS

    DNS name of other IDM node: 164.99.103.82, the IP address of wgp-dt82, the IDM node of cluster1 with which this driver will synchronize

    Port number for this driver: 2002, the default value

    Full Distinguished Name (DN) of the cluster this driver services: cluster2.cluster2.bcc

    Fully Distinguished Name (DN) of the landing zone container: cluster2LandingZone.cluster2.bcc

  10. Click Define Security Equivalences.

  11. Click Add, browse and select the user objects bccadmin and admin, then click OK.

  12. Click Apply and then OK in the Security Equals wizard.

  13. Click Next and then Finish on the Import Configuration page. The new driver now appears on the Driver Set Overview page.



3.3.3. Configure the firewall to allow the BCC driver ports (if a firewall is enabled)



Make sure that the BCC driver ports are allowed through the firewall if a firewall is enabled on the IDM nodes. Let us start with wgp-dt82, the IDM node of cluster1. Follow the steps below.



Steps:



  1. Log in to wgp-dt82 as root and type “yast2 firewall” in a terminal to launch the Firewall Configuration: Start-Up page.

  2. Click the Allowed Services link in the left column of the page.

  3. Click Advanced... at the bottom right to bring up the Additional Allowed Ports dialog.

  4. Under TCP Ports, add the driver port 2002 and click OK.

  5. Click Next and then Accept.
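The same change can also be scripted instead of clicking through YaST. The sketch below assumes the standard SLES 10 SuSEfirewall2 configuration file path and variable; the helper name add_fw_port is hypothetical, so verify both on your servers before relying on it.

```shell
# Sketch only: append a TCP port to FW_SERVICES_EXT_TCP in a
# SuSEfirewall2 configuration file. The helper name and the default
# path in the usage comment below are assumptions.
add_fw_port() {
    local conf=$1 port=$2
    # FW_SERVICES_EXT_TCP holds a space-separated list of allowed TCP ports
    sed -i "s/^FW_SERVICES_EXT_TCP=\"\([^\"]*\)\"/FW_SERVICES_EXT_TCP=\"\1 $port\"/" "$conf"
}

# Typical use on an OES 2 node (run as root), then reload the firewall:
#   add_fw_port /etc/sysconfig/SuSEfirewall2 2002
#   rcSuSEfirewall2 restart
```

This edits the file in place; the YaST Advanced dialog above achieves the same result interactively.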



3.4. Upgrade the drivers to the new enhanced IDM architecture and start them



Once you have created the drivers for each cluster, you must upgrade them to the IDM 3.6 format, the enhanced architecture. Follow the steps below.


  1. In iManager, click Identity Manager > Identity Manager Overview.

  2. Click a driver set link to bring up the Driver Set Overview page. Start with the cluster1Drivers link.

  3. Click the red Cluster Sync icon. You should be prompted to upgrade the driver to the new enhanced architecture.

  4. Click OK to upgrade the driver to the new enhanced IDM architecture.

  5. Now start the driver. To do this, click the upper right corner of the Cluster Sync icon.

  6. Click Start driver on the pop-up menu.


The upper right corner of the Cluster Sync icon should now turn green, which means the driver is started and running.

Repeat step 3.4 for the cluster2 BCC driver, cluster2toCluster1BCCdriver, by clicking the driver set name “cluster2Drivers” on the Identity Manager Overview page.

4. Configure the clusters for BCC



4.1. Enable BCC on each cluster



  1. In iManager, click Clusters, then click the Cluster Options link.

  2. Specify the cluster name, cluster1.cluster1.bcc, or click the object selector button to browse and select the cluster1 object.

  3. Click the Properties button, then click the Business Continuity tab.

  4. Check (enable) the Enable Business Continuity Features check box.

  5. Click OK to confirm.

  6. Click Apply and then OK to save the changes and complete the task.



Repeat step 4.1 for the second cluster, cluster2.

4.2. Adding Cluster Peer Credentials



In order for one cluster to connect to a second cluster, the first cluster must be able to authenticate to the second cluster. To make this possible, you must add the username and password of the user that the selected cluster will use to connect to the selected peer cluster.



As of BCC 1.2, this can be done only through the command line (a known bug in BCC 1.2).



Here is how to do it.



At the terminal console prompt, enter “cluster connections”.



[You should see both clusters, cluster1 and cluster2. If all the clusters are not present, either the IDM drivers are not synchronized or BCC is not properly enabled on the clusters. If synchronization is in progress, wait for it to complete, then try cluster connections again.]



For each cluster in the list, type “cluster credentials <cluster name>” at the server console prompt, then enter the BCC admin username (bccadmin or admin) and password when prompted. The admin user is used here.
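Since this has to be repeated for every peer cluster on every node, a small helper that prints the exact console commands for a node can help keep the procedure consistent. The helper name gen_credential_cmds is hypothetical and the peer names are this demo’s; the printed commands are the real BCC console commands described above.

```shell
# Sketch: print the console commands needed to set credentials for each
# peer cluster and then re-check the connections. Run the printed
# commands by hand at the server console (each "cluster credentials"
# invocation prompts for the BCC admin name and password).
gen_credential_cmds() {
    local peer
    for peer in "$@"; do
        printf 'cluster credentials %s\n' "$peer"
    done
    printf 'cluster connections\n'   # verify the status afterwards
}

gen_credential_cmds cluster1 cluster2
```

For a two-cluster BCC this prints one “cluster credentials” line per peer followed by a final “cluster connections” check.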







4.3. Verify the Cluster connections


You should now see that the connection status is fine.







Repeat steps 4.2 and 4.3 on all the nodes of all the clusters, and make sure that all the clusters are listed by the cluster connections command with connection status OK.



Note: At this point, the other nodes still show “invalid credentials” because their credentials have not been set yet.


4.4. BCC enable the cluster resources



Create one pool with a volume in it using nssmu. Make sure that you create the pool on the shared device/partition (30 GB). Let us do this on server wgp-dt82, a node of cluster1.




  1. Log in to wgp-dt82 as root, open a terminal, and type nssmu to launch the NSS Management Utility.

  2. Select Pools from the main menu and press Enter.

  3. Press the Insert key to create a new pool.

  4. Enter the pool name, POOL1, and press Enter.

  5. Select the device on which the pool will be created. Make sure you select the shared device.

  6. Specify the size of the pool and press Enter.

  7. Assign the IP address of the pool, select Apply, and press Enter. This completes the pool creation on the shared device. To create a volume in this pool, follow the next steps.

  8. Press Esc to return to the main menu of the NSS Management Utility.

  9. Select Volumes from the main menu and press Enter.

  10. Press the Insert key.

  11. Enter the name of the volume, pool1vol1.

  12. At the “Encrypt Volume?” message, type Y or N as per your choice (N was chosen here). Select the pool POOL1 you just created and press Enter. This completes the pool and volume creation through nssmu.

  13. Press Esc twice to exit nssmu.



Now let us BCC-enable the same pool.



Steps:




  1. Log in to iManager, click Clusters in the left column, then click the Cluster Manager link. Specify the cluster name cluster1.cluster1.bcc, or browse and select the cluster object cluster1. [Note: Make sure to select the cluster where the pool was created. Here the pool POOL1 was created on wgp-dt82, a node of cluster1, so cluster1 is selected.]

  2. Click the pool name, POOL1_SERVER here. This brings up the Cluster Pool Properties page.

  3. Click the Business Continuity tab and check (enable) the Enable Business Continuity Features check box. [Note: Make sure that the appropriate clusters are listed in the Assigned list. Here you should see cluster1 and cluster2.]

  4. Click OK on the pop-up message.

  5. Click OK again to finish the task.



5. Verify that your BCC is working by doing BCC migration



Steps:




  1. In iManager, click Clusters, then click the BCC Manager link and specify the cluster name, cluster1.cluster1.bcc, or browse and select the same cluster.

  2. Verify that the pool POOL1_SERVER appears under the BCC Enabled Resources section.

  3. Select POOL1_SERVER and click BCC Migrate.

  4. Select the cluster to which you want to migrate the selected resource, then click OK. Cluster2 is selected here. [Note: If you select Any Configured Peer as the destination cluster, the Business Continuity Clustering software chooses a destination cluster for you: the first cluster that is up in the peer clusters list for this resource.]

  5. Wait until the resource’s state becomes Red and Secondary in the current cluster, cluster1.

  6. Now verify that the same pool is running in the target cluster, cluster2. To check this, select or type cluster2.cluster2.bcc in the Cluster field. If the pool’s state is green and Running in the target cluster, your BCC is working.




This completes the setup of a two-cluster BCC and the demo of BCC migration.



6. Add third cluster into the BCC



If you have three or more clusters in your business continuity cluster, you should set up synchronization drivers in a manner that prevents IDM loops. IDM loops can cause excessive network traffic and slow server communication and performance.



In this setup, cluster1’s driver set, cluster1Drivers, will hold two BCC drivers:


  1. cluster1toCluster2BCCdriver, which runs on port 2002 and syncs with cluster2’s BCC driver, cluster2toCluster1BCCdriver, also on port 2002.

  2. cluster1toCluster3BCCdriver, which runs on port 2003 and syncs with cluster3’s BCC driver, cluster3toCluster1BCCdriver, also on port 2003.


There should be no direct synchronization between cluster2 and cluster3; this avoids an IDM loop. If one more cluster needs to be added, you can configure its driver to sync with any existing driver, as long as no loop is formed.
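The loop-free rule above can be stated precisely: with N clusters, the sync links must form a tree, which a hub-and-spoke layout guarantees (every link anchored on one hub, exactly N-1 links). The sketch below checks a planned link list against that rule; the helper name check_topology is hypothetical and the cluster names and ports are this demo’s.

```shell
# Sketch: verify a hub-and-spoke BCC sync plan is loop-free. Links are
# "clusterA:clusterB:port" triples; with every link anchored on one hub
# cluster and exactly n-1 links for n clusters, no IDM loop can form.
check_topology() {
    local hub=$1 n=$2; shift 2
    local link count=0
    for link in "$@"; do
        case $link in
            "$hub":*) count=$((count + 1)) ;;   # link anchored on the hub
            *) echo "WARNING: $link bypasses the hub and risks a loop"; return 1 ;;
        esac
    done
    [ "$count" -eq $((n - 1)) ]   # a tree on n nodes has n-1 edges
}

# This document's three-cluster plan: cluster1 is the hub.
check_topology cluster1 3 cluster1:cluster2:2002 cluster1:cluster3:2003 \
    && echo "topology is loop-free"
```

A direct cluster2-to-cluster3 link would fail this check, which matches the rule stated above.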



6.1. Create one more BCC driver, cluster1toCluster3BCCdriver, in cluster1’s existing driver set, cluster1Drivers, to sync with the new cluster, cluster3.




  1. Log in to the iManager of one of the IDM nodes and click Identity Manager > Identity Manager Overview.

  2. Click the driver set link cluster1Drivers below the Driver Sets tab.

  3. On the Driver Set Overview page, click Drivers under Overview.

  4. Click Add Driver on the pop-up menu. This takes you to the Import Configuration page.

  5. Verify that the driver set you just selected, i.e., cluster1Drivers, is specified in the “In an existing driver set” text box, then click Next.

  6. Open the Show drop-down menu, select All Configurations, then select the BCCClusterResourceSynchronization.xml file in the Configurations drop-down menu and click Next. On the next page of the Import Configuration wizard, provide the following values:

    Driver name: cluster1toCluster3BCCdriver

    Name of SSL Certificate: SSL CertificateDNS

    DNS name of other IDM node: 164.99.103.89, the IP address of wgp-dt89, the IDM node of the new cluster, cluster3

    Port number for this driver: 2003

    Full Distinguished Name (DN) of the cluster this driver services: cluster1.cluster1.bcc

    Fully Distinguished Name (DN) of the landing zone container: cluster1LandingZone.cluster1.bcc

  7. Click Define Security Equivalences.

  8. Click Add, browse and select the user objects bccadmin and admin, then click OK.

  9. Click Apply and then OK in the Security Equals wizard.

  10. Click Next and then Finish on the Import Configuration page.



Now you will see this new driver, cluster1toCluster3BCCdriver, on the Driver Set Overview page.




6.2. Create one more driver set, cluster3Drivers, for the new cluster, cluster3, with a driver, cluster3toCluster1BCCdriver, in it to sync with cluster1



Repeat step 3.3.1, making sure that the following values are entered. Values other than those mentioned below are the same as for the other driver configurations.




  1. Give the name of the new driver set as cluster3Drivers.

  2. In the “Welcome to the Import Configuration Wizard” page, enter “cluster3Drivers.cluster3.bcc” in the “In an existing driver set” field.

  3. In the “Import Configuration” page, enter “wgp-dt89.cluster3.bcc” in the “Select a server to define its association” field, as this is the IDM-installed node of cluster3.

  4. In the continuation of the “Import Configuration” page, enter the following values:

    Driver name: cluster3toCluster1BCCdriver

    Name of the SSL certificate: SSL CertificateDNS

    DNS name of the other IDM node: 164.99.103.82 (the IP address of wgp-dt82, the IDM-installed node of cluster1 with which this driver will sync)

    Port number for this driver: 2003

    Full Distinguished Name (DN) of the cluster this driver services: cluster3.cluster3.bcc (the DN of the new cluster)

    Fully Distinguished Name (DN) of the landing zone container: cluster3LandingZone.cluster3.bcc (the landing zone for the new cluster)





6.3. Configure firewall to allow the ports for the new BCC driver



Follow the same procedure as in step 3.3.3, using the new port number 2003.

6.4. Upgrade and start the new drivers



Repeat step 3.4 for both of the new drivers, cluster1toCluster3BCCdriver and cluster3toCluster1BCCdriver.

6.5. Setup cluster credentials



6.5.1. Setup cluster credentials from all the nodes of the old clusters, cluster1 and cluster2 to cluster3



Do the following tasks on all the nodes of clusters cluster1 and cluster2: set up the credentials for cluster3 using the “cluster credentials” command, then verify the cluster connections using the “cluster connections” command.

This was done on wgp-dt82, one node of cluster1.



Make sure that you get connection status OK; at this point, the cluster connections are fine.



Repeat this on all other nodes of cluster1 and cluster2 and make sure the cluster connections are fine as above.



6.5.2. Set up credentials from all the nodes of the new cluster, cluster3, to all clusters (cluster1, cluster2, cluster3)



Now, on all the nodes of the new cluster, cluster3, do the same tasks: set up the credentials using the “cluster credentials” command and verify the cluster connections using the “cluster connections” command.



In this setup there is only one server, wgp-dt89, in cluster3, so this was done only on that node.



6.6. Synchronizing Identity Manager Drivers



If you are adding a new cluster to an existing business continuity cluster, you must synchronize the BCC-specific IDM drivers after creating them. If the BCC-specific Identity Manager drivers are not synchronized, the clusters cannot be used for BCC migration.



Synchronizing the IDM drivers is not necessary unless you are adding a new cluster to an existing BCC.



To synchronize the BCC-specific Identity Manager drivers, follow these steps:




  1. In iManager, click Identity Manager, then click Identity Manager Overview.

  2. Click the driver set link for the new cluster, cluster3Drivers, under the Driver Sets tab.

  3. Click the red Cluster Sync icon for the driver you want to synchronize.

  4. Click Migrate in the rightmost panel (Driver Overview).

  5. Click Migrate from Identity Vault in the drop-down menu.

  6. Click Add.

  7. Browse and select the Cluster object for the new cluster, cluster3, that you are adding to the BCC, then click OK.

    Note: Selecting the Cluster object for driver synchronization causes the BCC-specific Identity Manager drivers to synchronize.

  8. Click Start to start the synchronization.

  9. Click Close to complete the task.



6.7. Add cluster3 to the Assigned list of the existing pools


Steps:




  1. Log in to the iManager of any server in the BCC, then click Clusters > Cluster Options.

  2. Specify, or browse and select, one of the clusters. Cluster1.cluster1.bcc is selected here.

  3. Click the link of the existing pool of interest. This brings up the Cluster Pool Properties page.

  4. Click the Business Continuity tab and verify that the new cluster, cluster3, is shown as an unassigned cluster under the Resource Preferred Clusters section.

  5. Select cluster3 in the Unassigned list and click the left arrow button to move it to the Assigned list.

  6. Click Apply and then OK to complete the task.

  7. Verify that the new cluster, cluster3, is shown in the Available Peers list on the Business Continuity Cluster Manager page (iManager > Clusters > BCC Manager).



6.8. Verify the BCC setup by migrating the pool to the new cluster, cluster3.




  1. Log in to the iManager of one of the IDM nodes and click Clusters > BCC Manager. This brings up the Business Continuity Cluster Manager page.

  2. Specify any cluster in the Cluster field. Cluster2.cluster2.bcc, where the pool is currently running, is selected here.

  3. Under the BCC Enabled Resources section, select the existing pool POOL1_SERVER and click BCC Migrate. This brings up the BCC Migrate page.

  4. Select cluster3 under the Cluster Destination section and click OK to migrate POOL1_SERVER from cluster2 to cluster3.

  5. Verify that the state of the pool POOL1_SERVER changed to Secondary in cluster1 and cluster2, and to Running in the destination cluster, cluster3.






At this point, the three-cluster BCC setup is complete and migration of BCC-enabled cluster resources is working. Hereafter, any pool that is created and BCC-enabled can be migrated from any cluster to any other cluster.


DISCLAIMER:

Some content on Community Tips & Information pages is not officially supported by Micro Focus. Please refer to our Terms of Use for more detail.
Last update: 2008-11-20 18:00