ZDML SP1 IR2 Installation on OES2 SP1 Cluster Using iSCSI

Author: Binoy Thomas

This article explains the procedure for installing ZDML SP1 IR2 (ZENworks 7 SP1 Desktop Management for Linux) or later on an OES2 SP1 cluster using iSCSI. iSCSI is a protocol that provides access to remote storage over standard TCP/IP. This is very useful for testing ZDML on a cluster when a SAN and the related cluster hardware (Fibre Channel switches, HBA cards, and so on) are not available. The article describes the product installation on a two-node cluster, but the same procedure applies to clusters with more nodes.

Minimum System Requirements

  • Two machines for the cluster nodes (OES2-supported hardware)

  • One machine for the iSCSI target

Software Requirements

  • OES2 SP1

  • SLES10 SP2 (as the iSCSI target)

  • Four static IP addresses in total: two for the cluster nodes, one for the shared resource (pool), and one for the cluster master IP.
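As a concrete illustration, an address plan for such a test setup might look like the following. All addresses here are placeholders, and the SLES10 machine hosting the iSCSI target needs its own reachable address as well.

```
node1 (OES2 SP1)       192.168.1.1
node2 (OES2 SP1)       192.168.1.2
cluster master IP      192.168.1.10
pool resource IP       192.168.1.11
```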

Important steps involved in the setup:

Note: Please ensure that the steps are performed in the following order.

1. Install OES2 SP1 on both cluster nodes

  • If both the operating system and the NSS partitions are on the same hard disk, Novell Storage Services (NSS) requires that the Enterprise Volume Management System (EVMS) be used as the volume manager for devices that contain (or will contain) NSS pools and volumes. Here, however, the NSS partition will be created on the iSCSI device, which appears to the nodes as an additional disk, so you can proceed with normal partitioning while installing the nodes.

  • Software selection

    Under OES Services, select Novell Cluster Services (NCS), eDirectory, iManager, and NSS. Dependent patterns are selected automatically.

  • Make node 1 the master (eDirectory) server and add node 2 as a read/write replica.

    Note: Node 1 and node 2 must be in different contexts, so while adding the second node, specify a new OU, e.g., ou=node2.o=novell

  • Make sure both machines are in time sync: when asked for the NTP configuration, select the local clock on the first node and specify the first node's IP as the NTP server on the second node, or point both nodes to a common NTP server.
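The time-sync choices above correspond to a few lines in /etc/ntp.conf on each node. A sketch, where 192.168.1.1 stands in for node 1's actual IP address:

```
# /etc/ntp.conf on node 1 -- use the local clock as the time source
server 127.127.1.0            # local clock driver
fudge  127.127.1.0 stratum 10

# /etc/ntp.conf on node 2 -- sync to node 1 (placeholder IP)
server 192.168.1.1
```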

2. Install SLES10 SP2 and configure the iSCSI target

During the SLES installation, leave some free space for the iSCSI partition.

Create a partition without formatting it, and do not assign a mount point; the partition will be mounted from the cluster nodes using iSCSI initiators.

Configure the iSCSI target using YaST

Run yast2 iscsi-server. For a test setup, only the 'Service' and 'Targets' tabs need to be modified.

Select "When Booting" under the 'Service' tab. The iSNS server need not be selected.

Click the 'Targets' tab. Delete any entry that already exists and add a new target.

Click 'Add', then browse to and select the partition that you want to make the iSCSI target, say /dev/hda3.

In this example, the partition is /dev/cciss/c0d0p3.
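On SLES10, the YaST iSCSI target module writes its settings to /etc/ietd.conf (iSCSI Enterprise Target). A minimal sketch of the resulting configuration, assuming the example partition above; the IQN is a placeholder, as YaST generates one for you:

```
# /etc/ietd.conf -- minimal iSCSI Enterprise Target configuration (sketch)
# The IQN below is a placeholder; YaST generates one automatically.
Target iqn.2009-01.com.example:storage.nsspool
        # Export the unformatted partition as LUN 0
        Lun 0 Path=/dev/cciss/c0d0p3,Type=fileio
```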

3. Configure iSCSI initiator on both cluster nodes

Here we are not using iSNS or authentication. Start the iSCSI initiator on each node as follows.

  • Run yast2 iscsi-client

  • Set the service to start "When Booting"

Select the 'Connected Targets' tab and click 'Add'. Enter the iSCSI target machine's IP address, then click Next.

Click 'Connect' to establish the connection with the iSCSI target, then click Next twice.

Ensure that 'Startup' is marked as automatic; otherwise, click 'Toggle Startup' to make it automatic.

The iSCSI partition now appears on the nodes as an additional (virtual) disk. This can be verified with yast2 disk.
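If you prefer the command line to YaST, roughly equivalent open-iscsi commands would look like the following sketch. The target IQN and IP address are placeholders, and a live target must be reachable for these to succeed.

```shell
# Discover targets exported by the iSCSI target machine (placeholder IP)
iscsiadm -m discovery -t sendtargets -p 192.168.1.20

# Log in to the discovered target (placeholder IQN)
iscsiadm -m node -T iqn.2009-01.com.example:storage.nsspool \
         -p 192.168.1.20 --login

# Make the session come up automatically at boot
iscsiadm -m node -T iqn.2009-01.com.example:storage.nsspool \
         -p 192.168.1.20 --op update -n node.startup -v automatic

# The new disk should now be listed, e.g. in /proc/partitions
cat /proc/partitions
```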

4. Start Novell Cluster service on both nodes

Start the cluster service using YaST: run yast2 ncs.

Provide the eDirectory credentials, select 'New Cluster', and give the cluster IP address.

Note: Select the shared media as the storage device (sdb in this example).

Then run yast2 ncs on the second node and add it to the cluster by selecting 'Existing Cluster'.

5. Create a pool and volume on the iSCSI partition from one of the nodes and mount it

This volume will act as the shared storage for the cluster. Invoke the partitioning tool by running nssmu from the command line of one of the nodes and select 'Pools'.

Press Insert to create a new pool, and give it the resource IP address.

Then create a volume in the pool.

6. Check the cluster using iManager

Access iManager on one of the nodes at https://node_IP/nps and give the eDirectory credentials. Use Cluster Manager to view the cluster.

7. Install ZDML on both nodes

Ensure the shared resource is mounted on node 1 before starting the ZDML installation on it.

Once the installation on node 1 is finished, migrate the shared resource to node 2 and complete the ZDML installation there. (The mount point cannot be specified during the ZDML installation unless the shared resource has been migrated to that node.)

8. Take the shared resource offline and then back online (using Cluster Manager in iManager) to start all ZDML services
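Novell Cluster Services also provides a cluster command-line tool, so the offline/online cycle (and the resource migration in step 7) can be done from a node's shell instead of iManager. A sketch, where POOL1_SERVER stands in for your actual pool resource name:

```shell
# Show cluster membership and resource states
cluster view
cluster status

# Cycle the shared resource to start all ZDML services
cluster offline POOL1_SERVER
cluster online POOL1_SERVER

# During step 7, the resource can be moved to the second node with:
cluster migrate POOL1_SERVER node2
```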

Additional Information

  1. Installing ZENworks 7 Desktop Management with SP1 in an OES Linux Cluster Environment (refer to section B.11 of the same title in the Appendixes of the ZENworks 7 documentation)


