Enabling CIFS on OES 2 SP2 Linux cluster AFTER pool creation

Not sure if anyone has run into this or not, but I figured I would post it for posterity's sake...

Our Scenario:
We have a few file server clusters in our tree. They are all up-to-date and running OES 2 SP2 Linux. In the past, we have only ever needed NCP and AFP for our clients. Recently we have come across the need to enable CIFS access to these cluster pools as well.

Something that should be simple to do turned into a long and drawn out task.

The procedure sounds pretty simple... Install and configure the CIFS service on the individual cluster nodes, go into iManager, edit the cluster's virtual pool objects, and on the protocols tab check the box to enable CIFS as an advertising protocol.

If only life were that simple... Once I completed that procedure, I offlined the cluster pool resource and attempted to bring it back online (to refresh the configuration of course). That is when I realized this was not such an easy task. The pool resource would go comatose immediately after trying to bring it back online.

The Solution:
After searching these forums and looking for TIDs, I discovered that with OES 2 SP1 Linux, a script called cifsPool.py exists to fix some issues for this exact scenario. Unfortunately, that script does NOT work for SP2. I have tested, re-tested, and triple-tested it to see if it would fix my issue - it did not.

Back at the drawing board, I created a brand new pool, enabling all 3 advertising protocols during creation. Then I ran some LDAP searches against the virtual server objects of each pool and compared the results.

Turns out the virtual server object of the new pool carried 2 attributes and 1 object class that were missing from the production virtual server object I was trying to CIFS-enable.
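The comparison described above can be sketched in a few lines of Python: diff two LDAP entry dumps (the working, CIFS-enabled virtual server versus the broken one) and report what the broken object lacks. The attribute values below are illustrative placeholders, not data from a real tree:

```python
def missing_cifs_config(working, broken):
    """Return {attribute: values present on `working` but absent on `broken`}."""
    diff = {}
    for attr, values in working.items():
        have = set(broken.get(attr, []))
        need = [v for v in values if v not in have]
        if need:
            diff[attr] = need
    return diff

# Entry dump from the freshly created, CIFS-enabled pool's virtual server
# (placeholder DN omitted; values are made up for illustration):
working = {
    "objectClass": ["nCSVirtualServer", "nfapCIFSConfigInfo"],
    "nfapCIFSAttach": ["10.0.0.50"],
    "nfapCIFSServerName": ["POOL1_W"],
}
# Entry dump from the production virtual server that goes comatose:
broken = {
    "objectClass": ["nCSVirtualServer"],
}

# Prints the one object class and two attributes that must be added.
print(missing_cifs_config(working, broken))
```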

So in order to fix this, go into iManager -> Directory Administration -> Modify Object, select the virtual server object for your cluster resource, and under General -> Other:

1. Select "Object Class", edit it, add "nfapCIFSConfigInfo", and click OK.
2. Add the "nfapCIFSAttach" attribute and set it to the IP address the virtual pool is bound to (the address you want CIFS to advertise on).
3. Add the "nfapCIFSServerName" attribute and enter the server name value from your cluster's CIFS config.
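If you prefer the command line, the same three additions can be expressed as an LDIF file and applied with OpenLDAP's ldapmodify. The DN, IP address, and server name below are placeholders; substitute the values from your own tree and CIFS config:

```ldif
# Placeholder DN, IP, and server name -- substitute your own values.
dn: cn=POOL1-SERVER,o=MYORG
changetype: modify
add: objectClass
objectClass: nfapCIFSConfigInfo
-
add: nfapCIFSAttach
nfapCIFSAttach: 10.0.0.50
-
add: nfapCIFSServerName
nfapCIFSServerName: POOL1_W
```

Apply it with something like `ldapmodify -x -H ldaps://your-server -D cn=admin,o=MYORG -W -f enable-cifs.ldif` (standard OpenLDAP client options).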

Once this procedure was done, the cluster resource came back to life when I performed the online operation: fully CIFS enabled and ready to service users.

A "Side" Rant:
Come on now, Novell... This is something that should not be that hard for you to fix. Handing out "cifsPool.py" scripts to band-aid an issue is bad enough, but not fixing the problem in the next service release is definitely not good practice.

Another bug I stumbled upon while testing in my lab: if the admin user's password contains a special character, the schema will not extend when installing clustering services.

This bug: www.novell.com/.../dynamickc.do