
CIFS permissions errors


Two-node OES2015sp1 cluster with three cluster resources (NSS volumes).  Migrated all resources to node 2, removed node 1 from the cluster, rebuilt it as OES2018sp1, and rejoined it to the cluster.  Migrated all resources to node 1 (the OES2018sp1 node).  Users with NCP connections have no problems.  Users with CIFS connections can read files and save files under the same name, but they cannot create files or save under a new name.  They have RWECMF rights to the path in eDirectory.  When you try to create a file you get a permission denied error.  I am able to replicate this problem with a test account that has RWECMF rights to a specific path.  When I authenticate with an eDirectory admin account on the same device over a CIFS connection to the NSS volume, I have full rights.


Accepted Solutions
Micro Focus Contributor
Jimmy and I discussed the issue; here is the way to overcome the problem:

1. ncpcon set REPLICATE_PRIMARY_TREE_TO_SHADOW=1 ==> This replicates the directory structure (directories only) to the shadow volume for any new folders or directories created after the value is set.

2. ncpcon set SYNC_TRUSTEES_TO_NSS_AT_VOLUME_MOUNT=1 ==> This triggers trustee resynchronization for an NSS volume when it is mounted for NCP.

3. ncpcon dismount <vol name>

4. ncpcon mount <vol name>

With the above four steps, the missing directory structure will be created and the trustees will be synced; file/directory operations should then succeed according to the user's trustee rights.
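To illustrate, on the node currently hosting the cluster resource the whole sequence would look roughly like this (VOL1 and the shadow path are placeholders; substitute your own volume name and DST paths):

ncpcon set REPLICATE_PRIMARY_TREE_TO_SHADOW=1
ncpcon set SYNC_TRUSTEES_TO_NSS_AT_VOLUME_MOUNT=1
ncpcon dismount VOL1
ncpcon mount VOL1
ls -ld /media/nss/ARCVOL1/Path1/Path2    # the directory that was missing on the shadow should now have been created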

Please try the above steps and let us know the results.



Update to this.  The NSS volumes with problems were created on an earlier version of OES (trying to find out which version; it was before my time here).  One of the cluster resources is a volume that was created on the OES2015sp1 cluster; I have no problems with that volume.  I also gave my test account rights to a folder on an NSS volume on an OES2018sp1 server.  That volume was created on that OES2018sp1 server and I also have no problems with it.  Only the two NSS volumes created on earlier versions of OES are affected.  Is there a way to "upgrade" the NSS volumes other than creating new NSS volumes and migrating the data over?


Update - the impacted volumes were created on OES11.  A rolling cluster upgrade was performed to OES2015sp1.  CIFS was working fine until we started another rolling cluster upgrade to OES2018sp1.  At this point we have one node on 2018sp1 and one node on 2015sp1.  I migrated the impacted volumes to the 2015sp1 node and CIFS is working normally on them again.  The question becomes: is there anything we can do to the impacted volumes to get CIFS to work properly on 2018sp1, or do we need to recreate the volumes on OES2018sp1?

Knowledge Partner

You can always check the pool and volume media version in nsscon. You'll see it in the activation history or (for pools) with "poolmediaversion". But I'd rather suspect something CIFS-related. Defaults have changed with 2018 (e.g. SMBv3 instead of SMBv2), so you might want to compare settings with "novcifs -o". Also, as you've reinstalled the node from scratch (as opposed to upgrading it), check the CIFS settings in iManager.
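For instance, running something along these lines on both nodes and comparing the output should show whether anything relevant differs (the exact settings listed will of course depend on your install):

nsscon        # at the nsscon prompt, run "poolmediaversion" to list the pool media versions, then leave the console
novcifs -o    # dumps the current CIFS option settings so the two nodes can be compared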

There are lots of pools created under NetWare which happily run on recent OES builds; hence, I'd strongly doubt that any sort of recreation would be needed. You hopefully don't use any AD-enablement dillydally, do you?

 

If you like it: like it.

poolmediaversion looks fine.

ADMIN_DATA 51.00 NSS64
ARC_ADS 51.00 NSS64
ARC_LDS 51.00 NSS64
LAB_DATA 51.00 NSS64

We are not doing any AD enablement.  In terms of CIFS settings, the only difference I see besides the SMB version is that the 2018sp1 node has DFS support disabled while the 2015sp1 node has it enabled, and there may be some DFS junction points on the two impacted volumes.

 

Knowledge Partner

cifs.log throws a 132? Could you post a snippet from the log?

 

If you like it: like it.

I enabled DFS support on the 2018sp1 node and restarted CIFS.  I then migrated one of the impacted volumes over and still have the same issue.  I can open and edit files and save with the same name, but I cannot do a save-as or create new files.  Migrated the volume back to the 2015sp1 node, the drive mapping reconnected, and things work fine.

It works fine on a volume that was created on the 2015sp1 cluster no matter which node it is on; only the older volumes have this problem.


I attached the cifs.log (renamed to cifs_log.txt).  It starts at the time CIFS was restarted after enabling DFS support.  I do not see any 132 errors.

Knowledge Partner

Did you dupe the issue at that time? You should (of course normally shouldn't) see something like

ERROR: CODIR: LockCache local error: 0x0, error msg: ERR_NO_CREATE_PRIVILEGE, error: 132

with reference to user and filename.
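If the error is being logged, something like the following should pull the relevant entries (assuming the default /var/log/cifs/cifs.log location):

grep -E 'ERR_NO_CREATE_PRIVILEGE|error: 132' /var/log/cifs/cifs.log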

Which OS do you use? Could you by chance try via a CIFS mount from a Linux box?
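A rough sketch of such a test from a Linux client (server name, volume, mount point and user are placeholders; requires cifs-utils):

mkdir -p /mnt/cifstest
mount -t cifs //oes-server/VOL1 /mnt/cifstest -o username=testuser,vers=2.1
touch /mnt/cifstest/Path1/Path2/newfile.txt    # should fail with 'Permission denied' if the issue reproduces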

 

If you like it: like it.
Knowledge Partner

BTW, 51.00 denotes IIRC a 64bit pool with hardlinks enabled on all volumes. So they must have been created running OES2015 or later.

 

If you like it: like it.
Micro Focus Contributor
  • Is DST configured? If yes, are the shadow path rights set correctly?
  • You can check the effective rights for the user with the command below (see the example after this list)
    • "rights -f <path> effective <userFDN>" - you can get <userFDN> from "rights -f <path> show"
    • Do this on both the primary and the shadow if a shadow is configured
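For example, with placeholder paths and a placeholder user FDN (use the FDN exactly as the "show" command reports it), the checks would look like:

rights -f /media/nss/VOL1/somedir show
rights -f /media/nss/VOL1/somedir effective .cn=testuser.o=acme.t=TREE.
rights -f /media/nss/ARCVOL1/somedir effective .cn=testuser.o=acme.t=TREE.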

Yes, DST is enabled on all volumes.  This will be difficult to check.

On volume: /media/nss/VOL/Path1/Path2

The user has rights to Path2.

On shadow volume: /media/nss/ARCVOL/Path1/Path2

ARCVOL is the name of the shadow volume.  Path2 does not exist on the shadow volume; Path2 on the main volume is new and does not have data old enough to have been moved to the shadow volume.

So, my test account shows effective rights to Path2 on the main volume, but I cannot check Path2 on the shadow volume because it does not exist there.  Our DST is set to create the folders as needed, not as a complete mirror of the main volume.
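A quick way to confirm that from the shell of the node holding the resource (paths as above):

ls -ld /media/nss/VOL/Path1/Path2       # exists on the primary volume
ls -ld /media/nss/ARCVOL/Path1/Path2    # returns 'No such file or directory' on the shadow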

 

The odd thing is that we have three volumes, and all three have DST.  Two of the volumes have the problem, one does not.

Node 1 = OES2018sp1

Node 2 = OES2015sp1

volume1

volume2

volume3

When volumes 1 and 2 are on node 2, there are no problems with CIFS.  When you move them to node 1, you can read and edit files, but you can only save with the same file name and cannot create new files.

Volume 3 works on either node without problems.  Volume 3 was the last volume created, and it was created on the OES2015sp1 cluster.

CIFS is configured the same on both nodes, except for the new options on OES2018sp1 that are not available on OES2015sp1.

 

 
