
Migrating users and trustees (eDirectory service)

I am migrating Novell OES from one Novell server to another Novell server (which I will eventually use as the Novell file server because it has a bigger hard disk) using miggui (the Novell migration tools) on the target server. In "Add services to migrate", it only shows File service and iPrint. The eDirectory service is not shown, so I am not able to migrate users and trustees. How do I migrate trustees and users?


  • 0  
    On 05.07.2017 at 13:44, srinivaskv wrote:
    >
    > I am migrating Novell OES from one Novell server to another Novell
    > server (which I will eventually use as the Novell file server because
    > it has a bigger hard disk) using miggui (the Novell migration tools) on
    > the target server. In "Add services to migrate", it only shows File
    > service and iPrint. The eDirectory service is not shown, so I am not
    > able to migrate users and trustees. How do I migrate trustees and users?
    >
    >


    You're looking for "transfer ID" in Miggui. eDirectory is not a service.

    CU,
    --
    Massimo Rosen
    Micro Focus Knowledge Partner
    No emails please!
    http://www.cfc-it.de
  • 0 in reply to   
    A few questions:
    1. Wouldn't Transfer ID rename the target, change its IP address to that of the source, and shut down the source server? I do not want it this way.
    I would like to create a replica, so I only want to transfer users and trustees. I used to be able to do this with the mls, maptrustees and migtrustees commands, but now it does not seem to work.
    2. Does Transfer ID require that the target server be clean, i.e. have no volumes or volume data?
  • 0   in reply to 
    On 07.07.2017 at 11:54, srinivaskv wrote:
    >
    > A few questions:
    > 1. Wouldn't Transfer ID rename the target, change its IP address to
    > that of the source, and shut down the source server?


    Yes.

    > I do not want it this
    > way.
    > I would like to create a replica, so I only want to transfer users and
    > trustees. I used to be able to do this with the mls, maptrustees and
    > migtrustees commands, but now it does not seem to work.
    > 2. Does Transfer ID require that the target server be clean, i.e.
    > have no volumes or volume data?
    >
    >


    I guess one of us is confused. Users and trustees aren't server-centric;
    you have a directory there. When you use miggui or any other proper tool
    to transfer files from one server to another *IN THE SAME EDIRECTORY
    TREE*, the trustees will copy over too. Users do not even exist on
    individual servers; they exist in your directory, so you cannot and
    need not transfer them.
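
    BTW, if you want to verify the result after the copy, the OES "rights"
    utility can list trustee assignments on the target. A minimal sketch;
    the volume and path below are made-up examples, and I'm quoting the
    syntax from memory, so check "man rights" on your box first:

        # show trustee assignments on a directory of an NSS volume
        rights -f /media/nss/DATA/users show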

    CU,
    --
    Massimo Rosen
    Micro Focus Knowledge Partner
    No emails please!
    http://www.cfc-it.de
  • 0 in reply to   
    Hi,

    Another question along the same lines: I have two servers (primary and secondary). When I create a new user in eDirectory, it does not show in the second server's eDirectory (which I access via iManager on the second server). This used to work, but it does not seem to sync now. Sometimes it takes a long time, sometimes it never happens at all.
  • 0   in reply to 
    Start with posting the output of
    ndsrepair -T
    and
    ndsrepair -E
    from both boxes.
  • 0 in reply to   
    Output from primary server LFA1 (master):
    > ndsrepair -T

    [1] Instance at /etc/opt/novell/eDirectory/conf/nds.conf: LFA1.O=leo.FASTENERS
    Repair utility for Novell eDirectory 8.8 - 8.8 SP7 v20702.00
    DS Version 20702.02 Tree name: FASTENERS
    Server name: .LFA1.leo

    Size of /var/opt/novell/eDirectory/log/ndsrepair.log = 13895 bytes.

    Building server list
    Please Wait...
    Preparing Log File "/var/opt/novell/eDirectory/log/ndsrepair.log"
    Please Wait...
    Collecting time synchronization and server status
    Time synchronization and server status information
    Start: Thursday, December 06, 2018 14:49:54 Local Time

    --------------------------- --------- --------- ----------- -------- -------
                                DS        Replica   Time        Time is  Time
    Server name                 Version   Depth     Source      in sync  +/-
    --------------------------- --------- --------- ----------- -------- -------
    Processing server: .LFA2.leo
    .LFA2.leo                   20808.03  0         Non-NetWare No       5
    Processing server: .LFA1.leo
    .LFA1.leo                   20702.02  0         Non-NetWare Yes      0
    --------------------------- --------- --------- ----------- -------- -------
    Total errors: 0
    NDSRepair process completed.

    > ndsrepair -E
    [1] Instance at /etc/opt/novell/eDirectory/conf/nds.conf: LFA1.O=leo.FASTENERS
    Repair utility for Novell eDirectory 8.8 - 8.8 SP7 v20702.00
    DS Version 20702.02 Tree name: FASTENERS
    Server name: .LFA1.leo

    Size of /var/opt/novell/eDirectory/log/ndsrepair.log = 14699 bytes.

    Preparing Log File "/var/opt/novell/eDirectory/log/ndsrepair.log"
    Please Wait...
    Collecting replica synchronization status
    Start: Thursday, December 06, 2018 14:50:16 Local Time
    Retrieve replica status

    Partition: .[Root].
    Replica on server: .LFA2.leo
    Replica: .LFA2.leo 12-06-2018 14:30:38
    Replica on server: .LFA1.leo
    Replica: .LFA1.leo 07-24-2018 17:58:52
    Server: CN=LFA2.O=leo 12-06-2018 14:46:16 -601 Remote
    Object: [Root]
    All servers synchronized up to time: 07-24-2018 17:58:52 Warning

    Finish: Thursday, December 06, 2018 14:50:16 Local Time

    Total errors: 1
    NDSRepair process completed.
    -------------------------------------------------------
    Output from secondary server LFA2 (read/write):
    > ndsrepair -T

    [1] Instance at /etc/opt/novell/eDirectory/conf/nds.conf: LFA2.O=leo.FASTENERS
    Repair utility for NetIQ eDirectory 8.8 - 8.8 SP8 v20807.08
    DS Version 20808.03 Tree name: FASTENERS
    Server name: .LFA2.leo

    Size of /var/opt/novell/eDirectory/log/ndsrepair.log = 6914 bytes.

    Building server list
    Please Wait...
    Preparing Log File "/var/opt/novell/eDirectory/log/ndsrepair.log"
    Please Wait...
    Collecting time synchronization and server status
    Time synchronization and server status information
    Start: Thursday, December 06, 2018 14:54:05 Local Time

    --------------------------- --------- --------- ----------- -------- -------
                                DS        Replica   Time        Time is  Time
    Server name                 Version   Depth     Source      in sync  +/-
    --------------------------- --------- --------- ----------- -------- -------
    Processing server: .LFA2.leo
    .LFA2.leo                   20808.03  0         Non-NetWare Yes      0
    --------------------------- --------- --------- ----------- -------- -------
    Total errors: 0
    NDSRepair process completed.

    > ndsrepair -E
    [1] Instance at /etc/opt/novell/eDirectory/conf/nds.conf: LFA2.O=leo.FASTENERS
    Repair utility for NetIQ eDirectory 8.8 - 8.8 SP8 v20807.08
    DS Version 20808.03 Tree name: FASTENERS
    Server name: .LFA2.leo

    Size of /var/opt/novell/eDirectory/log/ndsrepair.log = 7640 bytes.

    Preparing Log File "/var/opt/novell/eDirectory/log/ndsrepair.log"
    Please Wait...
    Collecting replica synchronization status
    Start: Thursday, December 06, 2018 14:54:27 Local Time
    Retrieve replica status

    Partition: .[Root].
    Replica on server: .LFA2.leo
    Replica: .LFA2.leo 12-06-2018 14:30:38
    All servers synchronized up to time: 12-06-2018 14:30:38
    Finish: Thursday, December 06, 2018 14:54:27 Local Time

    Total errors: 0
    NDSRepair process completed.
  • 0   in reply to 
    Wow. Such a state is unreachable without manual intervention. While it's technically possible to truly repair this offset, I'd strongly recommend following Massimo's advice. It'll save you time and money.
  • 0   in reply to   
    On 06.12.2018 14:24, mathiasbraun wrote:
    >
    > Wow. Such a state is unreachable without manual intervention. While it's
    > technically possible to truly repair this offset, I'd strongly recommend
    > following Massimo's advice. It'll save you time and money.
    >
    >

    I have thought about this a bit, and I *think* one way to get there is to
    remove a non-running server from the eDir tree and later fire it up again.
    In this case that would have been LFA1. I.e., I think LFA1 was the
    master of the tree, it got downed, it was removed from the tree at LFA2,
    which consequently became master and kicked LFA1 out of its eDir database,
    and then LFA1 got started again. Of course, LFA1 still thinks it's the
    master and that LFA2 is in the tree. That's why it can still talk to it,
    as LFA2 really still is the same server.
    That is most likely what happened here; at least I can't come up
    with a better explanation.

    CU,
    --
    Massimo Rosen
    Micro Focus Knowledge Partner
    No emails please!
    http://www.cfc-it.de
  • 0   in reply to   
    Best bet, definitely. Now he has to decide which DIB holds the majority of the current information. They were divorced 14 weeks ago...
  • 0   in reply to   
    On 07.12.2018 13:44, mathiasbraun wrote:
    >
    > Best bet, definitely. Now he has to decide which DIB holds the majority
    > of the current information. They were divorced 14 weeks ago...
    >
    >

    And likely administration has been done on both sides since then ;)

    CU,
    --
    Massimo Rosen
    Micro Focus Knowledge Partner
    No emails please!
    http://www.cfc-it.de
  • 0   in reply to   
    Just like concurrently administering groups in AD: one side wins.
  • 0   in reply to   
    On 07.12.2018 15:04, mathiasbraun wrote:
    >
    > Just like concurrently administering groups in AD: one side wins.
    >
    >

    With AD, no side wins, it's all lost. ;)

    --
    Massimo Rosen
    Micro Focus Knowledge Partner
    No emails please!
    http://www.cfc-it.de
  • 0   in reply to   
    Good point. But wait... MS wins: your money, your data, ...
  • 0   in reply to   
    On 07.12.2018 17:54, mathiasbraun wrote:
    >
    > Good point. But wait... MS wins: your money, your data, ...
    >
    >

    There is that.. ;)

    --
    Massimo Rosen
    Micro Focus Knowledge Partner
    No emails please!
    http://www.cfc-it.de
  • 0 in reply to   
    Hi Massimo,

    Thanks for your input and suggestions. It looks like both servers are now hosting eDirectory; I was under the impression that eDirectory was only on the primary server. I will try downing the servers and reconnecting them at a later time, and will update this thread with the results. BTW, is there a command to back up the eDirectory data, delete the eDirectory data, and restore eDirectory from backup? If I could do that, then I could just restore the data on the secondary.
  • 0   in reply to 
    Every server in a tree hosts eDirectory. You can partition the tree and place replicas of these partitions on servers as you like, but even a server that doesn't hold any replica still hosts eDirectory. In your case I'd assume there is a total of two servers SUPPOSED to be in the tree and that there's only one partition (the [Root] partition). For unknown reasons (but likely those Massimo posted earlier in this thread) you ended up with a highly undesirable offset, in which LFA1 assumes it is the master of a tree with a second server called LFA2, while LFA2 considers itself the master of a single-server tree. Consequently, LFA1 tries to sync changes made in its database to LFA2, but as LFA2 doesn't know of LFA1, it ignores these sync attempts. On the other hand, if changes take place in the DIB of LFA2, it doesn't even try to sync them anywhere, as it isn't aware of any other server in its tree.
    What this means in the first place is that you have had a sort of "split-brain condition" ever since July 24th, i.e. if you created an object on LFA1 since then, LFA2 doesn't know anything about it. And, of course, vice versa.
    The schematic plan (without details) to clean this up would be something like:
    - decide which server will survive in the first place
    - back up data and trustee assignments from the other one (see the metamig sketch below)
    - try to export objects created on the other one after July 24th, e.g. via LDAP (see the ldapsearch sketch below)
    - wipe the other one (you can keep NSS volumes)
    - clean up the tree on the survivor (i.e. clean the replica ring and DIB of references to the by-now wiped box)
    - patch the survivor to current code
    - install the "other one" into the survivor's tree (PLEASE do so with current code, i.e. the stuff you used one step before)
    - try to recreate / import the objects exported in step 3
    - restore data / reattach data volumes
    - restore trustee assignments as needed

    Just a rough overview with no warranty. There are, of course, other ways to get this done, but the outline above can be followed without special tools.
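
    For the trustee backup and the LDAP export steps, two sketches. These are
    illustrations only; the admin DN, host names and volume name are assumptions
    based on this thread, not verified against your setup.

    Saving NSS trustee metadata with metamig (check "man metamig" first, I'm
    quoting the syntax from memory):

        # save trustees, ownership and other metadata of volume DATA
        metamig save DATA > /root/DATA-metadata.smf
        # later, on the rebuilt box:
        metamig restore DATA < /root/DATA-metadata.smf

    Exporting objects created after July 24th with the stock OpenLDAP client
    (this assumes the server permits ordering matches on createTimestamp; if
    it doesn't, export the whole subtree and diff):

        # simple bind as admin, dump recently created objects to LDIF
        ldapsearch -x -H ldaps://lfa2 -D "cn=admin,o=leo" -W \
          -b "o=leo" "(createTimestamp>=20180724000000Z)" > lfa2-recent.ldif

    Note that passwords won't be part of an LDAP export; re-imported users
    will need new passwords.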
  • 0 in reply to   
    Hi Mathias,

    Could you explain how I do this? - "clean up the tree on the survivor"
    Should I open the object browser in iManager and delete all objects that reference LFA2, including LFA2 itself? (Could I use third-party LDAP tools to do this?)
    After I clean up the tree, could I do this: shut down LFA2, create another LFA2 server (with the same IP address as before), install it into the same tree as LFA1, then create volumes and restore the data from backup tape?

    I also tried another method:
    Because we now have more space available, I created a new LFA1 server and restored the eDirectory data that I took from the previous LFA1 server. (I do not need another LFA2 server; I have space for all of LFA2's volumes on this LFA1.) Then I removed all references to LFA2 and also recreated the server and local certificates. I have all the users from the previous server, but I notice one problem: if I create a volume, the volume is created but with an error message, NDS error -669. The volume also does not show under Files and Folders in iManager.
  • 0   in reply to 
    You would shut down LFA2 first, then remove it (with ndsrepair) from the replica ring, and finally wipe all objects referencing it from the DIB. Once finished, you could install a new LFA2 into the tree; you could use the "old" IP again if you like.
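
    A sketch of the replica ring part (the ndsrepair menus are interactive and
    this operation is destructive, so verify against the current ndsrepair
    documentation before running it; the option names below are from memory):

        # on the survivor LFA1, with LFA2 already shut down:
        ndsrepair -P -Ad
        #  -> select the [Root] partition
        #  -> View replica ring
        #  -> select the LFA2 replica
        #  -> Remove this server from replica ring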

    As for the second part: -669 means "failed authentication", which is likely a result of the way you "restored eDir from the previous LFA1 server". When you create a volume, the system tries to create a corresponding volume object in NDS, and there's a "servernameadmin" object (likely LFA1admin in your case) whose identity is used for this task. This object gets created on server installation and should have inheritable supervisory rights to the server's context. I'd assume that your restore procedure left you with a problem in this area. You'll likely be able to resolve this by re-running the configuration workflow for NSS via "yast2 nss".
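
    To quickly confirm the object exists before rerunning the workflow,
    something like this should do (the admin DN is an assumption):

        ldapsearch -x -H ldaps://lfa1 -D "cn=admin,o=leo" -W \
          -b "o=leo" "(cn=LFA1admin)" dn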
  • 0 in reply to   
    Hi Mathias,

    For the second part: I tried "yast2 nss" (reconfigure). Even after this I am getting an error (-669) while creating a new volume (the volume is created except for the error message, and it is not listed in iManager). ndstrace also says authentication failed. Is there anything else I can do to debug this?

    Thanks for your help.

    Srinivas.
  • 0   in reply to 
    Do you have a user object named "LFA1admin" in the context of LFA1? Does it have supervisory rights to the context?
  • 0 in reply to   
    Hi Mathias,

    Yes, I have an LFA1admin object under the "leo" context. When I open the object browser, select the "leo" object and look at Modify Trustees, I see LFA1admin listed with supervisor rights.
  • 0   in reply to 
    Try deleting the LFA1admin object, then rerun the corresponding configuration workflow with "yast2 nss". This should recreate the object. Afterwards, retry creating the volume's DS objects.
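
    A possible sequence, assuming the OpenLDAP client tools are installed and
    your admin DN really is cn=admin,o=leo (both assumptions):

        # remove the presumably broken object
        ldapdelete -x -H ldaps://lfa1 -D "cn=admin,o=leo" -W "cn=LFA1admin,o=leo"
        # recreate it by rerunning the NSS configuration workflow
        yast2 nss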