eDir Transaction ID Finder


Avoiding eDirectory Meltdowns With the 'eDirectory Transaction ID Finder'

This program finds Transaction IDs embedded in eDirectory 8.7.3.x / 8.8x roll-forward log files (as found in the nds.rfl directory).

If repairs are not run regularly on your eDirectory servers, it is important to know the range of transaction IDs that are currently being issued on each server, so that you can take the necessary preventative action when required.

eDirectory is based on the FLAIM database engine, which has a maximum transaction value of FFFFE000 (hex) and as such it is possible to 'run out' of transaction IDs if a repair is not run (which will reduce the last transaction ID counter).

If a server does run out of transaction IDs, only Novell Technical Services has the ability to correct the problem.

See: eDirectory "Transaction ID" Warning / error -618 database stops working and won't open.

By using this program and comparing results, it is also possible to approximate how long it will be before a particular server runs out of transaction IDs, allowing you to plan accordingly if you have a 24x7 eDirectory server.
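That estimate can be sketched as a simple linear projection. The readings and the 7-day interval below are illustrative assumptions, not values from a real server:

```python
# Linear projection of when a server will exhaust FLAIM transaction IDs,
# based on two 'last transaction ID' readings taken some days apart.
MAX_TXN_ID = 0xFFFFE000  # FLAIM maximum transaction value (4294959104 decimal)

def days_until_exhaustion(txn_then, txn_now, days_between):
    """Estimate days left before the transaction ID counter hits the maximum."""
    used_per_day = (txn_now - txn_then) / days_between
    if used_per_day <= 0:
        return None  # no measurable growth between the two samples
    return (MAX_TXN_ID - txn_now) / used_per_day

# Illustrative readings, taken 7 days apart:
print(f"{days_until_exhaustion(0x6392B4E5, 0x63A00000, 7):.0f} days left")
```

Sampling the same server's roll-forward log on a few different days gives the two readings; the wider the interval, the better the average.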

Running The Program

The program uses a copy of the roll-forward log from any 8.7.3.x or 8.8x database.

The roll-forward log files are normally located in:

  • NetWare - SYS:\_netware\nds.rfl

  • Linux - /var/opt/novell/eDirectory/data/nds.rfl

  • Windows - C:\Novell\NDS\DIBFiles\nds.rfl

In most cases, the roll-forward log will be called 00000001.LOG and should have a fairly recent timestamp.

In some cases, there may be no log files, the log file may be only 512 bytes in size (no transactions), or there may be multiple log files.

Generally, only one log file is in use at any time by one eDirectory instance.

There are multiple ways to identify the current roll-forward log file. If there is only one log file in the nds.rfl directory then this is the file 'in use'. If there are multiple files, identifying the file with the latest time stamp should indicate which file is being used. The dsbk command can also be used:

  • On NetWare: dsbk getconfig

    Depending on the eDir version, the results are displayed on the logger screen or on a separate 'dsbk' console screen.

  • On Linux: dsbk getconfig (only if dsbk has previously been configured - the command will not work if /etc/dsbk.conf does not exist)

    The output from the command is sent to the default log file (ndsd.log).

  • On Windows: From the Control Panel, run the 'Novell eDirectory services' applet. Highlight the dsbk.dlm line, enter the word 'getconfig' in the 'Startup Parameters' box, and then click Start. Nothing will appear to happen, but the config info should have been written to the output log file.

    The output is written to C:\Novell\NDS\backup.out

  • Via iManager - eDirectory Maintenance - Backup Configuration options.

If there are no log files, you may need to enable roll-forward logging before you can make use of this program (see eDirectory documentation).

NOTE: In most cases roll-forward log files appear to be created and updated even if roll forward logging is set to OFF.

NOTE: Although it can be OK to delete log files which are no longer used, do not do this unless you need the space. Be sure you do not delete or modify the 'current' roll-forward log file as it could have severe consequences.

Program Requirements:

The program has been tested on SLED10 and Windows XP, but should run on any OS that supports Perl 5.5.8 or later.

The program was tested with Perl 5.5.8 on Linux and Activeperl 5.10.1 on MS Windows.


Make a new directory and place the roll.pl and allroll.pl programs in it. Acquire a copy of the roll-forward log file from the server to be checked and place it in the same directory as the Perl programs.

Execute the program using: perl roll.pl

Enter the name of the log file to be processed (case sensitive on Linux/Unix).


The program allroll.pl is included as a diagnostic utility which will extract ALL transaction IDs from a roll-forward log file regardless of age. The roll.pl program only references what it has identified as recent transactions and will not include transactions considered as 'old' data. Log files are actually re-used from the start, so the ordering of transaction IDs can appear to be inconsistent when viewing transactions from the whole file. The output file for the allroll.pl program is called 'All_eDir_Transactions.txt'.

When running dsrepair/ndsrepair in order to reduce the current transaction ID counter, please make sure you ONLY set the 2 options to 'yes' as indicated in TID#7002658. If any other combination of yes/no switches is used, the transaction ID will probably not be reduced.

Example Program Run / Output:

Name of roll-forward log file [Default=00000001.log]: 00000001.LOG


Last Transaction ID identified: Hex [6392b4e5] : HexMax [FFFFe000]
Dec [1670558949] : DecMax [4294959104]

Number of Transactions left : Dec [2624400155] 61.10% remaining
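The arithmetic in that output is easy to verify (values copied from the sample run above):

```python
# Check the figures from the sample run: remaining IDs and percentage left.
MAX_TXN_ID = 0xFFFFE000   # HexMax: 4294959104 decimal
last_txn   = 0x6392B4E5   # last transaction ID found in the sample log

remaining = MAX_TXN_ID - last_txn
percent_left = remaining / MAX_TXN_ID * 100

print(f"Number of Transactions left : Dec [{remaining}] {percent_left:.2f}% remaining")
# prints: Number of Transactions left : Dec [2624400155] 61.10% remaining
```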


Comment List
  • I don't think that's bound to the number of users; I hit the problem this week and the tree has only 2000 users. I think it's more bound to the products you have in the tree. We had ZENworks (older versions like 4), which used eDirectory very heavily, IDM, GroupWise - this kind of software will utilize eDirectory more than plain user accounts. Just my 2 cents.


  • I think the problem with making any recommendation for a product like eDirectory and the use of ndsrepair/dsrepair is that there is really no general case.

    For example, in a previous comment I noted that a multi-million object DIB was only at the 200 million mark, but no logins were happening.

    I had a test system with 15K objects; I deleted and recreated 10K of those objects several times, writing on average 15 attrs to each object on each test pass. So I did a LOT of writes, and I am only at 3 million or so.

    Others have reported servers at the 3+ billion mark. Others in the real world have actually gone over the 4 billion, and needed NTS to fix it, prior to 885.

    Additionally, what counts as a single transaction seems to depend on how it was generated. I.e. an atomic operation, like a single LDIF op, a user create, etc., seems to count as one, whereas how modifies out of IDM to multiple attributes count still needs more testing.

    Furthermore, we are still unclear on how logins affect the count: 1? 3? 4 per login? Dunno. Needs more testing.

    So for a small DIB, say 10K users, and normal login patterns, it seems quite likely that in the lifetime of the DIB set, you will never hit 4 billion.

    I.e. you will more likely replace the server/disk farm, and recreate the DIB, before you hit 4 billion.

    For an astonishingly large tree (millions of objects) with low login counts but lots of creates/deletes, it looks likely that a decade or more will be needed to hit 4 billion.

    Therefore it is pretty safe to say this is important to know about. It is important to think about. But it seems like it will be pretty darn rare! But it could really happen, if you are in one of those special cases.

    I suspect that the easiest way to trigger this, would be to have an LDAP auth tree, with a few thousand users, and hundreds of thousands, if not millions of login events a day. That will add up very fast. Might still need a year or three to get into trouble, but it could happen.

    I guess my point is, for probably 95% of eDirectory users, this is a total non issue. For that last 5% well, that is why you pay people to support software.

  • Well, I asked around on a mailing list I am on. I looked at some test servers I had access to yesterday, and even though I had created 15K objects, modified all of them, deleted 10K, and created, deleted, and modified a lot in testing, I was only at the 3 million+ transaction count.

    So that was an interesting revelation. A friend with a single-digit-million object tree reported that a year-and-a-half or so old DIB was in the 200 million range. So clearly object creates and modifies are probably not what is going to get us in trouble.

    Another friend did some testing (prodding him to post the actual details here) and it looks like, via LDIF, a couple of changes at once count as a single transaction. Now to test in IDM, and see what counts there as a transaction...

    Then the next issue is to see what login events via NCP (say JClient in ConsoleOne, or via Client32) count as, and what LDAP binds count as... since those are probably what are going to kick the count up very high.
  • Well, running dsrepair on a regular basis is the choice of some, and it used to be recommended by many in Novell. However, the eDirectory code is a lot better these days than it was years ago, so that recommendation isn't heard much these days.

    The fact that no-one really knew about the FLAIM transaction limit (except the developers), coupled with the fact that customers generally ran dsrepair 'at the drop of a hat', meant it was never an issue, as no one ever came close to the 4.2bn limit.

    My own personal recommendation about running dsrepair has always been 'only run it if there's a reason to do so'.

    Well, now we know there is a reason to run dsrepair. How often? I'd say it depends on how 'busy' the server is as regards its eDirectory database. In other words, just how many changes are being made to the database on that server over a given period?

    The amount of changes being made to a database will depend on a number of factors which will include:-

    1. Size of the eDirectory database (in terms of the number of objects held on any one server).
    2. Number of services/applications running on that server that 'touch' or make changes to objects in the database.
    3. Number of users authenticating to that server, or to other servers that have replicas containing copies of the user objects in this database, etc.

    At a guess, I'd say running a repair once every 3 months (even on a busy server with a big database) should be more than adequate to avoid any risk of running out of transaction IDs.

    See this cool solution on dsrepair usage:-
  • Ok, so Novell's stance seems to be to only run dsrepair if you're having a problem. Yet, with this transaction ID limit, you would think that periodic dsrepairs would be recommended by Novell (?).
  • Hi Geoff, glad the tool is useful :0)

    I am 'cheating' yes, but this is because I needed a quick solution (to check out a few hundred servers at my customer's site) and didn't have time to try and figure out how to do this using the APIs, and to be honest, I doubt the information I would need would even be available to me.....

    In the real world I have found servers sitting at 3.7bn and 3.9bn transactions so they were only a few months from going belly-up. Another server did go belly-up and was down for 2 days, hence I wrote the code to try and avoid any more problems.

    I don't have access to the documents that specify exactly what a transaction is, but it seems likely to me that every update to the eDirectory database constitutes a transaction ID issued, so creating a user would use many IDs.

    If you want to find out how many, create a VM with eDir and process the roll-forward log before and after creating a user, that may give you an idea of exactly how many transaction numbers are used.

    To work out how long I had before certain servers ran out of IDs, I just processed the roll forward logs from one server on a few different days and did the math to figure out how many transaction IDs were being used each day (on average).
  • Chris,
    Ever since I heard about this issue, I have been deeply concerned about it. The worst part is, the only way prior to this tool that I knew of to examine the values was the new ndsrepair/dsrepair in eDir 885.

    This is truly excellent and a very useful tool.

    Where you have looked at real trees, how close to the limit are you seeing, in production environments? I.e. in the real world, is 4 billion a really, really big number, or just a large number that, given patience, you will hit?

    Basically the question I have still is, what counts as a transaction?

    So you create a user, is that one transaction, or is it more like 8-10, since the first event is User object, Surname. Then password, then Given Name, then Full Name, then assorted sundry attributes that your IDM solution tacks on.

    If so, a user create that takes 20 transactions sets one data point. Then say a login event updates Login Time, Last Login Time, and Network Address; is that one transaction, or three?

    You can see how fast 4 billion can get used up once your tree gets into the 10,000 user size or more.

    (Create 10K users, and then say 1K turnover a year. Say that is 200K to create, then 20K a year. Then say 75% log in daily at 4 transactions a login (Network Address has to be cleared eventually), so that is 30K a day for logins. You can see how fast they get used up!)

    I see you are sort of cheating and parsing the roll-forward logs, since it is really server specific. Is there any way to figure this value out over, say, NCP, such that a tool could assess the state of all servers in a tree in a single go? That would make everyone's life easier.

    Of course, Novell should be providing this in iMonitor, but that is another story.