Universal Password Performance Impact by Excluded Passwords


Test Environment:

SLES 11 SP2 x86_64

eDirectory 8.8 SP7 Patch 3 x86_64

Over the years the question has come up a few times: what is the performance impact of excluded passwords defined by a password policy?  If an administrator adds, for example, a few dozen excluded passwords (company name, abbreviations, commonly-used passwords, etc.), how much will each password change be slowed down, possibly resulting in a poor user experience or a need for more hardware?

Excluded passwords are stored within a Universal Password (UP) policy object in the nspmExcludeList attribute, which is a Stream attribute (implying size limits sufficient for just about anything).  Each value is stored on its own line, but the attribute itself is single-valued, as are (currently) all Stream attributes.  Because it is a Stream attribute, the value properly replicates to all servers in the replica ring, even if the value is very large, as is likely the case in the test I ran.

It may be useful to note that this list is not designed to hold a dictionary of words in order to prevent the use of those words in passwords.  If that type of functionality is desired, check out products like Self Service Password Reset (SSPR), which is available from Novell/NetIQ and is granted to current customers of (the last time I checked) Identity Manager (IDM), Access Manager (NAM), and SecureLogin (NSL).  The free/open version, PWM, is also an option for those interested, including those who use another LDAP store, or even Microsoft Active Directory (MAD), for their store of usernames: http://code.google.com/p/pwm/

Back to the performance tests: I found a list of the 10,000 most commonly used passwords.  Google will find this for you if desired, but suffice it to say that many of them are not safe for work (NSFW), so I will not link to it here.  The list is already in newline-delimited format, so all that remains is to create the attribute holding the password exclusions on the password policy object.  In my case I downloaded the file with the list of password frequencies to get an idea of how many times each password had been found in the dataset used to create the list.  The following command pulls out the first field (the password, not the frequency) and then creates a base64 string of the resulting output.  This is then easily imported into a password policy using LDAP:

cat ./10kpasswords | awk -F, '{print $1}' | base64 > 10k-excluded-password.b64
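As a quick sanity check, the extraction and encoding round-trip can be verified with a small stand-in file (the sample data and file names below are hypothetical, mirroring the command above):

```shell
# Hypothetical stand-in for the real 10,000-entry list (password,frequency).
printf 'password,32027\n123456,25969\n12345678,8667\n' > 10kpasswords

# Same pipeline as above: keep the first comma-separated field, then base64-encode.
awk -F, '{print $1}' ./10kpasswords | base64 > 10k-excluded-password.b64

# Decoding should give back exactly the newline-delimited password list.
base64 -d 10k-excluded-password.b64
```

Decoding the .b64 file should print the three passwords, one per line, exactly as awk emitted them.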

This large value is added to an LDIF like the following, in place of the placeholder text (note the double colon, which is how LDIF marks a base64-encoded value):

dn: cn=10k-exclusions-policy,cn=password policies,cn=security
changetype: modify
add: nspmExcludeList
nspmExcludeList:: hugeBase64StringGoesHere
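Stitching the encoded list into that LDIF can be scripted; the following is a sketch using hypothetical file names, with a tiny stand-in list in place of the real 10,000 entries (the policy DN will vary per tree):

```shell
# Hypothetical stand-in for the real encoded exclusion list.
printf 'password\n123456\n12345678\n' | base64 > 10k-excluded-password.b64

# LDIF marks base64-encoded values with a double colon, and the encoded
# data must be joined onto a single line before being embedded.
{
  echo 'dn: cn=10k-exclusions-policy,cn=password policies,cn=security'
  echo 'changetype: modify'
  echo 'add: nspmExcludeList'
  printf 'nspmExcludeList:: %s\n' "$(tr -d '\n' < 10k-excluded-password.b64)"
} > add-exclusions.ldif
```

The resulting file can then be loaded with something like ldapmodify -x -D adminDN -W -f add-exclusions.ldif (standard OpenLDAP client options).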

To create a baseline I also created an identical password policy that had no password exclusions.  Both policies were otherwise the default policy, so no history was configured, no limitations on password complexity were in place, etc.  The only changes made to both policies were to set the minimum password length to six characters (four characters is just too few... so is six, really) and to remove the maximum password length limitation.

For the test I created another LDIF which changes a single user's password one thousand times, and then measured the time taken to import it via ldapmodify over and over.  A sample of the LDIF looks like this:

dn: cn=testNoExcludedPass0,ou=user,o=data
changetype: modify
replace: userPassword
userPassword: t3stp@s5w0rd0

dn: cn=testNoExcludedPass0,ou=user,o=data
changetype: modify
replace: userPassword
userPassword: t3stp@s5w0rd1

Notice that the password increments by one each time; a simple bash loop created the output, with the counter controlling the loop appended to the end of the password to create a simply-changing password each time.  Because UP encrypts all values (3DES, I believe), the variation of the change is not that important, but this way I had passwords that I knew should work (despite the long list of excluded values) and which were all unique, to avoid issues should I decide to test with password history in the mix at some point.
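The loop itself is trivial; a sketch of the kind of thing I used, reusing the DN and password stem from the sample above:

```shell
# Generate 1,000 password-change operations for one user; the loop counter
# appended to the password keeps every value unique.  A blank line after
# each operation separates LDIF records.
for i in $(seq 0 999); do
  printf 'dn: cn=testNoExcludedPass0,ou=user,o=data\n'
  printf 'changetype: modify\n'
  printf 'replace: userPassword\n'
  printf 'userPassword: t3stp@s5w0rd%s\n\n' "$i"
done > change-passwords.ldif
```

Importing and timing is then just something like time ldapmodify -x -D adminDN -W -f change-passwords.ldif against each test user.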

Besides the user above, I created a user with the 10,000-exclusion policy applied (the one above, as the name implies, had a policy applied that did not exclude passwords) and ran the same tests.  After doing this a few times, my results show a negligible, statistically-insignificant difference in the time required to change a user's password a thousand times.  When my system is under load, the thousand changes, with or without exclusions, take about a minute.  When the system is otherwise idle, the changes take around forty seconds, but the times are the same regardless of which user is tested: the one with the policy defining the 10,000 exclusions or the one with the policy without exclusions.

While this should not come as a big surprise, it is nice to have some confirmation.  Adding a full dictionary of values may incur a greater performance penalty, but based on the limited testing above it is not clear whether it would be enough to warrant much concern.

The logic behind checking a password against a list of known values is pretty simple, which probably explains why the penalty when testing only 10,000 passwords is so hard to pin down.  Similar logic could be implemented with something as simple as the following on the command line:

grep -qxF 'potentialPassword' /path/to/newline-delimited/dictionary.file

If the command returns that a match was found, the password is in the dictionary.  Assuming the ability to read the file is not overly limited by system I/O, this command returns in about 0.026 seconds on my laptop against a 26 MiB text file, which could easily store two million (or more) words.  eDirectory is not using this exact command, of course, but it demonstrates the logic and performance potential for this type of check.
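To see the check in action with a throwaway dictionary (the file name and contents here are made up for illustration):

```shell
# Build a small newline-delimited dictionary; the real one would have millions of lines.
printf 'password\nletmein\nqwerty\n' > dictionary.file

# -x matches whole lines only and -F treats the pattern as a fixed string,
# so substrings of dictionary words do not produce false positives.
if grep -qxF 'letmein' dictionary.file; then
  echo 'excluded'
else
  echo 'allowed'
fi
```

grep exits 0 on a match, so the snippet prints excluded; substituting a candidate like letmein2 prints allowed instead.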

In the end, eDirectory performs well with a substantial list of excluded passwords in a given password policy.  If anybody has time to import a huge list with millions of words and can post back results, please do so, as those may show clearer time differences and give a real idea of the feasibility of full dictionary-word prevention using UP out of the box, even though it is not recommended or meant to work that way currently.  Note that while I could load and view the policy in iManager with my 10,000-word list, I would expect problems once the system has millions of words in place, which could prevent the use of iManager on the affected password policy going forward.

