The Validator team has recently released version 1.4, and I thought I would go through and explain what is new in this version.
I have done this for IDM releases, Designer releases, and some driver updates. I hope people find this helpful.
If you use Identity Manager and you have not seen IDM Validator, you need to stop and go get a copy. If you do not have it, how do you test your IDM system? Validator is a test tool designed to make testing an IDM implementation much easier.
First of all, a major warning! Once you open a test suite's JSON file in Validator 1.4, the format is converted and that file cannot be used in Validator 1.3 afterwards. What this means is COPY your test suite to the 1.4 instance, and back up your 1.3 test file. Do not MOVE the test suite JSON file!
There are two major types of tests you can use Validator for. (There are many more of course, but there are two major approaches.) The first is to unit test a specific bit of functionality you are working on. Say you are working on managing Password Expiration. You can build a test that does a password change, then looks to ensure password expiration changes as you expect. Each time you run it, it makes the connection, does the change (via LDAP), then looks up the things you wanted changed to confirm they did change.
The second major approach is a complete end-to-end test that starts from scratch, continues to the end, then cleans up. In this approach you clean up first, then set up a specific test case (User, group, whatever) that looks exactly as you want it. That is the Setup phase. In the Test phase, you make some change to simulate your processes, and then you assert that the results look as you expect. In the Cleanup phase you delete the user and make sure all the bits and pieces are gone, so that your next test can start clean.
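The Setup/Test/Cleanup pattern described above is the same shape as a classic xUnit test. A minimal Python sketch of the idea, using an in-memory dict as a stand-in for the IDM-connected directory (a real Validator test would talk LDAP; all names and attribute values here are illustrative, not Validator's actual API):

```python
import unittest

# In-memory stand-in for the directory; a real test would use an
# LDAP connection instead of a dict.
directory = {}

class PasswordExpirationTest(unittest.TestCase):
    def setUp(self):
        # Setup phase: start clean, then create exactly the user we want.
        directory.clear()
        directory["cn=tuser,ou=data,o=acme"] = {
            "passwordExpirationTime": "20240101000000Z",
        }

    def test_password_change_updates_expiration(self):
        # Test phase: simulate the change, then assert the result.
        user = directory["cn=tuser,ou=data,o=acme"]
        user["userPassword"] = "newSecret"
        # Stand-in for what the password policy would do on a change:
        user["passwordExpirationTime"] = "20250101000000Z"
        self.assertEqual(user["passwordExpirationTime"], "20250101000000Z")

    def tearDown(self):
        # Cleanup phase: remove everything so the next test starts clean.
        directory.clear()

suite = unittest.defaultTestLoader.loadTestsFromTestCase(PasswordExpirationTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point is the shape, not the dict: each run starts from a known state, makes one change, asserts one outcome, and leaves nothing behind.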
Thus during development you can use Validator to unit test the stuff you are working with, and once complete use it as a health check that everything is working.
You could construct tests that walk your user through their entire life cycle in one single test, and just ensure the end results as you expect. Or you could have a create test, that ensures basic creates work. Then apply some lifecycle change (password change? Title change? Leave of Absence) and ensure that works. Finally deprovision them, and ensure that works.
If you set up tests that can run to success on their own every time, including all the lifecycle tests, you can use a new feature in Validator 1.4 called Scheduler. The idea is that there are really two customers for Validator: the developer like me, who uses it during development, and the manager/Service Desk/CIO who wants to regularly check whether the IDM system is working.
For developers, use it as traditionally used. For Service Desk/management, use Scheduler. That is, set up a series of tests that demonstrates the entire (or as much as you care about) IDM system and lifecycle. Then allow higher-ups to run them at will, and on a schedule.
The licensing differs accordingly: developers need the more expensive version since it does more, but there is a cheaper Scheduler edition available for management to use. It also protects the people who should not see how the sausage is made from the bloodbath. They just see success or failure.
This is a major new feature for 1.4 and it opens a new market for the product, and demand for developers to build the tests the bosses want to see, so this is good all around.
I noticed at least one feature that may have been there before 1.4, but I only just noticed it now, and I see it is not called out at all in the listed enhancements.
In Firefox at least, when you right-click in a field, you now get a custom Validator menu, not the standard Firefox browser menu.
My biggest complaint with Validator is that it is focused on height, not width, and most modern screens are wider than they are tall. If I were working at a desk, I would seriously consider a high-resolution widescreen monitor turned on its side just for Validator. I run out of vertical screen real estate every time I use it. This, however, makes a huge difference, so I am quite glad to have noticed it. Sure you can fold up your items, and sure there is a Collapse all, but I often need several open, and even on a 1080p screen I find that with three or more actions open I am out of screen space.
From the readme, there are a bunch of items called out as enhancements and fixes. Let me start with the enhancements, since they are often the more interesting and new features are fun.
* Added a GUI connector so the GUI of websites can be tested like UserApp.
This is a truly awesome new feature. I figured it would be a pain to use, but I had it working and testing in under five minutes. Basically it is a connector that knows how to run Selenium based tests. If you walk over to the web team in your enterprise or school, odds are good they know what Selenium is, and might even use it every day. Which means they probably can help you figure out any complex test cases.
You need to install a Firefox plugin (I only tried Firefox; I assume it supports other browsers) which allows you to 'record' a web session. It looks at what you clicked on: frames, CSS, fields, etc. When done you get a file that can be replayed at will.
The first thing I did when the beta came out was record logging in to Shibboleth, getting redirected to SSPR, going into Configuration mode, (entering the crazy passwords this customer had set, even in Dev) and then going to a specific setting I was working on.
It was great: I could automate something I needed to do every 10 minutes, in under 5 minutes of learning, the very first time.
This means you can now automate some User App or other web-based application testing. Selenium is not perfect, but it is one of the more developed solutions out there. The recording plugin is free, and I am not sure if you have to buy anything from Selenium, but so far there has been no need on my end.
This is a huge enhancement, since until now testing workflows meant you could start a workflow with all the fields filled in on the request, but not really test the interface. Now you can do more interface testing.
Recently I have been working on some User App projects where we are doing some amazing interface work in the forms. I have not yet had time to try to automate those, but it would be interesting to see if I can.
* LDAP Delete Objects allows wildcards to delete multiple objects. It is documented in the hover-over text.
I could not get the hover-over text to show up. I hovered so much that Validator called the cops and got a restraining order. Budda boom, I'm here all week, folks... Regardless, the key point is that you can now use a wildcard in the leaf node.
This means you could specify a DN of cn=*,ou=data,o=acme to delete everything in a container. Or you could say cn=test*,ou=data,o=acme to delete a series of test users. This is a huge improvement if you needed to test lots of objects, since cleaning them all up was a large series of tokens. When I had to do stuff like that, I would just copy and paste the action multiple times, then edit the test suite JSON file itself directly to make the needed changes.
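To illustrate the leaf-node rule, here is a small sketch of how a wildcard DN such as cn=test*,ou=data,o=acme could translate into the search base and filter an LDAP delete would actually use. The helper name is mine, not Validator's actual internals:

```python
# Split a wildcard DN into (search base, LDAP filter). Illustrative
# only; Validator's real implementation may differ.
def wildcard_dn_to_search(dn):
    leaf, _, base = dn.partition(",")
    attr, _, pattern = leaf.partition("=")
    if "*" in base:
        # Mirrors Validator's rule: only the leaf node may be wildcarded.
        raise ValueError("wildcards are only allowed in the leaf node")
    return base, "(%s=%s)" % (attr, pattern)

print(wildcard_dn_to_search("cn=*,ou=data,o=acme"))
# ('ou=data,o=acme', '(cn=*)')
print(wildcard_dn_to_search("cn=test*,ou=data,o=acme"))
# ('ou=data,o=acme', '(cn=test*)')
```

In other words, the wildcard delete is just a one-level search under the parent container followed by a delete of every match, which is why the wildcard only makes sense in the leaf RDN.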
You cannot use wildcards in other nodes, but that is probably OK, and this covers many more cases than before. As always, as Spider-Man teaches, with great power comes great responsibility: do not do the LDAP equivalent of "rm -rf *" from the root of a Unix-style box.
I am glad to see this sort of thing. Of course I always want more, and I have suggested in Ideascale that they consider letting delete or search operations use an LDAP filter. For example, deleting all users where (employeeStatus=inactive) matches might be useful. That of course could make it a really dangerous tool, so I would also want a Test Results button, so you could preview the query and see whether it returns the two or three objects you expect, or everybody you did not.
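The Test Results idea above is easy to sketch: evaluate a simple equality filter like (employeeStatus=inactive) against the entries and list the DNs that would be deleted, before anything is actually removed. The function name and sample entries are illustrative, not anything Validator ships:

```python
# Preview which DNs a simple equality filter would select for
# deletion. Handles only (attr=value) filters; a real implementation
# would parse full RFC 4515 filter syntax.
def preview_delete(entries, ldap_filter):
    attr, _, value = ldap_filter.strip("()").partition("=")
    return [dn for dn, attrs in entries.items() if attrs.get(attr) == value]

entries = {
    "cn=abel,ou=data,o=acme": {"employeeStatus": "inactive"},
    "cn=baker,ou=data,o=acme": {"employeeStatus": "active"},
    "cn=clark,ou=data,o=acme": {"employeeStatus": "inactive"},
}
print(preview_delete(entries, "(employeeStatus=inactive)"))
# ['cn=abel,ou=data,o=acme', 'cn=clark,ou=data,o=acme']
```

Seeing the two inactive users listed, and baker spared, before committing the delete is exactly the safety net a filter-driven delete would need.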
* Added show/hide feature for tables in Action Editor
Another UI change I noticed that is not documented is that tables of values, like objects to delete or attributes to be set, can now be hidden and shown to save screen real estate. I think this is new in 1.4, but regardless it is quite helpful. Often when setting up a create test, you basically read back all the attributes of a user from the connector, use that as an example, and clean it up. But that newly created user might have 5 or 40 different attributes needed to set up your exact test case, and that many lines in the table eats up all the screen. Now you can hide it, which is great.
I love how they sneak in these little UI tweaks that make life better and do not make a big deal about it.
One of the nicest things about Validator is that it is mostly written by the Consulting division at NetIQ/Novell/Microfocus. These guys use it every day. When a customer says, gee, we could really use it to test X, then they often just write that for them. I was onsite with one of the developers at a customer and we were testing Active Directory logins. I pointed out that LDAP 49 errors have sub-codes for Active Directory and they were not supporting them all. We discussed it, and the Consulting guy agreed, and just went into the source, added in all the values we could find documented and now everyone has those in that build. (This was back in the 1.1 or 1.2 days).
Thus all sorts of features get added in when they are in the real world, because the real world demands it. There is no architect deciding on features (Well there may be, but it is not just that guy driving the product). Many of the minor tweaks and features come because in order to use it successfully, they needed the feature so they just plain added it. Which is great. Sometimes in a product you see features get built in that are big and complex, and are part of some giant model, but the little fixes everyone needs daily get left behind. With Validator we often get a mix of both and it is great. This reminds me of the early days of Designer for IDM when we got nightly builds as they worked on it, and often you could discuss a feature with the developers and it would just show up overnight. Alas, as things get bigger and more complex and more people involved you need direction for your product and you need to allocate resources and prioritize features. I get all that, I just wish there was a way to get the best of both worlds there.
I am pretty happy so far with the new features, and just the few I have addressed already are pretty neat. Stay tuned for more new features in part 2!