Operation Attribute means that the attribute is read from the event's operational XDS document. This is the fastest option, since the value is retrieved straight from the in-memory DOM object.
Source Attribute means that the attribute is retrieved from the source data store.
Attribute is my favorite, since it takes the easiest way out: if the attribute is available in the operation document, it uses that; if not, it goes back to the source data store to get it.
Destination Attribute always looks to the destination data store, since the value cannot be in the operation document; after all, the operation always starts in the source, from Identity Manager's perspective. Which side is the source depends on the channel the event is in: the application is the source in the Publisher channel, and eDirectory is the source in the Subscriber channel.
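As a rough sketch in DirXML Script (the attribute names here are invented for illustration, and this is not meant as a complete, tested policy), the four tokens look something like this:

```xml
<rule>
  <description>Illustrative sketch of the four attribute tokens</description>
  <conditions>
    <and>
      <!-- Operation Attribute: read from the current operation document -->
      <if-op-attr name="acmeFlagAttribute" op="available"/>
    </and>
  </conditions>
  <actions>
    <do-set-local-variable name="fromSource" scope="policy">
      <arg-string>
        <!-- Source Attribute: always goes to the source data store -->
        <token-src-attr name="Given Name"/>
      </arg-string>
    </do-set-local-variable>
    <do-set-local-variable name="bestEffort" scope="policy">
      <arg-string>
        <!-- Attribute: operation document first, then the source data store -->
        <token-attr name="Full Name"/>
      </arg-string>
    </do-set-local-variable>
    <do-set-local-variable name="fromDest" scope="policy">
      <arg-string>
        <!-- Destination Attribute: always goes to the destination data store -->
        <token-dest-attr name="CN"/>
      </arg-string>
    </do-set-local-variable>
  </actions>
</rule>
```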
One of the really powerful and subtle features of Identity Manager is how it handles caching. If you watch trace at the beginning of a rule, you will often see the first query to an object ask for more information than that specific moment requires. For example, a policy might open with a condition to the effect of: if source attribute acmeFlagAttribute is available. Yet you will see the engine query for acmeFlagAttribute, Full Name, and Given Name. Why did it ask for more attributes than it needs for this condition?
What has happened is that the engine has read a little ahead in the policy object and realized that you are about to ask for the Full Name and Given Name of the current object a few lines of code later. Querying is one of the slower operations in the DirXML Script stable (I think do-start-workflow is currently the slowest, though to be fair it is starting a SOAP call and probably has to wait for it to complete before finishing its execution), so as long as a query event is happening anyway, why not get as much information as we know we will need? The extra attributes are cached, and when the action or condition that needs a value comes up, it can be read from the in-memory cache, which is really fast.
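In trace, that read-ahead shows up as a single query event asking for all of the attributes at once. Schematically (the attribute names and shape here are illustrative, not verbatim trace output), it looks something like:

```xml
<nds dtdversion="3.5">
  <input>
    <!-- one query covering the condition's attribute plus the
         attributes the engine noticed it will need later -->
    <query class-name="User" scope="entry">
      <association>...</association>
      <read-attr attr-name="acmeFlagAttribute"/>
      <read-attr attr-name="Full Name"/>
      <read-attr attr-name="Given Name"/>
    </query>
  </input>
</nds>
```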
Normally this is great! It means that if you need to reuse an attribute within a policy, it probably costs about the same to keep asking for the Source Attribute (or, more efficiently in general, Attribute) as it does to store the value in a local variable.
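In other words, within a single policy object these two approaches end up roughly equivalent in cost (a sketch; the attribute name is just an example):

```xml
<!-- Option 1: read once into a policy-scoped local variable and reuse it -->
<do-set-local-variable name="fullName" scope="policy">
  <arg-string>
    <token-src-attr name="Full Name"/>
  </arg-string>
</do-set-local-variable>

<!-- Option 2: just keep using the token; within one policy object,
     the second and later reads are served from the cache -->
<do-trace-message>
  <arg-string>
    <token-src-attr name="Full Name"/>
  </arg-string>
</do-trace-message>
```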
However, it is important to know and understand the limits of the caching, and some of its possible downsides.
First, let's talk about the limits of the caching. To use the cache efficiently you need to know how long an object will stay in cache (its lifetime, or time to live), and you need to know what will use the cache and what will ignore it.
The definition of the lifetime of the cache was provided to me by one of the designers of Identity Manager: "The lifetime of the cache is a single operation being processed by a single policy." That means the cache stays alive within one policy object. Thus, if you have one rule per policy object and you keep using the same attributes over and over again in each policy object, you will be querying (a "slow" event) once per policy object.
In that kind of a design, it might make sense to put some of the rules together into one policy object to boost performance by minimizing the number of queries.
The reason I do the "fingers raised as quotes" thing about queries being slow is that it really depends. eDirectory can be surprisingly quick at responding to queries for indexed attributes. With other systems it varies: the JDBC driver querying a database is usually quite fast, whereas a SOAP driver will be quite slow, due to the nature of how they handle queries.
It is clear, however, that querying, no matter how fast the system responds, will always be slower than reading the value out of the in-memory cache.
The tokens for Attribute, Source Attribute, and Destination Attribute all update and use the cache. Operation Attribute sort of does, but since it reads the value straight from the current document, there is little practical difference between using the cache or not in that case.
That covers the tokens that query implicitly. That is, while you do not use a Query token or a Java call to srcQueryProcessor, you are ultimately doing a query: as you will see in trace, a <query> event is generated and a response is received.
There are two ways to explicitly query a data store for information. The original method is a Java call to srcQueryProcessor or destQueryProcessor to get some information back. The newer method, added in Identity Manager 3.5, is the Query token (The Query token in Identity Manager), which is ultimately just a wrapper around the Java class anyway, but it is a very nice wrapper. A number of improvements make it very worthwhile.
One improvement is that the direct Java call does not know about the cache, while the Query token does. Both the Query token and the Java call will always generate a query event and process the response; the difference is that the Query token's results are used to update the cache.
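A minimal Query token invocation might look like this sketch (the class name and attribute name are assumptions for illustration; my understanding is that the attributes to read are listed as string arguments, and the results both come back in the node set and update the cache):

```xml
<do-set-local-variable name="queryResult" scope="policy">
  <arg-node-set>
    <!-- query the source data store for the current object -->
    <token-query class-name="User" datastore="src" scope="entry">
      <arg-dn>
        <token-src-dn/>
      </arg-dn>
      <!-- attribute(s) to read; these values also refresh the cache -->
      <arg-string>
        <token-text xml:space="preserve">acmeEntitlementAttr</token-text>
      </arg-string>
    </token-query>
  </arg-node-set>
</do-set-local-variable>
```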
Another improvement is that the wrapper gives us a very nice interface to the function, which makes it much easier and cleaner to support. It is much easier when there is a bit of GUI around a command, instead of just parameters to a function.
Now for the downsides. This should usually NOT be an issue, but there is a very real use case in which this problem can occur. At a client recently we had to handle moves of users in and out of three containers: Active, Disabled, and New.
The problem was a relatively rare but useful case: the user is in the Disabled container, then some event causes them to be re-enabled, so we move them to the New container (for users without any entitlements yet). Soon after, we would likely grant them a first entitlement, which would move them out of New and into the Active container.
The original design left many of these moves embedded in the drivers that saw the original event. That is, if the database was the source of the event that triggered the move, then the JDBC driver would receive that event from the database, act upon it, and decide to move the user.
Moves within a partition can be quite tricky from a purely eDirectory perspective, and moves between partitions are more complicated still. Moves use the obituary process to make sure the integrity of the directory is maintained, which is all for the best. If you get into a situation where you want to move a user twice within a relatively short time frame, you may see one of several errors, usually -637 Inhibit_move obituary or a "previous move in progress" error. This reports that the first move has not completed, so the second move is not allowed to begin. Give it time and it will all clear up. (The other possible error is a -654 partition busy, but that should be pretty rare.)
If the move event is in a driver and it gets a -637 error in response, it goes into a retry loop. Again, this is a good thing, but in our lab environment it was taking up to 30 minutes for the obituaries to clear and the -637 error to go away. All events queued behind the move event were piling up, not processing until this event cleared. (In actual fact, the event in the TAO file was a modify of an attribute that the driver converted to a move, so it was not a move event at all.)
To ameliorate this issue, we added some standalone Null drivers that watch for the changing attribute and trigger the move. This has the benefit of isolating the delay, and the stall in event processing, to a single-purpose driver, which, if it gets delayed, is not nearly as bad as delaying the main database JDBC driver.
Where this becomes relevant to the issue of querying is that we ran into a niggling timing detail that was really problematic to resolve. Based on query results, the user appears in both containers for a short period of time, and in testing there were events where, at the beginning of the event, the user was in one location, and by the time the rules finished, the user had completed the move to the second location. Additionally, one key entitlement attribute was being updated at the same time the move was happening: at the beginning of the event there was only one entitlement value on the user in the attribute to be processed, but by the time the rule got to the part where it processes that attribute to start adding entitlements, the second value had been written.
Being the performance-oriented guy that I am, I was using Attribute whenever possible, so as to take advantage of the cache. In this case, however, the cache was misdirecting me. I needed to test again at the end of the rule to see if there was more than one value in the attribute, and reading it out of cache would only return the single original value.
In this case, knowing that the Query token would force a query and update the cache, I was able to query for the attribute directly instead of using the Attribute or Source Attribute token. (You can imagine why Operation Attribute would be useless, since by definition it has stale data from the time the event began.)
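That workaround can be sketched as follows (the attribute name acmeEntitlementAttr is invented, and this is an illustration rather than the exact policy we deployed): re-query the current object, which forces a fresh query event and updates the cache, then count the values in the result.

```xml
<!-- Re-read the entitlement attribute from the source data store,
     bypassing any stale cached value -->
<do-set-local-variable name="freshValues" scope="policy">
  <arg-node-set>
    <token-query class-name="User" datastore="src" scope="entry">
      <arg-dn>
        <token-src-dn/>
      </arg-dn>
      <arg-string>
        <token-text xml:space="preserve">acmeEntitlementAttr</token-text>
      </arg-string>
    </token-query>
  </arg-node-set>
</do-set-local-variable>
<do-if>
  <arg-conditions>
    <and>
      <!-- is more than one value now present on the attribute? -->
      <if-xpath op="true">count($freshValues//value) &gt; 1</if-xpath>
    </and>
  </arg-conditions>
  <arg-actions>
    <!-- handle the additional entitlement value(s) here -->
  </arg-actions>
</do-if>
```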
The other approach I could have taken was to start a second policy object after the first and move the logic that handles the entitlement attribute into it. We know from the prior discussion that the lifetime of the cache is a single policy object, so by moving some of the rules to a second object, we get the chance to clear the cache and read the value fresh. As it happens, I did not want to break this out into two policy objects if I could avoid it, for code-clarity reasons, but it was my backup plan in case nothing else worked out.
Having gone through that somewhat convoluted but very real-world example, I hope my point is clear. The cache in Identity Manager is an excellent feature for boosting performance. However, it is critical to understand its boundaries, in order to handle cases that run into the edges of those boundaries.
If you do not know when and how it works, it is very hard to figure out what is going wrong. The information about the cache's uses and limits came up in a discussion in the Novell Support Forums. The Support Forums (at least for Identity Manager) are truly a wonderful resource. Some of the developers of the engine, and of some of the drivers, read and respond on the forum. If you have an interesting, tricky question about how the Identity Manager engine or some specific driver works, you are very likely to get a useful response! (Please think about your question, and provide the needed information, like the versions involved and Dstrace output showing the problem, if you want the fastest results. If you don't provide it up front, whoever answers will pretty much have to ask for it, since it is often impossible to troubleshoot without seeing the trace.) It is not clear to me where else you could get some of the information that is available from the developers.
You can find the Novell Support Forums at http://forums.novell.com or via NNTP at nntp://forums.novell.com and the relevant forum is: novell.support.identity-manager.engine-drivers