Absent Member

Any tips for improving integration process performance?


Hi all

 

We have a uCMDB to ServiceNow integration running, and at the moment it can push CIs across as long as the total number of CIs plus relationships stays below roughly 1600. Anything higher than that and the process eventually dies with the error "Timeout expired. See DDM logs to see what is taking so long."

 

The log files (on the Probe or on the ServiceNow side) do not show anything that would indicate a cause for the error, but from the message text it seems that the Probe is waiting for a reply from ServiceNow that it never receives.

 

Since this error only occurs when the number of CIs and their relationships exceeds roughly 1600, I suspect it has something to do with uCMDB trying to process more data than it can handle in the allotted time (e.g. due to resource limits on the server or other reasons).

 

(Admittedly, this could also be caused by ServiceNow not being able to handle the amount of data it receives, but that is something I need to ask about on another forum.)

 

I have tried the settings available in the uCMDB ServiceNow adapter such as:

  • Max threads
  • Max execution time
  • Grouping Interval (seconds)
  • Max. CIs in group

 

...but varying these values has had no noticeable effect on performance and has not prevented the error described above.

 

So I wanted to know if anyone has tips for improving the performance of the integration process, e.g. through Java virtual machine parameters or any other settings?

 

Alternatively, I would be interested to know if there is a way to slow down the Probe's processing of the CIs (I would like to try this to determine whether the problem is on the ServiceNow side).

 

TIA

Micro Focus Expert

Re: Any tips for improving integration process performance?

If we are sure that the Probe is simply waiting for a result from ServiceNow, then we can track the operation from the ServiceNow application side.

 

Perhaps the ServiceNow logs related to the API and web services would show something.

Super Contributor

Re: Any tips for improving integration process performance?

Hi,

The ServiceNow integration is a customised Push Adapter. If you get the "Timeout expired" error, then it is a simple timeout. There may be no errors at all, but if a single push job takes too much time, it is killed - and this timeout is not configurable (I haven't found any option to increase it yet). From my experience, this "internal" timeout is about 3 minutes - you have to push all CIs within that time 🙂

 

The only solution I have found is to define multiple TQLs that select the data to be pushed, instead of the single TQL you probably use now. Make sure that each TQL selects fewer than "X" CIs - you will have to determine the value of "X" experimentally; in your case it will be lower than 1600 CIs. Next, go to the ServiceNow integration point and create additional integration jobs, each using one of the queries you defined earlier.

 

When you do that, the Probe runs those jobs independently, and each has its own timeout. The jobs may run in parallel, so make sure that suits you - for example, in your environment it could be impossible to maintain data integrity when running multiple data push jobs at once.

 

regards

Gregory

Absent Member

Re: Any tips for improving integration process performance?

Thank you for your replies.

There is no clear indication in any of the log files that uCMDB is waiting for a reply from ServiceNow, so it does look like an internal timeout.

The probeGW-TaskResult.log file does output a timeout value that looks like this:
<destinationData name="timeout">600000</destinationData>

The 600000 value in the above (if it is in milliseconds, that is 600 seconds) matches the roughly 10-minute span the Integration Job runs before failing, so my current guess is that this setting is involved in the issue.

However, I have not found any location where I can modify this value (I opened a ticket with HP about this on Sunday, but it is still "work in progress").

I also tested splitting the TQL across 5 different Integration Jobs (all inside the same Integration Point). uCMDB allowed me to run at most 3 of them at the same time, so it appears to have an internal limit on how many can run in parallel.
Super Contributor

Re: Any tips for improving integration process performance?

You should also try adding all the TQLs to one integration job - a single job inside the integration point with multiple TQLs.

 

regards

Gregory

Absent Member (Accepted Solution)

Re: Any tips for improving integration process performance?

Thanks Gregory, I tried that tip. It does look like a useful way to group the TQLs: when they are set up like that, the Integration Job automatically runs the TQLs in sequence.

 

In the meantime, I also found out where the timeout value is set.

 

I took a look at the PushAdapter class inside the pushadapter.jar file (located in DataFlowProbe\runtime\probeManager\discoveryResources\Service-Now).

 

In this class I noticed that the method "invokeAdHocDDMTask" contains the following lines, which essentially throw a DataAccessGeneralException if the task has been processing for longer than the value returned by getTimeoutMillis():

    while ((!isDone) && (System.currentTimeMillis() - startTime < getTimeoutMillis()));
    if (!isDone) {
      throw new DataAccessGeneralException("Timeout expired. Check DDM logs to see what's taking so long");
    }
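
As an aside, here is a minimal, self-contained sketch of the polling-with-timeout idea those decompiled lines express. This is my own reconstruction, not the adapter's actual source: the method taskIsDone(), the short 5-second demo timeout, and the use of IllegalStateException (instead of the adapter's DataAccessGeneralException) are all placeholders.

    public class TimeoutPollSketch {

        // Demo value only; the adapter defaults to 600,000 ms (600 seconds).
        private static final long TIMEOUT_MILLIS = 5_000L;

        public static void main(String[] args) throws InterruptedException {
            long startTime = System.currentTimeMillis();
            boolean isDone = false;

            // Poll the (simulated) task status until it completes or the timeout elapses.
            while (!isDone && (System.currentTimeMillis() - startTime < TIMEOUT_MILLIS)) {
                isDone = taskIsDone();
                Thread.sleep(500); // avoid a tight busy-wait
            }

            if (!isDone) {
                // Corresponds to the point where the adapter throws its exception.
                throw new IllegalStateException("Timeout expired. Check DDM logs to see what's taking so long");
            }
            System.out.println("Task finished within the timeout.");
        }

        // Placeholder for "has the ad hoc DDM task finished?"
        private static boolean taskIsDone() {
            return false; // simulate a task that never completes, so the timeout path is exercised
        }
    }

In the real adapter the completion flag presumably comes from the ad hoc DDM task status, and the limit comes from getTimeoutMillis() as described below.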

 

 

The getTimeoutMillis() method, in turn, gets its value from the "_props" object that is instantiated earlier in the PushAdapter class.

 

The "_props" object, for its part, behaves like a Hashtable and is just used for storing properties. Its values are picked up when the HashMap "params" is built, which happens on the line that looks as follows:

HashMap params = getJobParameters(testConnection, addResult, updateResult, deleteResult);

 

 

The "params" map is populated by multiple such lines when the method above is called. For our purposes, the one of interest looks as follows:

    params.put("timeout", String.valueOf(getTimeoutMillis()));

 

 

That line calls the method named "getTimeoutMillis", which looks like this:

  private int getTimeoutMillis() {
    return Integer.parseInt(this._props.getProperty("timeout.seconds", "600")) * 1000;
  }

 

 

This method retrieves the property named "timeout.seconds" and defaults the timeout to 600 seconds (600,000 milliseconds) if nothing else is specified - which matches the 600000 value seen earlier in probeGW-TaskResult.log.

 

Looking at the other property names being retrieved, I noticed that they correspond to what is defined in the out-of-the-box "push.properties" file for the ServiceNow integration.

 

So I added the entry "timeout.seconds=1800" to that "push.properties" file, and the integration job now gets a timeout of 1800 seconds. This has allowed me to push a larger CI set across without problems.
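
To illustrate the mechanism, here is a standalone sketch of my own (not the adapter's code) that only uses java.util.Properties plus the arithmetic from the quoted method; it shows how a "timeout.seconds" entry like the one above changes the computed timeout:

    import java.util.Properties;

    public class PushTimeoutSketch {

        public static void main(String[] args) {
            Properties props = new Properties();

            // No entry present: the default "600" applies -> 600,000 ms (10 minutes).
            System.out.println("default timeout    = " + getTimeoutMillis(props) + " ms");

            // Simulates adding "timeout.seconds=1800" to push.properties -> 1,800,000 ms (30 minutes).
            props.setProperty("timeout.seconds", "1800");
            System.out.println("overridden timeout = " + getTimeoutMillis(props) + " ms");
        }

        // Mirrors the quoted adapter method: seconds from the property, converted to milliseconds.
        private static int getTimeoutMillis(Properties props) {
            return Integer.parseInt(props.getProperty("timeout.seconds", "600")) * 1000;
        }
    }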

