Super Contributor

HP uCMDB : Changing the Data Push Chunk Size Default setting


HP uCMDB: 9.05.CUP14.412

DDM Content Pack: 11.14.874

 

Hi,

 

We are currently pushing CIs and Relationships to Service Now CMDB.

 

Let's say, for example, I have an Integration TQL with 100 Business Application CIs related to 4000 Windows CIs through 4000 relationships. When we run the integration job with the default Data Push Chunk Size of 4000, the job processes the CIs first and then the relationships, so 4100 CIs and 900 relationships make it through. But the next chunk, which would contain the remaining 3100 relationships, is skipped with this error message:

 

<2014-11-03 14:01:20,634> [ERROR] [Thread-39] - [processRelations] Could not get SN sys_ids for parent or child of this relationship...skipping!

 

The above error does not make sense, because the CIs have already been pushed first and only after that do the relationships go through. Can you please look into the Jython script to see why this would happen (line 545)?
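My working guess (purely speculative; none of the names or logic below come from the actual ServiceNow push adapter script) is that the sys_id lookup for relationship endpoints is somehow scoped to the current chunk. Here is a rough Python sketch of that kind of behaviour, which would explain the skips even though the CIs were pushed first; it does not reproduce the exact counts we saw:

# Hypothetical illustration only -- NOT the adapter code.
# Shows how relationships that land in a chunk without their endpoint CIs
# get skipped when the sys_id lookup map is built per chunk.
def push_chunks(cis, relations, chunk_size):
    items = cis + relations                   # CIs are queued before relationships
    skipped = []
    for start in range(0, len(items), chunk_size):
        chunk = items[start:start + chunk_size]
        # endpoint map built only from the CIs present in *this* chunk
        sys_id_map = dict((item['id'], 'SN-' + item['id'])
                          for item in chunk if item['kind'] == 'ci')
        for item in chunk:
            if item['kind'] == 'rel':
                if item['end1'] in sys_id_map and item['end2'] in sys_id_map:
                    pass                      # relationship would be pushed
                else:
                    skipped.append(item)      # "Could not get SN sys_ids...skipping!"
    return skipped

cis = [{'kind': 'ci', 'id': 'ci-%d' % n} for n in range(4100)]
rels = [{'kind': 'rel', 'end1': 'ci-%d' % n, 'end2': 'ci-%d' % (n % 100)}
        for n in range(4000)]
print(len(push_chunks(cis, rels, 4000)))      # -> 4000: every relationship lands in a chunk without its endpoints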

 

For testing purposes, we increased the chunk size to 15,000 but ran into another issue: the probe couldn't handle the XML payload that was generated, and it flooded the probe with MySQL errors.

 

<2014-11-03 15:13:45,990> [ERROR] [ProbeGW Tasks Downloader] (DBServicesImpl.java:478) - Failed closing statment
com.mysql.jdbc.PacketTooBigException: Packet for query is too large (21684395 > 10485760). You can change this value on the server by setting the max_allowed_packet' variable.
    at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:2635)
    at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:2621)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1552)
    at com.mysql.jdbc.ServerPreparedStatement.realClose(ServerPreparedStatement.java:892)
    at com.mysql.jdbc.ServerPreparedStatement.close(ServerPreparedStatement.java:458)
    at com.hp.ucmdb.discovery.library.dblayer.DBServicesImpl.close(DBServicesImpl.java:471)
    at com.hp.ucmdb.discovery.library.dblayer.AbstractDataAccessObject.close(AbstractDataAccessObject.java:71)
    at com.hp.ucmdb.discovery.probe.agents.probegw.dbservices.ProbeGwDAO.storeTask(ProbeGwDAO.java:131)
    at com.hp.ucmdb.discovery.probe.agents.probegw.taskdistributor.ProbeTasksDistributer.handleNewTaskFromServer(ProbeTasksDistributer.java:255)
    at com.hp.ucmdb.discovery.probe.agents.probegw.taskdistributor.ProbeTasksDistributerPull.retrieveTasksFromServer(ProbeTasksDistributerPull.java:388)
    at com.hp.ucmdb.discovery.probe.agents.probegw.taskdistributor.ProbeTasksDistributerPull.access$400(ProbeTasksDistributerPull.java:35)
    at com.hp.ucmdb.discovery.probe.agents.probegw.taskdistributor.ProbeTasksDistributerPull$RetrieveTasksFromServerThread.run(ProbeTasksDistributerPull.java:188)
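
For reference, the numbers in that exception show the failed packet (about 21.7 MB) is roughly double the server's current 10 MB limit (10485760 bytes). The generic MySQL way to raise that limit is shown below; the exact location of the option file on the probe is my assumption, and whether touching the probe's bundled MySQL is supported at all is part of what I'm asking:

# In the option file used by the probe's bundled MySQL instance
# (location is an assumption -- wherever its my.ini / my.cnf lives),
# under the [mysqld] section; the MySQL service must be restarted afterwards.
[mysqld]
max_allowed_packet=32M    # default per the exception is 10485760 (10 MB)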

 

So, I'd like to know the recommended value for Data Push Chunk Size and any related settings we should adjust to avoid these problems.

 

Kindly oblige.

 

Thanks,

Praveen

6 Replies
Outstanding Contributor

Re: HP uCMDB : Changing the Data Push Chunk Size Default setting

Hello,

I am a uCMDB Support representative, and I will research your question.
Please give me a bit more time and I'll be back as soon as possible.
Thanks for your understanding.

 

 

Best Regards,

Melissa Carranza Mejias
Customer Support Engineer

If you find that this or any other post resolves your issue, please be sure to mark it as an accepted solution.
If you are satisfied with anyone's response, please remember to give them a KUDOS by clicking on the STAR at the bottom left of the post to show your appreciation.
Outstanding Contributor

Re: HP uCMDB : Changing the Data Push Chunk Size Default setting

Hello Praveen,

I hope you are doing well.

 

Regarding your questions:

1. <2014-11-03 14:01:20,634> [ERROR] [Thread-39] - [processRelations] Could not get SN sys_ids for parent or child of this relationship...skipping!

R/ This happens because the approximate limit per chunk is 4000 CIs, and in this case the remaining 3100 relationships involve more than 4000 CIs, so their endpoints are not available in the same chunk. The recommendation is to run the push in smaller chunks.

2. When the chunk size is increased, you run into the performance issue on the probe that you described.

 

Best Regards,

Melissa Carranza Mejias
Customer Support Engineer

Super Contributor

Re: HP uCMDB : Changing the Data Push Chunk Size Default setting

Hi Melissa,

 

It sounds like you just rephrased my question back to me.

 

For the initial load, we are running the integration job with small TQLs to stay under the chunk size, but there has to be a better way.

We want as much automation as possible on our production instance; it is hard to predict what the deltas will look like and to design smaller TQLs that stay within the limit.

 

Please check with your folks and let me know what settings need to be changed on the probe's MySQL database to handle an increase in the Data Push Chunk Size.
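
In case it helps frame the question: on a stock MySQL server the relevant limit can be inspected and raised at runtime with standard statements like the ones below. I'm only listing them for illustration; whether doing this against the probe's embedded database is safe or supported is exactly what I need confirmed.

-- run against the probe's MySQL instance (illustrative only)
SHOW VARIABLES LIKE 'max_allowed_packet';   -- currently 10485760 (10 MB) per the earlier exception
SET GLOBAL max_allowed_packet = 33554432;   -- 32 MB; applies to new connections and is lost on restart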

 

Thanks,

Praveen

 

 

Super Contributor

Re: HP uCMDB : Changing the Data Push Chunk Size Default setting

Any updates or progress here?

Outstanding Contributor

Re: HP uCMDB : Changing the Data Push Chunk Size Default setting (Accepted Solution)

Hello,

I hope you are doing well.

Sorry for the delay, but I needed to confirm with my backline team. This change is risky for the environment, and the process needs to be authorized. If you really need this option, I suggest you open a support ticket and request a formal process for it.

 

Best Regards,

Melissa Carranza Mejias
Customer Support Engineer

Super Contributor

Re: HP uCMDB : Changing the Data Push Chunk Size Default setting

Thanks Melissa.

I've opened a support ticket.
