HP uCMDB : Changing the Data Push Chunk Size Default setting

HP uCMDB: 9.05.CUP14.412

DDM Content Pack: 11.14.874




We are currently pushing CIs and relationships from uCMDB to the ServiceNow CMDB.


For example, say I have an Integration TQL that returns 100 Business Application CIs related to 4000 Windows CIs via 4000 relationships. When we run the Integration job with the default Data Push Chunk Size of 4000, the job processes the CIs first and then the relationships. So 4100 CIs and 900 relationships make it through, but the next chunk, which would contain the remaining 3100 relationships, is skipped with this error message:


<2014-11-03 14:01:20,634> [ERROR] [Thread-39] - [processRelations] Could not get SN sys_ids for parent or child of this relationship...skipping!


The error does not make sense, because the CIs have already been pushed before any relationships go through. Could you please look into the Jython script to see why this would happen (line 545)?
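My working theory, sketched below in plain Python, is that the script's uCMDB-id to ServiceNow sys_id map is scoped to a single chunk rather than to the whole job run. All names here are my own illustration, not the actual script's code:

```python
# Minimal sketch of the suspected failure mode (hypothetical names): if the
# uCMDB-id -> ServiceNow sys_id map is rebuilt per chunk instead of kept for
# the whole job, relationships arriving in a later chunk cannot resolve CIs
# that were pushed in an earlier chunk.

def push_chunk(chunk, sysid_map):
    """Push one chunk; return relationships skipped for missing sys_ids."""
    skipped = []
    for kind, payload in chunk:
        if kind == "ci":
            # Pushing a CI yields its ServiceNow sys_id.
            sysid_map[payload] = "sysid_" + payload
        else:  # relationship: (parent_ci, child_ci)
            parent, child = payload
            if parent not in sysid_map or child not in sysid_map:
                skipped.append(payload)  # "Could not get SN sys_ids..."
    return skipped

cis = [("ci", "app_%d" % i) for i in range(3)]
rels = [("rel", ("app_%d" % i, "app_%d" % ((i + 1) % 3))) for i in range(3)]

# Map kept across chunks: every relationship resolves.
shared = {}
push_chunk(cis, shared)
print(len(push_chunk(rels, shared)))   # prints 0 (nothing skipped)

# Map rebuilt per chunk: every relationship in the second chunk is skipped,
# exactly the symptom we see once the data spans more than one chunk.
push_chunk(cis, {})
print(len(push_chunk(rels, {})))       # prints 3 (all skipped)
```

If the real script behaves like the second case, any TQL result larger than one chunk would reproduce our error regardless of the chunk size chosen.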


For testing purposes, we increased the chunk size to 15,000, but that caused another issue: the probe could not handle the larger XML payload and went down with MySQL errors.


<2014-11-03 15:13:45,990> [ERROR] [ProbeGW Tasks Downloader] (DBServicesImpl.java:478) - Failed closing statment
com.mysql.jdbc.PacketTooBigException: Packet for query is too large (21684395 > 10485760). You can change this value on the server by setting the max_allowed_packet' variable.
    at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:2635)
    at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:2621)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1552)
    at com.mysql.jdbc.ServerPreparedStatement.realClose(ServerPreparedStatement.java:892)
    at com.mysql.jdbc.ServerPreparedStatement.close(ServerPreparedStatement.java:458)
    at com.hp.ucmdb.discovery.library.dblayer.DBServicesImpl.close(DBServicesImpl.java:471)
    at com.hp.ucmdb.discovery.library.dblayer.AbstractDataAccessObject.close(AbstractDataAccessObject.java:71)
    at com.hp.ucmdb.discovery.probe.agents.probegw.dbservices.ProbeGwDAO.storeTask(ProbeGwDAO.java:131)
    at com.hp.ucmdb.discovery.probe.agents.probegw.taskdistributor.ProbeTasksDistributer.handleNewTaskFromServer(ProbeTasksDistributer.java:255)
    at com.hp.ucmdb.discovery.probe.agents.probegw.taskdistributor.ProbeTasksDistributerPull.retrieveTasksFromServer(ProbeTasksDistributerPull.java:388)
    at com.hp.ucmdb.discovery.probe.agents.probegw.taskdistributor.ProbeTasksDistributerPull.access$400(ProbeTasksDistributerPull.java:35)
    at com.hp.ucmdb.discovery.probe.agents.probegw.taskdistributor.ProbeTasksDistributerPull$RetrieveTasksFromServerThread.run(ProbeTasksDistributerPull.java:188)
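The packet in the log (21,684,395 bytes) exceeds the probe MySQL's default 10 MB limit. As the error itself suggests, one workaround would be to raise `max_allowed_packet` on the probe's bundled MySQL instance; the exact file location and a supported value for 9.05 are things I'd want confirmed, but the setting would look like:

```ini
# my.ini of the Data Flow Probe's bundled MySQL (file location varies by
# install). 32M comfortably covers the ~21 MB packet seen in the log above.
[mysqld]
max_allowed_packet=32M
```

That said, raising the packet limit only masks the underlying problem, which is why I'm asking about the recommended chunk size instead.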


So, I'd like to know the recommended value for Data Push Chunk Size, along with any related settings, to avoid these failures.


Kindly oblige.