UCMDB - NNMi integration always ends up in error
Our integration between UCMDB and NNMi fails quite often. It runs for perhaps 5 hours, and after that we get various error messages, for example:
OutOfMemoryError: java.lang.OutOfMemoryError: Java heap space
DynamicService failed processing tables on destination: "IPADDRESS" (ID=6e368a0c24f3bc4544825ea3c040ab65)
and so on....
We also noticed that we were missing quite a lot of CIs and relationships, so we unchecked "Fail entire bulks due to invalid CIs". After that, a lot of "new" CIs were added (including IP subnets that had been missing for a while).
But now the job seems to get error status every time. I guess that before, the integration considered it acceptable to fail an entire bulk and still gave the job Success status.
I know we are still missing a couple of CIs, and also that a Layer2Connection is updated (last access time) one day but not the next, then hopefully again the day after, and so on.
Some of the missing CIs have been aged out. In NNMi the node itself looks OK, and I can see that the node has the (old) UCMDB ID set in its Custom Attributes.
1. Should I remove the UCMDB ID from the Custom Attributes field in NNMi on the missing nodes? Will the integration then treat them as "new" nodes and add them to UCMDB? Or does it not matter what I do?
2. Any ideas why our integration fails with out-of-memory and similar errors, and what we can do to avoid it?
3. Does the PageSize setting only affect the communication between the probe and the UCMDB server, not the communication between the probe and the NNMi server?
4. Is the PageSize setting related to "fail entire bulk"? Or how is the bulk size calculated?
Our current settings in the integration:
5. Do you think some of these settings could be causing some of the nodes not to be updated (last access time) in UCMDB?
6. How does the integration decide which nodes/IPs to start with? Does it begin with the lowest IP, or how does it choose which node/IP to handle first?
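For context on question 2: if the OutOfMemoryError is thrown in the Data Flow Probe's JVM, my understanding is that its heap limit would be raised in the probe's Java Service Wrapper configuration file. This is only a sketch; the exact file location and the right value for our environment are assumptions on my part:

```
# Hypothetical fragment of the probe's wrapper .conf file
# (file name/path and the value 2048 are assumptions, not our
# confirmed settings - check the actual probe installation).
# wrapper.java.maxmemory is the standard Java Service Wrapper
# property that sets the JVM maximum heap (-Xmx) in MB.
wrapper.java.maxmemory=2048
```

Would raising this be the right first step, or is the OOM more likely on the UCMDB server side?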
Wbr / Fredrik