Discovered agent data not reaching RTSM
Discovered agent data from Sys_SystemDiscovery - specifically, relationships to things like filesystems - is not reaching RTSM in some cases in my environment. As a result, I have Computer CIs with related Agent and IP Address CIs but nothing more.
On the agent, when I force the System Discovery policy to run, I do get entries in System.txt with the data to send up to OMi.
In OvSvcDiscServer.log on the Gateway I see corresponding messages:
[Mon Jul 24 08:58:53 EDT 2017][Thread-23] Updated SyncData (Key:SI:filesystemcollection:/db2/BGP/log_dir@@<servername>)
[Mon Jul 24 08:58:53 EDT 2017][Thread-23] Updated SyncData (Key:SI:filesystemcollection:/db2/BGP/db2dump@@<servername>)
[Mon Jul 24 08:59:11 EDT 2017][Thread-23] Adding hosted-on relation from CI: SI:filesystemcollection:/db2/BGP/db2dump@@<servername> to node: no**<servername>
[Mon Jul 24 08:59:36 EDT 2017][Thread-23] Adding hosted-on relation from CI: SI:filesystemcollection:/db2/BGP/log_dir@@<servername> to node: no**<servername>
But in RTSM I still don't see these filesystems, or anything else that showed up in System.txt. Is there another log somewhere I can check to see whether the CIs are failing to be created?
All relevant RTSM logs are on the DP server under the <HPBSM>\log\odb\odb folder.
The cmdb.reconciliation*.log files are the usual suspects to check first.
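One hedged way to search those logs for a specific node, assuming a Windows DP server (here <HPBSM> stands for the OMi install directory and "myserver" is a placeholder hostname; substitute your own values):

```shell
REM Search the RTSM reconciliation logs for entries mentioning the node.
REM <HPBSM> is the OMi/BSM install directory; replace myserver with your hostname.
findstr /i "myserver" "<HPBSM>\log\odb\odb\cmdb.reconciliation*.log"
```

Entries mentioning reconciliation conflicts or ignored CIs for that node are a good hint that the incoming data is being dropped rather than created.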
Just wanted to leave an update here...
This problem is resolved. It is absolutely essential that you have a fully qualified hostname in the following locations:
- The Computer CI (UNIX CI in this case)
- OPC_NODENAME in the eaagt namespace on the agent
- LOCAL_NODE_NAME in the xpl.net namespace on the agent
Once you set all of those, you can restart opcmsga and it re-registers all the necessary node data in RTSM.
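For anyone following along, a sketch of how those agent-side settings can be checked and set with the standard Operations Agent tools (myhost.example.com is a placeholder FQDN; substitute your node's actual fully qualified name):

```shell
# Check the current values in the two namespaces mentioned above
ovconfget eaagt OPC_NODENAME
ovconfget xpl.net LOCAL_NODE_NAME

# Set both to the fully qualified hostname (placeholder FQDN shown)
ovconfchg -ns eaagt -set OPC_NODENAME myhost.example.com
ovconfchg -ns xpl.net -set LOCAL_NODE_NAME myhost.example.com

# Restart the message agent so it re-registers the node data in RTSM
ovc -restart opcmsga
```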
Interesting. What normally happens is that OPC_NODENAME is set automatically by the agent to the hostname of the node (either short or long, depending on how it is defined on the node). If the hostname matches what you would expect to see when doing a DNS lookup on the OMi server, nothing needs to be done.
LOCAL_NODE_NAME is used to override OPC_NODENAME. This might be needed, for example, where the agent is installed in a VM in a cloud environment and the local hostname is completely different from the hostname the OMi server would resolve from its external IP.
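A quick way to spot such a mismatch is to compare the two views by hand (a sketch; <agent-ip> is a placeholder for the node's external IP, and nslookup output formats vary by platform):

```shell
# On the agent: the name the agent will pick up by default
hostname

# On the OMi server: the name DNS resolves for the agent's IP
# (replace <agent-ip> with the node's external IP address)
nslookup <agent-ip>
```

If the two names differ, that is the situation where setting LOCAL_NODE_NAME to the DNS-resolvable FQDN applies.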
This is exactly what was happening in my case. Our AIX systems do not consistently use DNS (yeah... I know). Most of our AIX systems do not have fully qualified names in OPC_NODENAME, but our OMi server will resolve the FQDN. The mismatch has caused several issues.
Thanks a lot for posting your solution!
I am currently supporting a customer where we had dismissed the missing OPC_NODENAME, but it looks like this might be the solution to our problem as well.