Reporting variations between actual and desired state monitoring in Operations Bridge Manager

in IT Operations Management

By: Carol Park and Martin Recker


Micro Focus Operations Bridge Manager (OBM) 2018.11 delivers the ability to compare the agent-based policy configuration with the desired state on the OBM server, all the way down to the parameter level.  This also includes reporting on SiteScope servers to compare the assigned versus actually deployed instances per SiteScope template.

The capability is provided by the opr-node command line interface.  OBM 2018.11 also includes a REST web service for the node config report.

This article walks you through the configuration reporting options in chronological order of their introduction, including using the report for automated remediation, and finishes with examples of the latest parameter-based reporting.

OBM (formerly OMi) has always had a GUI-based Node Configuration Report, which is generated for one node at a time.  It compares the assigned policies with the policies on the monitored node and reports any mismatch, including missing policies, policies that exist on the node but are not assigned, and differences in policy version and enabled state.


Figure 1


With OBM 10.63 IP2 and above, the opr-node CLI includes the -node_config_report switch to report configuration mismatches on multiple nodes at a time.  Now with OBM 2018.11, the CLI has been further improved to compare parameter values as well (see Figure 1).

The basic syntax is:

<OBM_Home>/opr/bin/opr-node -node_config_report [ -num_parallel <num> ]

                                                [ -show_details | -issues_as_csv | -no_agent_check ]

                                                [ -parameter_values ] <targets>


By default, the command contacts 50 agents in parallel, which you can adjust with -num_parallel <num>.

Also, by default, no parameters are checked.  See the Parameters section below for checking parameter values.

Bear in mind that most mismatches are typically due to a policy version mismatch rather than just a pure parameter value mismatch.  A policy version mismatch is always shown and has a higher priority than a parameter mismatch.  Mismatches are typically due to deployments that didn’t happen, such as:

  • Failed deployment jobs (e.g., the agent was unreachable for longer than the retries configured in Infrastructure Settings).
  • Suspended jobs that are queued up behind a failed job.
  • Jobs that are still running when the report is run.
  • Jobs that were manually deleted by the user.

CSV output

If you specify -issues_as_csv, then each conflict is output as a one-liner in this format:

<issue>,<node>,<nodeCiId>,<policyName>,<policyVersion>

<issue> can be INSTALL, REMOVE, ENABLE, DISABLE, or UPDATE_TO.  For example:

  • INSTALL means the policy is part of the desired state on the OBM server but not installed on the agent.
  • ENABLE means the policy needs to be enabled on the agent to match its corresponding status on the OBM server.
  • UPDATE_TO means that the policy version on the agent needs to be updated to the same version as assigned on the OBM server.

If a node cannot be reached, NODE_ERROR,<node>,<nodeCiId>,<reason> is printed.


# /opt/HP/BSM/opr/bin/opr-node -node_config_report -issues_as_csv -vn "OMi Deployment with HP Operations Agents"

INSTALL,,44b61dc0126f70cb8644878b84fd527f,"Site Log",1.9


REMOVE,,44b61dc0126f70cb8644878b84fd527f,"OMi Server Processes (Linux)",1.0

NODE_ERROR,,4c48643322c4f3d08c8b735bf785beb0,"Not authorized to list installed policies on node."

3/3  (2018.05.21 19:23:57)

INFO:  Operation succeeded
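
Because the CSV output is one conflict per line, it is easy to post-process.  Here is a minimal sketch that counts the conflicts of each type; the sample file simply reuses the example lines above (node names were elided in the output, hence the empty second field), so the file name and content are illustrative only.

```shell
# Hypothetical sample file in the -issues_as_csv format shown above
cat > /tmp/report.csv <<'EOF'
INSTALL,,44b61dc0126f70cb8644878b84fd527f,"Site Log",1.9
REMOVE,,44b61dc0126f70cb8644878b84fd527f,"OMi Server Processes (Linux)",1.0
NODE_ERROR,,4c48643322c4f3d08c8b735bf785beb0,"Not authorized to list installed policies on node."
EOF

# Summarize how many conflicts of each type were found
cut -d, -f1 /tmp/report.csv | sort | uniq -c
```

The same one-liner works on a real report saved from opr-node, which makes it handy for a quick health overview across a large environment.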

Note: In all of the examples, the CLI is shown without entering credentials because they were saved in an RC file (default $HOME/.opr-cli-rc) via these commands:

<OBM_Home>/opr/bin/opr-node -set_rc username=admin

<OBM_Home>/opr/bin/opr-node -set_rc password=xxx

As of OBM 2018.05, you can also save the complete connection information in the RC file, including jks, jksPassword, smartcard, winCrypto, server, port, and ssl/url details.



If none of these options (-show_details | -issues_as_csv | -no_agent_check) is specified, then the default output is one status line for each node (OK, CONFIG_ISSUE, or NODE_ERROR) plus one line for each policy issue found.  This format is particularly useful for automating remediation.

Remediation example on Linux (the same CLI runs on Windows)

# /opt/HP/BSM/opr/bin/opr-node -node_config_report -vn "OMi Deployment with HP Operations Agents" > /tmp/statusFileA

3/3  (2018.05.21 19:34:23)


# cat /tmp/statusFileA

omi-0.omisvc.opsbridge1.svc.cluster.local (4ef87d24b38516259e736745e699a449) : OK

 (44b61dc0126f70cb8644878b84fd527f) : CONFIG_ISSUE

  "Site Log" 1.9 enabled, Agent: NOT_FOUND,  INSTALL

  "Sys_SystemMetricStreaming" 1.3 enabled, Agent: NOT_FOUND,  INSTALL

  "OMi Server Processes (Linux)" 1.0 enabled,  REMOVE

 (4c48643322c4f3d08c8b735bf785beb0) : NODE_ERROR (Not authorized to list installed policies on node.)

INFO:  Operation succeeded


Re-deploy the assigned content to the nodes that have configuration issues:

# grep CONFIG_ISSUE /tmp/statusFileA | awk '{ printf $1" " }' > /tmp/issueNodesA

# /opt/HP/BSM/opr/bin/opr-agt -deploy -force -node_list "`cat /tmp/issueNodesA`"


If your OBM is configured to create Suspended jobs, then start the jobs manually:

# /opt/HP/BSM/opr/bin/opr-jobs -list suspended -nl "`cat /tmp/issueNodesA`"

# /opt/HP/BSM/opr/bin/opr-jobs -start suspended -nl "`cat /tmp/issueNodesA`"

# /opt/HP/BSM/opr/bin/opr-jobs -list -nl "`cat /tmp/issueNodesA`"


Re-run the report to confirm that all conflicts are resolved:

# /opt/HP/BSM/opr/bin/opr-node -node_config_report -nl "`cat /tmp/issueNodesA`"  > /tmp/statusFileB

# grep CONFIG_ISSUE /tmp/statusFileB | awk '{ printf $1" " }'



If you want to compare the assigned (desired) state of the monitoring configuration, including parameters, specify -parameter_values -show_details (in OBM 2018.11 both are required if you want to see which parameters have conflicts).  The CLI takes longer to run when parameters are included, and with -show_details the output is also quite verbose.

Parameters report example

This example shows partial output for a single target node.  Two policies have mismatched parameters, each flagged with FIX_PARAM_ISSUE.  For example, the PostgreSQL_HitsCount policy shows that the OBM server set the PostgreSQL BlockHits Count Severity parameter to Warning, but the agent is configured with Major.  At the end of the output, the Summary line shows that the report checked one node and found one node with a CONFIG_ISSUE, in this case the two mismatched parameters.

# /opt/HP/BSM/opr/bin/opr-node -node_config_report -parameter_values -show_details -nl

Node Ci ID              = 44cb3382b0f70af7a136bb87ba425433

Ci Name                 = mneme

Ci Type                 = unix

Primary DNS Name        =

Operating System        = Linux Red Hat 7.5 3.10.0

IP Address              =

Node Status             = CONFIG_ISSUE

Aspect -> Node Assignments (12)

  aspect "Apache WS Discovery" 1.1


      Ci Type = unix

      Aspect Version ID = 0c0a9229-4b37-b51a-810c-5be04f96ff27

      Assigned By = AutoAssignment: OMi Deployment with HP Operations Agents

      Assignment Date = 2018.12.17 08:26:52

      Enabled = Yes

      Directly Assigned = No

      Parent(s) = CLP: Discovery (1.3)

    Policy Templates from Aspect

      "ApacheWS-Discovery" 1.0 enabled, Agent:1.0 enabled

        Parameters: 0


Direct Template Assignments (5)

  "3PAR_Traps" 1.6 enabled, Agent:1.6 enabled

    Parameters: 0


Indirect Template Assignments (28)

  "OA-Systemtxt" 1.2 enabled, Agent:1.2 enabled

    Parameters: 0




Proxy Assignments (0)

Policy Templates on Agent no longer assigned (0)


1/1  (2019.02.06 17:15:07)

Summary: 1 total, 0 OK, 0 NODE_ERROR, 1 CONFIG_ISSUE

INFO:  Operation succeeded
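

Because the -show_details output is long, it helps to filter a saved report down to just the policies that actually have parameter conflicts.  A small sketch, using a hypothetical two-line excerpt (the PostgreSQL_HitsCount line mirrors the conflict described above; the file name and versions are illustrative):

```shell
# Hypothetical excerpt of a -show_details report saved to a file
cat > /tmp/details.txt <<'EOF'
  "PostgreSQL_HitsCount" 1.0 enabled, Agent:1.0 enabled, FIX_PARAM_ISSUE
  "3PAR_Traps" 1.6 enabled, Agent:1.6 enabled
EOF

# Show only the policies whose parameter values conflict
grep FIX_PARAM_ISSUE /tmp/details.txt
```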


SiteScope monitors

You can report on what monitors are deployed from OBM to a SiteScope server in the OBM UI.  Navigate to Administration > Setup and Maintenance > Connected Servers, select a SiteScope connected server and click "Launch SiteScope report".  This report does not include the parameter values.

In OBM 2018.11 (OBM 10.71), you can now also use the opr-node CLI to report on the monitor instances deployed from OBM to a SiteScope server, including parameter values.

SiteScope report example

In this example, the node is a SiteScope server.  The "Linux basic template (Basic Linux)" template was assigned to four node CIs and the corresponding policy was deployed to the SiteScope server.  At some point, a user accidentally deleted one of the SiteScope monitor groups, so only three nodes are now being monitored in SiteScope, as shown in Figure 2.


Figure 2


The node configuration report identifies the missing instance and marks it as NOT_FOUND.  Note that the Summary line at the end reports OK rather than CONFIG_ISSUE, because the configuration issue is located on the proxy server rather than on the CI to which the configuration was assigned.

# /opt/HP/BSM/opr/bin/opr-node -node_config_report -show_details -parameter_values -nl

Node Ci ID              = 4a7090e20763d06a96746cbf3ead3f7f

Ci Name                 = prospero

Ci Type                 = nt

Primary DNS Name        =

Operating System        = Windows Server 2012 R2 6.3

IP Address              =

Node Status             = OK


Proxy Assignments (1)

  "Linux basic template (Basic Linux)" 1.0 enabled, Agent:1.0 enabled, FIX_PARAM_ISSUE

    Parameter Issues: 1

      Host:"": NOT_FOUND, Agent: AVAILABLE

    Parameters: 1



           "Host:"":User Name" : "root", Agent: "root"

           "Host:"":Password" : *****, Agent: *****

           "Host:"":Frequency (sec)" : "600", Agent: "600"


           "Host:"":User Name" : "root", Agent: "root"

           "Host:"":Password" : *****, Agent: *****

           "Host:"":Frequency (sec)" : "600", Agent: "600"


           "Host:"":User Name" : "root", Agent: "root"

           "Host:"":Password" : *****, Agent: *****

           "Host:"":Frequency (sec)" : "600", Agent: "600"

Policy Templates on Agent no longer assigned (0)


1/1  (2019.01.05 17:12:25)

Summary: 1 total, 1 OK, 0 NODE_ERROR, 0 CONFIG_ISSUE

INFO:  Operation succeeded


Considerations when reviewing the report output

Be aware of these potential false positives:

  • Policy version or parameter conflicts could be reported if the same policy template is part of multiple assignments. For example, two aspects result in the same policy with different parameter values or different versions of the same policy being deployed to the node.
  • In a manager-of-manager scenario, if more than one OBM/OM server is responsible for the same agent, then policies on the agent are not considered if they belong to a different OBM/OM server from the OBM server where the node config report is run. You can see the owner by running ovpolicy -l -level 2 on the agent.  Or, if you are on the OBM server, then run ovpolicy -l -level 2 -host mgdnode.fqdn -ovrg server
  • In a secure environment, if the agent is configured for pull mode, then it may have the correct monitoring deployed, but the node config report shows a NODE_ERROR because OBM does not have connectivity to the agent.
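
For the pull-mode case, one option is to report the desired state only, without contacting the agents at all.  A sketch, assuming the default install path and the view name used earlier (the -x guard just keeps the snippet safe to run on a machine without OBM):

```shell
# Desired-state-only report: -no_agent_check skips agent connectivity
# entirely, which avoids spurious NODE_ERRORs for pull-mode agents.
OPR=/opt/HP/BSM/opr/bin
if [ -x "$OPR/opr-node" ]; then
    "$OPR/opr-node" -node_config_report -no_agent_check \
        -vn "OMi Deployment with HP Operations Agents"
else
    echo "opr-node not found; run this on the OBM server"
fi
```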


REST web service

The format of the URL is https://GW.FQDN/opr-config-server/rest/ws/10.71/node_config_report_list/<CI_ID>.

To report on multiple nodes, the format is https://GW.FQDN/opr-config-server/rest/ws/10.71/node_config_report_list?filter=<CI_ID_1>&filter=<CI_ID_2>...

These URL parameters can be applied:

  • listParameterValues, default is false
  • dontContactAgent, default is false
  • skipOtherPolicyOwner, default is false (Note: the CLI always sets it to true)
  • format, default is xml; json can also be specified

The output data model is described in <OBM_Home>/opr/api/schema/OprDataModel.xsd on the OBM server.
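
As a sketch, the multi-CI URL above can be assembled like this (GW.FQDN and the CI ID are placeholders; the resulting URL can then be fetched with curl or any REST client using your OBM credentials):

```shell
# Build a node_config_report_list URL for one or more CI IDs.
# GW.FQDN is a placeholder for your OBM gateway server.
BASE="https://GW.FQDN/opr-config-server/rest/ws/10.71/node_config_report_list"

url_for() {
    u="$BASE"; sep="?"
    for ci in "$@"; do
        u="$u${sep}filter=$ci"
        sep="&"
    done
    # Request parameter values and JSON instead of the XML default
    echo "${u}${sep}listParameterValues=true&format=json"
}

url_for 44cb3382b0f70af7a136bb87ba425433
```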

REST web service example

In this example, the opr-ci-list CLI is used to get the CI ID.  The REST web service output is quite verbose, so the collapsed structure is shown.  The XML tag <desired_state> represents what the OBM server side has assigned for the node and <deployed_state> represents what is actually on the agent.

# /opt/HP/BSM/opr/bin/opr-ci-list -nl


Type: unix

ID: 44cb3382b0f70af7a136bb87ba425433

Label: mneme



# /opt/HP/BSM/opr/bin/ -r -u ""



Figure 3


Try out the node configuration report with OBM 10.63 IP2 or later.  It captures the majority of cases where the actual and desired monitoring configuration do not match, and with the -no_agent_check switch it can simply report the desired state for a group of nodes.  Or upgrade to OBM 2018.11 to also report assigned parameter values and detect mismatched parameter values, including mismatched SiteScope monitoring configuration.


