KAKA_2 (Acclaimed Contributor)

Structured Log File Policies

Dear Experts,

Here is my use case/requirement: an automation running on an application server processes files as it receives them and writes a log line:

NameOfFile | TimeOfExecution | serverName

The application team doesn't want to receive an alarm; rather, they want to see somewhere which file was processed last, at what time, and with what status.

I am wondering whether this policy type fits such a requirement. Is there any example/use case/detailed document explaining this policy? (The online help is not sufficient.)

Thank you in advance.
-KAKA-

 

KAKA_2 (Acclaimed Contributor)

Re: Structured Log File Policies

Could someone please reply? I am not able to work out what the end result would be. What is the use case/scenario? I am most interested in metric and generic output policies.

It is a bit urgent.

-KAKA-

0 Likes
Micro Focus Expert

Re: Structured Log File Policies

It sounds like you might want to log a metric every time something is written to a structured log file, so that the app owner can graph it to see any gaps in the times or to verify the regularity of processing.

You can create a policy to log metrics from a structured log file data source. For this, you can create the policy on OMi and assign/deploy it to an OA 12 node, or create and activate the policy on OpsCx (BSMC).

Let me walk you through an example:

In this example the logfile looks like this:

2017-01-30 17:43|if_one|OK
2017-01-30 17:43|if_two|OK

For each tab of the policy mentioned below, I attached a screenshot so you can recreate the policy.


1. Source tab

You need to decide whether to specify an OM pattern or whether your logfile is structured enough to use a single delimiter between fields (e.g. CSV-like). If the latter, and you want to log more than one metric from a single logfile entry, the data also needs to be self-describing so you can set the metric name. Otherwise, an OM pattern gives you the flexibility to pick out the bits you want from the log.

I chose the latter because I assume you are an old hand at OM pattern matching, so I wanted to show the other approach. The field names become variable names. Put a comma between each field name, and enter the delimiter in the Data Field Separator box.

Whichever you use, be sure to test the pattern by loading sample data and then viewing it (the popup has two tabs: one for the Raw data and one for the Structured data). The Structured data tab, when you click the Refresh icon, shows your variable names as column headings with the sample data underneath. Caution: load a file that is representative of your data but not too big (not multi-MB).
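
To make this concrete, here is roughly what I entered in the Source tab for the example logfile (the field names are my own choice; you can use any names, as long as you reuse them consistently in <$DATA:...> variables later):

Field names:          logdate,interface,status
Data Field Separator: |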


2. Mappings tab

My example logfile doesn't have a 'real' metric to log, so I convert the status to a number and log that as the metric: OK becomes 1 and ERR becomes 0.
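
For the example, the mapping entries would look something like this (one source-to-target pair per status value):

OK  -> 1
ERR -> 0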

3. Defaults tab

If you loaded and viewed the sample data, you can drag and drop the variables into the appropriate fields. This defines what is logged to the agent's performance data store by default, unless you specify otherwise in the Rules tab.

Let me translate what you set in this tab into what it becomes in the agent's performance data store (a sketch of my example settings follows this list):

"Data domain" is the data source ("ovcodautil -showds").  I hardcoded it.
"Metric class" is the class within the data source.  I hardcoded it.
"Metric name" is the name of the metric within the class.  I hardcoded it.  Note: with a self-describing logfile such as node:CPU:99:Memory:98 and a pattern of node,metricName,metricValue (with metricName,metricValue as recurring fields) you could specify <$DATA:metricName> as the metric name.
"Value" is the metric value.  In this case I used the mapping so the value will either be 1 or 0 based on the Status being OK or ERR.
"Time measured" can be left empty and so will default to the time when the policy executes at each polling interval, but in this case I used the DATETIME function to show you can set the time to what is in the logfile.  If the time in the logfile was Epoch, then you wouldn't need this function and could just specify <$DATA:logdate>.
"Related CI" is the instance.  Eg, in this example, each interface is a different instance to track for its status.

4. Options tab

I decided I want ALL log entries to generate metric data as per the Defaults tab, so I didn't create any rules to be selective or to override the Defaults. Just tick the box to store unmatched records (the same concept as 'forward unmatched' in an opcmsg policy).
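
So in my example the relevant setting on this tab is simply:

Store unmatched records: ticked  (every log entry is logged using the Defaults)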


Once the policy is running on the node, check that data is being logged. E.g.:

"ovcodautil -showds" should show the data source name.
"ovcodautil -dumpds <data source name>" should show the most recent metrics logged for that data source.

Here is my example:

# ovcodautil -dumpds KAKA
===
01/30/17 05:58:00 PM|relatedCi                     |if_one    |
01/30/17 05:58:00 PM|originalName                  |N/A       |
01/30/17 05:58:00 PM|metricClass                   |Interface |
01/30/17 05:58:00 PM|unit                          |N/A       |
01/30/17 05:58:00 PM|Status                        |      1.00|
01/30/17 05:58:00 PM|timeMeasured                  |1485827880|
01/30/17 05:58:00 PM|timeLogged                    |N/A       |
01/30/17 05:58:00 PM|integrationId                 |N/A       |
01/30/17 05:58:00 PM|node                          |N/A       |
===
01/30/17 05:57:00 PM|relatedCi                     |if_two    |
01/30/17 05:57:00 PM|originalName                  |N/A       |
01/30/17 05:57:00 PM|metricClass                   |Interface |
01/30/17 05:57:00 PM|unit                          |N/A       |
01/30/17 05:57:00 PM|Status                        |      1.00|
01/30/17 05:57:00 PM|timeMeasured                  |1485827880|
01/30/17 05:57:00 PM|timeLogged                    |N/A       |
01/30/17 05:57:00 PM|integrationId                 |N/A       |
01/30/17 05:57:00 PM|node                          |N/A       |


Then you can create a chart in Performance Dashboard. To do this, select the node where this policy is running, then select the agent data source; you should then see, and can select, the class, metric, and instance in the drop-down list. See the attached screenshot.
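
For my example, the selections in Performance Dashboard would be along these lines (the names come from the ovcodautil output above):

Node:        the node where the policy is deployed
Data source: KAKA
Class:       Interface
Metric:      Status
Instance:    if_one or if_two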

 

One last comment: "Related CI" in the Defaults tab can also be used to select a CI in Performance Dashboard, which can then graph the data for that instance. E.g. if there were a CI representing "if_one", then selecting it would show just the metrics for that instance. I didn't go into how to do that here (I replied to someone else on the community not long ago about this), but it entails setting the External ID attribute of the CI to be exactly the same as the Related CI in the policy, and the Monitored By attribute of the CI to contain BSMC:<connected server name>.
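
As a sketch, a CI meant to match the first instance in my example would need attributes along these lines (<connected server name> is the name of your connected BSMC server):

External ID:  if_one   (must match the policy's Related CI value exactly)
Monitored By: BSMC:<connected server name>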

 

Hope this is enough to get you started.

CP.

KAKA_2 (Acclaimed Contributor)

Re: Structured Log File Policies

Thanks for explaining things in detail. I will give it a try as soon as our test setup is running again 🙂

-KAKA-

Aodizzio (Contributor)

Re: Structured Log File Policies

Hi Carol,

 

I tried to deploy your policy on Windows servers with OA 12.05.006, but it didn't work; the data source is not even created. On Linux servers the policy works perfectly with the same OA version.

Is there any problem with this type of policy on Windows? I am only changing the path of the log; the policy deploys fine, but the data source is not created. Do you have any idea why this is happening?

 

 Thank you and best regards,

Micro Focus Expert

Re: Structured Log File Policies

It should work the same way on Windows as on Linux, after you change the log file path/name to the Windows location.

If you have the option, the first thing I would do is upgrade the agent version to 12.10 or higher. It has improvements for logging metrics from structured log files.

CP.

 

Aodizzio (Contributor)

Re: Structured Log File Policies

Hi Carol,

Thank you for your quick reply. Unfortunately we don't have the option to upgrade to OA 12.10 at the moment, because our OMi version is 10.62 (for this client) and, if I am not mistaken, we would need to upgrade it to 10.72 to support that version of the agent.

 

I don't know if you have any policy of this type that you know for sure works on Windows and could share with me. If you don't, no worries.

 

Thank you again and best regards.

andreask (Outstanding Contributor)

Re: Structured Log File Policies

Metric collection in the structured logfile policy is broken with agent versions 12.06 and 12.10.

Whether 12.05 is affected, I can't tell you for sure.

You need to get a hotfix from support.

The 12.11 release notes state that it is fixed, but I haven't yet had a chance to test it properly.

Good Luck!

Aodizzio (Contributor)

Re: Structured Log File Policies

Thanks very much for the information, Andreas. I will ask support for this hotfix...

Micro Focus Expert

Re: Structured Log File Policies

Thanks, Andreas, for picking up my mistake.

 

CP.
