structured metric logfile policy for several metrics in one row
I created a script around httpstat for monitoring websites. It creates logfiles with performance data that look like:
timestamp sitename metricvalue1 metricvalue2 metricvalue3 metricvalue4
I created a structured logfile policy for metrics and used an OM-style pattern, which matches the sample data.
This all works fine, but I can't find out how to store all metric values in the perf agent. I created 4 conditions, but the policy stops after the first match, so only the first value is stored in the perf agent. Like this:
[root@omi-lin1 TYB_Httpstat]# ovcodautil -dumpds TYB_Httpstat
06/24/20 12:55:34 PM|relatedCi |http://somewebadress.com|
06/24/20 12:55:34 PM|originalName |Namelookup|
06/24/20 12:55:34 PM|metricClass |Website_performance|
06/24/20 12:55:34 PM|unit |ms |
06/24/20 12:55:34 PM|Namelookup | 0.00|
06/24/20 12:55:34 PM|timeMeasured |1592996196|
06/24/20 12:55:34 PM|timeLogged |N/A |
06/24/20 12:55:34 PM|integrationId |N/A |
06/24/20 12:55:34 PM|node |http://www.somewebadresse.com|
I checked the documentation, but it doesn't help me figure out how to store multiple values per row in the perf agent.
The alternative is to create 3 policies, one for each value.
As this policy type is for structured data and is able to parse multiple lines, I wonder whether really only one value can be stored in the perf agent. I am sure R&D had in mind that CSV files may contain more than one metric value per line.
Has somebody done something similar already, or does anyone know how to do this?
Any advice is welcome! Thanks!
I think you are right that you need multiple policies if you are using an OM pattern to match all metrics as you have them in one line. If you have flexibility in determining the log file format, you could do it this way:
timestamp sitename metricname metricvalue
with a separate line for each metricname/metricvalue pair rather than everything in one line. Then in the Default tab, you could specify Metric name as <$DATA:metricName> and Value as <$DATA:metricValue>
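With that one-pair-per-line format, the conversion from the original wide format can be sketched like this (a minimal sketch; the metric names, column order, and file names are illustrative assumptions, not taken from the poster's actual script):

```shell
# Sample wide-format line (one line per probe run, four metric values):
printf '2020-06-24T12:55:34 http://somewebadress.com 0.00 12 34 56\n' > httpstat_wide.log

# Emit one "timestamp sitename metricname metricvalue" line per metric.
# The metric names below are assumptions for illustration.
awk '{
  split("Namelookup Connect ServerProcessing ContentTransfer", names, " ")
  for (i = 3; i <= NF; i++)
    print $1, $2, names[i-2], $i
}' httpstat_wide.log > httpstat_long.log

cat httpstat_long.log
```

Each input line then yields four output lines, which the policy can match one at a time.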
The alternative would be a file format like this:
Then instead of using an OM Pattern you would use Static Fields:
The Recurring Fields would be metricName,metricValue and the Data Field Separator would be | (or whatever you use in the file).
The Default tab would look the same as in the previous scenario.
Of the two choices, I would recommend using OM Pattern, since it is more intuitive and has less risk of issues.
By accident I found an example of how to do this with a recurring pattern.
Here is the link, just for reference, in case others run into the same situation.
Oh God! Thank you!
I spent 2 hours trying to understand the documentation and couldn't find a way to avoid creating one policy per metric.
The documentation should be rewritten with more explanation, because I think more people want to read CSV files with metrics, and with everything in one line it's obvious you'll find more than one metric.
So correct me if I'm wrong:
If I have the line:
I should use "Static Fields" to extract all of the metric names and get the values.
But how should I configure the rule?
If I create a rule like:
if Timestamp matches "*", should the policy get every metric and value in the line?
Yes, it's a real pain.
To my understanding, it doesn't work with more than one metric/value pair per line, as processing stops after a match. So a single line cannot be processed several times, which would be needed to store more matched values.
What you can do is create multiple policies for the same logfile. I do this for opctrapi monitoring.
Policy one matches and stores the first metric/value pair of a line, policy two matches pair two, and so on.
But be aware: I never got more than 5 policies for one logfile to store metric/value pairs properly. It also took some time after deployment until all values were collected. I checked every minute with ovcodautil -dumpds MYDATASOURCE; on the first run I saw that 2 metrics were stored, and after a few more runs I saw that all 5 metrics were collected. I never got it to work with 7 policies for the same logfile. But it is useless to open a case for this.
My recommendation is to use a wrapper script to translate your "csv" file into something like this:
TIMESTAMP CI_NAME METRIC VALUE
This is a real-life sample from one of my logfiles:
2021-03-30T15:22:01 https://website.de HTTPStatus=301
2021-03-30T15:22:01 https://website.de DNSLookup=5
2021-03-30T15:22:01 https://website.de TCPConnection=1
2021-03-30T15:22:01 https://website.de TLSHandshake=112
2021-03-30T15:22:01 https://website.de ServerProcessing=0
2021-03-30T15:22:01 https://website.de ContentTransfer=0
2021-03-30T15:22:01 https://website.de TotalTime=118
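Such a wrapper could be sketched like this (the CSV column order, the semicolon separator, and the file names are my assumptions, not the poster's actual script):

```shell
# One CSV line per probe run (column order assumed for illustration):
printf '2021-03-30T15:22:01;https://website.de;301;5;1;112;0;0;118\n' > probe.csv

# Translate each CSV line into one METRIC=VALUE line per metric.
awk -F';' '{
  split("HTTPStatus DNSLookup TCPConnection TLSHandshake ServerProcessing ContentTransfer TotalTime", m, " ")
  for (i = 3; i <= NF; i++)
    printf "%s %s %s=%s\n", $1, $2, m[i-2], $i
}' probe.csv > probe.perf

cat probe.perf
```

This produces exactly the one-pair-per-line format shown above, which a single policy can then consume.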
Be aware to use a supported timestamp format, or you will run into conversion trouble, which is badly documented. It took me days to figure out how to convert it.
I can't imagine this feature can't get more than one metric per line.
My CSV file has more than 10 metrics per line, so I won't create 10 policies for one file. We are in 2021, not in the '80s 🙂
Thank you so much!
That's my thinking, too!
But I don't want to get your hopes up: Support will tell you "works as designed" and to open an ER at Idea Exchange, which by the way I already did for this.
It looks to me like not many people use this kind of policy, except me, and so it happens that for nearly each agent version I found a defect in opcgeni.
Please let me know about the outcome of the case! Thanks!
Yes, I know what you mean, but what is the best way to collect metrics from CSV?
My CSV files are generated by a script, so I can put the metrics in a JSON file, but will the policy get all the metrics?
If you can edit the scripts, then I would do it like this:
Split it up to one metric per file, so at the end you have 10 files if it is a large amount of data you need to parse in one run. You can keep your CSV for easier use in e.g. Excel. Or modify your file so that each line contains only one metric/value pair, similar to my example above.
Use one policy to read all the files. I usually do this by giving /path/to/my/files/<*>.perf; the agent will then parse all *.perf files in this folder.
I overwrite the files on every script run, so only current data is in the files; that saves trouble with log rotation. For this to work, it is important to use a proper timestamp: even if a file gets read again by the agent, it will not store data that is already in the agent. That also makes it safe to use the "read always from beginning" option.
I use OM patterns to parse the data, e.g. <*.timestamp><S><*.CIName><S><*.metricname>=<#.metricvalue>
Then build a condition that stores the data when <metricname> is equal to or greater than 0 or -1.
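For reference, my rough reading of that OM pattern as a POSIX extended regex (a hypothetical equivalent for testing sample lines, not official OM pattern semantics):

```shell
# <*.timestamp> -> non-space token, <S> -> whitespace,
# <*.metricname> -> text before '=', <#.metricvalue> -> a number.
printf '2021-03-30T15:22:01 https://website.de TotalTime=118\n' |
grep -E '^[^ ]+[[:space:]]+[^ ]+[[:space:]]+[^= ]+=-?[0-9]+(\.[0-9]+)?$'
```

A quick grep like this helps verify that the wrapper output actually has the shape the pattern expects before deploying the policy.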
If this does not work after the first deployment of the policy, wait one policy run interval and then do an opcagt -cleanstart. Sometimes the OA agent is unreliable, but I am too lazy to debug this and open a case for it. I have too many cases already, so I live with the workaround.
We found a workaround to collect any metrics we want from a log file.
COSO is the key.
Here are the steps:
1) Create a dataset matching your log file by using the Content Administration pod.
2) Use the generic output policy and match each metric from the file with the columns in the dataset.
3) Use a Forward Data policy to send the data to COSO.
4) Use COSO as the data source to display the collected metrics.
So we have 1 file and 2 policies for all metrics.
I hope it helps you.