Commander

NPS Network Performance Server file system full

Hi Team,
I have a problem with NPS. It is installed on a dedicated server, and the NNMi version is 10.21.
For some time now I have watched the /var/opt/OV file system grow; in the recent past it reached 100%.
I cleared the cached and archived files for both performance and metrics, which brought the FS back down to its usual 84%, but it has been growing again; it is now at 92% and will keep growing.
I set the data retention periods to 365/30/7.
Does any of you know whether there is a rotation mechanism for file retention? Maybe it isn't working?
Could it be that the DB expanded on its own in the past and is therefore using more disk space?
If anyone can enlighten me, I would be grateful.
Thanks

Antonio

Micro Focus Expert

Hello Antonio,

/var/opt/OV/NNMPerformanceSPI is your $NPSDataDir; by product design it is a very "hungry" folder. It grows fastest in /logs. Usually there is no great loss in cleaning that up, with the exception of the /pids subfolder; which log-rotation mechanism or software to use is your decision. The next troublemaker is the archive: each extension pack has its own. I doubt that deleting files the way you have done is a good idea; sooner or later you may want to resend the data to the input, and you will no longer have the full history of metrics.
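As a hedged sketch of the cleanup idea above (the 14-day cutoff is an arbitrary example, not a product default, and the path is the $NPSDataDir from this thread), the following lists log files old enough to prune while leaving /pids alone. It only prints; review the list before adding -delete to the find command:

```shell
# List NPS log files older than a given number of days, skipping
# the pids subfolder entirely. Print-only: nothing is deleted.
prune_logs() {
  logdir=$1   # e.g. /var/opt/OV/NNMPerformanceSPI/logs
  days=$2     # age threshold in days
  [ -d "$logdir" ] || { echo "no such dir: $logdir" >&2; return 0; }
  find "$logdir" -path "$logdir/pids" -prune -o -type f -mtime +"$days" -print
}
prune_logs "${NPSDataDir:-/var/opt/OV/NNMPerformanceSPI}/logs" 14
```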

The correct way to handle the problem is to follow the support matrix, which specifies the required disk; the capacity depends on the scale of the monitored network.

BR,

Commander

Hi Eugene,
well spotted. If I remember correctly, the disk sizing has been respected; I remember having already checked this.
But to be safe, I will double-check the document and let you know, because one point was not clear to me.
I'll keep you posted.
Thanks

A.P.

Commander

Good morning Eugene,
while looking for the information you asked about, I found it difficult to extract the full report of monitored objects from NPS. Is there a way to see this list from NPS?
Also, which command should I use to see the NPS version?

From NNMi I can see all the monitored objects; I am attaching the screenshots.

In the Support Matrix document there is a table for a dedicated NPS server, which I am also attaching. It is very clear, but you need to know how many objects are monitored.
What I don't understand is whether the second and third columns of the table refer to the objects managed by NPS or to those managed by NNMi, which is why I asked above how to get the complete report of objects from NPS.

I currently have 466 GB of total disk space:

[root@NPS logs]# df -h --total
Filesystem Size Used Avail Use% Mounted on
devtmpfs 12G 0 12G 0% /dev
tmpfs 12G 0 12G 0% /dev/shm
tmpfs 12G 1.2M 12G 1% /run
tmpfs 12G 0 12G 0% /sys/fs/cgroup
/dev/mapper/rootvg-lv_root 4.0G 1.8G 2.3G 43% /
/dev/sda1 253M 183M 71M 73% /boot
/dev/mapper/rootvg-lv_home 1014M 33M 982M 4% /home
/dev/mapper/rootvg-lv_var 3.0G 1.4G 1.7G 46% /var
/dev/mapper/rootvg-lv_tmp 2.0G 68M 2.0G 4% /tmp
/dev/mapper/rootvg-lv_home_tws 1014M 33M 982M 4% /home/tws
/dev/mapper/rootvg-lv_var_log 1014M 395M 620M 39% /var/log
/dev/mapper/rootvg-lv_var_opt_ov 350G 330G 21G 95% /var/opt/OV
/dev/mapper/rootvg-lv_var_crash 2.0G 33M 2.0G 2% /var/crash
/dev/mapper/rootvg-lv_var_log_audit 1014M 635M 380M 63% /var/log/audit
/dev/mapper/rootvg-lv_opt 509M 186M 323M 37% /opt
/dev/mapper/rootvg-lv_opt_ov 15G 11G 4.5G 70% /opt/OV
/dev/mapper/rootvg-lv_opt_beta_agent92 509M 26M 483M 6% /opt/beta/agent92
/dev/mapper/rootvg-lv_var_tmp 1014M 33M 982M 4% /var/tmp
tmpfs 2.4G 0 2.4G 0% /run/user/0
NNMi_1:/var/opt/OV/shared/perfSpi/datafiles 34G 29G 5.5G 84% /net/NNMi_1/var/opt/OV/shared/perfSpi/datafiles
total 466G 373G 93G 81% -

[root@NPS logs]# free -h
total used free shared buff/cache available
Mem: 23G 16G 570M 76M 6.2G 6.3G
Swap: 9.2G 6.7G 2.5G
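To narrow down which part of /var/opt/OV is actually consuming the space, a du breakdown of the largest subdirectories can help. This is a generic sketch, not an NPS tool; the default path is the $NPSDataDir mentioned earlier in the thread:

```shell
# Print the largest immediate subdirectories of a data directory,
# biggest first, to spot which one is growing.
largest_dirs() {
  dir=$1
  [ -d "$dir" ] || { echo "no such dir: $dir" >&2; return 0; }
  du -sk "$dir"/*/ 2>/dev/null | sort -rn | head -10
}
largest_dirs "${NPSDataDir:-/var/opt/OV/NNMPerformanceSPI}"
```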

What do you advise me to do?
The initial retention was R14/H70/D800, but I had to lower the parameters to R7/H30/D365. I hope you can give me a solution to this uncontrolled growth in disk usage.

Thanks in advance

Antonio

Micro Focus Expert

Hello Antonio,

Well, 200K interfaces falls into the big/huge scale, depending on the percentage of objects that are monitored by the Perf iSPI. Out of the box, not everything is monitored; you create a Monitoring Configuration to align with the requirements of the particular project. If you have no idea how many nodes/interfaces are monitored, try getting the consumption report (used for licensing):

# /opt/OV/support/nnmtwiddle.ovpl invoke com.hp.ov.nms.licensingejb:service=PerformanceSpiConsumptionManager gatherPerfSpiConsumption

Finally, you can run a count(*) directly against the NPS DB, with DISTINCT on the node/interface name.

BR,

Micro Focus Expert

(I don't like accessing the DB directly, so let me replace that advice.)

It's better to run -s accompanied by -q, so it prints a summary of the rows in the DB, which is an easy way to draw sizing conclusions:

https://docs.microfocus.com/itom/Network_Node_Manager_i:2020.11/dbsize

Commander

Hi Eugene,
here I am, reporting back :-) As shown in the attached image, the nodes monitored in NNMi are 2791 (the license is for 3501), and the interfaces monitored in NNMi are 221257.
By running:
[root@NNMi_1 PerfSPI]# /opt/OV/support/nnmtwiddle.ovpl invoke com.hp.ov.nms.licensingejb:service=PerformanceSpiConsumptionManager gatherPerfSpiConsumption
1629
How do I get the totals for nodes and interfaces directly from the NPS DB?
As I asked you before, to see the installed iSPI version, which of these two commands is the correct one:
cat /opt/OV/Uninstall/HPNNMPerformanceSPI/installer.properties
or
cat /opt/OV/NNMPerformanceSPI/patch.info
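A small sketch that simply tries both candidate files (the two paths given above) and prints whichever exists, so the question of "which one is real" answers itself on the box:

```shell
# Print the first existing file from a list of candidate paths.
show_version_file() {
  for f in "$@"; do
    if [ -f "$f" ]; then
      echo "== $f =="
      cat "$f"
      return 0
    fi
  done
  echo "none of the candidate files exist" >&2
  return 0
}
show_version_file \
  /opt/OV/Uninstall/HPNNMPerformanceSPI/installer.properties \
  /opt/OV/NNMPerformanceSPI/patch.info
```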

Also, as I asked before: in the Support Matrix document, for a dedicated NPS server, I don't understand whether the second and third columns of the table refer to the objects managed by NPS or to those managed by NNMi.

A.P.

Micro Focus Expert

Hello Antonio,

This is an "a posteriori" approach: you can evaluate the number of unique interfaces already exported to NPS:

# dbisql -c DSN="PerfSPIDSN" -nogui "select count(DISTINCT [Interface UUID]) from f_Raw_InterfaceMetrics"

There are as many other approaches as there are NPS architects, I believe 🙂 My view is that a well-designed Monitoring Configuration should poll no more than 30% of the interfaces seen in NNMi. Just calculate that rate by running /opt/OV/support/printcounts.ovpl, which gives you both the number of interfaces as a whole and the count polled for the iSPI for Metrics.
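To make the 30% rule concrete, a tiny sketch of the ratio calculation. The polled count below is purely illustrative; in practice take both numbers from printcounts.ovpl:

```shell
# Integer percentage of interfaces polled for the iSPI for Metrics
# relative to all interfaces known to NNMi.
poll_ratio() {
  polled=$1
  total=$2
  echo $(( polled * 100 / total ))
}
# 221257 is the NNMi interface total from this thread;
# 48000 is a made-up polled count used only as an example.
poll_ratio 48000 221257
```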

Now, let's check the sizing recommendations:

https://docs.microfocus.com/itom/Network_Node_Manager_i:2020.11/SizingRecommendation

Take the count obtained from printcounts.ovpl and look at the column "Number of polled interfaces". Once you know the scale of NNMi, get the same for the Traffic iSPI, QA, MPLS, etc., and apply it to the next table. Then you know the sizing tier of NPS and can plan the hardware; see "Dedicated NPS: Recommended Hardware Size".

BR,

 
