I have a problem which reports being full, but actually isn't:
fs2b:~ # du -sh /media/nss/HRZ
638G    /media/nss/HRZ

fs2b:~ # df -h /media/nss/HRZ
Filesystem      Size  Used Avail Use% Mounted on
HRZ             805G  803G     0 100% /media/nss/HRZ

fs2b:~ # nlvm list pool HRZ
 Name=HRZ
 State=Active Type=NSS32 Size=804.99GB Shared=Yes IsSnap=No ADMediaSupport=No
 Used=804.99GB Free=0KB Segs=1 Volumes=1 Snapshots=0 Move=No
 Created: Tue Aug 14 10:13:31 2007
Pool segments:
 Index Start Next       Size     Partition
 1     0     1688205984 804.99GB sdp1.1
Volumes on this pool:
 Volume State  Mounted Quota Used     Free ADEnabled
 HRZ    Active Yes     None  802.04GB 0KB  No

fs2b:~ # nlvm list volume HRZ
 Name=HRZ
 Pool=HRZ State=Active Mounted=Yes Shared=Yes Mountpoint=/media/nss/HRZ
 Used=802.04GB Avail=0KB Quota=None Purgeable=0KB
 Attributes=Salvage,Compression,UserSpaceRestrictions,DirectoryQuotas
 ReadAheadBlocks=64 PrimaryNameSpace=LONG Objects=983903 Files=897724
 BlockSize=4096 ShredCount=1 AuthModelID=1
 SupportedNameSpaces=DOS,MAC,UNIX,LONG
 CreateTime: Tue Aug 14 10:15:10 2007
 ArchiveTime: Never
When I deleted an 80 GB file, free space was shown temporarily - but about a minute later it was back to zero.
I can't find any big files that changed in the last 24 hours.
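For reference, this is roughly how I searched for large, recently changed files (the path and the size/age thresholds are just placeholders - adjust as needed):

```shell
#!/bin/sh
# Hypothetical sweep: list files larger than 1 GB that were modified
# within the last 24 hours. VOL is a placeholder for the suspect volume.
VOL=${1:-/media/nss/HRZ}
find "$VOL" -type f -size +1G -mtime -1 -exec ls -lh {} +
```

Nothing of that size turned up.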
The volume is the primary of a DST (Dynamic Storage Technology) pair; the secondary does not appear to have a problem.
The server is a node in a two-server NCS cluster. A few days ago I upgraded one node from OES 2018.3 to OES 24.2. During the upgrade, all volumes were on the other node (still OES 2018.3). After the upgrade I cluster-migrated two volumes to the upgraded node: first a test volume, which showed no problems, then this HRZ volume. I don't know whether there is a connection with the upgrade.
Any ideas what's wrong here and what to do? I am considering a pool verify, but I am hesitant while the system thinks there is no free space - as far as I know, a verify needs to write?