Cadet 2nd Class

OMI 10.10 Postgres DB vacuuming


Hello Everyone

The Embedded Postgres DB of our OMI 10.10 on Linux is getting bigger and bigger.

The OMI Database Guide, Chapter 16, describes how to vacuum the Postgres DB.

I tried this once for the event DB, and after two hours OMi wasn't processing any events anymore and the server's memory usage was at almost 100%.

Is anyone here maintaining the Postgres DB regularly?

Has anybody run into the same issue?

Thanks for your help!

Best Regards

Roger



Hello there,

The following article might come in handy:

https://softwaresupport.hpe.com/group/softwaresupport/search-result/-/facetsearch/document/LID/QCCR8D45410

WARNING: Please make sure you don't execute "vacuumlo" against the Management database, as it will leave it in an inconsistent state.

Best regards,

 

Vice Admiral

Hi Christian

Thank you for this reply. This is very helpful.
Our OMi database grows constantly and is currently at ~800GB (event db).

Is there any best practice on preventing this?

I was already testing on a staging server with reindexdb, vacuumdb and vacuumlo.
It seems to me that vacuumlo does not actually free the disk space.
I wonder if I have to run the vacuumdb command with the FULL option?

I am very thankful for any tips on this.

Kind regards,
Thomas

Vice Admiral

vacuumlo helped to clean up old large objects, but no disk space was gained!

The quick and dirty solution is to export the DB, drop and recreate it, then import it again. After this the disk space actually gets freed!

Hope this helps someone else 😉
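For anyone trying this, a minimal sketch of that export/drop/recreate cycle with the standard PostgreSQL client tools could look like the following. The database name, user, and dump path are assumptions for a default OMi event DB; adjust them for your install, and stop OMi first so nothing writes to the DB during the cycle.

```shell
# Hypothetical export/drop/recreate cycle for the OMi event DB.
# DB name, user, and dump path are assumptions; stop OMi before running.
DB=omi_event
PGUSER=postgres
DUMP=/var/tmp/${DB}.dump

pg_dump    -U "$PGUSER" -h localhost -F c -f "$DUMP" "$DB"   # custom-format dump
dropdb     -U "$PGUSER" -h localhost "$DB"                   # drop the bloated DB
createdb   -U "$PGUSER" -h localhost "$DB"                   # recreate it empty
pg_restore -U "$PGUSER" -h localhost -d "$DB" "$DUMP"        # load the data back
```

Dropping and recreating the database returns all the bloated space to the operating system at once, which plain vacuumlo/VACUUM cannot do.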


Commodore

Hi Thomas

I found this out the hard way; it is poorly documented in OMi.

First, check the disk space occupied by large objects in the event DB:

SELECT pg_size_pretty(pg_table_size('pg_largeobject'));

Vacuum the large objects by executing (depending on the pg_largeobject size, this may take a while to run):

C:\Program Files\PostgreSQL\9.3\bin\vacuumlo -h localhost -v -U postgres omi_event

Reclaim the disk space by vacuuming the table, executing:

C:\Program Files\PostgreSQL\9.3\bin\vacuumdb.exe -U postgres -h localhost -d omi_event -t pg_largeobject -f -v -z

Check the pg_largeobject size with:
  SELECT pg_size_pretty(pg_table_size('pg_largeobject'));

Do this as often as you like; the more often you run it, the faster each run will complete.
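Since Roger's OMi runs on Linux, the same sequence as a shell sketch might look like this. The embedded Postgres binary path and DB name here are assumptions (the examples above use a Windows install path); check where your OMi ships its pgsql binaries.

```shell
# Mick's steps as one Linux script; PGBIN and DB name are assumptions.
PGBIN=/opt/HP/BSM/pgsql/bin   # hypothetical embedded Postgres location
DB=omi_event

# 1. Check space used by large objects
"$PGBIN/psql" -U postgres -h localhost -d "$DB" \
  -c "SELECT pg_size_pretty(pg_table_size('pg_largeobject'));"

# 2. Remove orphaned large objects
"$PGBIN/vacuumlo" -h localhost -v -U postgres "$DB"

# 3. Reclaim the disk space (-f = VACUUM FULL, -z = ANALYZE)
"$PGBIN/vacuumdb" -U postgres -h localhost -d "$DB" -t pg_largeobject -f -v -z

# 4. Check the size again
"$PGBIN/psql" -U postgres -h localhost -d "$DB" \
  -c "SELECT pg_size_pretty(pg_table_size('pg_largeobject'));"
```

Note that the -f (FULL) option takes an exclusive lock on pg_largeobject while it rewrites the table, so schedule it for a quiet window.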

Cheers

Mick
