squartec (Trusted Contributor)

Groupwise Mobility Automate Postgres Vacuum not running

Running SLES 12 SP3 & GW 18.0.1 Build 285

I have followed TID https://support.microfocus.com/kb/doc.php?id=7009453 to automate the Postgres vacuum, but modified it as follows on the recommendation of MF Support (changed the weekday to 5 instead of FRI):

0 22 * 6,12 5 [ `date +\%d` -le 7 ] && /opt/novell/datasync/tools/dsapp/dsapp.sh -v -i
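For reference, here's how I read that entry (my own annotation, assuming the standard crontab field order of minute, hour, day-of-month, month, day-of-week):

# min hour dom month dow
# 0   22   *   6,12  5     -> 22:00 on Fridays in June and December
# the `date +\%d` test (% escaped for cron) only lets dsapp run when the
# day of the month is 7 or less, i.e. on the first Friday of those months
0 22 * 6,12 5 [ `date +\%d` -le 7 ] && /opt/novell/datasync/tools/dsapp/dsapp.sh -v -i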

I added this to the crontab in January. When I run dsapp and do a general health check, it tells me no maintenance has been done and refers me to this TID.

Is there something wrong with the syntax of the crontab entry that keeps it from running? I'm not quite sure where to check whether it's ever been executed, but it should have run automatically sometime in June at least.

Thanks in advance!

 

Knowledge Partner

Re: Groupwise Mobility Automate Postgres Vacuum not running

If the maintenance has been run you'll find it documented in

/opt/novell/datasync/tools/dsapp/logs/dsapp.log

You can search for the keywords [vacuumDB] and [indexDB], respectively.
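For example, something along these lines should pull out the relevant entries (the exact log wording may vary between dsapp versions):

grep -E '\[vacuumDB\]|\[indexDB\]' /opt/novell/datasync/tools/dsapp/logs/dsapp.log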

If you schedule it via crontab you'll have to do so as root, so you should have the cron statement in

/var/spool/cron/tabs/root
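To double-check, you can list root's crontab directly, e.g.

crontab -l          # run as root
# or look at the file itself
cat /var/spool/cron/tabs/root

On SLES, cron also reports each job start to the system log, so (assuming the default syslog setup) something like

grep -i cron /var/log/messages

should show whether the entry has ever fired.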

As I understand the code (only a rough look, though), it will by default (this can be changed) consider 180 days without a vacuum/reindex to be a problem.

squartec (Trusted Contributor)

Re: Groupwise Mobility Automate Postgres Vacuum not running

Thanks so much, that helped! I was able to confirm that the job did start and complete, per the logs below. It's odd, because this date falls within the 180-day window, so why is it saying maintenance hasn't been run?


[2019-06-07 22:00:05 EDT][INFO] ------------- Starting dsapp v249 -------------
[2019-06-07 22:00:05 EDT][INFO] Running in CRON
[2019-06-07 22:00:05 EDT][INFO] Detected SLES version 12
[2019-06-07 22:00:06 EDT][INFO][getDSVersion] Version: 18
[2019-06-07 22:00:06 EDT][INFO] Building XML trees started
[2019-06-07 22:00:06 EDT][INFO] Building XML trees complete
[2019-06-07 22:00:06 EDT][INFO] Operation took 1.019 ms
[2019-06-07 22:00:06 EDT][INFO] Checking hostname
[2019-06-07 22:00:06 EDT][INFO] Assigning variables from XML started
[2019-06-07 22:00:06 EDT][INFO] Assigning variables from XML complete
[2019-06-07 22:00:06 EDT][INFO] Operation took 66.921 ms
[2019-06-07 22:00:06 EDT][INFO][checkPostgresql] Successfully connected to postgresql [user=datasync_user,pass=********]
[2019-06-07 22:00:06 EDT][INFO] Running switch: vacuum
[2019-06-07 22:00:06 EDT][INFO][getDSVersion] Version: 18
[2019-06-07 22:00:06 EDT][INFO][getDSVersion] Version: 18
[2019-06-07 22:00:06 EDT][INFO][rcDS] Stopping Mobility agents..
[2019-06-07 22:00:32 EDT][ERROR][rcDS] Problem running 'rccron stop'
[2019-06-07 22:00:32 EDT][INFO][kill_pid] Killing process: 3179
[2019-06-07 22:00:32 EDT][INFO][kill_pid] Killing process: 3180
[2019-06-07 22:00:32 EDT][INFO][kill_pid] Killing process: 2854
[2019-06-07 22:00:32 EDT][INFO][kill_pid] Killing process: 24311
[2019-06-07 22:00:32 EDT][INFO][kill_pid] Killing process: 24312
[2019-06-07 22:00:32 EDT][INFO][kill_pid] Killing process: 24504
[2019-06-07 22:00:32 EDT][INFO][vacuumDB] Vacuuming datasync database..
[2019-06-07 22:03:06 EDT][INFO][vacuumDB] Operation took 153346.667 ms
[2019-06-07 22:03:06 EDT][INFO][vacuumDB] Vacuuming mobility database..
[2019-06-07 22:13:51 EDT][INFO][vacuumDB] Operation took 644859.082 ms
[2019-06-07 22:13:51 EDT][INFO] Running switch: index
[2019-06-07 22:13:51 EDT][INFO][indexDB] Indexing datasync database..
[2019-06-07 22:15:35 EDT][INFO][indexDB] Operation took 103957.655 ms
[2019-06-07 22:15:35 EDT][INFO][indexDB] Indexing mobility database..
[2019-06-07 22:18:44 EDT][INFO][indexDB] Operation took 188903.349 ms
[2019-06-07 22:18:44 EDT][INFO][getDSVersion] Version: 18
[2019-06-07 22:18:44 EDT][INFO][getDSVersion] Version: 18
[2019-06-07 22:18:44 EDT][INFO][rcDS] Starting Mobility agents..
[2019-06-07 22:19:08 EDT][ERROR][rcDS] Problem running 'rccron start'
[2019-06-07 22:19:08 EDT][INFO] ------------- Exiting dsapp v249 -------------
Knowledge Partner

Re: Groupwise Mobility Automate Postgres Vacuum not running

The relevant part is to be found in

/opt/novell/datasync/tools/dsapp/lib/dsapp_ghc.py

where dsapp grabs maintenance info from the database (maybe something's wrong there...). In the

def ghc_checkManualMaintenance():

function there's the definition of "dbMaintTolerance". Is it set to 180 for you?
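If you want to check that quickly from the shell, something like this should show the value (path as above; the exact layout of the script may of course differ between dsapp versions):

grep -n dbMaintTolerance /opt/novell/datasync/tools/dsapp/lib/dsapp_ghc.py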

 

 

squartec (Trusted Contributor)

Re: Groupwise Mobility Automate Postgres Vacuum not running

Thanks for this - yes, it's set to 180 days.
Knowledge Partner

Re: Groupwise Mobility Automate Postgres Vacuum not running

With GMS stopped, try the following

psql -U datasync_user datasync
SELECT last_vacuum FROM pg_stat_user_tables;
\q

and

psql -U datasync_user mobility
SELECT last_vacuum FROM pg_stat_user_tables;
\q

Do you see the timestamps you'd expect?
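If it helps, including the table name (and the autovacuum column) makes the output easier to read; these are standard pg_stat_user_tables columns:

psql -U datasync_user datasync
SELECT relname, last_vacuum, last_autovacuum FROM pg_stat_user_tables;
\q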

Don't forget to restart GMS afterwards...

 

 

squartec (Trusted Contributor)

Re: Groupwise Mobility Automate Postgres Vacuum not running

Thank you - I did both, and they both came back with no entries?

squartec (Trusted Contributor)

Re: Groupwise Mobility Automate Postgres Vacuum not running

Actually, it shows blank entries, but then states that 14 rows and 11 rows were returned, respectively?
Knowledge Partner

Re: Groupwise Mobility Automate Postgres Vacuum not running

Maybe (!) a current Postgres DB stores this info in a different place than an 8.x one did. So it might (!) be that dsapp 2.49 (released two years ago) doesn't reflect that. I'd have to check on a GMS 18 system. Maybe someone else can have a quick look?

 

Knowledge Partner

Re: Groupwise Mobility Automate Postgres Vacuum not running

Update: the Postgres 9 docs say:
Last time at which this table was manually vacuumed (not counting VACUUM FULL)
Note the part in parentheses, which I cannot find in the PG 8 docs. As dsapp runs a full vacuum (IIRC - I'm on a cellphone with no access to the code), this behaviour would be expected with GMS 18.
Knowledge Partner

Re: Groupwise Mobility Automate Postgres Vacuum not running

Apparently the assumption I made above pretty much applies. If you like, you can check it as follows:

- stop GMS, perform a backup of the DB (likely unneeded, just in case...)

- check

psql -U datasync_user datasync
SELECT last_vacuum FROM pg_stat_user_tables;
\q

as you've done before. Note the "empty" results.

- issue

vacuumdb -U datasync_user -d datasync -t cache -v

which will do a "single" (plain) vacuum on the "cache" table

- recheck

psql -U datasync_user datasync
SELECT last_vacuum FROM pg_stat_user_tables;
\q

which this time should return a recent timestamp for ONE (the cache) table

- restart GMS

If you can reproduce this, the dsapp warning can be ignored; it's basically just a cosmetic issue, because dsapp hasn't been adapted to current Postgres behaviour.
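For convenience, the same check condensed into a few commands might look like this (a sketch only - stop GMS before running it and restart it afterwards; user and database names as used earlier in this thread):

# with GMS stopped:
psql -U datasync_user -d datasync -c "SELECT relname, last_vacuum FROM pg_stat_user_tables;"   # expect empty timestamps
vacuumdb -U datasync_user -d datasync -t cache -v                                              # plain (non-FULL) vacuum of the cache table only
psql -U datasync_user -d datasync -c "SELECT relname, last_vacuum FROM pg_stat_user_tables;"   # "cache" should now show a recent timestamp
# ...then restart GMS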

 

 

 
