(DP) Support Tip: DP 9.x - Backup or GUI error pointing to need to increase max_locks_per_transaction

EDIT 25/May/2018 :

The root cause of this issue has been identified; please apply the correct solution instead of increasing the locks.

The issue is fixed in the following ways :

* DP A.09.08_b113 and above

* Via Hotfix / SSP QCCR2A70420_DP907_r55110

Only DP versions A.09.07, DP A.09.07_b110 and DP A.09.08 are affected.

 

Backup may fail with the following error:

[61:4001] Error accessing the database, in line 1909, file /svnstore/unix/panther/dp_907_rel_nightly/src/sm/bsm2/bcsmutil.c.
Database subsystem reports:
"Internal error: DbaXXXX functions."

The GUI may fail with a DbaXXXX error when accessing devices & media or performing related tasks.

 

Justification :

In the pg_log/* files, one or several lines similar to the following would appear:

=================================================

2016-09-26 09:00:15 CEST WARNING:  out of shared memory

2016-09-26 09:00:15 CEST ERROR:  out of shared memory

2016-09-26 09:00:15 CEST HINT:  You might need to increase max_locks_per_transaction.

=================================================
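To confirm that this is the condition you are hitting, the pg_log files can be scanned for the hint line. A minimal sketch, assuming the default Unix log location listed later in this article (adjust `PG_LOG_DIR` for your installation):

```shell
# Count occurrences of the lock-table hint in the IDB PostgreSQL logs.
# PG_LOG_DIR defaults to the standard Unix location; override it as needed.
PG_LOG_DIR=${PG_LOG_DIR:-/var/opt/omni/server/db80/pg/pg_log}
grep -h 'increase max_locks_per_transaction' "$PG_LOG_DIR"/*.log | wc -l
```

A non-zero count means PostgreSQL has run out of lock-table slots at least once.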

Recommendations :

In the postgresql.conf file, the default limit is max_locks_per_transaction = 64.

A review by the CPE IDB / Postgres team shows that this default is not adequate for the DP workload, so the recommendation, both proactive and reactive, is to increase this value.

Small environments :
max_locks_per_transaction = 1024

Medium-size and large environments :
max_locks_per_transaction = 4096

NOTE :

max_locks_per_transaction = 1024 will occupy an additional 28 MB of memory
max_locks_per_transaction = 4096 will occupy an additional 111 MB of memory

- compared to the default of 64 (with max_connections = 100)

If you are unsure, set 4096; it only occupies some additional memory.
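The figures above can be roughly reproduced: PostgreSQL sizes its shared lock table as approximately max_locks_per_transaction × (max_connections + max_prepared_transactions) slots. The ~290 bytes per slot used below is an estimate back-derived from the numbers above, not an official PostgreSQL constant:

```shell
# Rough estimate of the additional shared memory used by a larger lock table.
# BYTES_PER_SLOT is an approximation (lock entries plus hash-table overhead),
# not a documented PostgreSQL constant.
BYTES_PER_SLOT=290
MAX_CONNECTIONS=100
DEFAULT_LOCKS=64
extra_mb() {
    echo $(( ($1 - DEFAULT_LOCKS) * MAX_CONNECTIONS * BYTES_PER_SLOT / 1024 / 1024 ))
}
echo "1024 -> ~$(extra_mb 1024) MB extra"
echo "4096 -> ~$(extra_mb 4096) MB extra"
```

This lands near the 28 MB and 111 MB figures quoted above, which shows why even the larger value is a cheap insurance policy.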

 

IMPORTANT NOTE : Make a copy of postgresql.conf before you make any changes to it

If this parameter is not set in postgresql.conf (or it is commented out with #), just add the above line to the end of the postgresql.conf file.
If this parameter is already set, make sure it is set to at least the recommended value above; if not, change it to the recommended value.

The config file is located in the following directory (default) of your CM:

WINDOWS : \ProgramData\OmniBack\server\db80\pg\postgresql.conf
UNIX : /var/opt/omni/server/db80/pg/postgresql.conf

This parameter only becomes active after a restart of the DP services.
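The steps above can be sketched as a small script. This is a sketch for the default Unix path; the backup-file naming and the use of GNU `sed -i` are my own choices, not part of the official procedure:

```shell
# Back up postgresql.conf, then set or append max_locks_per_transaction.
CONF=${CONF:-/var/opt/omni/server/db80/pg/postgresql.conf}
VALUE=4096
cp -p "$CONF" "$CONF.bak"                  # keep a copy before editing
if grep -Eq '^[[:space:]]*max_locks_per_transaction' "$CONF"; then
    # Parameter already set (not commented out): replace its value in place.
    sed -i -E "s/^[[:space:]]*max_locks_per_transaction.*/max_locks_per_transaction = $VALUE/" "$CONF"
else
    # Not set, or only present as a commented-out line: append at the end.
    echo "max_locks_per_transaction = $VALUE" >> "$CONF"
fi
# Remember: the new value only takes effect after the DP services restart.
```

Review the diff against the backup copy before restarting the services.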

  • IMHO, this solution works great, but it is not complete. What it leaves behind is PostgreSQL's autovacuum logging. I get spammed with lines like:

      2017-04-10 10:23:38 CEST LOG:  autovacuum: found orphan temp table "pg_temp_9"."id_95711486382101581" in database "hpdpidb"

    The same IDs over and over again. Although I don't have proof that the 33 GiB of logging is caused by max_locks_per_transaction being too small, I am convinced that it is: the logging started shortly after the first GUI error, and like the GUI error ("PL/pgSQL function "set_temporary_variable" line 17 at SQL statement") it concerns temporary tables/storage.

    I created the following (Linux/bash) one-liner to create necessary drop statements:

      grep 'autovacuum: found orphan temp table' -- "$(ls -1t /var/opt/omni/server/db80/pg/pg_log/postgresql-201*.log | head -n 1)" | grep -E -o '"pg_temp_[0-9]+"\."id_[0-9]+"' | sort | uniq | while read TEMPTABLE; do echo '/opt/omni/sbin/omnidbutil -run_script <(echo '"'DROP TABLE ${TEMPTABLE}'"') -detail -admin'; done

    It checks the latest PostgreSQL logfile for orphaned-temp-table log entries, extracts the names of the temp tables, and produces one-liners to drop those temp tables. Once I felt safe actually executing them, I saved the output and had bash execute all those lines.

     

    But that only stops the excessive logging. For actually reclaiming some space, I went with gzipping all logfiles over 10MiB:

      find /var/opt/omni/server/db80/pg/pg_log -maxdepth 1 -name 'postgresql-201*' -type f -size +10M -exec gzip -9 {} \;
