BSM purging schedule principle



What is the purging schedule on the BSM database?

Even with all the documentation, I could not understand its principle.

Could anybody simplify it, or maybe give a little example?

I want to know when exactly the pmanager should definitively purge data from the database.




  • Verified Answer

    Hi Elmassimo,

    I think the "how to" of PM is not described in any manual.

    Here is a summary, as far as my understanding and working experience goes:

    - we do have tables and views in the profile database
      a lot of tables have a "partner" view, but not all

    - the view - for example BPM_TRANS - points to one or more tables in the BSM profile database, for example BPM_TRANS_10000
      the view gets adjusted any time the underlying database table(s) change, so one day it might point to BPM_TRANS_20000 or to
      BPM_TRANS_10000 AND BPM_TRANS_20000.
      so whenever you read from the table, you use the view, and you get the current and active data
    - we are using a DBMS which allows native partitioning (for example MS SQL Enterprise Edition)
      each database table, for example BPM_TRANS_10000, can have one or more native partitions, which in the end contain the data, for example P228 and P229 in the output below

    - each profile database has two tables maintaining this, PM_CATALOG and PM_NATIVE_CATALOG
       PM_CATALOG keeps track of the tables (in BAC 8 I think this was called a view partition) which make up for example BPM_TRANS
       PM_NATIVE_CATALOG keeps track of the native partitions which make up for example BPM_TRANS_10000

    (output modified and split across lines)


    PM_START_DATE            PM_END_DATE              PM_STATE  PM_START_NUMBER       PM_END_NUMBER        
    2015-08-16 01:00:00.000  2015-11-18 01:00:00.000  ACTIVE    1439679600000.000000  1447804800000.000000 


    (output modified)


    BPM_TRANS_10000  P228                  ACTIVE     1439679600000.000000  1439852400000.000000  NUMERIC
    BPM_TRANS_10000  P229                  ACTIVE     1439852400000.000000  1440025200000.000000  NUMERIC
    BPM_TRANS_10000  P273                  ACTIVE     1447459200000.000000  1447632000000.000000  NUMERIC
    BPM_TRANS_10000  P274                  ACTIVE     1447632000000.000000  1447804800000.000000  NUMERIC
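The two catalog tables can be pictured as simple range maps. A minimal in-memory sketch (simplified names and shapes, not the real BSM schema), using the rows from the output above:

```python
# Hypothetical in-memory model of the two catalog tables (illustration only,
# not the real BSM schema).

# PM_CATALOG: which physical tables currently back the BPM_TRANS view
pm_catalog = {
    "BPM_TRANS": ["BPM_TRANS_10000"],  # may later grow to ..._10000 AND ..._20000
}

# PM_NATIVE_CATALOG: which native partitions make up each physical table,
# as consecutive [start, end) epoch-millisecond ranges
pm_native_catalog = {
    "BPM_TRANS_10000": [
        ("P228", 1439679600000, 1439852400000, "ACTIVE"),
        ("P229", 1439852400000, 1440025200000, "ACTIVE"),
    ],
}

# Reading through the view reaches all active backing tables and their partitions
tables = pm_catalog["BPM_TRANS"]
partitions = [p for t in tables for p in pm_native_catalog[t]]
print(len(partitions))  # 2
```

If the view later points to two tables, only `pm_catalog` changes; the read path through the view stays the same.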

    When data is now added, BSM checks which partition to use,
     based on PMN_START_DATE and PMN_END_DATE or
     based on PMN_START_NUMBER and PMN_END_NUMBER
     (this is decided based on PMN_COLUMN_TYPE),
    and adds it there, end of story.
    If BSM tries to add a really old entry, for example dated 2015-06-01 to BPM_TRANS_10000,
    it will fail with the ugly message
     No partition found that may hold data for the specified date: (2015-06-01 ...), table: BPM_TRANS DB: ...
    as the table only starts at
     2015-08-16 01:00:00.000
    This is also the case if BSM tries to insert based on the number; the number used here is the epoch time (in milliseconds),
    for example
    PM_START_DATE 2015-08-16 01:00:00.000 == PM_START_NUMBER 1439679600000.000000
     1439679600 = 2015-08-16 01:00:00 GMT+2:00 DST
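The date/number equivalence can be checked with a few lines of Python; the GMT+2 offset matches the CEST example above:

```python
from datetime import datetime, timezone, timedelta

# PM_START_NUMBER is epoch time in milliseconds
start_number = 1439679600000.0

# Convert to a date in GMT+2 (CEST), matching PM_START_DATE
cest = timezone(timedelta(hours=2))
start_date = datetime.fromtimestamp(start_number / 1000, tz=cest)
print(start_date.strftime("%Y-%m-%d %H:%M:%S"))  # 2015-08-16 01:00:00
```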

    As you can see, the native partitions are consecutive,
     P228 ends at number   1439852400000.000000
     P229 starts at number 1439852400000.000000
    and so on
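Because the partitions form consecutive [start, end) ranges, the insert-time lookup can be sketched as follows (a hypothetical helper, not BSM's actual code; note the end number is exclusive, which is why P228 and P229 can share the boundary value):

```python
def find_partition(partitions, value):
    """Return the partition whose [start, end) range holds value.

    partitions: list of (name, start, end) tuples with consecutive ranges.
    Raises LookupError when value falls outside every range, mirroring
    BSM's "No partition found that may hold data" error.
    """
    for name, start, end in partitions:
        if start <= value < end:
            return name
    raise LookupError(f"No partition found that may hold data for: {value}")

parts = [
    ("P228", 1439679600000, 1439852400000),
    ("P229", 1439852400000, 1440025200000),
]

print(find_partition(parts, 1439852400000))  # P229 (start inclusive, end exclusive)
# A value before the first start (e.g. a record dated 2015-06-01) raises
# LookupError, just like the insert of old data described above.
```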

    Did I miss something?

    Oh yes.
    The above should explain a little bit how data is put into the DB.
    Now PM comes into the game:

     PM checks if the oldest native partition is older than the "Keep Data For" configuration for the table in question.
     If it is, it drops this native partition,
     updates the table, for example BPM_TRANS_10000, to make it aware of the removed native partition,
     and also updates PM_CATALOG / PM_NATIVE_CATALOG
     (this will increase the start date and/or start number).

     After that, PM creates native partitions in advance, usually 6 hours ahead of time,
     updates the table, for example BPM_TRANS_10000, to make it aware of the new native partition,
     and also updates PM_CATALOG / PM_NATIVE_CATALOG
     (this will increase the end date and/or end number).
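Those two steps - dropping expired partitions at the back and pre-creating new ones at the front - can be sketched as a toy cycle. The retention and the 6-hour look-ahead come from the description above; everything else (partition size, catalog bookkeeping) is simplified:

```python
# Toy PM cycle over one table's native partitions (illustration only).
# Each partition is (name, start_ms, end_ms); ranges are consecutive.

KEEP_MS = 90 * 24 * 3600 * 1000   # "Keep Data For" ~ 3 months (simplified)
LOOKAHEAD_MS = 6 * 3600 * 1000    # partitions are created ~6 hours ahead
STEP_MS = 2 * 3600 * 1000         # partition size in this toy example

def pm_cycle(partitions, now_ms, next_id,
             keep_ms=KEEP_MS, lookahead_ms=LOOKAHEAD_MS, step_ms=STEP_MS):
    # 1) drop the oldest partition(s) whose end falls outside the retention
    #    window (in real BSM this also updates PM_NATIVE_CATALOG)
    while partitions and partitions[0][2] < now_ms - keep_ms:
        partitions.pop(0)
    # 2) create partitions in advance until coverage reaches now + look-ahead
    #    (assumes at least one partition always remains, as in practice)
    while partitions[-1][2] < now_ms + lookahead_ms:
        start = partitions[-1][2]
        partitions.append((f"P{next_id}", start, start + step_ms))
        next_id += 1
    return partitions

# Small-number demo: P1 is past retention and gets dropped, new partitions
# P3..P8 are created up to now + look-ahead
demo = pm_cycle([("P1", 0, 10), ("P2", 10, 20)], now_ms=55, next_id=3,
                keep_ms=40, lookahead_ms=20, step_ms=10)
print(demo[0][0], demo[-1])  # P2 ('P8', 70, 80)
```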

    As an example for the deletion:
    on my test system BPM_TRANS "Keep Data For" configuration is set to

    BSM -> Admin -> Platform ->  Setup and Maintenance -> Data Partitioning and Purging 

    Business Process Monitor
     BPM_TRANS  Raw transaction response time and availability data  3 months

    PM checks if the end date / end number of the oldest partition
     BPM_TRANS_10000 - P228 - 1439852400000.000000
    is older than 3 months
     1439852400 = 2015-08-18 01:00:00 GMT+2:00 DST

    now is
     1447749228 = 2015-11-17 09:33:48 CET (GMT+1:00)
    so the three-month limit is not reached yet, and thus the native partition is NOT deleted,
    but tomorrow it will be ...
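The "older than 3 months" check from this example can be reproduced with plain datetime arithmetic. This assumes calendar months; the exact rounding BSM applies is not documented here, and `minus_months` is a simplified hypothetical helper (it keeps the day of month, so it would fail for e.g. Jan 31 minus one month):

```python
from datetime import datetime

def minus_months(dt, months):
    """Subtract whole calendar months, keeping the day of month (simplified)."""
    month_index = dt.year * 12 + (dt.month - 1) - months
    return dt.replace(year=month_index // 12, month=month_index % 12 + 1)

now = datetime(2015, 11, 17, 9, 33, 48)      # 1447749228, from the example
oldest_end = datetime(2015, 8, 18, 1, 0, 0)  # end of P228, 1439852400

cutoff = minus_months(now, 3)                # 2015-08-17 09:33:48
print(oldest_end < cutoff)  # False -> partition kept; True from tomorrow on
```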

    If you check on the DPS the PM logs (<HPBSM>\log\pmanager), for example pmanager.log,
    you can follow the flow:

    2015-11-17 09:45:20,655 [PMCycleSchedulerTimer] ( INFO  - Starting new cycle
    -> PM wakes up
    2015-11-17 09:45:20,795 [PMCycleSchedulerTimer] ( INFO  - PM is handling database: 'VM402_BSM_PROFDEF';
    -> it works on profile db VM402_BSM_PROFDEF
    2015-11-17 09:45:20,796 [PMCycleSchedulerTimer] ( INFO  - Server time at the start of the cycle: Tue Nov 17 09:45:20 CET 2015
    -> current time to compare all DB times with

    2015-11-17 09:45:22,460 [PMCycleSchedulerTimer] ( INFO  - Hiding partition (in DB: VM402_BSM_PROFDEF, DBHOST:
    2015-11-17 09:45:22,461 [PMCycleSchedulerTimer] ( INFO  - TABLE: M_HR01F1_F_90000
     PARTITION:  P12
     ID:         12
     TYPE:       NUMERIC
     CREATED:    1447710321000 (Mon Nov 16 22:45:21 CET 2015)
     START:      1447732800000 (Tue Nov 17 05:00:00 CET 2015)
     END:        1447740000000 (Tue Nov 17 07:00:00 CET 2015)
    -> this native partition has reached its "end of life": the data in it is older than the "Keep Data For" setting allows for this table,
    so the native partition status is changed from ACTIVE to HIDDEN so that it can be dropped / deleted in the next cycle

    2015-11-17 09:45:22,471 [PMCycleSchedulerTimer] ( INFO  - Dropping partition:
    2015-11-17 09:45:22,471 [PMCycleSchedulerTimer] ( INFO  - TABLE: RUM_TCP_APP_STAT_20000
     PARTITION:  P529
     ID:         529
     TYPE:       NUMERIC
     CREATED:    1445002372000 (Fri Oct 16 15:32:52 CEST 2015)
     START:      1445025600000 (Fri Oct 16 22:00:00 CEST 2015)
     END:        1445061600000 (Sat Oct 17 08:00:00 CEST 2015)
    -> this native partition was in state HIDDEN already, and is now dropped / deleted

    2015-11-17 09:45:23,576 [PMCycleSchedulerTimer] ( INFO  - Creating new native partition (in DB: VM402_BSM_PROFDEF, DBHOST:
    2015-11-17 09:45:23,577 [PMCycleSchedulerTimer] ( INFO  - TABLE: HI_STATUS_CHANGE_60000
     PARTITION:  P356
     ID:         356
     TYPE:       NUMERIC
     CREATED:    1447749920000 (Tue Nov 17 09:45:20 CET 2015)
     START:      1447772400000 (Tue Nov 17 16:00:00 CET 2015)
     END:        1447783200000 (Tue Nov 17 19:00:00 CET 2015)
    -> a new native partition is added to a table

    2015-11-17 09:45:42,332 [PMCycleSchedulerTimer] ( INFO  - Done
    2015-11-17 09:45:42,333 [PMCycleSchedulerTimer] ( INFO  - Going to sleep until the next cycle

    -> PM completed its tasks and now pauses for an hour
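The log excerpts above show that a partition is not removed in one step: it is first hidden, then dropped a cycle later. A toy state machine for that lifecycle (the state labels come from the logs, the logic is my simplification):

```python
# Partition lifecycle as seen in pmanager.log (simplified illustration):
# ACTIVE -> HIDDEN (end of life reached) -> dropped in a later cycle.

def age_partition(state, expired):
    if state == "ACTIVE" and expired:
        return "HIDDEN"   # "Hiding partition" log line
    if state == "HIDDEN":
        return "DROPPED"  # "Dropping partition" log line, next cycle
    return state

state = "ACTIVE"
state = age_partition(state, expired=True)  # cycle 1: ACTIVE -> HIDDEN
print(state)  # HIDDEN
state = age_partition(state, expired=True)  # cycle 2: HIDDEN -> DROPPED
print(state)  # DROPPED
```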

    Now it should be possible to answer your question:
    >I want to know when exactly should the pmanager purge data definitly from the database.
    If you have "Keep Data For" set to something other than "Infinite", PM deletes the oldest native partition based on that date, as explained above. The smallest entity PM can delete is a native partition.
    Does this help?
    Please let me know if you have further questions.



  • Zuper Siggi.

    I must take my time to digest all that.

    It is a great article to have here, since no other comparable documentation is publicly available for now.


  • I've changed my database retention from infinite to 3 months for daily and monthly data, and to 1 month for raw data.
    It had been working for 6 months before I changed the retention policy.
    Now almost 4 weeks have passed and the database is still the same size as when I changed the retention policy.

    I found that data can be purged manually using the Shrink feature.
    I think I will try that one.