Workflow Notification Migration Fails Due to Primary Key (PK) Error
We developed email Notifications on Workflow state transitions on our DEV server. The workflow migrated to our TEST and QA servers OK, but when we migrated to our PROD server a Primary Key (PK) error was encountered because a notification already exists on the PROD server using the ID created on our DEV server.
- In a couple of previous migrations we deleted the notification and manually re-created it on PROD. Not ideal, and definitely a BUG in the migrator scripting.
I'm having trouble getting a straight answer from HP PPM support, so I am posting questions here.
I assume that the migrator is actually trying to migrate DB "ID" values from DEV to the target PPM server (yuk). Since we have developers working on DEV, and others who create notifications on PROD, it's only logical that the knta_notifications_s (sequence ID) value will diverge and be different. Today, PROD is larger than DEV.
- Does the migrator actually migrate ID values for workflow notifications?
- If so, and the nextval of knta_notifications_s differs between servers, there can be collisions during the migration process whenever the value is already used on PROD.
Are there any ideas what really is happening and how to properly migrate workflow objects without PK collisions?
ps: this is a work-around I proposed to the HP support tech. Just wondering if this makes sense. I have not obtained confirmation that my assumptions are correct...
I assume that the "work-around" resolution to this issue is to re-adjust the next sequence number on our servers to allow windows of sequence numbers as I proposed in the update on Oct 13 2011.
This will NOT solve the root problem which is the migration scripts that incorrectly migrate DB ID numbers and a SW migration bug will be submitted due to this ticket. Please confirm that a SW bug was created independent of this ticket.
The facts are:
- when we migrate a workflow notification, it migrates the notification ID from the source server to the target server. If the notification ID is already used on the target, then a PK error is raised. Deleting notifications on either the source or the target is not a realistic option.
- the work-around is to update the next-used notification ID values on both the source and target systems so "new" notification ID migrations will not collide.
Here are the real next values from KNTA_NOTIFICATIONS_S (as of today)
- DEV: 34211
- PROD: 32151
- QA: 31591
- TEST: 31391
So, the MAX of all of these is 34211; round that up to 35000 as the base.
The proposal would be to set the KNTA_NOTIFICATIONS_S nextval on each of the servers to:
- DEV: 35000
- QA: 45000
- TEST: 55000
- PROD: 65000
In this case, new notifications created on any of the DEV, QA or TEST systems would be created using an ID value that has not been used on PROD and would migrate OK. This would hold until a DEV, QA or TEST notification ID exceeded the 65000 value set on PROD.
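As a sketch of what the per-server reset could look like (assuming an Oracle back end and the standard temporary-increment technique; the offset 789 below is illustrative and must be recomputed on each server as the target base minus that server's current value):

```sql
-- Illustrative only: run on each server with its own offset.
-- 1) Find the sequence's current position:
SELECT last_number FROM user_sequences
 WHERE sequence_name = 'KNTA_NOTIFICATIONS_S';

-- 2) Jump to the target base (e.g. DEV at 34211 -> 35000 needs an offset of 789):
ALTER SEQUENCE KNTA_NOTIFICATIONS_S INCREMENT BY 789;
SELECT KNTA_NOTIFICATIONS_S.NEXTVAL FROM dual;  -- consumes the jump

-- 3) Restore normal stepping right away:
ALTER SEQUENCE KNTA_NOTIFICATIONS_S INCREMENT BY 1;
```

Repeating this on each server with its own range base (35000, 45000, 55000, 65000) would give each environment its own window of IDs.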
This would not solve the on-going problem we have with existing notifications when they migrate that may be using ID values that already exist on PROD...
--> If this is what you propose then please provide the exact SQL statements necessary to update the sequence number in the KNTA_NOTIFICATIONS_S table. The example you provided was not clear.
--> Also confirm that updating the sequence numbers will really solve this issue and that we don't have to update other tables. Just want to be sure.
We have encountered the very same issue on 9.12.
The only solution we could come up with is to delete the problematic notifications in the target environment, but this is only temporary.
We also noticed similar issues when migrating workflows from which we deleted steps (this is a development environment, so this is quite normal).
I also noted another issue that could be related to this: in the workflow layout window, delete a transition, do not save, and add the same transition again. While saving, you will get a unique constraint violation error (if you save after the delete, it will not throw the error). This issue did not happen in 7.5.
Given all the above, I have come to the conclusion that HP has added some erroneous code, which cannot properly handle the combination of ID, name and reference code as the primary key for the configuration entities.
Please keep us posted; I would really like to know what the actual problem is.
--remember to kudos people who helped solve your problem
One of the first lessons our developers learned was to never delete workflow steps. Just leave them unconnected... These leftover steps prevent many different migration issues / errors, including PK migration errors and cases where requests on the target systems are still sitting in them.
PPM support never responded on whether there was a "defect" when migrating workflow notifications, so I implemented this work-around to reset the sequences to values that would not collide during migrations.
- Root problem: workflow notifications try to migrate NotificationID values
My work-around was to reset the KNTA_NOTIFICATIONS_S "nextval" on all servers to ranges that have not been used already.
- Identify MAX of KNTA_NOTIFICATIONS_S nextval on all servers
- round up to nearest 1000
- reset DEV to this number, reset LAB to number+10000, QA to number+20000
- reset PROD to 30,000 above the number on DEV
- new notifications created on DEV will be 30000 less than PROD, allowing many migrations before DEV should collide with PROD
- note that this does not solve existing notifications, which may still have ID values that collide during migration 😞 Delete and recreate to acquire a new ID...
How To Reset PPM Notifications ID
This example was used to reset the sequence # on DEV to 38,000. Note that "increment by 651" is the difference between the current value identified in the select statement and 38000; 38000 is the MAX from any of our PPM servers (DEV, TEST, QA, PROD).
PROD was set to 68000 BTW...
-- 1) Check the sequence's current state:
select * from user_sequences where sequence_name like 'KNTA_NOTIFICATIONS%';
-- 2) Temporarily set the increment to the gap between the current value and the target:
alter sequence KNTA_NOTIFICATIONS_S increment by 651;
-- 3) Pull one value to consume the jump and land at the target:
select KNTA_NOTIFICATIONS_S.nextval from dual;
-- 4) Restore the normal increment immediately:
alter sequence KNTA_NOTIFICATIONS_S increment by 1;
Caution: once you execute the first alter statement to change the increment to a large number, you must execute the select and then the alter back to "increment by 1", or the sequence will be left with the large increment…
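A quick way to confirm the reset landed correctly (a sketch; note that user_sequences only shows sequences owned by the connected schema, so connect as the PPM schema owner):

```sql
-- Verify: increment_by should be back to 1 and last_number at or above the new base.
SELECT sequence_name, increment_by, last_number
  FROM user_sequences
 WHERE sequence_name = 'KNTA_NOTIFICATIONS_S';
```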
Hi Alex, Eric,
since I'm one of the unloved supporters, a thought on the strategy:
The 'usual' setup of the environments of a 'default' customer should ideally encompass the following:
- a Test system, where you test whatever was developed;
- a Development system, where you develop, reconfigure, etc.
Both Test and Development should be clones of Production.
Thus, you take a clone of Production, then you develop or reconfigure any entities that you want changed, you migrate them to Test, do acceptance and viability tests, and then you migrate them to Production. It is not really anticipated that you have test and development environments from which you want to migrate changes or new entities to a Production system on which you are also conducting untested development changes ;-). So, the design is that any entity has related IDs in the database to track sequences, versions, etc. If you make changes to workflow notifications on both DEV and PROD, collisions will follow. You might not like it, but that is the way it's supposed to work.
Just my two cents 🙂
Have a great one.
You are right, BUT (there is always a BUT): we have encountered this issue even though both the DEV and TEST environments are cloned from PROD, and the configuration is done ONLY in the DEV environment.
Then THAT is a valid support request. Eric's assumption was that development (or changes) to notifications happens on PROD as well, which would create IDs that can have an adverse effect on the migration of workflow (notifications). If you get a constraint error on the ID or any other field even though the workflow is being migrated for the first time (or overwritten), you should log a case, and support should not reject it for some reason unknown to me.