On 5/15/25 07:09, veem v wrote:
Hi,
It's a Postgres database behind the scenes.
We have a use case in which the customer is planning to migrate data
from an older version (V1) to a newer version (V2). The V2 tables will
be new, but their structure will be similar to V1, with possibly a few
changes in the relationships. We want this migration to happen in
multiple phases: in each phase the delta data from V1 will be moved to
V2, and the final cutover to V2 will happen if all looks good, or else
we roll back to V1.
The tables are fairly small, at most ~100K records each.
My question is: is it a good idea to have procedures that move the
delta data in every phase, scheduled as tasks for each table? Or is
there another strategy we should follow?
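
For illustration, a minimal sketch of such a delta-move procedure,
assuming (purely as an example, not the actual schema from this thread)
that V1 and V2 live in the same database as schemas v1 and v2, with a
table t1 that has a primary key id, a name column, and an updated_at
column used as the delta watermark:

-- Sketch only: schemas v1/v2, table t1 and its columns are assumptions.
CREATE OR REPLACE PROCEDURE move_delta_t1(p_since timestamptz)
LANGUAGE plpgsql
AS $$
BEGIN
    -- Upsert rows changed in V1 since the given watermark into V2.
    INSERT INTO v2.t1 (id, name, updated_at)
    SELECT id, name, updated_at
    FROM   v1.t1
    WHERE  updated_at >= p_since
    ON CONFLICT (id) DO UPDATE
        SET name       = EXCLUDED.name,
            updated_at = EXCLUDED.updated_at;
END;
$$;

-- One phase, e.g. run from whatever scheduler you use:
CALL move_delta_t1(now() - interval '1 day');

Note that an upsert like this only covers inserts and updates; rows
deleted in V1 would need separate handling.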
Another thing to note: we have used sequences as primary keys in some
tables, and those tables have FK relationships with other tables, so
the same sequence numbers in V2 would cause conflicts. How should we
handle this scenario? Should we just create new sequences with higher
start values?
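
For what it's worth, rather than creating brand-new sequences, one
common option is to bump the existing V2 sequence past the highest key
migrated from V1. A sketch, assuming a hypothetical table v2.t1 whose
id column is backed by the sequence v2.t1_id_seq:

-- Sketch only: table and sequence names are assumptions.
-- Push the sequence past the largest migrated key so new V2 rows
-- cannot collide with rows copied over from V1.
SELECT setval('v2.t1_id_seq',
              (SELECT COALESCE(MAX(id), 1) FROM v2.t1));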
Yes, use logical replication, and do thorough testing using a test V1
(publisher) and a test V2 (subscriber).
If it succeeds, as a final step, swap the roles of publisher and
subscriber so that the new V2 becomes the publisher and the old V1 the
subscriber.
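
A minimal sketch of that setup (object names and the connection string
are placeholders; the publisher needs wal_level = logical, and the
subscriber-side tables must already exist with compatible columns):

-- On the test V1 cluster (publisher):
CREATE PUBLICATION v1_pub FOR ALL TABLES;

-- On the test V2 cluster (subscriber):
CREATE SUBSCRIPTION v2_sub
    CONNECTION 'host=v1-host dbname=appdb user=repl password=secret'
    PUBLICATION v1_pub;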
After you test everything, you replay your steps on the production
environment. This way, even though V2 is now production after the
switchover, if your tests turn out to have been inadequate or you
missed something and you hit a critical problem on V2, you can still
go back to V1, which will have up-to-date data.
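
The reverse link after cutover could look like this sketch (placeholder
names again); copy_data = false matters here because V1 already holds
the data:

-- After cutover: drop (or disable) the old subscription on V2, then
-- replicate in the other direction so V1 stays current as a fallback.

-- On V2 (now the publisher):
CREATE PUBLICATION v2_pub FOR ALL TABLES;

-- On V1 (now the subscriber); skip the initial copy, V1 has the data:
CREATE SUBSCRIPTION v1_sub
    CONNECTION 'host=v2-host dbname=appdb user=repl password=secret'
    PUBLICATION v2_pub
    WITH (copy_data = false);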
I have done this migrating from 10.* to 16.*; it was the safest upgrade
I have ever done. We kept the old 10 around for some months, until no
one cared anymore.
Regards
Veem