On 5/14/25 23:09, veem v wrote:
Hi,
It's a Postgres database behind the scenes.
We have a use case in which the customer is planning to migrate data
from an older version (V1) to a newer version (V2). For V2 the tables
will be new, but their structure will be similar to V1, with possibly a
few changes in the relationships. We want this migration to happen in
multiple phases: in each phase the delta data from V1 will be moved to
V2, and then a final cutover to V2 will happen if all looks good, or
else a rollback to V1. The tables are small, at most ~100K records each.
My question is: is it a good idea to create procedures that move the
delta data in each phase and schedule them as tasks, one per table? Or
is there another strategy we should follow?
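A minimal sketch of the per-table delta-move idea, under stated assumptions: the table and column names are hypothetical, the delta is detected with an "updated_at" watermark column (which your V1 tables would need to have), and sqlite3 stands in for Postgres only to keep the example self-contained. The same INSERT ... ON CONFLICT upsert syntax exists in Postgres, where this logic would live in a procedure run by a scheduler.

```python
import sqlite3

# Hypothetical V1 source table with an updated_at watermark column.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, updated_at INTEGER)")
src.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                [(1, "a", 100), (2, "b", 200), (3, "c", 300)])

# Hypothetical V2 target table (same structure for simplicity).
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE customers_v2 (id INTEGER PRIMARY KEY, name TEXT, updated_at INTEGER)")

def copy_delta(src, dst, watermark):
    """Copy rows changed since the last phase; return the new watermark."""
    rows = src.execute(
        "SELECT id, name, updated_at FROM customers WHERE updated_at > ?",
        (watermark,)).fetchall()
    # Upsert so a phase can be re-run safely if it is interrupted.
    dst.executemany(
        "INSERT INTO customers_v2 (id, name, updated_at) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name, "
        "updated_at = excluded.updated_at",
        rows)
    dst.commit()
    return max((r[2] for r in rows), default=watermark)

# Phase 1 copies everything; phase 2 picks up only the changed row.
wm = copy_delta(src, dst, 0)
src.execute("UPDATE customers SET name = 'b2', updated_at = 400 WHERE id = 2")
wm = copy_delta(src, dst, wm)
print(dst.execute("SELECT name FROM customers_v2 WHERE id = 2").fetchone()[0])  # b2
```

Because each phase is idempotent, rerunning a failed phase does no harm, which matters when the moves are fired by a scheduler per table.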
This is what Sqitch (https://sqitch.org/) was designed for.
The biggest issue is that the data will keep changing while you make
the structural changes. How you handle that is going to depend on the
question raised by Peter J. Holzer:
Is this being done in place on one Postgres instance or between
separate Postgres instances?
Another thing to note: we have used sequences as primary keys in some
tables, and those keys have FK relationships with other tables, so the
same sequence numbers in V2 will cause issues/conflicts. How should we
handle this scenario? Should we just create new sequences with higher
start values?
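One common way to avoid the collision is to restart the V2 sequence above any id either side could have issued, plus headroom for rows that arrive during the remaining phases; in Postgres that would be ALTER SEQUENCE ... RESTART WITH or setval(). A tiny sketch of the arithmetic, with a hypothetical helper name and headroom value:

```python
# Hypothetical helper: pick a start value for the V2 sequence that cannot
# collide with ids already issued on either side. The headroom of 100,000
# is an assumption sized to the ~100K-row tables mentioned above.
def new_sequence_start(max_id_v1, max_id_v2, headroom=100_000):
    return max(max_id_v1, max_id_v2) + headroom + 1

print(new_sequence_start(98_432, 97_110))  # 198433
```

Since the FKs reference the same id values that were copied over, keeping the V1 ids intact and only moving the sequence start forward avoids having to rewrite any foreign keys.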
Regards
Veem
--
Adrian Klaver
adrian.kla...@aklaver.com