On Thu, May 1, 2025 at 9:06 AM Durumdara <durumd...@gmail.com> wrote:

> Hello!
>
> There is a heavily used server running an older Debian and PG 11.
> The data is more than 1.2 TB.
> pg_upgrade is not possible because of insufficient free space and the
> too-old Debian.
>
> As we see now we have only one way to move this server.
> 1.) Installing a new server with a current Debian.
> 2.) Installing the newest PG (17) on it.
> 3.) Stopping work on one database, dumping it on the old server, restoring
> it on the new one, and resuming work there. That way we can move the
> databases one by one.
>
> But this seems very hard, because we need to do it over an internet
> connection, and there is a lot of data.
>

The new server isn't in the same data center?


> I have a question about it - is there a better way to do this?
>
> For example, we could make a new cluster member (a read-only slave) with
> the newest Debian/PG, use replication to move the data in the background,
> and then promote it to master. But I don't know if that's possible;
> maybe the slaves must be the same major version as the master.
>
> The main problem is that the Debian is too old, and we are afraid to use
> pg_upgrade because of the large version jump (11 to 17).
>

The fear is justified, since glibc collation changes between OS versions
can corrupt text indexes.

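This only bites when the data files are carried over physically (pg_upgrade,
base backup, physical standby); dump/restore and logical replication rebuild
the indexes from scratch, so they dodge it.  If you do end up on a physical
path, a minimal fix, assuming the new server is PG 12 or later:

    # Rebuild every index in every database without blocking writes:
    reindexdb --all --concurrently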

> But maybe you have some good advice on how to do this with fewer
> complications. Users can tolerate short downtime, but not long outages.
>

Logical replication, not physical replication.  It works great if there
aren't many DDL changes, since logical replication does not carry DDL and
schema changes must be applied to both sides by hand.
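
A minimal sketch of the moving parts (big_pub, mig_sub, and the connection
string are placeholders; the schema must already exist on the PG 17 side,
e.g. from "pg_dump --schema-only"):

    -- On the old PG 11 server (requires wal_level = logical):
    CREATE PUBLICATION big_pub FOR ALL TABLES;

    -- On the new PG 17 server, after restoring the schema:
    CREATE SUBSCRIPTION mig_sub
        CONNECTION 'host=old-server dbname=mydb user=repl'
        PUBLICATION big_pub;

At cutover: stop writes on the old side, wait for the subscription to catch
up, copy the sequence values across (logical replication doesn't carry
them), and repoint the applications.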

Another possibility is COPY TO / COPY FROM in chunks.  This works if most
of the 1.2 TB is inserted once and then never updated or deleted: you copy
the static bulk ahead of time, so at cutover you only need to COPY the
recent data, dump/restore the small volatile tables (which is really a
wrapper around COPY TO / COPY FROM anyway), and finish with "pg_restore
--section=post-data" to build the indexes and constraints.
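
A minimal sketch of one chunk, assuming an append-only table with a
monotonically increasing id to range over (all names are placeholders):

    # Stream one id range straight from the old server into the new one:
    psql -h old-server -d mydb -c \
      "COPY (SELECT * FROM big_table WHERE id >= 0 AND id < 10000000) TO STDOUT" \
      | psql -h new-server -d mydb -c "COPY big_table FROM STDIN"

    # After the final chunk at cutover, build indexes and constraints
    # from a custom-format schema dump:
    pg_restore --section=post-data -d mydb schema.dump

Loading into index-less tables and building the indexes afterwards with
--section=post-data is also much faster than loading into indexed tables.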

You (hopefully) know your system better than we do, so you'd have to decide
which is better.

-- 
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> lobster!
