Hi,

Maybe you can reuse these backup tricks.

"Speeding up dump/restore process"
https://www.depesz.com/2009/09/19/speeding-up-dumprestore-process/

for example:
"""
*Idea was: All these tables had primary key based on serial. We could
easily get min and max value of the primary key column, and then split it
into half-a-million-ids “partitions", then dump them separately using:*
*psql -qAt -c "COPY ( SELECT * FROM TABLE WHERE id BETWEEN x AND y) TO
STDOUT" | gzip -c - > TABLE.x.y.dump*
"""

best,
Imre



Durgamahesh Manne <maheshpostgr...@gmail.com> wrote (on Fri, 30 Aug 2019, at
11:51):

> Hi
> To the respected international PostgreSQL team,
>
> I am using PostgreSQL version 11.4.
> I have a scheduled logical dump job which runs once daily at the db level.
> There is one table with write-intensive activity every 40 seconds in the db.
> The size of the table is about 88GB.
> The logical dump of that table is taking more than 7 hours to complete.
>
> I need to reduce the dump time of that table, which is 88GB in size.
>
>
> Regards
> Durgamahesh Manne
>
>
>
>
