On Sun, Jan 19, 2025 at 03:57:18PM -0800, Hari Krishna Sunder wrote:
> The restore side speedups suggested by Yang seem reasonable and can
> potentially speed up the process. We can even go a bit further by starting
> the new postgres in a --binary-upgrade mode and skipping some of these
> locks completely.
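As a rough sketch of what that idea would build on: pg_upgrade already starts the source and target servers in binary-upgrade mode by passing the backend's -b switch, so lock-skipping behavior during restore would presumably hang off that same mode. The paths below are assumptions, and the command is only echoed as a dry run rather than executed:

```shell
# Assumptions: install and data directories; adjust for your cluster.
NEW_BIN=/usr/lib/postgresql/18/bin
NEW_DATA=/var/lib/postgresql/18/data

# pg_upgrade starts the target server roughly like this; the backend's
# -b switch puts it into binary-upgrade mode. Echoed, not executed.
echo "$NEW_BIN/pg_ctl -D $NEW_DATA -o \"-b\" -w start"
```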
On Sun, Jan 19, 2025 at 3:43 PM Nathan Bossart wrote:
> On Mon, Jul 08, 2024 at 03:22:36PM +0800, 杨伯宇(长堂) wrote:
> > Besides, https://commitfest.postgresql.org/48/4995/ seems insufficient for
> > this situation. Some time-consuming functions like
> > check_for_data_types_usage are not yet able to run in parallel, but these
> > patches could be a great starting point.
> Thanks! Since you mentioned that you have multiple databases with 1M+
> tables, you might also be interested in commit 2329cad. That should
> speed up the pg_dump step quite a bit.
Wow, I noticed this commit (2329cad) when it appeared in the commitfest. It
has doubled the speed of pg_dump in this situation.
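On the parallelism point from the quoted message: per-database checks such as check_for_data_types_usage are independent of one another, so in principle they could be fanned out across a worker pool instead of running one database at a time. A minimal sketch of the idea, with the caveat that pg_upgrade's real checks are C functions issuing catalog queries; the run_check stub and database names here are purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def run_check(dbname: str) -> tuple[str, bool]:
    # Illustrative stand-in for one per-database compatibility check
    # (e.g. scanning pg_attribute for removed data types). Stubbed here.
    return (dbname, True)

def run_checks_in_parallel(databases, max_workers=8):
    # Each database's check is independent, so a pool can overlap the
    # per-database round-trips that today run strictly sequentially.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(run_check, databases))

results = run_checks_in_parallel(["db1", "db2", "db3"])
print(results)
```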
On Fri, Jul 05, 2024 at 05:24:42PM +0800, 杨伯宇(长堂) wrote:
> > So, I'm thinking, why not add a "--skip-check" option in pg_upgrade to
> > skip it? See "1-Skip_Compatibility_Check_v1.patch".
>
> How would a user know that nothing has changed in the cluster between
> running the check and running the upgrade with a skipped check?
> Considering how comp
> On 5 Jul 2024, at 09:12, 杨伯宇(长堂) wrote:
> 1: Skip Compatibility Check In "pg_upgrade"
> ===========================================
> Concisely, we've got several databases, each with a million-plus tables.
> Running the compatibility check before pg_dump can eat up like half an hour.
> If I