Committed.
--
nathan
/*
* In older PG versions a sequence will have a pg_type entry, but v14
--
2.25.1
>From 7ff5168f5984865bd405e5d53dc6a190f989e7cd Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Mon, 22 Apr 2024 13:21:18 -0500
Subject: [PATCH v5 2/2] Improve performance of pg_dump --binary-upgrade.
TableInfo *tbinfo)
 	if (dopt->binary_upgrade)
 	{
 		binary_upgrade_set_pg_class_oids(fout, query,
-										 tbinfo->dobj.catId.oid, false);
+										 tbinfo->dobj.catId.oid);
 		/*
 		 * In older PG versions a sequence will have a pg_type entry, but v14
--
2.25.1
>From 14da726b675e4dd3e69f0b75f256b2757
TableInfo *tbinfo)
 	if (dopt->binary_upgrade)
 	{
 		binary_upgrade_set_pg_class_oids(fout, query,
-										 tbinfo->dobj.catId.oid, false);
+										 tbinfo->dobj.catId.oid);
 		/*
 		 * In older PG versions a sequence will have a pg_type entry, but v14
--
2.25.1
>From b3ae9df69fdbf386d0925
> On 18 Apr 2024, at 22:28, Nathan Bossart wrote:
>
> On Thu, Apr 18, 2024 at 10:23:01AM -0500, Nathan Bossart wrote:
>> On Thu, Apr 18, 2024 at 09:24:53AM +0200, Daniel Gustafsson wrote:
>>> That does indeed seem like a saner approach. Since we look up the relkind we
>>> can also remove the
a pg_type entry, but v14
--
2.25.1
>From 1580a7b9896b727925cab364ae9fdc7107d791d4 Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Wed, 17 Apr 2024 22:55:27 -0500
Subject: [PATCH v2 2/2] Improve performance of pg_dump --binary-upgrade.
---
 src/bin/pg_dump/pg_dump.c | 117 ++
On Thu, Apr 18, 2024 at 09:24:53AM +0200, Daniel Gustafsson wrote:
>> On 18 Apr 2024, at 06:17, Nathan Bossart wrote:
>
>> The attached work-in-progress patch speeds up 'pg_dump --binary-upgrade'
>> for this case. Instead of executing the query in every call to the
>> function, we can execute it once during the first call and store all the
>> required information in a
On Thu, Apr 18, 2024 at 02:08:28AM -0400, Corey Huinker wrote:
> Bar-napkin math tells me in a worst-case architecture and braindead byte
> alignment, we'd burn 64 bytes per struct, so the 100K tables cited would be
> about 6.25MB of memory.
That doesn't seem too terrible.
> The obvious low-memory alternative would be to make
> On 18 Apr 2024, at 06:17, Nathan Bossart wrote:
> The attached work-in-progress patch speeds up 'pg_dump --binary-upgrade'
> for this case. Instead of executing the query in every call to the
> function, we can execute it once during the first call and store all the
> required information in a
On Thu, Apr 18, 2024 at 02:08:28AM -0400, Corey Huinker wrote:
> Bar-napkin math tells me in a worst-case architecture and braindead byte
> alignment, we'd burn 64 bytes per struct, so the 100K tables cited would be
> about 6.25MB of memory.
>
> The obvious low-memory alternative would be to make
>
> One downside of this approach is the memory usage. This was more-or-less
>
>
Bar-napkin math tells me in a worst-case architecture and braindead byte
alignment, we'd burn 64 bytes per struct, so the 100K tables cited would be
about 6.25MB of memory.
The obvious low-memory alternative would be to make
[0] https://postgr.es/m/3612876.1689443232%40sss.pgh.pa.us
--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
>From 27b4a3249dd97376f13a7c99505330ab7cd78e3f Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Wed, 17 Apr 2024 22:55:27 -0500
Subject: [PATCH v1 1/1] Improve performance of pg_dump --binary-upgrade.