On Mon, Sep 18, 2023 at 02:22:32PM -0700, Nathan Bossart wrote:
> For now, I've committed 0001 and 0002. I intend to commit the others soon.
I've committed the rest of the patches.
--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
On Mon, Sep 18, 2023 at 09:36:03PM -0400, Tom Lane wrote:
> But in any case, how long are we keeping src/tools/msvc/ ?
From a skim of [0], it seems like it could be removed now. I see a couple
of work-in-progress patches from Andres [1] that would probably serve as a
good starting point. I won'
Nathan Bossart writes:
> On Mon, Sep 18, 2023 at 09:23:20PM -0400, Tom Lane wrote:
>> bowerbird is unhappy with this. I suppose you missed out updating
>> the src/tools/msvc/ scripts. (Weren't we about ready to nuke those?)
> I saw that and have attempted to fix it with 83223f5.
Ah, right, sorry
On Mon, Sep 18, 2023 at 09:23:20PM -0400, Tom Lane wrote:
> bowerbird is unhappy with this. I suppose you missed out updating
> the src/tools/msvc/ scripts.
> (Weren't we about ready to nuke those?)
hamerkop seems to be the only buildfarm member that would complain if
these were to be gone today,
On Mon, Sep 18, 2023 at 09:23:20PM -0400, Tom Lane wrote:
> Nathan Bossart writes:
>> For now, I've committed 0001 and 0002. I intend to commit the others soon.
>
> bowerbird is unhappy with this. I suppose you missed out updating
> the src/tools/msvc/ scripts. (Weren't we about ready to nuke those?)
Nathan Bossart writes:
> For now, I've committed 0001 and 0002. I intend to commit the others soon.
bowerbird is unhappy with this. I suppose you missed out updating
the src/tools/msvc/ scripts. (Weren't we about ready to nuke those?)
regards, tom lane
For now, I've committed 0001 and 0002. I intend to commit the others soon.
--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
On Wed, Sep 13, 2023 at 08:01:39PM -0400, Tom Lane wrote:
> Nathan Bossart writes:
>> Upon closer inspection, I found a rather nasty problem. The qsort
>> comparator expects a TocEntry **, but the binaryheap comparator expects a
>> TocEntry *, and we simply pass the arguments through to the qsort comparator.
Nathan Bossart writes:
> Upon closer inspection, I found a rather nasty problem. The qsort
> comparator expects a TocEntry **, but the binaryheap comparator expects a
> TocEntry *, and we simply pass the arguments through to the qsort
> comparator. In v9, I added the requisite ampersands.
Ooops
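To make the mismatch concrete, here is a hedged sketch (the names below are invented for illustration and are not the actual pg_backup_archiver.c functions; backend-style headers are assumed for brevity). A qsort comparator receives pointers to the array elements, i.e. TocEntry ** when the array holds TocEntry pointers, while a binaryheap comparator receives the stored Datums themselves, i.e. the TocEntry pointers. Forwarding the heap comparator's arguments verbatim therefore passes the wrong level of indirection; taking their addresses fixes it:

    #include "postgres.h"           /* for Datum, DatumGetPointer */

    typedef struct TocEntry { int dataLength; } TocEntry;   /* toy stand-in */

    /* qsort-style comparator over an array of TocEntry pointers */
    static int
    toc_cmp_qsort(const void *p1, const void *p2)
    {
        const TocEntry *a = *(TocEntry *const *) p1;
        const TocEntry *b = *(TocEntry *const *) p2;

        if (a->dataLength != b->dataLength)
            return (a->dataLength > b->dataLength) ? 1 : -1;
        return 0;
    }

    /* binaryheap comparator: the Datums are the TocEntry pointers themselves */
    static int
    toc_cmp_binaryheap(Datum d1, Datum d2, void *arg)
    {
        TocEntry   *a = (TocEntry *) DatumGetPointer(d1);
        TocEntry   *b = (TocEntry *) DatumGetPointer(d2);

        /* the ampersands are the fix: the qsort comparator wants TocEntry ** */
        return toc_cmp_qsort(&a, &b);
    }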
On Wed, Sep 13, 2023 at 11:34:50AM -0700, Nathan Bossart wrote:
> On Sun, Sep 10, 2023 at 12:35:10PM -0400, Tom Lane wrote:
>> Other than those nitpicks, I like v6. I'll mark this RfC.
>
> Great. I've posted a v8 with your comments addressed in order to get one
> more round of cfbot coverage. A
On Sun, Sep 10, 2023 at 12:35:10PM -0400, Tom Lane wrote:
> Hmm ... I do not like v7 very much at all. It requires rather ugly
> changes to all of the existing callers, and what are we actually
> buying? If anything, it makes things slower for pass-by-value items
> like integers. I'd stick with
Nathan Bossart writes:
> I spent some more time on this patch and made the relevant adjustments to
> the rest of the set.
Hmm ... I do not like v7 very much at all. It requires rather ugly
changes to all of the existing callers, and what are we actually
buying? If anything, it makes things slower for pass-by-value items
like integers. I'd stick with
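For context, the appeal of the Datum-based API for pass-by-value keys is that nothing has to be boxed behind a pointer. A minimal sketch follows; it is not the v6/v7 patches under discussion, and binaryheap_add_ptr is an invented name for a hypothetical "void *"-element API:

    #include "postgres.h"
    #include "lib/binaryheap.h"

    /* max-heap comparator for plain int32 keys stored as Datums */
    static int
    int_cmp(Datum a, Datum b, void *arg)
    {
        int32   ia = DatumGetInt32(a);
        int32   ib = DatumGetInt32(b);

        return (ia > ib) - (ia < ib);
    }

    static void
    datum_vs_pointer_example(void)
    {
        binaryheap *heap = binaryheap_allocate(16, int_cmp, NULL);

        /* Datum API: a pass-by-value key goes straight into the node array */
        binaryheap_add(heap, Int32GetDatum(42));

        /*
         * A hypothetical "void *"-element API would instead force the caller
         * to park the integer somewhere with a stable address and store a
         * pointer to it, paying an extra indirection (or allocation) per
         * element, e.g.:
         *
         *     int *key = palloc(sizeof(int));
         *     *key = 42;
         *     binaryheap_add_ptr(heap, key);   -- invented name
         */
        binaryheap_free(heap);
    }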
On Sat, Sep 02, 2023 at 11:55:21AM -0700, Nathan Bossart wrote:
> I ended up hacking together a (nowhere near committable) patch to see how
> hard it would be to allow using any type with binaryheap. It doesn't seem
> too bad.
I spent some more time on this patch and made the relevant adjustments to the rest of the set.
On Sun, Sep 03, 2023 at 12:04:00PM +0200, Alvaro Herrera wrote:
> On 2023-Sep-02, Nathan Bossart wrote:
>> I ended up hacking together a (nowhere near committable) patch to see how
>> hard it would be to allow using any type with binaryheap. It doesn't seem
>> too bad.
>
> Yeah, using void * seem
On 2023-Sep-02, Nathan Bossart wrote:
> On Fri, Sep 01, 2023 at 01:52:48PM -0700, Nathan Bossart wrote:
> > Yeah, something similar to simplehash for binary heaps could be nice. That
> > being said, I don't know if there's a strong reason to specialize the
> > implementation for a given C data type.
On Fri, Sep 01, 2023 at 01:52:48PM -0700, Nathan Bossart wrote:
> On Fri, Sep 01, 2023 at 04:00:44PM -0400, Robert Haas wrote:
>> In hindsight, I think that making binaryheap depend on Datum was a bad
>> idea. I think that was my idea, and I think it wasn't very smart.
>> Considering that people have coded to that decision up until now, it
>> might not be too easy to
On Fri, Sep 01, 2023 at 04:00:44PM -0400, Robert Haas wrote:
> In hindsight, I think that making binaryheap depend on Datum was a bad
> idea. I think that was my idea, and I think it wasn't very smart.
> Considering that people have coded to that decision up until now, it
> might not be too easy to
On Tue, Jul 25, 2023 at 2:53 PM Nathan Bossart wrote:
> On Mon, Jul 24, 2023 at 12:00:15PM -0700, Nathan Bossart wrote:
> > Here is a sketch of this approach. It required fewer #ifdefs than I was
> > expecting. At the moment, this one seems like the winner to me.
>
> Here is a polished patch set for this approach.
On Fri, Sep 01, 2023 at 01:41:41PM -0400, Tom Lane wrote:
> I've not actually looked at any of these patchsets after the first one.
> I have added myself as a reviewer and will hopefully get to it within
> a week or so.
Thanks!
--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
Nathan Bossart writes:
> I'm hoping to commit these patches at some point in the current commitfest.
> I don't sense anything tremendously controversial, and they provide a
> pretty nice speedup in some cases. Are there any remaining concerns?
I've not actually looked at any of these patchsets after the first one.
I have added myself as a reviewer and will hopefully get to it within
a week or so.
On Tue, Jul 25, 2023 at 11:53:36AM -0700, Nathan Bossart wrote:
> Here is a polished patch set for this approach. I've also added a 0004
> that replaces the open-coded heap in pg_dump_sort.c with a binaryheap.
> IMHO these patches are in decent shape.
I'm hoping to commit these patches at some point in the current commitfest.
I don't sense anything tremendously controversial, and they provide a
pretty nice speedup in some cases. Are there any remaining concerns?
On Mon, Jul 24, 2023 at 12:00:15PM -0700, Nathan Bossart wrote:
> Here is a sketch of this approach. It required fewer #ifdefs than I was
> expecting. At the moment, this one seems like the winner to me.
Here is a polished patch set for this approach. I've also added a 0004
that replaces the open-coded heap in pg_dump_sort.c with a binaryheap.
IMHO these patches are in decent shape.
On Sat, Jul 22, 2023 at 10:57:03PM -0700, Nathan Bossart wrote:
> On Sat, Jul 22, 2023 at 07:47:50PM -0400, Tom Lane wrote:
>> I wonder whether we can't provide some alternate definition or "skin"
>> for binaryheap that preserves the Datum API for backend code that wants
>> that, while providing a
On Saturday, July 15, 2023 7:47:12 PM CEST Tom Lane wrote:
> I'm not sure how big a deal this is in practice: in most situations
> the individual jobs are larger than they are in this toy example,
> plus the initial non-parallelizable part of the restore is a bigger
> bottleneck anyway with this ma
On Sat, Jul 22, 2023 at 07:47:50PM -0400, Tom Lane wrote:
> Nathan Bossart writes:
>> I first tried modifying
>> binaryheap to use "int" or "void *" instead, but that ended up requiring
>> some rather invasive changes in backend code, not to mention any extensions
>> that happen to be using it.
I
Nathan Bossart writes:
> On Thu, Jul 20, 2023 at 12:06:44PM -0700, Nathan Bossart wrote:
>> One item that requires more thought is binaryheap's use of Datum. AFAICT
>> the Datum definitions live in postgres.h and aren't available to frontend
>> code. I think we'll either need to move the Datum definitions
On Sat, Jul 22, 2023 at 04:19:41PM -0700, Nathan Bossart wrote:
> In v3, I moved the Datum definitions to c.h. I first tried modifying
> binaryheap to use "int" or "void *" instead, but that ended up requiring
> some rather invasive changes in backend code, not to mention any extensions
> that happen to be using it.
On Thu, Jul 20, 2023 at 12:06:44PM -0700, Nathan Bossart wrote:
> Here is a work-in-progress patch set for converting ready_list to a
> priority queue. On my machine, Tom's 100k-table example [0] takes 11.5
> minutes without these patches and 1.5 minutes with them.
>
> One item that requires more thought is binaryheap's use of Datum.
Here is a work-in-progress patch set for converting ready_list to a
priority queue. On my machine, Tom's 100k-table example [0] takes 11.5
minutes without these patches and 1.5 minutes with them.
One item that requires more thought is binaryheap's use of Datum. AFAICT
the Datum definitions live in postgres.h and aren't available to frontend code.
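For reference, the visibility problem is roughly the following (a minimal sketch; the exact set of definitions that would need to move is an assumption):

    /* postgres.h (backend-only) provides, among much else, the Datum type
     * and its conversion helpers: */
    typedef uintptr_t Datum;
    /* ... plus DatumGetPointer(), PointerGetDatum(), DatumGetInt32(), etc. */

    /* Frontend programs such as pg_dump include postgres_fe.h instead, so a
     * binaryheap that mentions Datum cannot be compiled there unless these
     * minimal pieces live somewhere both sides can see (c.h, in the v3
     * attempt discussed below). */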
On Tue, Jul 18, 2023 at 06:05:11PM +0200, Alvaro Herrera wrote:
> On 2023-Jul-17, Nathan Bossart wrote:
>
>> @@ -35,7 +42,11 @@ binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)
>> binaryheap *heap;
>>
>> sz = offsetof(binaryheap, bh_nodes) + sizeof(Datum) * capacity;
On 2023-Jul-17, Nathan Bossart wrote:
> @@ -35,7 +42,11 @@ binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)
> binaryheap *heap;
>
> sz = offsetof(binaryheap, bh_nodes) + sizeof(Datum) * capacity;
> +#ifdef FRONTEND
> + heap = (binaryheap *) pg_malloc
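The tail of that hunk is cut off in the quote above. Reconstructed as a guess at the obvious completion (not a quote of the actual patch), the allocation presumably ends up looking roughly like this, using pg_malloc in frontend builds and palloc in the backend:

    sz = offsetof(binaryheap, bh_nodes) + sizeof(Datum) * capacity;
#ifdef FRONTEND
    heap = (binaryheap *) pg_malloc(sz);
#else
    heap = (binaryheap *) palloc(sz);
#endif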
On Sun, Jul 16, 2023 at 08:54:24PM -0700, Nathan Bossart wrote:
> This seems worth a try. IIUC you are suggesting making binaryheap.c
> frontend-friendly and expanding its API a bit. If no one has volunteered,
> I could probably hack something together.
I spent some time on the binaryheap change
On Sun, Jul 16, 2023 at 09:45:54AM -0400, Tom Lane wrote:
> Actually, as long as we're talking about approximately-correct behavior:
> let's make the ready_list be a priority heap, and then just make
> pop_next_work_item scan forward from the array start until it finds an
> item that's runnable per
Andrew Dunstan writes:
> On 2023-07-15 Sa 13:47, Tom Lane wrote:
>> I wonder if we could replace the sorted ready-list with a priority heap,
>> although that might be complicated by the fact that pop_next_work_item
>> has to be capable of popping something that's not necessarily the
>> largest remaining job.
On 2023-07-15 Sa 13:47, Tom Lane wrote:
> I looked into the performance gripe at [1] about pg_restore not making
> effective use of parallel workers when there are a lot of tables.
> I was able to reproduce that by dumping and parallel restoring 100K
> tables made according to this script:
> do $$
> begin
Hi,
On 2023-07-15 13:47:12 -0400, Tom Lane wrote:
> I wonder if we could replace the sorted ready-list with a priority heap,
> although that might be complicated by the fact that pop_next_work_item
> has to be capable of popping something that's not necessarily the
> largest remaining job. Anothe
I looked into the performance gripe at [1] about pg_restore not making
effective use of parallel workers when there are a lot of tables.
I was able to reproduce that by dumping and parallel restoring 100K
tables made according to this script:
do $$
begin
for i in 1..10 loop
execute format('c