> On Mon, 23 Sept 2024 at 18:28, Tatsuo Ishii wrote:
>> I agree and made necessary changes. See attached v4 patches.
>
> Looks good to me.
Thank you for the review! I have pushed the patch.
Best regards,
--
Tatsuo Ishii
SRA OSS K.K.
English: http://www.sraoss.co.jp/index_en/
Japanese:http://www.sraoss.co.jp
On Mon, 23 Sept 2024 at 18:28, Tatsuo Ishii wrote:
> I agree and made necessary changes. See attached v4 patches.
Looks good to me.
David
> I looked at this and thought that one thing you might want to consider
> is adjusting show_storage_info() to accept the size and type
> parameters so you don't have to duplicate the formatting code in
> show_recursive_union_info().
I agree and made necessary changes. See attached v4 patches.
>
On Thu, 19 Sept 2024 at 22:17, Tatsuo Ishii wrote:
> Attached patch fixes 1 & 2.
I looked at this and thought that one thing you might want to consider
is adjusting show_storage_info() to accept the size and type
parameters so you don't have to duplicate the formatting code in
show_recursive_union_info().
> 1. It's probably a minor detail, but in show_recursive_union_info(), I
> don't think the tuplestores can ever be NULL.
>
> + if (working_table != NULL)
> + tuplestore_get_stats(working_table, &tempStorageType, &tempSpaceUsed);
> +
> + if (intermediate_table != NULL)
> + tuplestore_get_stats(intermediate_table, &tempStorageType, &tempSpaceUsed);
On Thu, 19 Sept 2024 at 16:21, Tatsuo Ishii wrote:
> Thanks. Attached is a patch for CTE scan, table function scan and
> recursive union scan nodes.
1. It's probably a minor detail, but in show_recursive_union_info(), I
don't think the tuplestores can ever be NULL.
+ if (working_table != NULL)
+
>> > That code could be modified to swap the tuplestores and do a
>> > tuplestore_clear() instead of tuplestore_end() followed by
>> > tuplestore_begin_heap().
>> >
>> Impressive result. I also ran your query with count 1000.
>
> I've pushed that patch. That should now unblock you on the
> nodeRec
On Thu, 19 Sept 2024 at 13:49, Tatsuo Ishii wrote:
>
> > That code could be modified to swap the tuplestores and do a
> > tuplestore_clear() instead of tuplestore_end() followed by
> > tuplestore_begin_heap().
> >
> Impressive result. I also ran your query with count 1000.
I've pushed that patch.
On Thu, 19 Sept 2024 at 13:49, Tatsuo Ishii wrote:
> I also ran your query with count 1000.
>
> without the patch:
> Time: 3.655 ms
> Time: 4.123 ms
> Time: 2.163 ms
>
> with the patch:
> Time: 3.641 ms
> Time: 2.356 ms
> Time: 2.347 ms
>
> It seems with the patch the performance is slightly better.
> That code could be modified to swap the tuplestores and do a
> tuplestore_clear() instead of tuplestore_end() followed by
> tuplestore_begin_heap().
>
> It's likely worthwhile from a performance point of view. Here's a
> small test as an example:
>
> master:
> postgres=# with recursive cte (a)
On Thu, 19 Sept 2024 at 12:01, Tatsuo Ishii wrote:
> > Could you add the two sizes together and take the storage type from
> > the tuplestore with the highest storage size?
>
> I don't think this works because tuplestore_begin/tuplestore_end are
> called while executing the node (ExecRecursiveUnion).
> On Thu, 19 Sept 2024 at 00:13, Tatsuo Ishii wrote:
>> Actually there's one more executor node type that uses tuplestore:
>> recursive union (used in "with recursive"). The particular node type
>> uses two tuplestores and we cannot simply apply tuplestore_get_stats()
>> to the node type. We need t
On Thu, 19 Sept 2024 at 00:13, Tatsuo Ishii wrote:
> Actually there's one more executor node type that uses tuplestore:
> recursive union (used in "with recursive"). The particular node type
> uses two tuplestores and we cannot simply apply tuplestore_get_stats()
> to the node type. We need to modi
> Thanks. I have added it and pushed the patch.
So I have created patches to do the same for CTE scan and table
function scan node. Patch attached.
Actually there's one more executor node type that uses tuplestore:
recursive union (used in "with recursive"). The particular node type
uses two tuplestores and we cannot simply apply tuplestore_get_stats()
to the node type.
> On Tue, 17 Sept 2024 at 14:40, Tatsuo Ishii wrote:
>> Attached is the v5 patch. The difference from v4 is the addition of two
>> more tests to explain.sql:
>>
>> 1) spills to disk case
>> 2) spills to disk then switches back to memory case
>
> Looks ok to me, aside from the missing "reset work_mem;" after you're done.
On Tue, 17 Sept 2024 at 14:40, Tatsuo Ishii wrote:
> Attached is the v5 patch. The difference from v4 is the addition of two
> more tests to explain.sql:
>
> 1) spills to disk case
> 2) spills to disk then switches back to memory case
Looks ok to me, aside from the missing "reset work_mem;" after you're done.
> Not sure if there's any way to force it in the SQL standard. However
> in terms of implementation, PostgreSQL sorts the function
> (generate_series) scan result using a sort key "a.n < 3", which
> results in rows being >= 2 first (as false == 0), then rows being < 3
> (as true == 1). So unless Po
Hi Ashutosh,
Thank you for the review.
> Thanks. This will do. Is there a way to force the larger partition to
> be computed first? That way we definitely know that the last
> computation was done when all the tuples in the tuplestore were in
> memory.
Not sure if there's any way to force it in the SQL standard.
On Sat, Sep 14, 2024 at 1:42 PM Tatsuo Ishii wrote:
>
> >> > or the case when the last usage fit in memory but an earlier
> >> > usage spilled to disk.
> >>
> >> In my understanding once tuplestore changes the storage type to disk,
> >> it never returns to the memory storage type in terms of
> >>
>> > or the case when the last usage fit in memory but an earlier
>> > usage spilled to disk.
>>
>> In my understanding once tuplestore changes the storage type to disk,
>> it never returns to the memory storage type in terms of
>> tuplestore_get_stats. i.e. once state->usedDisk is set to true, it
On Fri, Sep 13, 2024 at 3:02 PM Tatsuo Ishii wrote:
>
> > The patch looks fine but it doesn't add a test case where Storage is
> > Disk
>
> We can test the case by setting work_mem to the minimum size (64kB)
> and giving slightly larger "stop" parameter to generate_series.
>
WFM
> > or the case
> The patch looks fine but it doesn't add a test case where Storage is
> Disk
We can test the case by setting work_mem to the minimum size (64kB)
and giving slightly larger "stop" parameter to generate_series.
> or the case when the last usage fit in memory but an earlier
> usage spilled to disk.
The patch looks fine but it doesn't add a test case where Storage is
Disk or the case when the last usage fit in memory but an earlier
usage spilled to disk. Do we want to cover those? This test would be
the only one where those code paths could be tested.
On Fri, Sep 13, 2024 at 11:41 AM Tatsuo Ishii wrote:
On Fri, 13 Sept 2024 at 09:11, Tatsuo Ishii wrote:
> Thanks. Attached is the v4 patch. I am going push it if there's no
> objection.
>
Looks good to me. Thank you for your work.
--
Best regards,
Maxim Orlov.
David,
Thank you for your review.
> Yes, I think it's a good idea to move that into a helper function. If
> you do the other node types, without that helper the code would have
> to be repeated quite a few times. Maybe show_storage_info() can be
> moved up with the other helper functions, say be
On Fri, 13 Sept 2024 at 00:12, Tatsuo Ishii wrote:
> In this patch I refactored show_material_info. I divided it into
> show_material_info and show_storage_info so that the latter can be
> used by other node types including window aggregate node. What do you
> think?
Yes, I think it's a good idea to move that into a helper function.
Here is the v3 patch. This time I only include patch for the Window
Aggregate node. Patches for other node types will come after this
patch getting committed or come close to commitable state.
David,
In this patch I refactored show_material_info. I divided it into
show_material_info and show_storage_info so that the latter can be
used by other node types including window aggregate node.
> I pushed a patch to change the API.
Thank you!
--
Tatsuo Ishii
SRA OSS K.K.
English: http://www.sraoss.co.jp/index_en/
Japanese:http://www.sraoss.co.jp
On Thu, 12 Sept 2024 at 14:42, Tatsuo Ishii wrote:
> Sorry, I should have asked you first if you are going to write the API
> change patch.
I pushed a patch to change the API.
David
> On Thu, 12 Sept 2024 at 14:04, Tatsuo Ishii wrote:
>> Are you going to push the changes to tuplestore.c anytime soon? I
>> would like to rebase my patch[1] but the patch could be affected by
>> the tuplestore API change.
>
> Ok, I'll look at that.
Thanks.
I had thought you were taking care of writing the patch.
On Thu, 12 Sept 2024 at 14:04, Tatsuo Ishii wrote:
> Are you going to push the changes to tuplestore.c anytime soon? I
> would like to rebase my patch[1] but the patch could be affected by
> the tuplestore API change.
Ok, I'll look at that. I had thought you were taking care of writing the patch.
> On Fri, 6 Sept 2024 at 19:08, Ashutosh Bapat
> wrote:
>> The changes look better. A nitpick though. With their definitions
>> changed, I think it's better to change the names of the functions
>> since their purpose has changed. Right now they report the storage
>> type and size used, respectively, at the time of calling the function.
On Fri, Sep 6, 2024 at 7:21 PM David Rowley wrote:
>
> On Fri, 6 Sept 2024 at 19:08, Ashutosh Bapat
> wrote:
> > The changes look better. A nitpick though. With their definitions
> > changed, I think it's better to change the names of the functions
> > since their purpose has changed. Right now they report the storage
> > type and size used, respectively, at the time of calling the function.
On Fri, 6 Sept 2024 at 19:08, Ashutosh Bapat
wrote:
> The changes look better. A nitpick though. With their definitions
> changed, I think it's better to change the names of the functions
> since their purpose has changed. Right now they report the storage
> type and size used, respectively, at the time of calling the function.
> The changes look better. A nitpick though. With their definitions
> changed, I think it's better to change the names of the functions
> since their purpose has changed. Right now they report the storage
> type and size used, respectively, at the time of calling the function.
> With this patch, th
On Fri, Sep 6, 2024 at 11:32 AM Tatsuo Ishii wrote:
>
> > Thanks for making the adjustments to this.
> >
> > I don't think there is any need to call tuplestore_updatemax() from
> > within writetup_heap(). That means having to update the maximum space
> > used every time a tuple is written to disk.
> Thanks for making the adjustments to this.
>
> I don't think there is any need to call tuplestore_updatemax() from
> within writetup_heap(). That means having to update the maximum space
> used every time a tuple is written to disk. That's a fairly massive
> overhead.
>
> Instead, it should be
On Fri, 6 Sept 2024 at 16:21, Tatsuo Ishii wrote:
> However, for 10, 2, and 1 partitions I see a large performance
> degradation with the patched version: patched is slower than stock
> master by 1.5% (10 partitions), 41% (2 partitions) and 55.7% (1
> partition). See the attached graph.
Thanks for making the adjustments to this.
Hi David,
>> I pushed the changes to WindowAgg so as not to call tuplestore_end()
>> on every partition. Can you rebase this patch over that change?
>>
>> It would be good to do this in a way that does not add any new state
>> to WindowAggState, you can see that I had to shuffle fields around in
Hi,
> Hi. I can roughly understand it.
>
> I have one minor issue with the comment.
>
> typedef struct RecursiveUnionState
> {
>     PlanState   ps;     /* its first field is NodeTag */
>     bool        recursing;
>     bool        intermediate_empty;
>     Tuplestorestate *working_table;
Hi,
> On Wed, 10 Jul 2024 at 21:36, Tatsuo Ishii wrote:
>> v2-0005-Add-memory-disk-usage-for-Window-Aggregate-nodes-.patch: This
>> adds memory/disk usage for Window Aggregate nodes in EXPLAIN (ANALYZE)
>> command. Note that if David's proposal
>> https://www.postgresql.org/message-id/CAHoyFK9n-Q
On Wed, 10 Jul 2024 at 21:36, Tatsuo Ishii wrote:
> v2-0005-Add-memory-disk-usage-for-Window-Aggregate-nodes-.patch: This
> adds memory/disk usage for Window Aggregate nodes in EXPLAIN (ANALYZE)
> command. Note that if David's proposal
> https://www.postgresql.org/message-id/CAHoyFK9n-QCXKTUWT_xxt
On Wed, Jul 10, 2024 at 5:36 PM Tatsuo Ishii wrote:
>
>
> Attached are the v2 patches. As suggested by David, I split them
> into multiple patches so that each patch implements the feature for
> each node. You need to apply the patches in the order of patch number
> (if you want to apply all of th
On Wed, 4 Sept 2024 at 03:07, Tatsuo Ishii wrote:
>
> Agreed. Probably add to explain.sql?
>
Yeah, I think this is an appropriate place.
--
Best regards,
Maxim Orlov.
> Hi!
>
> +1 for the idea of the patch. I consider it useful.
>
> I looked at the patch set and don't see any obvious defects. It applies
> without any problems and looks pretty good to me.
Thank you for reviewing my patch.
> Only one thing is left to do. Add basic tests for the added functionality
Hi!
+1 for the idea of the patch. I consider it useful.
I looked at the patch set and don't see any obvious defects. It applies
without any problems and looks pretty good to me.
Only one thing is left to do. Add basic tests for the added functionality
to make it committable. For example, as in th
>>> Yes, I think so. I'd keep each as a separate patch so they can be
>>> considered independently. Doing all of them should hopefully ensure we
>>> strike the right balance of what code to put in explain.c and what
>>> code to put in tuplestore.c.
>> +1
>>
>> + if (es->format != EXPLAIN_FORMAT_TEXT)
>> Yes, I think so. I'd keep each as a separate patch so they can be
>> considered independently. Doing all of them should hopefully ensure we
>> strike the right balance of what code to put in explain.c and what
>> code to put in tuplestore.c.
> +1
>
> + if (es->format != EXPLAIN_FORMAT_TEXT)
> +
On Tue, Jul 9, 2024 at 8:20 AM David Rowley wrote:
>
> On Tue, 9 Jul 2024 at 14:44, Tatsuo Ishii wrote:
> > BTW, it seems these executor nodes (other than Materialize and Window
> > Aggregate node) use tuplestore for their own purpose.
> >
> > CTE Scan
> > Recursive Union
> > Table Function Scan
On Tue, 9 Jul 2024 at 14:44, Tatsuo Ishii wrote:
> BTW, it seems these executor nodes (other than Materialize and Window
> Aggregate node) use tuplestore for their own purpose.
>
> CTE Scan
> Recursive Union
> Table Function Scan
>
> I have already implemented that for CTE Scan. Do you think other
> On Sat, 6 Jul 2024 at 23:23, Tatsuo Ishii wrote:
>> So I wanted to add memory/disk usage for WindowAgg. Patch attached.
>
> Thanks for working on that.
Thank you for the infrastructure you created in tuplestore.c and explain.c.
BTW, it seems these executor nodes (other than Materialize and Window
Aggregate node) use tuplestore for their own purpose.
On Sat, 6 Jul 2024 at 23:23, Tatsuo Ishii wrote:
> So I wanted to add memory/disk usage for WindowAgg. Patch attached.
Thanks for working on that.
> Since WindowAgg node could create multiple tuplestore for each Window
> partition, we need to track each tuplestore storage usage so that the
> max