Robert Haas writes:
> On Fri, Aug 25, 2017 at 5:12 PM, Tom Lane wrote:
>> Here's a reviewed version of the second patch. I fixed one bug
>> (wrong explain group nesting) and made some additional cosmetic
>> improvements beyond the struct relocation.
> The capitalization of TupleSortInstrumentation
On Fri, Aug 25, 2017 at 5:12 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Fri, Aug 25, 2017 at 2:54 PM, Tom Lane wrote:
>>> Hmm, I'm not sure why SortInstrumentation belongs naturally to
>>> tuplesort.h but putting an array of them there would be a "gross
>>> abstraction violation". Perhaps
Robert Haas writes:
> On Fri, Aug 25, 2017 at 2:54 PM, Tom Lane wrote:
>> Hmm, I'm not sure why SortInstrumentation belongs naturally to
>> tuplesort.h but putting an array of them there would be a "gross
>> abstraction violation". Perhaps it would help to rename
>> struct SharedSortInfo to Sort
On Fri, Aug 25, 2017 at 2:54 PM, Tom Lane wrote:
>> I think moving SharedSortInfo into tuplesort.h would be a gross
>> abstraction violation, but moving SortInstrumentation into tuplesort.h
>> seems like a modest improvement.
>
> Hmm, I'm not sure why SortInstrumentation belongs naturally to
> tuplesort.h but putting an array of them there would be a "gross
> abstraction violation".
Robert Haas writes:
> On Fri, Aug 25, 2017 at 2:01 PM, Tom Lane wrote:
>> I looked through this a little, and feel uncomfortable with the division
>> of typedefs between execnodes.h and tuplesort.h. I'm inclined to push
>> struct SortInstrumentation, and maybe also SharedSortInfo, into
>> tuplesort.h.
On Fri, Aug 25, 2017 at 2:01 PM, Tom Lane wrote:
> Robert Haas writes:
>> On another note, here's a second patch applying on top of my earlier
>> patch to push down Limit through Gather and Gather Merge.
>
> I looked through this a little, and feel uncomfortable with the division
> of typedefs between execnodes.h and tuplesort.h.
Robert Haas writes:
> On another note, here's a second patch applying on top of my earlier
> patch to push down Limit through Gather and Gather Merge.
I looked through this a little, and feel uncomfortable with the division
of typedefs between execnodes.h and tuplesort.h. I'm inclined to push
struct SortInstrumentation, and maybe also SharedSortInfo, into tuplesort.h.
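For reference, here is roughly the pair of typedefs being shuffled around.
The field and type names below are a reconstruction from the discussion (and
from the shape of what eventually landed), not a quote from the patch, so
treat it as a sketch:

#include "postgres.h"
#include "utils/tuplesort.h"

/* Per-sort instrumentation record: one per worker, plus the leader. */
typedef struct SortInstrumentation
{
	TuplesortMethod sortMethod;		/* e.g. quicksort, top-N, external merge */
	TuplesortSpaceType spaceType;	/* memory or disk */
	long		spaceUsed;			/* space consumed, in kB */
} SortInstrumentation;

/* DSM area attached to a parallel Sort node: one slot per worker. */
typedef struct SharedSortInfo
{
	int			num_workers;
	SortInstrumentation sinstrument[FLEXIBLE_ARRAY_MEMBER];
} SharedSortInfo;

The argument above is only about which header each of these lives in: the
first struct is arguably tuplesort's business, while the second (sized by the
number of workers and stored in dynamic shared memory) is executor
bookkeeping.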
On Fri, Aug 25, 2017 at 1:31 PM, Tom Lane wrote:
> Robert Haas writes:
>> Awesome. What's the business with enable_hashagg in the regression test
>> stuff?
>
> Just relocating that "reset" so that it only affects the test case it's
> needed for. (I think the 0-worker test was inserted in the wrong place.)
Robert Haas writes:
> Awesome. What's the business with enable_hashagg in the regression test
> stuff?
Just relocating that "reset" so that it only affects the test case it's
needed for. (I think the 0-worker test was inserted in the wrong place.)
regards, tom lane
On Fri, Aug 25, 2017 at 1:18 PM, Tom Lane wrote:
> v4, with a test case and some more comment-polishing. I think this
> is committable.
Awesome. What's the business with enable_hashagg in the regression test stuff?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
v4, with a test case and some more comment-polishing. I think this
is committable.
regards, tom lane
diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c
index ce47f1d..ad9eba6 100644
--- a/src/backend/executor/execParallel.c
+++ b/src/backend/executor/execParallel.c
On Fri, Aug 25, 2017 at 12:05 PM, Tom Lane wrote:
> Hm. It seemed pretty reliable for me, but we already know that the
> parallel machinery is quite timing-sensitive. I think we'd better
> incorporate a regression test case into the committed patch so we
> can see what the buildfarm thinks. I'l
Robert Haas writes:
> I had the same thought. Done in the attached revision of your
> version. I couldn't consistently reproduce the failure you saw -- it
> failed for me only about 1 time in 10 -- but I don't think it's
> failing any more with this version.
Hm. It seemed pretty reliable for me, but we already know that the
parallel machinery is quite timing-sensitive. I think we'd better
incorporate a regression test case into the committed patch so we
can see what the buildfarm thinks.
On Fri, Aug 25, 2017 at 11:15 AM, Tom Lane wrote:
> The problem I exhibited with the updated patch could probably be resolved
> if the top level logic in a parallel worker knows about tuples_needed
> and doesn't try to pull more than that from its subplan. So what you're
> describing isn't just a
Robert Haas writes:
> However, there's another opportunity for optimization here, which is
> the Limit -> Gather -> Anything case. A worker which has itself
> generated enough tuples to fill the limit can certainly stop working,
> but right now it doesn't know that. The patch I posted before did
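To make that idea concrete, here is one way a worker-side bound could look.
This is only a hedged sketch: the function name and the way tuples_needed
reaches the worker are invented for illustration, and the actual patch wires
this up through the parallel-executor shared state rather than the top-level
loop, but the effect being aimed for is the same.

#include "postgres.h"
#include "executor/executor.h"
#include "executor/tuptable.h"
#include "tcop/dest.h"

/*
 * Top-level loop in a parallel worker that honors a tuple bound handed
 * down from the leader.  tuples_needed < 0 means "no bound"; otherwise
 * the worker stops pulling from its subplan as soon as it has emitted
 * enough tuples to satisfy the Limit sitting above the Gather.
 */
static void
worker_execute_plan(PlanState *planstate, DestReceiver *dest,
					int64 tuples_needed)
{
	int64		emitted = 0;

	for (;;)
	{
		TupleTableSlot *slot;

		/* Stop early once the leader can't use any more tuples from us. */
		if (tuples_needed >= 0 && emitted >= tuples_needed)
			break;

		slot = ExecProcNode(planstate);
		if (TupIsNull(slot))
			break;

		(void) dest->receiveSlot(slot, dest);
		emitted++;
	}
}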
On Fri, Aug 25, 2017 at 9:24 AM, Tom Lane wrote:
> Basically, this argument is that we should contort the code in order
> to avoid tail recursion, rather than assuming the compiler will turn
> that recursion into iteration, even in cases where there is really
> zero evidence that we should be worried about it.
Amit Kapila writes:
> On Fri, Aug 25, 2017 at 7:35 PM, Tom Lane wrote:
>> it would be stupid to put a filter on that node rather than its
>> children, but I see this in both nodeGather.c and nodeGatherMerge.c:
>>
>> gatherstate->ps.qual =
>> ExecInitQual(node->plan.qual, (PlanState *) gatherstate);
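One possible shape of the simplification being hinted at, purely illustrative
(not necessarily what was committed): since the planner never hangs a filter
on Gather itself, ExecInitGather could assert that instead of building a qual.

/* Sketch only: Gather never evaluates a qual of its own, so insist the
 * planner didn't attach one rather than initializing it.  The same would
 * apply to nodeGatherMerge.c. */
Assert(node->plan.qual == NIL);
gatherstate->ps.qual = NULL;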
Robert Haas writes:
> I'm inclined to commit both of these after a little more testing and
> self-review, but let me know if anyone else wants to review first.
Attached is an updated version of the first patch, which I rebased
over 3f4c7917b, improved a bunch of the comments for, and fixed a
coup
On Fri, Aug 25, 2017 at 7:35 PM, Tom Lane wrote:
> Robert Haas writes:
>> I'm inclined to commit both of these after a little more testing and
>> self-review, but let me know if anyone else wants to review first.
>
> Looking through the first one, I wondered whether we needed to
> check for a qual expression on Gather or GatherMerge.
Robert Haas writes:
> I'm inclined to commit both of these after a little more testing and
> self-review, but let me know if anyone else wants to review first.
Looking through the first one, I wondered whether we needed to
check for a qual expression on Gather or GatherMerge. It seems like
it would be stupid to put a filter on that node rather than its
children, but I see this in both nodeGather.c and nodeGatherMerge.c:
Robert Haas writes:
> On Thu, Aug 24, 2017 at 2:24 PM, Tom Lane wrote:
>> I'd have been okay with changing the entire function if it could still be
>> doing all cases the same way. But the exception for merge-append (and
>> probably soon other cases) means you still end up with a readability
>>
On Thu, Aug 24, 2017 at 2:24 PM, Tom Lane wrote:
> I'd have been okay with changing the entire function if it could still be
> doing all cases the same way. But the exception for merge-append (and
> probably soon other cases) means you still end up with a readability
> problem.
I don't really ag
Douglas Doole writes:
>> TBH I dislike the fact that
>> you did the subquery case randomly differently from the existing cases;
>> it should just have been added as an additional recursive case. Since
>> this is done only once at query startup, worrying about hypothetical
>> micro-performance issues
Douglas Doole writes:
> My first thought was to do a regex over the explain output to mask the
> troublesome value, but I couldn't figure out how to make it work and didn't
> find an example (admittedly didn't spent a huge amount of time hunting
> though). I'd love to see how to put it together.
>
> No. You can't easily avoid recursion for the merge-append case, since
> that has to descend to multiple children.
Agreed. That's why it's not in the while loop in my sketch of the suggested
rework.
> TBH I dislike the fact that
> you did the subquery case randomly differently from the existing cases;
> it should just have been added as an additional recursive case.
I wrote:
> To get the buildfarm back to green, I agree with the suggestion of
> setting force_parallel_mode=off for this test, at least for now.
Actually, there's an easier way: we can just make the test table be
TEMP. That is a good idea anyway to prevent possible interference
from auto-analyze,
>
> I don't greatly like the way that the regression test case filters
> the output; it hides too much IMO. I'd be inclined to try to return
> the EXPLAIN output with as much detail preserved as possible. Maybe
> we could use regex substitution on lines of the output to get rid of
> stuff that wo
Robert Haas writes:
> Buildfarm members with force_parallel_mode=regress are failing now. I
> haven't had a chance to investigate exactly what's going on here, but
> I think there are probably several issues:
> 1. It's definitely the case that the details about a sort operation
> aren't propagated
Douglas Doole writes:
> Would we be better off moving those cases into the while loop I added to
> avoid the recursion?
No. You can't easily avoid recursion for the merge-append case, since
that has to descend to multiple children. TBH I dislike the fact that
you did the subquery case randomly differently from the existing cases;
it should just have been added as an additional recursive case.
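For context, the MergeAppend arm of pass_down_bound() looks roughly like the
fragment below (paraphrased from memory, not quoted from the tree). Because
the bound has to be handed to every child of the merge, a simple loop cannot
replace the recursion in this case:

else if (IsA(child_node, MergeAppendState))
{
	MergeAppendState *maState = (MergeAppendState *) child_node;
	int			i;

	/* The bound applies independently to every input of the merge. */
	for (i = 0; i < maState->ms_nplans; i++)
		pass_down_bound(node, maState->mergeplans[i]);
}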
Robert Haas writes:
> On Wed, Aug 23, 2017 at 6:55 PM, Douglas Doole wrote:
>> From previous experience, pushing the limit to the workers has the potential
>> to be a big win.
> Here's a not-really-tested draft patch for that.
Both this patch and the already-committed one contain useless (and
>
> Here's a not-really-tested draft patch for that.
>
I don't know the parallel code so I'm not going to comment on the overall
patch, but a thought on ExecSetTupleBound().
That function is getting a number of cases where we're doing recursion for
a single child (result, gather, gather-merge). R
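A hypothetical sketch of that thought (the helper name and the exact safety
checks are invented here, so read it as an illustration rather than a
proposal): the single-child pass-through nodes can be walked in a loop,
leaving recursion only for nodes with multiple children such as MergeAppend.

#include "postgres.h"
#include "nodes/execnodes.h"

static void
set_tuple_bound_iteratively(int64 tuples_needed, PlanState *child_node)
{
	/* Descend through nodes that merely pass tuples up from one child. */
	for (;;)
	{
		if (IsA(child_node, ResultState) ||
			IsA(child_node, GatherState) ||
			IsA(child_node, GatherMergeState))
		{
			PlanState  *outer = outerPlanState(child_node);

			if (outer == NULL)
				break;			/* e.g. a childless Result */
			child_node = outer;
		}
		else
			break;
	}

	/*
	 * ...then apply tuples_needed to child_node exactly as the existing
	 * per-node cases (Sort, MergeAppend, SubqueryScan, ...) do today.
	 */
}

Whether this is actually clearer than letting the compiler turn the tail
calls into jumps is exactly the disagreement a few messages up-thread.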
On Wed, Aug 23, 2017 at 6:55 PM, Douglas Doole wrote:
> From previous experience, pushing the limit to the workers has the potential
> to be a big win.
Here's a not-really-tested draft patch for that.
> I haven't played with parallel mode at all yet. Is it possible to disable
> force_parallel_mode
>
> Buildfarm members with force_parallel_mode=regress are failing now. I
> haven't had a chance to investigate exactly what's going on here, but
> I think there are probably several issues:
>
Not an auspicious beginning for my first patch :-(
> 3. However, even if it did that, it would only af
On Mon, Aug 21, 2017 at 2:43 PM, Robert Haas wrote:
> Works for me. While I'm sure this won't eclipse previous achievements
> in this area, it still seems worthwhile. So committed with one minor
> change to shorten a long-ish line.
Buildfarm members with force_parallel_mode=regress are failing now.
On Wed, Aug 23, 2017 at 12:56 PM, Konstantin Knizhnik wrote:
> On 22.08.2017 17:27, Konstantin Knizhnik wrote:
> On 18.08.2017 04:33, Robert Haas wrote:
> It seems like a somewhat ad-hoc approach; it supposes that we can take any
> query produced by deparseSelectStmtForRel() and stick a LIMIT clause onto
> the very end and all will be well.
On 22.08.2017 17:27, Konstantin Knizhnik wrote:
On 18.08.2017 04:33, Robert Haas wrote:
It seems like a somewhat ad-hoc approach; it supposes that we can
take any query produced by deparseSelectStmtForRel() and stick a
LIMIT clause onto the very end and all will be well. Maybe that's
not a problematic assumption, not sure.
On 18.08.2017 04:33, Robert Haas wrote:
It seems like a somewhat ad-hoc approach; it supposes that we can take
any query produced by deparseSelectStmtForRel() and stick a LIMIT
clause onto the very end and all will be well. Maybe that's not a
problematic assumption, not sure. The grammar
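Concretely, the idea being questioned is string-level: after postgres_fdw's
deparseSelectStmtForRel() has built the remote SELECT, just append a LIMIT
clause when a usable bound is known. A minimal illustrative fragment
(limit_count and its derivation are invented here; only StringInfo and
deparseSelectStmtForRel are real interfaces):

/* "sql" is the StringInfoData already filled by deparseSelectStmtForRel().
 * limit_count < 0 means no bound could be proven safe to push down. */
if (limit_count >= 0)
	appendStringInfo(&sql, " LIMIT " INT64_FORMAT, limit_count);

The worry in the quoted message is exactly that this assumes every deparsed
statement can legally take a trailing LIMIT, which is a grammar-level
assumption rather than something the deparser guarantees.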
On Tue, Aug 22, 2017 at 3:43 AM, Robert Haas wrote:
> Works for me. While I'm sure this won't eclipse previous achievements
> in this area, it still seems worthwhile.
This one is intentional per what happens in the US today? ;)
--
Michael
On Fri, Aug 18, 2017 at 4:08 PM, Douglas Doole wrote:
>> 4. I am pretty doubtful that "Memory: 25kB" is going to be stable
>> enough for us to want that output memorialized in the regression ...
>
> Fair enough. I wanted to be a bit more sophisticated in my check than
> looking for a single value
>
> 1. The header comment for pass_down_bound() could mention "one or more
> levels of subqueries" rather than "a subquery".
>
Fixed
> 2. The first of the comments in the function body appears to have a
> whitespace issue that needs to be fixed manually or, better yet,
> addressed by pgindent.
>
Fixed
On Fri, Aug 18, 2017 at 11:42 AM, Douglas Doole wrote:
> Thanks for the feedback on my original patch Robert. Here's an updated patch
> that will tunnel through multiple SubqueryScanStates.
Seems reasonable. I have some assorted nitpicks.
1. The header comment for pass_down_bound() could mention "one or more
levels of subqueries" rather than "a subquery".
Thanks for the feedback on my original patch Robert. Here's an updated
patch that will tunnel through multiple SubqueryScanStates.
- Doug
Salesforce
On Thu, Aug 17, 2017 at 6:33 PM Robert Haas wrote:
> On Thu, Aug 17, 2017 at 11:36 AM, Douglas Doole wrote:
>> I completely agree. The further a limit can be pushed down, the better.
On Thu, Aug 17, 2017 at 11:36 AM, Douglas Doole wrote:
> I completely agree. The further a limit can be pushed down, the better.
>
> The patch looks good to me.
>
It seems like a somewhat ad-hoc approach; it supposes that we can take any
query produced by deparseSelectStmtForRel() and stick a LIMIT clause onto
the very end and all will be well.
I completely agree. The further a limit can be pushed down, the better.
The patch looks good to me.
On Thu, Aug 17, 2017 at 8:27 AM Konstantin Knizhnik <k.knizh...@postgrespro.ru> wrote:
> On 29.04.2017 00:13, Douglas Doole wrote:
> If you add this to the commitfest app, more people might look at it when
> the next commitfest opens.
On 29.04.2017 00:13, Douglas Doole wrote:
If you add this to the commitfest app, more people might look at
it when the next commitfest opens.
I have added it. https://commitfest.postgresql.org/14/1119/
Also, it might help if you can provide a query/ies with numbers
where this optimization shows improvement.
>
> If you add this to the commitfest app, more people might look at it when
> the next commitfest opens.
I have added it. https://commitfest.postgresql.org/14/1119/
> Also, it might help if you can provide a query/ies with numbers where this
> optimization shows improvement.
>
I can't provide th
On Wed, Apr 19, 2017 at 10:51 PM, Douglas Doole wrote:
> Thanks for the feedback Ashutosh.
>
> I disagree that we need to call pass_down_bound() recursively. The whole
> point of the recursive call would be to tunnel through a plan node. Since
> SubqueryScanState only has one child (unlike MergeAppendState) this can be
> more efficiently done inline rather than via recursion.
Thanks for the feedback Ashutosh.
I disagree that we need to call pass_down_bound() recursively. The whole
point of the recursive call would be to tunnel through a plan node. Since
SubqueryScanState only has one child (unlike MergeAppendState) this can be
more efficiently done inline rather than via recursion.
The function pass_down_bound() is a recursive function. For
SubqueryScanState we have to do something similar to ResultState i.e.
call pass_down_bound() recursively on subqueryScanState->subplan.
Please add this to the next commitfest.
On Wed, Apr 19, 2017 at 6:09 AM, Douglas Doole wrote:
> We've hit a case where pass_down_bound() isn't pushing the row count limit
> from limit into sort.
We've hit a case where pass_down_bound() isn't pushing the row count limit
from limit into sort. The issue is that we're getting a subquery scan node
between the limit and the sort. The subquery is only doing column
projection and has no quals or SRFs so it should be safe to push the limit
into the sort.
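For illustration, the shape of the fix being proposed here, as a sketch in
the spirit of the patch rather than the patch itself: tunnel through
pass-through SubqueryScan nodes inside pass_down_bound() so the bound still
reaches the Sort underneath, where it enables a bounded (top-N) sort instead
of a full sort. Only the qual test is shown; a check for set-returning
functions in the scan's projection would be needed as well.

/* Inside pass_down_bound(), before giving up on an unrecognized child: */
while (IsA(child_node, SubqueryScanState))
{
	SubqueryScanState *subqueryScan = (SubqueryScanState *) child_node;

	/* Only tunnel through scans that filter nothing themselves. */
	if (subqueryScan->ss.ps.qual != NULL)
		break;

	child_node = subqueryScan->subplan;
}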