On Wed, Jan 6, 2016 at 10:29 PM, Robert Haas wrote:
> On Mon, Jan 4, 2016 at 8:52 PM, Dilip Kumar wrote:
> > One strange behaviour: after increasing the number of processors for the
> > VM, max_parallel_degree=0 is also performing better.
>
> So, you went from 6 vCPUs to 8? In general, adding more CPUs
On Mon, Jan 4, 2016 at 8:52 PM, Dilip Kumar wrote:
> One strange behaviour: after increasing the number of processors for the VM,
> max_parallel_degree=0 is also performing better.
So, you went from 6 vCPUs to 8? In general, adding more CPUs means
that there is less contention for CPU time, but if you a
On Tue, Jan 5, 2016 at 1:52 AM, Robert Haas wrote:
> On Mon, Jan 4, 2016 at 4:50 AM, Dilip Kumar wrote:
> > I tried to create an inner table such that the inner table data doesn't
> > fit in RAM (I created a VM with 1GB RAM).
> > The purpose of this is to make the disk scan dominant,
> > and since parallel jo
On Mon, Jan 4, 2016 at 4:50 AM, Dilip Kumar wrote:
> I tried to create an inner table such that the inner table data doesn't fit
> in RAM (I created a VM with 1GB RAM).
> The purpose of this is to make the disk scan dominant,
> and since parallel join is repeating the disk scan and hash table building
> of inner
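A minimal sketch of such a setup, assuming a hypothetical table name, row
width, and row count (sized so the inner data comfortably exceeds the VM's
1GB of RAM):

    -- Hypothetical setup: an inner table larger than the 1GB of RAM, so that
    -- building its hash table is dominated by disk reads rather than CPU.
    CREATE TABLE inner_tab (id int, payload text);
    INSERT INTO inner_tab
        SELECT g, repeat('x', 100)
        FROM generate_series(1, 20000000) g;   -- roughly 2.5GB on disk
    ANALYZE inner_tab;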
On Thu, Dec 24, 2015 at 4:45 AM, Robert Haas wrote:
> On Wed, Dec 23, 2015 at 2:34 AM, Dilip Kumar wrote:
> > Yeah, right. After applying all three patches, this problem is fixed; now
> > parallel hash join is faster than normal hash join.
> >
> > I have tested one more case, which Amit mentioned
On Wed, Dec 23, 2015 at 2:34 AM, Dilip Kumar wrote:
> Yeah, right. After applying all three patches, this problem is fixed; now
> parallel hash join is faster than normal hash join.
>
> I have tested one more case, which Amit mentioned; I can see that in that
> case the parallel plan (parallel degree >= 3) is
On Wed, Dec 23, 2015 at 2:34 AM, Dilip Kumar wrote:
>> I think the gather-reader-order patch will fix this. Here's a test
>> with all three patches.
>
> Yeah, right. After applying all three patches, this problem is fixed; now
> parallel hash join is faster than normal hash join.
Thanks. I've com
On Tue, Dec 22, 2015 at 8:30 PM, Robert Haas wrote:
> On Tue, Dec 22, 2015 at 4:14 AM, Dilip Kumar wrote:
> > On Fri, Dec 18, 2015 at 8:47 PM, Robert wrote:
> >>> Yes, you are right that create_gather_path() sets parallel_safe to false
> >>> unconditionally, but whenever we are building a non
On Tue, Dec 22, 2015 at 4:14 AM, Dilip Kumar wrote:
> On Fri, Dec 18, 2015 at 8:47 PM, Robert wrote:
>>> Yes, you are right that create_gather_path() sets parallel_safe to false
>>> unconditionally, but whenever we are building a non-partial path
>>> we should carry forward the paral
On Fri, Dec 18, 2015 at 8:47 PM, Robert wrote:
>> Yes, you are right that create_gather_path() sets parallel_safe to false
>> unconditionally, but whenever we are building a non-partial path
>> we should carry forward the parallel_safe state to its parent, and it
>> seems like that part
On Fri, Dec 18, 2015 at 3:54 AM, Dilip Kumar wrote:
> On Fri, Dec 18, 2015 at 7:59 AM, Robert Haas wrote:
>> Uh oh. That's not supposed to happen. A GatherPath is supposed to
>> have parallel_safe = false, which should prevent the planner from
>> using it to form new partial paths. Is this with
On Fri, Dec 18, 2015 at 7:59 AM, Robert Haas wrote:
> Uh oh. That's not supposed to happen. A GatherPath is supposed to
> have parallel_safe = false, which should prevent the planner from
> using it to form new partial paths. Is this with the latest version
> of the patch? The plan output sugg
On Thu, Dec 17, 2015 at 12:33 AM, Amit Kapila wrote:
> While looking at plans of Q5 and Q7, I have observed that Gather is
> pushed below another Gather node for which we don't have appropriate
> way of dealing. I think that could be the reason why you are seeing
> the errors.
Uh oh. That's not
On Wed, Dec 17, 2015 at 11:03 AM, Amit Kapila wrote:
> While looking at plans of Q5 and Q7, I have observed that Gather is
> pushed below another Gather node for which we don't have appropriate
> way of dealing. I think that could be the reason why you are seeing
> the errors.
Ok
> Also, I thin
On Wed, Dec 16, 2015 at 9:55 PM, Dilip Kumar wrote:
> On Wed, Dec 16, 2015 at 6:20 PM, Amit Kapila wrote:
>
> > On Tue, Dec 15, 2015 at 7:31 PM, Robert Haas wrote:
> >>
> >> On Mon, Dec 14, 2015 at 8:38 AM, Amit Kapila wrote:
>
> > In any case,
> > I have done some more investigation of the
On Wed, Dec 16, 2015 at 6:20 PM, Amit Kapila wrote:
> On Tue, Dec 15, 2015 at 7:31 PM, Robert Haas wrote:
>>
>> On Mon, Dec 14, 2015 at 8:38 AM, Amit Kapila wrote:
> In any case,
> I have done some more investigation of the patch and found that even
> without changing query planner related paramet
On Tue, Dec 15, 2015 at 7:31 PM, Robert Haas wrote:
>
> On Mon, Dec 14, 2015 at 8:38 AM, Amit Kapila wrote:
> > set enable_hashjoin=off;
> > set enable_mergejoin=off;
>
> [ ... ]
>
>
> > Now the point to observe here is that the non-parallel case uses less
> > Execution time and less Planning time to
On Mon, Dec 14, 2015 at 8:38 AM, Amit Kapila wrote:
> set enable_hashjoin=off;
> set enable_mergejoin=off;
[ ... ]
> Now the point to observe here is that the non-parallel case uses less
> Execution time and less Planning time to complete the statement. There
> is a considerable increase in planni
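A minimal sketch of how that comparison might be reproduced (t1 and t2 are
hypothetical tables; the enable_* settings are the ones quoted above, and
max_parallel_degree was the GUC name at the time):

    -- Steer the planner away from hash and merge joins, as in the quoted test.
    SET enable_hashjoin = off;
    SET enable_mergejoin = off;

    -- Non-parallel baseline; EXPLAIN ANALYZE reports both Planning time
    -- and Execution time.
    SET max_parallel_degree = 0;
    EXPLAIN ANALYZE SELECT count(*) FROM t1 JOIN t2 ON t1.id = t2.id;

    -- Parallel run for comparison.
    SET max_parallel_degree = 4;
    EXPLAIN ANALYZE SELECT count(*) FROM t1 JOIN t2 ON t1.id = t2.id;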
On Wed, Dec 2, 2015 at 1:55 PM, Robert Haas wrote:
> Oops. The new version I've attached should fix this.
I've been trying to see if parallel join has any effect on PostGIS
spatial join queries, which are commonly CPU bound. (My tests [1] on
simple parallel scan were very positive, though quite
On Thu, Dec 3, 2015 at 3:25 AM, Robert Haas wrote:
>
> On Tue, Dec 1, 2015 at 7:21 AM, Amit Kapila wrote:
> > It would be better if we can split this patch into multiple patches, like
> > Explain related changes, Append pushdown related changes, and Join
> > Push down related changes. You can choose
On Wed, Dec 9, 2015 at 11:51 PM, Robert Haas wrote:
>
> On Fri, Dec 4, 2015 at 3:07 AM, Amit Kapila wrote:
>
> > I think the problem is at the Gather node: the number of buffers (read +
> > hit) is greater than the number of pages in the relation. The reason it
> > is doing so is that in Workers (P
On Fri, Dec 4, 2015 at 3:07 AM, Amit Kapila wrote:
> Do you think it would be useful to display, in a similar way, that a worker
> was not able to execute the plan (i.e., the other workers had already
> finished the work before it started execution)?
Maybe, but it would clutter the output a good deal. I thi
On 30 November 2015 at 17:52, Robert Haas wrote:
> My idea is that you'd end up with a plan like this:
>
> Gather
>   -> Hash Join
>        -> Parallel Seq Scan
>        -> Parallel Hash
>             -> Parallel Seq Scan
>
> Not only does this build only one copy of the hash table instead of N
> copies, but we can
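For concreteness, a hedged sketch of a query that could be planned with this
shape, assuming hypothetical table names and a server carrying the proposed
patch:

    -- Hypothetical join that could build one shared hash table under a
    -- single Gather, with all workers both building and probing it.
    SET max_parallel_degree = 4;
    EXPLAIN
    SELECT count(*)
    FROM fact f
    JOIN dim d ON d.id = f.dim_id;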
On Thu, Dec 3, 2015 at 3:25 AM, Robert Haas wrote:
>
> On Tue, Dec 1, 2015 at 7:21 AM, Amit Kapila wrote:
>
> > - There seems to be some inconsistency in Explain's output when
> > multiple workers are used.
>
>
> So the net result of this is that the times and row counts are
> *averages* across a
On Tue, Dec 1, 2015 at 7:21 AM, Amit Kapila wrote:
> The above, together with the changes in add_path(), makes the planner not
> select a parallel path for a seq scan where earlier it was possible. I think
> you want to change the costing of parallel plans based on rows selected
> instead of total_cost, but there seems to
On Tue, Dec 1, 2015 at 5:51 PM, Amit Kapila wrote:
>
> On Thu, Nov 26, 2015 at 8:11 AM, Robert Haas wrote:
> >
> > Attached find a patch that does (mostly) two things.
> >
>
> I have started looking into this and would like to share a few findings
> with you:
>
>
> - There seems to be some inconsis
On Thu, Nov 26, 2015 at 8:11 AM, Robert Haas wrote:
>
> Attached find a patch that does (mostly) two things.
>
I have started looking into this and would like to share a few findings
with you:
-
+ /*
+ * Primitive parallel cost model. Assume the leader will do half as much
+ * work as a regular w
On Mon, Nov 30, 2015 at 12:01 PM, Greg Stark wrote:
> On Mon, Nov 30, 2015 at 4:52 PM, Robert Haas wrote:
>> Not only does this build only one copy of the hash table instead of N
>> copies, but we can parallelize the hash table construction itself by
>> having all workers insert in parallel, whic
On Mon, Nov 30, 2015 at 4:52 PM, Robert Haas wrote:
> Not only does this build only one copy of the hash table instead of N
> copies, but we can parallelize the hash table construction itself by
> having all workers insert in parallel, which is pretty cool.
Hm. The case where you don't want paral
On Thu, Nov 26, 2015 at 3:45 AM, Simon Riggs wrote:
> Sounds like good progress.
Thanks.
> This gives us multiple copies of the hash table, which means we must either
> use N * work_mem, or we must limit the hash table to work_mem / N per
> partial plan.
We use N * work_mem in this patch. The
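As a worked illustration with hypothetical numbers: at work_mem = 64MB and
N = 4 workers, the N * work_mem policy lets the four hash table copies use up
to 256MB in total, whereas a work_mem / N policy would cap each copy at 16MB
so that all copies together stay within the single 64MB budget.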
On 26 November 2015 at 03:41, Robert Haas wrote:
> Attached find a patch that does (mostly) two things. First, it allows
> the optimizer to generate plans where a Nested Loop or Hash Join
> appears below a Gather node. This is a big improvement on what we
> have today, where only a sequential s
Attached find a patch that does (mostly) two things. First, it allows
the optimizer to generate plans where a Nested Loop or Hash Join
appears below a Gather node. This is a big improvement on what we
have today, where only a sequential scan can be parallelized; with
this patch, entire join probl
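A hedged sketch of the other join shape this enables, with hypothetical
relation and index names: a nested loop under Gather, where each worker joins
its share of the outer rows against the inner index:

    Gather
      -> Nested Loop
           -> Parallel Seq Scan on fact
           -> Index Scan using dim_pkey on dim
                Index Cond: (id = fact.dim_id)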