Hi Jeff,
I repeated the same query with a work_mem of 2000MB. It is faster than
the run with two batches but still slower than hashing the smaller table. So in
this case it makes more sense to hash the smaller table (less execution time
and a smaller hash table).
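For context on why the extra work_mem removed the second batch, here is a
minimal sketch of the batching idea in Python. This is a toy model with an
illustrative `estimated_batches` helper, not PostgreSQL's actual computation
(the real logic is in ExecChooseHashTableSize):

```python
import math

def estimated_batches(inner_size_mb, work_mem_mb):
    # Toy model, NOT PostgreSQL's actual nbatch computation: if the
    # build side's hash table fits in work_mem we get one batch,
    # otherwise it is split into a power-of-two number of batches
    # so that each batch fits in memory.
    if inner_size_mb <= work_mem_mb:
        return 1
    return 2 ** math.ceil(math.log2(inner_size_mb / work_mem_mb))

# A ~1500MB hash table: two batches at work_mem = 1000MB,
# a single in-memory batch at work_mem = 2000MB.
print(estimated_batches(1500, 1000))  # 2
print(estimated_batches(1500, 2000))  # 1
```

With one batch, neither side is written to temp files, which is why the
2000MB run is faster even though the planner still hashed the larger table.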
Explain a
On Tue, Mar 13, 2018 at 4:02 AM, Narendra Pradeep U U <
narendra.prad...@zohocorp.com> wrote:
> Hi,
> Thanks everyone for your suggestions. I would like to add the explain
> analyze output of both plans so that we can have a broader picture.
>
> I have a work_mem of 1000 MB.
>
Is it possible to rep
On Tue, Mar 13, 2018 at 4:32 PM, Narendra Pradeep U U
wrote:
> Hi,
> Thanks everyone for your suggestions. I would like to add the explain
> analyze output of both plans so that we can have a broader picture.
>
> I have a work_mem of 1000 MB.
>
> The plan we get regularly, with the table analyzed
Hi,
Thanks everyone for your suggestions. I would like to add the explain
analyze output of both plans so that we can have a broader picture.
I have a work_mem of 1000 MB.
The plan we get regularly, with the table analyzed:
tpch=# explain analyze select b from tab2 left join tab1 o
On Mon, Mar 12, 2018 at 10:02 PM, Narendra Pradeep U U
wrote:
Hi,
>
> Recently I came across a case where the planner chose the larger table on
> the hash side. I am not sure whether this is intended behavior or we are
> missing something.
>
> I have two tables (a and b), each with a single
Andres Freund writes:
> Not sure I follow. Unless the values are equivalent (i.e. duplicate key
> values), why should non-uniformity in key space translate to hash space?
Duplicates are exactly the problem. See estimate_hash_bucket_stats.
> And if there are duplicates it shouldn't hurt much either
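The effect estimate_hash_bucket_stats guards against can be sketched in
Python. This is a toy model, with an illustrative `avg_probe_comparisons`
helper rather than the planner's actual formula: every duplicate of a key
lands in the same bucket, so duplicates on the build side lengthen exactly
the chains that probes are most likely to hit.

```python
from collections import Counter

def avg_probe_comparisons(build_keys, n_buckets):
    # Toy model of expected comparisons per probe: a probe for a
    # given key must walk that key's whole bucket chain, and we
    # assume probe keys are distributed like the build keys, so
    # a bucket holding a fraction p of the rows is probed with
    # probability p and costs its full chain length to scan.
    buckets = Counter(hash(k) % n_buckets for k in build_keys)
    n = len(build_keys)
    return sum((size / n) * size for size in buckets.values())

uniform = list(range(1000))             # all build keys distinct
skewed = [42] * 500 + list(range(500))  # half the rows share one key
print(avg_probe_comparisons(uniform, 1024))  # ~1 comparison per probe
print(avg_probe_comparisons(skewed, 1024))   # ~250: one chain dominates
```

This is why non-uniformity in key space does translate to hash space: the
hash function spreads distinct keys around, but it cannot split duplicates
across buckets.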
On 2018-03-12 12:52:00 -0400, Tom Lane wrote:
> Narendra Pradeep U U writes:
> > Recently I came across a case where the planner chose the larger
> > table on the hash side. I am not sure whether this is intended behavior
> > or we are missing something.
>
> Probably the reason is that the sm
Narendra Pradeep U U writes:
> Recently I came across a case where the planner chose the larger table
> on the hash side. I am not sure whether this is intended behavior or we
> are missing something.
Probably the reason is that the smaller table has a less uniform
distribution of the hash key
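Tom's explanation can be caricatured as a cost comparison. The numbers and
the `pick_hash_side` helper below are purely illustrative (the real costing
is in final_cost_hashjoin): a smaller build side with duplicate-heavy keys
can still lose to a larger but uniformly distributed one.

```python
def pick_hash_side(n_a, chain_a, n_b, chain_b):
    # Toy model: build cost ~ rows hashed; probe cost ~ rows on the
    # other side times the expected bucket-chain length on the build
    # side. chain_* > 1 models duplicate-heavy (skewed) hash keys.
    cost_hash_a = n_a + n_b * chain_a  # hash a, probe with b's rows
    cost_hash_b = n_b + n_a * chain_b  # hash b, probe with a's rows
    return 'a' if cost_hash_a <= cost_hash_b else 'b'

# b is 30x smaller but its join key is badly skewed: in this toy
# model, hashing the larger but uniform table a still wins.
print(pick_hash_side(30_000_000, 1.0, 1_000_000, 40.0))  # a
```

So choosing the larger table for the hash side can be the cheaper plan on
paper, even though it costs more memory, whenever the smaller side's
duplicates would make every probe walk a long chain.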
Hi,
Recently I came across a case where the planner chose the larger table on
the hash side. I am not sure whether this is intended behavior or we are
missing something.
I have two tables (a and b), each with a single column. One table, 'a',
is large with around 30 million