.apache.org/docs/latest/sql-programming-guide.html#other-configuration-options
>
> Thanks,
>
> Jagat Singh
>
> On Sat, Jul 9, 2016 at 9:50 AM, Lalitha MV wrote:
>
>> Hi,
>>
>> 1. What implementation is used for the hash join -- is it classic hash
>> join or Hybrid grace hash join?
Hi,
1. What implementation is used for the hash join -- is it classic hash join
or Hybrid grace hash join?
2. If the hash table does not fit in memory, does it spill or does it fail?
Are there parameters to control this (for example, to set the percentage of
the hash table that can spill, etc.)?
3. Is t
precedence:
> * - BroadcastNestedLoopJoin: if one side of the join could be broadcasted
> * - CartesianProduct: for Inner join
> * - BroadcastNestedLoopJoin
> */
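The precedence list quoted above is from the planner comment in Spark's SparkStrategies.scala. One way to see which strategy actually gets picked is to inspect the physical plan with explain(); a minimal spark-shell sketch (data and column names are made up, behavior assumed per Spark 2.x):

```scala
// Assumes a spark-shell session, where `spark` is the SparkSession.
import spark.implicits._

val df1 = Seq((1, "a"), (2, "b")).toDF("id", "v1")
val df2 = Seq((1, "x"), (2, "y")).toDF("id", "v2")

// Equi-join: the planner walks the precedence list quoted above and
// the chosen strategy (BroadcastHashJoin, SortMergeJoin, ...) appears
// in the printed physical plan.
df1.join(df2, Seq("id")).explain()

// No join condition: falls through to CartesianProduct /
// BroadcastNestedLoopJoin, per the same precedence.
df1.join(df2).explain()
```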
>
>
>
> On Jul 5, 2016, at 13:28, Lalitha MV wrote:
>
> It picks sort merge join when spark.sql
maropu
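The fragment above is cut off after "spark.sql"; Spark 2.0 has an internal option, spark.sql.join.preferSortMergeJoin, that steers this choice (treating it as the config meant here is an assumption, since the name is truncated). A hedged sketch of the relevant settings:

```scala
// Sketch, assuming a spark-shell session on Spark 2.0.
// spark.sql.join.preferSortMergeJoin is an internal option and may
// change between versions; false lets the planner consider a
// shuffled hash join when one side is small enough per partition.
spark.conf.set("spark.sql.join.preferSortMergeJoin", "false")

// Take broadcast joins out of the running, so the planner does not
// pick the broadcast path first.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")
```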
>
> On Tue, Jul 5, 2016 at 4:23 AM, Lalitha MV wrote:
>
>> Hi maropu,
>>
>> Thanks for your reply.
>>
>> Would it be possible to write a rule for this, to make it always pick
>> shuffle hash join over other join implementations (i.e. sort merge a
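Regarding writing such a rule: Spark exposes an experimental planner hook, spark.experimental.extraStrategies, that a custom strategy could plug into. The skeleton below only shows the wiring, not a working shuffle-hash rule (class names per Spark 2.0; a real implementation would pattern-match equi-join logical plans and emit the shuffle-hash physical operator):

```scala
// Sketch of the extension point such a rule could use; assumes a
// spark-shell session on Spark 2.0.
import org.apache.spark.sql.Strategy
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.execution.SparkPlan

object ForceShuffleHashJoin extends Strategy {
  def apply(plan: LogicalPlan): Seq[SparkPlan] = plan match {
    // Match Join nodes here and construct a shuffle-hash physical
    // plan; returning Nil defers to the built-in strategies.
    case _ => Nil
  }
}

// Extra strategies are consulted before the built-in ones.
spark.experimental.extraStrategies = Seq(ForceShuffleHashJoin)
```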
On Sat, Jul 2, 2016 at 12:58 AM, Takeshi Yamamuro
wrote:
> Hi,
>
> No, Spark has no hint for the hash join.
>
> // maropu
>
> On Fri, Jul 1, 2016 at 4:56 PM, Lalitha MV wrote:
>
>> Hi,
>>
>> In order to force broadcast hash join, we can set
>> the spark.sql.autoBroadcastJoinThreshold config.
Hi,
In order to force broadcast hash join, we can set
the spark.sql.autoBroadcastJoinThreshold config. Is there a way to enforce
shuffle hash join in Spark SQL?
Thanks,
Lalitha
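For the broadcast side of the question, the two knobs are the size threshold and the explicit broadcast() hint; a minimal spark-shell sketch (data and names are made up):

```scala
// Assumes a spark-shell session, where `spark` is the SparkSession.
import org.apache.spark.sql.functions.broadcast
import spark.implicits._

val large = Seq((1, "a"), (2, "b"), (3, "c")).toDF("id", "v")
val small = Seq((1, "x")).toDF("id", "w")

// Threshold-based: tables below this size (in bytes) are broadcast
// automatically; setting it to -1 disables broadcasting entirely.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", 10L * 1024 * 1024)

// Hint-based: broadcast() forces the small side to be broadcast
// regardless of the threshold; the plan should show BroadcastHashJoin.
large.join(broadcast(small), Seq("id")).explain()
```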