I could be wrong, but I'm guessing that it uses the UDF as the build 
side<https://en.wikipedia.org/wiki/Hash_join#Classic_hash_join> of a hash join. 
So the hash table lives inside the UDF, and the UDF is called on each probe-side 
row to perform the join. There are limitations to this approach, of course; you 
can't express all joins this way.
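
For concreteness, here is a minimal sketch of what I think that looks like in 
Scala. The table, column, and variable names are made up; the collected map 
plays the role of the hash join's build side, and the UDF is the probe.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, udf}

object UdfJoinSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("udf-join-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Made-up data: a larger "probe" side and a small "build" side.
    val facts = Seq((1, "a"), (2, "b"), (3, "c")).toDF("key", "value")
    val dim   = Seq((1, "one"), (2, "two")).toDF("key", "name")

    // Build side of the "join": collect the small table into a map on the
    // driver and broadcast it, so each executor holds one read-only copy.
    val buildSide: Map[Int, String] = dim.as[(Int, String)].collect().toMap
    val buildSideBc = spark.sparkContext.broadcast(buildSide)

    // Probe side: the UDF looks up each key in the broadcast map, which is
    // effectively the probe phase of a hash join.
    val lookup = udf((k: Int) => buildSideBc.value.get(k))

    // Roughly equivalent to facts LEFT JOIN dim ON facts.key = dim.key,
    // assuming dim has at most one row per key.
    facts.withColumn("name", lookup(col("key"))).show()

    spark.stop()
  }
}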

Best,

Jianneng
________________________________
From: yeikel valdes <em...@yeikel.com>
Sent: Tuesday, February 25, 2020 5:48 AM
To: Jianneng Li <jianneng...@workday.com>
Cc: user@spark.apache.org <user@spark.apache.org>; genie_...@outlook.com 
<genie_...@outlook.com>
Subject: Re: [Spark SQL] Memory problems with packing too many joins into the 
same WholeStageCodegen

Can you please explain what you mean by that? How do you use a UDF to replace 
a join? Thanks



---- On Mon, 24 Feb 2020 22:06:40 -0500 jianneng...@workday.com wrote ----

Thanks, Genie. Unfortunately, the joins I'm doing in this case are large, so a 
UDF likely won't work.

Jianneng
________________________________
From: Liu Genie <genie_...@outlook.com>
Sent: Monday, February 24, 2020 6:39 PM
To: user@spark.apache.org <user@spark.apache.org>
Subject: Re: [Spark SQL] Memory problems with packing too many joins into the 
same WholeStageCodegen

I have run into the too-many-joins problem before. Since the joined dataframe 
was small enough, I converted the join to a UDF operation, which was much 
faster and didn't cause out-of-memory problems.

On Feb 25, 2020, at 10:15, Jianneng Li <jianneng...@workday.com> wrote:

Hello everyone,

WholeStageCodegen generates code that appends 
results<https://github.com/apache/spark/blob/v3.0.0-preview2/sql/core/src/main/scala/org/apache/spark/sql/execution/WholeStageCodegenExec.scala#L771>
 into a BufferedRowIterator, which keeps the results in an in-memory linked 
list<https://github.com/apache/spark/blob/v3.0.0-preview2/sql/core/src/main/java/org/apache/spark/sql/execution/BufferedRowIterator.java#L34>.
 Long story short, this is a problem when multiple joins (i.e. 
BroadcastHashJoin) that can blow up in output size get planned into the same 
WholeStageCodegen: results keep accumulating in the linked list and are not 
consumed fast enough, eventually causing the JVM to run out of memory.
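
For reference, a rough sketch of the kind of query I mean (table and column 
names are made up):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

val spark = SparkSession.builder().getOrCreate()

// Hypothetical tables, just to illustrate the query shape.
val bigDf = spark.table("big_fact")
val dimA  = spark.table("dim_a")
val dimB  = spark.table("dim_b")
val dimC  = spark.table("dim_c")

// Several broadcast joins chained together typically get planned into a
// single WholeStageCodegen stage, so their intermediate rows all accumulate
// in that stage's BufferedRowIterator before being consumed.
val joined = bigDf
  .join(broadcast(dimA), Seq("a_id"))   // BroadcastHashJoin #1
  .join(broadcast(dimB), Seq("b_id"))   // BroadcastHashJoin #2
  .join(broadcast(dimC), Seq("c_id"))   // BroadcastHashJoin #3

joined.explain()  // the joins show up under one WholeStageCodegen node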

Does anyone else have experience with this problem? Some obvious solutions 
include making BufferedRowIterator spill the linked list, or making it bounded, 
but I'd imagine that this would have been done a long time ago if it were 
necessary.

Thanks,

Jianneng

