Can you please explain what you mean by that? How do you use a UDF to replace
a join? Thanks




---- On Mon, 24 Feb 2020 22:06:40 -0500 jianneng...@workday.com wrote ----


Thanks Genie. Unfortunately, the joins I'm doing in this case are large, so a
UDF likely won't work.


Jianneng
From: Liu Genie <genie_...@outlook.com>
Sent: Monday, February 24, 2020 6:39 PM
To: user@spark.apache.org <user@spark.apache.org>
Subject: Re: [Spark SQL] Memory problems with packing too many joins into the 
same WholeStageCodegen
 
I have run into the too-many-joins problem before. Since the joined dataframe
was small enough, I converted the join into a UDF operation, which was much
faster and didn't cause an out-of-memory problem.
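
To make the idea concrete, here is a minimal sketch of what "convert a join to
a UDF" can look like, assuming the small side fits comfortably in driver and
executor memory. The facts/dims dataframes and column names below are
hypothetical, not from the original query:

import org.apache.spark.sql.functions.{col, udf}

// Hypothetical example: replace `facts JOIN dims ON facts.dim_id = dims.id`
// with a broadcast hash-map lookup inside a UDF.
// Collect the small side to the driver as a plain Map.
val dimMap: Map[Long, String] = dims
  .select("id", "name")
  .collect()
  .map(r => r.getLong(0) -> r.getString(1))
  .toMap

// Broadcast the map once so every executor reuses the same copy.
val dimMapBc = spark.sparkContext.broadcast(dimMap)

// The UDF performs the "join" as a per-row map lookup; a miss yields None,
// which Spark turns into null, mimicking a left outer join.
val lookupName = udf((id: Long) => dimMapBc.value.get(id))

val joined = facts.withColumn("dim_name", lookupName(col("dim_id")))

This avoids the join operator entirely, at the cost of collecting the small
table to the driver first.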



On Feb 25, 2020, at 10:15, Jianneng Li <jianneng...@workday.com> wrote:


Hello everyone,


WholeStageCodegen generates code that appends results into a
BufferedRowIterator, which keeps the results in an in-memory linked list. Long
story short, this becomes a problem when multiple joins (e.g. BroadcastHashJoin)
whose outputs can blow up are planned into the same WholeStageCodegen stage:
results keep accumulating in the linked list and are not consumed fast enough,
eventually causing the JVM to run out of memory.
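
For concreteness, here is a sketch of the kind of query that can trigger this;
the table names are made up. Each broadcast join can fan out rows, and all of
them compile into a single WholeStageCodegen stage, so intermediate rows pile
up in that stage's BufferedRowIterator:

import org.apache.spark.sql.functions.broadcast

// Hypothetical repro: several row-multiplying broadcast joins in a row,
// all planned into one WholeStageCodegen stage.
val blownUp = bigTable
  .join(broadcast(small1), Seq("k1"))  // BroadcastHashJoin #1
  .join(broadcast(small2), Seq("k2"))  // BroadcastHashJoin #2
  .join(broadcast(small3), Seq("k3"))  // BroadcastHashJoin #3

// explain() shows all three joins inside the same WholeStageCodegen block.
blownUp.explain()

As a diagnostic, setting spark.sql.codegen.wholeStage=false falls back to the
iterator-based operators, which should relieve the memory pressure at some
performance cost.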


Does anyone else have experience with this problem? Some obvious solutions
include making BufferedRowIterator spill the linked list, or making it bounded,
but I'd imagine this would have been done a long time ago if it were necessary.


Thanks,


Jianneng
