Are you using IvyVPN, and is that what causes this problem? If the VPN software
silently changes network URLs, you should avoid using it.
Regards.
On Wed, Dec 22, 2021 at 1:48 AM Pralabh Kumar wrote:
> Hi Spark Team
>
> I am building Spark in a VPN, but the unit test case below is failing.
> This is p
16000 joins is never going to work out, though you can do it all at once
and avoid the immediate issue. If they really are the same rows in the same
order, maybe you can read them as lines of text and use zip()
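A minimal Scala sketch of the zip() idea, assuming the two inputs really do have the same rows in the same order; the file paths and tab-separated output are hypothetical placeholders, not anything from the original thread:

// Combine two equally sized, identically ordered text datasets with RDD.zip
// instead of a join. Paths and output format are placeholder assumptions.
val left  = sc.textFile("hdfs:///data/counts_a.txt")   // hypothetical path
val right = sc.textFile("hdfs:///data/counts_b.txt")   // hypothetical path

// Caveat: RDD.zip requires both RDDs to have the same number of partitions
// and the same number of elements per partition; this sketch assumes the
// inputs split identically (same line count and layout).
val combined = left.zip(right).map { case (a, b) => s"$a\t$b" }
combined.saveAsTextFile("hdfs:///data/combined")        // hypothetical output path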
On Tue, Dec 21, 2021, 8:48 AM Andrew Davidson wrote:
> Hi Jun
>
> Thank you for your
You would have to make it available? This doesn't seem like a Spark issue.
On Tue, Dec 21, 2021, 10:48 AM Pralabh Kumar wrote:
> Hi Spark Team
>
> I am building Spark in a VPN, but the unit test case below is failing.
> This points to an Ivy location which cannot be reached within the VPN. Any
Hi Spark Team
I am building Spark in a VPN, but the unit test case below is failing.
It points to an Ivy location which cannot be reached within the VPN. Any
help would be appreciated.
test("SPARK-33084: Add jar support Ivy URI -- default transitive = true") {
*sc *= new SparkContext(new
Spar
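If the default Ivy resolvers (Maven Central and friends) are unreachable inside the VPN, one possible workaround is to point dependency resolution at a repository that is reachable. This is only a sketch: the internal mirror URL and settings-file path are hypothetical, and it assumes the ivy:// addJar path honors spark.jars.ivySettings in your Spark version.

// ivysettings.xml pointing at an internal mirror (URL is a made-up example):
// <ivysettings>
//   <settings defaultResolver="internal"/>
//   <resolvers>
//     <ibiblio name="internal" m2compatible="true"
//              root="https://repo.internal.example.com/maven2"/>
//   </resolvers>
// </ivysettings>

val conf = new SparkConf()
  .setAppName("ivy-uri-vpn-test")
  .setMaster("local[*]")
  .set("spark.jars.ivySettings", "/path/to/ivysettings.xml")  // hypothetical path
val sc = new SparkContext(conf)

// With a reachable resolver, a jar given as an Ivy URI can be added;
// the coordinates below are illustrative only.
sc.addJar("ivy://org.apache.hive:hive-storage-api:2.7.0")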
Hi Jun
Thank you for your reply. My question is: what are the best practices? My for
loop runs over 16000 joins, and I get an out-of-memory exception.
What is the intended use of createOrReplaceTempView if I need to manage the
cache or create a unique name each time?
Kind regards
Andy
On Tue, Dec 21, 2021
Hi
As far as I know, the warning should be caused by creating the same temp view
name: rawCountsSDF.createOrReplaceTempView("rawCounts").
You create a view "rawCounts"; then in the for loop, on the second round, you
create a new view with the name "rawCounts", and Spark 3 would uncache the
previous "rawCounts".
Correct m
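A short Scala sketch of the behavior described above. The sampleDFs collection and the loop are hypothetical stand-ins for the per-sample DataFrames, and spark is assumed to be the active SparkSession; reusing one view name replaces (and uncaches) the previous view on every iteration, while a unique name per iteration keeps earlier cached views alive.

// Problematic pattern: every call replaces the previous "rawCounts" view, and
// Spark 3 uncaches whatever was cached under that name.
for (sampleDF <- sampleDFs) {                    // sampleDFs is hypothetical
  sampleDF.createOrReplaceTempView("rawCounts")
  spark.sql("CACHE TABLE rawCounts")
  // ... joins against rawCounts ...
}

// Alternative: a unique view name per iteration, so earlier views stay cached.
for ((sampleDF, i) <- sampleDFs.zipWithIndex) {
  val viewName = s"rawCounts_$i"
  sampleDF.createOrReplaceTempView(viewName)
  spark.sql(s"CACHE TABLE $viewName")
}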