Hi,
I ran into an issue running a Spark SQL query and would be happy for some
advice.
I'm trying to run a query on a very big data set (around 1.5 TB), and it
fails on every attempt. A template of the query is below:
insert overwrite table partition(part)
select /*+ BROADCAST(c) */
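For context, a minimal sketch of what that query shape might look like in full; the table and column names below are hypothetical placeholders, not from the original report:

```sql
-- Hypothetical sketch: overwrite a partitioned table from a large fact
-- table joined to a small dimension table that is broadcast to executors.
INSERT OVERWRITE TABLE target_table PARTITION (part)
SELECT /*+ BROADCAST(c) */
       f.id,
       f.value,
       c.category,
       f.part
FROM   big_fact_table f
JOIN   small_dim_table c
  ON   f.category_id = c.id;
```

The BROADCAST hint asks Spark to ship the smaller table to every executor instead of shuffling both join sides; if the broadcast side is larger than the available driver or executor memory, the job can fail, which is a common failure mode at this data scale.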
Great work!
On Sun, Aug 25, 2019 at 6:03 AM Xiao Li wrote:
> Thank you for your contributions! This is a great feature for Spark
> 3.0! We finally achieve it!
>
> Xiao
>
> On Sat, Aug 24, 2019 at 12:18 PM Felix Cheung
> wrote:
>
>> That’s great!
>>
That's awesome!
Thanks to everyone who made this possible :cheers:
Hichame
From: cloud0...@gmail.com
Sent: August 25, 2019 10:43 PM
To: lix...@databricks.com
Cc: felixcheun...@hotmail.com; ravishankar.n...@gmail.com;
dongjoon.h...@gmail.com; d...@spark.apache.org; user@spark.apache.org
Subj