Maybe you could try "--conf spark.sql.statistics.fallBackToHdfs=true".
On 2019/05/11 01:54:27, V0lleyBallJunki3 wrote:
> Hello,
>
> I have set spark.sql.autoBroadcastJoinThreshold=1GB and I am running the
> spark job. However, my application is failing with:
>
>         at sun.reflect.NativeMetho
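For what it's worth, a minimal sketch of passing both settings at submission time (1073741824 bytes = 1 GB; the class and jar names are placeholders):

  spark-submit \
    --conf spark.sql.autoBroadcastJoinThreshold=1073741824 \
    --conf spark.sql.statistics.fallBackToHdfs=true \
    --class com.example.MyJob my-job.jar

With fallBackToHdfs enabled, Spark estimates a table's size from its files on HDFS when no analyzed statistics are available, so the broadcast-threshold check is less likely to pick a table that is actually too large to broadcast.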
Lantao Jin shared an issue with you
Hi all,
Do you think this is a bug?
Should we keep the current behavior?
> Ignoring the default properties file is not a good choice from the
> perspective of
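To make the behavior in question concrete (the file names here are hypothetical): when --properties-file is passed, spark-submit loads only that file and does not read conf/spark-defaults.conf at all:

  # my-app.conf replaces spark-defaults.conf instead of extending it
  spark-submit --properties-file my-app.conf --class com.example.App app.jar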
Lantao Jin shared an issue with you
> Spark-sql do not support for void column datatype of view
> ---------------------------------------------------------
>
> Key: SPARK-20680
> URL: https://issues.apache.org/jira/browse/SPARK-20680
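For illustration, a sketch of the kind of view that hits this (the table and column names are invented): a view column defined as a bare NULL gets Hive's "void" type, which Spark SQL has no mapping for:

  -- created from the Hive CLI
  CREATE VIEW null_view AS SELECT NULL AS n FROM some_table;
  -- querying the view from Spark SQL then fails on the "void" column type
  SELECT * FROM null_view;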
40GB
2016-10-14 14:20 GMT+08:00 Felix Cheung:
> How big is the metrics_moveing_detection_cube table?
>
> On Thu, Oct 13, 2016 at 8:51 PM -0700, "Lantao Jin" wrote:
sqlContext <- sparkRHive.init(sc)
sqlString <- "SELECT
  key_id,
  rtl_week_beg_dt rawdate,
  gmv_plan_rate_amt value
FROM
  metrics_moveing_detection_cube"
df <- sql(sqlString)
rdd <- SparkR:::toRDD(df)
# hangs in case one: take from the RDD
# take(rdd, 3)
# hangs in case two: convert back to a DataFrame
# df1 <- createDataFrame(rdd)
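As an aside, a possible workaround to try (untested here): stay on the SparkDataFrame API instead of going through the private SparkR:::toRDD, which serializes every row through the R workers:

  # the same checks without converting to an RDD
  head(df, 3)
  local_df <- collect(df)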
Hi,
Our Spark is deployed on YARN, and I found lots of spark-assembly jars in
the filecache directories of heavy Spark users (aka
/usercache/username/filecache). As you know, the assembly jar was bigger
than 100 MB before Spark v2, so all of them together take 26 GB (1/4 of
the reserved space) on most of the DataNodes.
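For reference, the size of the NodeManager filecache is governed by settings like these in yarn-site.xml (the values below are only illustrative); entries beyond the target size are removed by the periodic cache cleanup:

  yarn.nodemanager.localizer.cache.target-size-mb = 10240
  yarn.nodemanager.localizer.cache.cleanup.interval-ms = 600000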