hi all,
my Spark version is 1.2, and I use saveAsNewAPIHadoopFile in my job, but after executing many times it occasionally gives me the following error. I think we may be missing some operation, like the SparkHadoopUtil.get.addCredentials(hadoopConf) call that saveAsHadoopDataset(conf: JobConf) does (SPARK-120
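A sketch of a caller-side workaround, assuming a secured (Kerberized) YARN cluster and that the missing piece really is the delegation-token merge that saveAsHadoopDataset already performs; the helper name and the Text/TextOutputFormat types are illustrative, not from the original report:

import org.apache.hadoop.io.Text
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._ // pair-RDD implicits needed on Spark 1.2
import org.apache.spark.deploy.SparkHadoopUtil
import org.apache.spark.rdd.RDD

object SaveWithCredentials {
  // Merge the current user's delegation tokens into the conf before the
  // save, the way saveAsHadoopDataset(conf: JobConf) does internally.
  def save(sc: SparkContext, rdd: RDD[(String, String)], path: String): Unit = {
    val hadoopConf = new JobConf(sc.hadoopConfiguration)
    SparkHadoopUtil.get.addCredentials(hadoopConf) // no-op locally; adds UGI tokens on YARN
    rdd.saveAsNewAPIHadoopFile(path, classOf[Text], classOf[Text],
      classOf[TextOutputFormat[Text, Text]], hadoopConf)
  }
}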
hi, when I run a query in Spark SQL it gives me the following error. what possible reasons can cause this problem?
java.io.EOFException
        at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:148)
        at org.apache.spark.sql.hbase.HBasePartitioner$$an
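Not a confirmed diagnosis for this (truncated) trace, but a common first check for a Kryo EOFException is the serializer configuration: register the classes actually being shuffled and raise the Kryo buffer limits. A minimal sketch with the Spark 1.x property names (SomeShuffledClass is a hypothetical stand-in for your real type):

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical placeholder for whatever type is really serialized in the query.
case class SomeShuffledClass(id: Int, payload: String)

object KryoCheck {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("kryo-check")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .set("spark.kryoserializer.buffer.mb", "64")      // initial buffer (Spark 1.x name)
      .set("spark.kryoserializer.buffer.max.mb", "256") // hard upper limit (Spark 1.x name)
      // Registration surfaces class mismatches explicitly instead of as an EOF.
      .registerKryoClasses(Array(classOf[SomeShuffledClass]))
    val sc = new SparkContext(conf)
    // ... re-run the failing query here ...
    sc.stop()
  }
}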
Hi, all
When I run an app with this cmd: ./bin/spark-sql --master yarn-client --num-executors 2 --executor-cores 3, I noticed that the YARN resource manager UI shows `vcores used` in the cluster metrics as 3. It seems `vcores used` shows the wrong number (shouldn't it be 7?), or am I missing something?
Thanks
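One possible explanation, assuming the default CapacityScheduler setup (an assumption, since the scheduler config isn't shown): with the DefaultResourceCalculator, YARN accounts for memory only, so every container is reported as 1 vcore regardless of --executor-cores. Two executor containers plus one ApplicationMaster container then show as 3, while the per-request arithmetic would give 2 * 3 + 1 = 7. Switching to the DominantResourceCalculator in capacity-scheduler.xml makes the UI count CPU too:

<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>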
Hi, all
I have a (maybe clumsy) question about the number of recovered executors in yarn-client mode. My situation is as follows:
We have a 1 (resource manager) + 3 (node manager) cluster; an app is running with one driver on the resource manager node and 12 executors across all the node managers,
and there are