Feel free to correct me if I am wrong, but I believe this isn't a feature yet:
"create a new Spark context within a single JVM process (driver)"
A few questions for you:
1) Is Kerberos set up correctly for you (the user)? (See the sketch below.)
2) Could you please add the command/code you are executing?
Checking to see ...
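On 1), a sketch of how Kerberos credentials can be supplied explicitly on secure YARN so Spark can obtain (and later re-obtain) delegation tokens. spark.yarn.principal and spark.yarn.keytab are available since Spark 1.5; the values below are placeholders:

import org.apache.spark.{SparkConf, SparkContext}

// Placeholder principal and keytab path; adjust to your environment.
val conf = new SparkConf()
  .setMaster("yarn-client")
  .setAppName("secure-yarn-check")
  .set("spark.yarn.principal", "user@EXAMPLE.COM")
  .set("spark.yarn.keytab", "/path/to/user.keytab")
val sc = new SparkContext(conf)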
> ... (u'spark.serializer.objectStreamReset', u'100'),
> (u'spark.history.fs.logDirectory',
> u'hdfs://xxx-001:9000/user/hadoop/sparklogs'),
> (u'spark.yarn.isPython', u'true'),
> (u'spark.submit.deployMode', u'client'),
> (u'spark.ssl.enabled', u'true'),
> (u'spark.authenticate', u'true'),
> (u'spark.ssl.trustStore', u'xxx.truststore')]
>
> I am not really familiar with "spark.yarn.credentials.file" and had
> thought it was created automatically a...
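For what it's worth, spark.yarn.credentials.file is normally populated by Spark itself (not by the user) when a principal/keytab are configured. One way to see what Spark filled in is to dump the YARN-related settings from the live context (sketch; assumes a running context named sc):

// Print every spark.yarn.* entry Spark put into the conf, including
// spark.yarn.credentials.file if Spark created one.
sc.getConf.getAll
  .filter { case (k, _) => k.startsWith("spark.yarn") }
  .foreach { case (k, v) => println(s"$k=$v") }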
From: Ted Yu
To: Michael V Le/Watson/IBM@IBMUS
Cc: user
Date: 11/11/2015 01:55 PM
Subject: Re: Creating new Spark context when running in Secure YARN fails
Looks like the delegation token should be renewed.
Mind trying the following?

Thanks

diff --git a/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala b/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala
index 20771f6..e3c4a5a 100644
--- a/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala
+++ b/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala
...
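The body of the hunk did not survive in this archive. Purely as an illustration of what "renewing the delegation token" usually involves on the client side (this is not the actual patch), a relogin-from-keytab sketch using Hadoop's UserGroupInformation API:

import org.apache.hadoop.security.UserGroupInformation

// General shape of a client-side credential refresh: re-login from the
// keytab so a fresh TGT is available when the new application asks for
// delegation tokens.
if (UserGroupInformation.isSecurityEnabled) {
  val ugi = UserGroupInformation.getCurrentUser
  if (ugi.isFromKeytab) {
    // No-op if the existing TGT is still fresh enough.
    ugi.checkTGTAndReloginFromKeytab()
  }
}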