> Thanks
>
> Jerry
>
>
>
> *From:* Asaf Lahav [mailto:asaf.la...@gmail.com]
> *Sent:* Thursday, April 10, 2014 8:15 PM
> *To:* user@spark.apache.org
> *Subject:* Executing spark jobs with predefined Hadoop user
Hi,
We are using Spark with data files on HDFS. The files are stored for a
predefined Hadoop user ("hdfs").
The folder is permitted with:
· read, write, and execute permission for the hdfs user
· read and execute permission for users in the group
avito wrote
> Thanks Adam for the quick answer. You are absolutely right.
> We are indeed using the entire HDFS URI. Just for the post I have removed
> the name node details.
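The permission scheme described earlier in the thread (full access for the hdfs user, read/execute for the group, restricted access for others) corresponds to octal mode 750. A minimal sketch of deriving and applying that mode — the `/user/hdfs/data` path is hypothetical, and the `hdfs dfs -chmod` call is shown only as a comment since it needs a live cluster:

```shell
# Derive the octal mode digit by digit from the permission bits in the thread.
user=$((4 + 2 + 1))   # hdfs user: read (4) + write (2) + execute (1) = 7
group=$((4 + 1))      # group members: read (4) + execute (1) = 5
other=0               # everyone else: no access assumed here
mode="${user}${group}${other}"
echo "$mode"          # prints 750

# On a real cluster the mode would then be applied with, for example:
#   hdfs dfs -chmod 750 /user/hdfs/data
```

Any user other than "hdfs" that is not in the group would then be refused writes, which matches the mkdirs failure quoted below.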
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> at $Proxy7.mkdirs(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1426)
> ... 17 more
>
> Apparently the Spark context is initiated with the user configured on the
> local machine.
>
> Is there a way to start the Spark context with a user other than the one
> configured on the local machine?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Executing-spark-jobs-with-predefined-Hadoop-user-tp4059p4061.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
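One common way to address the question above, assuming the cluster uses Hadoop's simple (non-Kerberos) authentication: set the `HADOOP_USER_NAME` environment variable before the Spark context is created, so HDFS operations run as the predefined user rather than the local OS user. A minimal sketch — the master URL, namenode address, and data path are illustrative placeholders:

```python
import os

# Hadoop's simple authentication honors HADOOP_USER_NAME, so setting it
# before the Spark context starts makes HDFS calls run as that user.
os.environ["HADOOP_USER_NAME"] = "hdfs"

# The context would then be created as usual, e.g. (illustrative, not run here):
#   from pyspark import SparkContext
#   sc = SparkContext("spark://master:7077", "app")
#   rdd = sc.textFile("hdfs://namenode:8020/user/hdfs/data")

print(os.environ["HADOOP_USER_NAME"])  # prints hdfs
```

On a secured (Kerberos) cluster this variable is ignored; there the job would need a proper keytab login (e.g. via Hadoop's `UserGroupInformation`) instead.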