Hi all,

I'm encountering a strange issue when using Spark 1.0 to access HDFS with Kerberos.
I have just one Spark test node, and HADOOP_CONF_DIR is set to the location 
containing the HDFS configuration files (hdfs-site.xml and core-site.xml).
When I use spark-shell in local mode, the access to HDFS succeeds. 
However, if I use spark-shell connected to the standalone cluster (I 
configured Spark in standalone cluster mode with only one node), the access 
to HDFS fails with the following error: "Can't get Master Kerberos 
principal for use as renewer".
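For reference, the two invocations look roughly like this (the paths, host names, and master URL below are placeholders, not my exact values):

```shell
# HADOOP_CONF_DIR points at the directory holding hdfs-site.xml and
# core-site.xml (placeholder path)
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Obtain a Kerberos ticket before starting the shell
kinit myuser@EXAMPLE.COM

# Local mode: reading from HDFS works
spark-shell --master local
#   scala> sc.textFile("hdfs://namenode:8020/some/path").count()

# Standalone mode against the one-node cluster: the same read fails with
# "Can't get Master Kerberos principal for use as renewer"
spark-shell --master spark://mynode:7077
```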

Does anyone have any ideas on this?
Thanks a lot.

Regards,
Xiaowei
