[ https://issues.apache.org/jira/browse/HIVE-15767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16085625#comment-16085625 ]

Peter Cseh commented on HIVE-15767:
-----------------------------------

The Spark driver will get the correct tokens from the parent application - they 
are in the local folder created for its container. I'm not sure how they get 
there, but they are there.
The driver will pick them up from the correct container_tokens file using the 
HADOOP_TOKEN_FILE_LOCATION environment variable or something like that. The 
issue is that Hadoop's TokenCache looks for the mapreduce.job.credentials.binary 
property as well; it's not needed here, and that invalid reference causes the 
job to fail.
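
Roughly what that pickup looks like, as a sketch against the Hadoop security 
API (illustrative Java, not the actual driver code; the class name is made up):

{noformat}
// Sketch only: how a YARN-launched process typically picks up its
// delegation tokens, and where the stale property lookup bites.
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

public class TokenPickupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // YARN materializes container_tokens in the container's working
    // directory and exports its path via HADOOP_TOKEN_FILE_LOCATION.
    String tokenFile = System.getenv(UserGroupInformation.HADOOP_TOKEN_FILE_LOCATION);
    if (tokenFile != null) {
      Credentials creds = Credentials.readTokenStorageFile(new File(tokenFile), conf);
      UserGroupInformation.getCurrentUser().addCredentials(creds);
    }

    // The failure mode: if mapreduce.job.credentials.binary still names the
    // launcher container's token file, TokenCache tries to read that path
    // from the new container, where it no longer exists -> IOException.
    System.out.println("mapreduce.job.credentials.binary = "
        + conf.get("mapreduce.job.credentials.binary"));
  }
}
{noformat}

The stack trace in the description below is exactly 
Credentials.readTokenStorageFile() choking on that stale path.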

> Hive On Spark is not working on secure clusters from Oozie
> ----------------------------------------------------------
>
>                 Key: HIVE-15767
>                 URL: https://issues.apache.org/jira/browse/HIVE-15767
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.2.1, 2.1.1
>            Reporter: Peter Cseh
>            Assignee: Peter Cseh
>         Attachments: HIVE-15767-001.patch, HIVE-15767-002.patch
>
>
> When a HiveAction is launched from Oozie with Hive On Spark enabled, we're 
> getting errors:
> {noformat}
> Caused by: java.io.IOException: Exception reading 
> file:/yarn/nm/usercache/yshi/appcache/application_1485271416004_0022/container_1485271416004_0022_01_000002/container_tokens
>         at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:188)
>         at 
> org.apache.hadoop.mapreduce.security.TokenCache.mergeBinaryTokens(TokenCache.java:155)
> {noformat}
> This is caused by passing the {{mapreduce.job.credentials.binary}} property 
> to the Spark configuration in RemoteHiveSparkClient.
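
A minimal sketch of the fix direction, assuming a hypothetical helper around 
the spot in RemoteHiveSparkClient where the Hive configuration is copied for 
Spark (the class and method names here are illustrative, not the actual patch):

{noformat}
// Illustrative only; not the actual HIVE-15767 patch. The idea: strip the
// launcher-scoped credentials path before handing the configuration to Spark.
import org.apache.hadoop.conf.Configuration;

public final class SparkConfSanitizer {  // hypothetical helper
  private SparkConfSanitizer() {}

  public static Configuration withoutBinaryCredentials(Configuration hiveConf) {
    Configuration sparkConf = new Configuration(hiveConf);
    // The token file path is only valid inside the Oozie launcher's container;
    // propagating it makes TokenCache fail in the Spark driver's container.
    sparkConf.unset("mapreduce.job.credentials.binary");
    return sparkConf;
  }
}
{noformat}

Dropping the property should be safe here because the driver already gets its 
tokens through HADOOP_TOKEN_FILE_LOCATION, as noted in the comment above.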



