Thanks for the reply, I filed an issue in JIRA:
https://issues.apache.org/jira/browse/SPARK-21819

I submitted the job through the Java API rather than the spark-submit command
line, because we want to run Spark processing as a service.

        // Load the cluster config files explicitly instead of from the classpath
        Configuration hc = new Configuration(false);
        String yarnxml = String.format("%s/%s", ConfigLocation, "yarn-site.xml");
        String corexml = String.format("%s/%s", ConfigLocation, "core-site.xml");
        String hdfsxml = String.format("%s/%s", ConfigLocation, "hdfs-site.xml");
        String hivexml = String.format("%s/%s", ConfigLocation, "hive-site.xml");

        // addResource(String) looks the name up on the classpath; wrap the
        // paths in org.apache.hadoop.fs.Path so they are loaded from disk
        hc.addResource(new Path(yarnxml));
        hc.addResource(new Path(corexml));
        hc.addResource(new Path(hdfsxml));
        hc.addResource(new Path(hivexml));

        // Manually copy all the Hadoop config into the SparkConf:
        // hive.* keys as-is, everything else under the spark.hadoop. prefix
        SparkConf sc = new SparkConf(true);
        hc.forEach(entry -> {
            if (entry.getKey().startsWith("hive")) {
                sc.set(entry.getKey(), entry.getValue());
            } else {
                sc.set("spark.hadoop." + entry.getKey(), entry.getValue());
            }
        });

        // Kerberos login before the SparkSession is created
        UserGroupInformation.setConfiguration(hc);
        UserGroupInformation.loginUserFromKeytab(Principal, Keytab);

        SparkSession sparkSession = SparkSession
                .builder()
                .master("yarn") // client deploy mode; use "local" for testing
                .config(sc)
                .appName(SparkEAZDebug.class.getName())
                .enableHiveSupport()
                .getOrCreate();


Thanks very much.
Keith

From: 周康 [mailto:zhoukang199...@gmail.com]
Sent: 22 August 2017 20:22
To: Sun, Keith <ai...@ebay.com>
Cc: user@spark.apache.org
Subject: Re: A bug in spark or hadoop RPC with kerberos authentication?

You can check out the Hadoop credential classes in Spark's YARN module. During 
spark-submit, it will use the config on the classpath.
I wonder how you reference your own config?
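
For comparison with the programmatic approach above: when the same job is
launched through spark-submit, the cluster-side XML files are normally
discovered via HADOOP_CONF_DIR on the launcher's environment rather than set
in code. A minimal sketch (paths, principal, and class name are placeholders,
not from the original thread):

```shell
# spark-submit picks up yarn-site.xml, core-site.xml, etc. from the
# directory pointed to by HADOOP_CONF_DIR / YARN_CONF_DIR
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf
spark-submit \
  --master yarn \
  --deploy-mode client \
  --principal user@EXAMPLE.COM \
  --keytab /path/to/user.keytab \
  --class com.example.SparkEAZDebug \
  app.jar
```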
