mahesh kumar behera created HIVE-26098:
------------------------------------------

             Summary: Duplicate path/Jar in hive.aux.jars.path or hive.reloadable.aux.jars.path causing IllegalArgumentException
                 Key: HIVE-26098
                 URL: https://issues.apache.org/jira/browse/HIVE-26098
             Project: Hive
          Issue Type: Bug
          Components: Hive, HiveServer2
            Reporter: mahesh kumar behera
            Assignee: mahesh kumar behera


hive.aux.jars.path and hive.reloadable.aux.jars.path are used to provide auxiliary jars that are needed during query processing. These jars are copied to the Tez temp path so that the Tez jobs have access to them while processing the job. There is a duplicate check to avoid copying the same jar multiple times, but it assumes the jar is on the local file system. In reality the jar path can point to any file system, so the duplicate check fails with an IllegalArgumentException when the source path is not local.
{code:java}
ERROR : Failed to execute tez graph.
java.lang.IllegalArgumentException: Wrong FS: hdfs://localhost:53877/tmp/test_jar/identity_udf.jar, expected: file:///
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:781) ~[hadoop-common-3.1.0.jar:?]
    at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:86) ~[hadoop-common-3.1.0.jar:?]
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:636) ~[hadoop-common-3.1.0.jar:?]
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930) ~[hadoop-common-3.1.0.jar:?]
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) ~[hadoop-common-3.1.0.jar:?]
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454) ~[hadoop-common-3.1.0.jar:?]
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.checkPreExisting(DagUtils.java:1392) ~[hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1411) ~[hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:1295) ~[hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:1177) ~[hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.ensureLocalResources(TezSessionState.java:636) ~[hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:283) ~[hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolSession.openInternal(TezSessionPoolSession.java:124) ~[hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:241) ~[hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.exec.tez.TezTask.ensureSessionHasResources(TezTask.java:448) ~[hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:215) [hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) [hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105) [hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:361) [hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:334) [hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:245) [hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:106) [hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:348) [hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:204) [hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:153) [hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:148) [hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:185) [hive-exec-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:233) [hive-service-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hive.service.cli.operation.SQLOperation.access$500(SQLOperation.java:88) [hive-service-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:336) [hive-service-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_282]
    at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_282]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682) [hadoop-common-3.1.0.jar:?]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:356) [hive-service-4.0.0-alpha-1.jar:4.0.0-alpha-1]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_282]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_282]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_282]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_282]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282]
{code}
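A minimal sketch of the failure pattern and a possible direction for a fix (illustrative only, not the actual DagUtils code; the class and method names below are hypothetical): the existence check should resolve the FileSystem from the jar's own URI instead of always going through the local FileSystem, which is bound to file:/// and rejects hdfs:// paths.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch, not the Hive implementation.
public class AuxJarDedupSketch {

  // Failing pattern (sketch): the duplicate/existence check always uses the
  // local FileSystem, so an hdfs:// aux-jar path hits FileSystem.checkPath()
  // and throws IllegalArgumentException "Wrong FS: ... expected: file:///".
  static boolean existsAssumingLocalFs(Configuration conf, String jarPath) throws Exception {
    FileSystem localFs = FileSystem.getLocal(conf); // bound to file:///
    return localFs.exists(new Path(jarPath));       // fails for hdfs:// paths
  }

  // Possible fix (sketch): resolve the FileSystem from the path itself, so
  // local and remote (hdfs://, s3a://, ...) aux jars are both handled.
  static boolean existsUsingSourceFs(Configuration conf, String jarPath) throws Exception {
    Path src = new Path(jarPath);
    FileSystem srcFs = src.getFileSystem(conf);     // FS chosen by the path's scheme
    return srcFs.exists(src);
  }
}
{code}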
 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
