I am not sure whether it works, but try this: put your ADD JAR commands into a file and invoke hive with the -i <file> option, or put the same ADD JAR commands into your $HOME/.hiverc file and start hive.
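Roughly like this (untested on my end; the init-file path is just an example, and the jar paths are the ones from your "list jars;" output further down in this thread):

    # write the ADD JAR commands into an init file (the file name is only an example)
    cat > /tmp/hive-init.q <<'EOF'
    ADD JAR /nfs_home/common/userlibs/google-collections-1.0.jar;
    ADD JAR /nfs_home/common/userlibs/elephant-bird-hive-3.0.7.jar;
    ADD JAR /nfs_home/common/userlibs/protobuf-java-2.3.0.jar;
    ADD JAR /nfs_home/common/userlibs/elephant-bird-core-3.0.7.jar;
    EOF

    # start the CLI with the init file so the ADD JAR statements run first
    hive -i /tmp/hive-init.q

    # or append the same ADD JAR lines to $HOME/.hiverc, which the CLI reads at startup
    cat /tmp/hive-init.q >> $HOME/.hiverc
    hive

Both routes simply make sure the ADD JAR statements run before any of your own statements in the session.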
-Ramki

On Fri, Mar 8, 2013 at 11:55 PM, Edward Capriolo <edlinuxg...@gmail.com> wrote:

> Essentially anything that is part of the InputFormat needs to be in
> auxlib/auxpath. Anything that is part of a UDF can be added with 'add jar'.
>
>
> On Fri, Mar 8, 2013 at 1:01 PM, Dean Wampler <dean.wamp...@thinkbiganalytics.com> wrote:
>
>> --auxpath adds more jars to Hive's classpath before invoking Hive. ADD
>> JARS copies jars around the cluster and adds them to the task classpath, so
>> the jars you add aren't visible to Hive itself. Annoying, but...
>>
>> On Fri, Mar 8, 2013 at 11:53 AM, java8964 java8964 <java8...@hotmail.com> wrote:
>>
>>> This is in Hive 0.9.0:
>>>
>>> hive> list jars;
>>> /nfs_home/common/userlibs/google-collections-1.0.jar
>>> /nfs_home/common/userlibs/elephant-bird-hive-3.0.7.jar
>>> /nfs_home/common/userlibs/protobuf-java-2.3.0.jar
>>> /nfs_home/common/userlibs/elephant-bird-core-3.0.7.jar
>>> file:/usr/lib/hive/lib/hive-builtins-0.9.0-cdh4.1.2.jar
>>> hive> desc table;
>>> java.lang.NoClassDefFoundError: com/twitter/elephantbird/mapreduce/io/ProtobufConverter
>>>     at com.twitter.elephantbird.hive.serde.ProtobufDeserializer.initialize(ProtobufDeserializer.java:45)
>>>     at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:203)
>>>     at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:260)
>>>     at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:253)
>>>     at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:490)
>>>     at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:162)
>>>     at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:930)
>>>     at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:844)
>>>     at org.apache.hadoop.hive.ql.exec.DDLTask.describeTable(DDLTask.java:2545)
>>>     at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:309)
>>>     at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
>>>     at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
>>>     at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1331)
>>>     at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1117)
>>>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:950)
>>>     at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
>>>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215)
>>>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
>>>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:744)
>>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:607)
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
>>> Caused by: java.lang.ClassNotFoundException: com.twitter.elephantbird.mapreduce.io.ProtobufConverter
>>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>     at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>>>     at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>>>     at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
>>>     ... 25 more
>>> FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.DDLTask
>>> hive> exit;
>>> [y130zhan@daca2 userlibs]$ jar tvf /nfs_home/common/userlibs/elephant-bird-core-3.0.7.jar | grep ProtobufConverter
>>>  4825 Mon Mar 04 16:50:46 UTC 2013 com/twitter/elephantbird/mapreduce/io/ProtobufConverter.class
>>>   732 Mon Mar 04 16:50:46 UTC 2013 com/twitter/elephantbird/mapreduce/io/ProtobufConverter$1.class
>>>
>>> ------------------------------
>>> From: vkavul...@outlook.com
>>> To: user@hive.apache.org
>>> Subject: RE: difference between add jar in hive session and hive --auxpath
>>> Date: Thu, 7 Mar 2013 16:44:41 -0800
>>>
>>> If properly done, "add jar <jar-file>" should work the same as passing
>>> the jar with --auxpath. Can you run the "list jars;" command from the CLI or
>>> Hue and check whether you see the jar file?
>>>
>>> ------------------------------
>>> From: java8...@hotmail.com
>>> To: user@hive.apache.org
>>> Subject: difference between add jar in hive session and hive --auxpath
>>> Date: Thu, 7 Mar 2013 17:47:26 -0500
>>>
>>> Hi,
>>>
>>> I have a Hive table which uses the jar files provided by elephant-bird, a
>>> framework that integrates LZO and Google protobuf data with Hadoop/Hive.
>>>
>>> If I start hive like this:
>>>
>>> hive --auxpath path_to_jars
>>>
>>> querying my table works fine, but if I use "add jar" after I start the hive
>>> session, I get a ClassNotFoundException for the classes in those jars at
>>> query runtime.
>>>
>>> My questions are:
>>>
>>> 1) What is the difference between hive --auxpath and "add jar" in the hive session?
>>> 2) This problem makes it hard to access my table in Hue, as it only supports
>>> "add jar" but not the --auxpath option. Any suggestions?
>>>
>>> Thanks
>>>
>>> Yong
>>
>> --
>> *Dean Wampler, Ph.D.*
>> thinkbiganalytics.com
>> +1-312-339-1330
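For reference, the two approaches compared in this thread look roughly like this (a sketch only; the jar paths are copied from the "list jars;" output above, and I'm assuming a comma-separated list works for --auxpath):

    # --auxpath: jars go onto Hive's own classpath before the CLI starts,
    # so the SerDe/InputFormat classes are visible to the CLI itself
    hive --auxpath /nfs_home/common/userlibs/google-collections-1.0.jar,/nfs_home/common/userlibs/elephant-bird-hive-3.0.7.jar,/nfs_home/common/userlibs/protobuf-java-2.3.0.jar,/nfs_home/common/userlibs/elephant-bird-core-3.0.7.jar

    # add jar: issued inside an already-running session; the jars are shipped
    # around the cluster and added to the task classpath
    hive
    hive> add jar /nfs_home/common/userlibs/elephant-bird-core-3.0.7.jar;
    hive> add jar /nfs_home/common/userlibs/elephant-bird-hive-3.0.7.jar;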