I worked around this by putting both guava-12.0.jar and guava-18.0.jar into storm/lib, but 
I don't think that is a good solution.
Also, why does having both jars on the classpath not cause a conflict?
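
A cleaner fix might be to shade and relocate Guava inside the topology jar, so the HBase client bundled in stormjar.jar calls its own relocated copy of Guava while Storm keeps its Guava 12 in storm/lib. A minimal sketch for the pom (the plugin version and the shadedPattern prefix are my own placeholders, not something from this thread):

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.4.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <relocations>
                            <!-- Rewrites every com.google.common reference in the
                                 shaded jar to the relocated package, so the Guava
                                 bundled here no longer collides with the Guava 12
                                 that Storm ships in storm/lib. -->
                            <relocation>
                                <pattern>com.google.common</pattern>
                                <shadedPattern>shaded.com.google.common</shadedPattern>
                            </relocation>
                        </relocations>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

After mvn package, the shaded stormjar.jar carries its Guava under shaded.com.google.common and the HBase client classes are rewritten to call it there, so the worker's classpath order should stop mattering for Guava.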


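To check which Guava version Maven actually resolves for the topology jar, mvn dependency:tree -Dincludes=com.google.guava shows where each version comes from. If the resolved version is wrong, pinning it in dependencyManagement is another option, though it only controls what gets bundled into stormjar.jar, not what Storm already ships in storm/lib (18.0 below mirrors the version the original mail says HBase wants; verify it against your own tree output):

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>18.0</version>
        </dependency>
    </dependencies>
</dependencyManagement>
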
====================
Thanks,
lujinhong

> On Apr 26, 2016, at 09:22, jinhong lu <[email protected]> wrote:
> 
> I use storm-0.10 to put data into hbase-1.0.1. Storm uses guava-12.0 while hbase 
> uses guava-18.0; both get loaded onto the classpath, and this makes my job fail.
> 
> How can I make sure storm and hbase each use the correct version of the jar?
> 
> Here is my pom.xml:
> 
> <dependencies>
>     <dependency>
>         <groupId>org.apache.hbase</groupId>
>         <artifactId>hbase-client</artifactId>
>         <version>1.0.0-cdh5.4.5</version>
>     </dependency>
>     <dependency>
>         <groupId>org.apache.hadoop</groupId>
>         <artifactId>hadoop-hdfs</artifactId>
>         <version>2.3.0</version>
>     </dependency>
> 
>     <dependency>
>         <groupId>org.apache.hadoop</groupId>
>         <artifactId>hadoop-common</artifactId>
>         <version>2.3.0</version>
>     </dependency>
> 
>     <dependency>
>         <groupId>org.apache.hadoop</groupId>
>         <artifactId>hadoop-client</artifactId>
>         <version>2.3.0</version>
>     </dependency>
> 
>     <dependency>
>         <groupId>org.apache.storm</groupId>
>         <artifactId>storm-core</artifactId>
>         <version>0.10.0</version>
>     </dependency>
> 
>     <dependency>
>         <groupId>org.apache.storm</groupId>
>         <artifactId>storm-kafka</artifactId>
>         <version>0.10.0</version>
>     </dependency>
>     <dependency>
>         <groupId>org.apache.kafka</groupId>
>         <artifactId>kafka_2.10</artifactId>
>         <version>0.8.2.1</version>
>         <exclusions>
>             <exclusion>
>                 <groupId>org.apache.zookeeper</groupId>
>                 <artifactId>zookeeper</artifactId>
>             </exclusion>
>             <exclusion>
>                 <groupId>log4j</groupId>
>                 <artifactId>log4j</artifactId>
>             </exclusion>
>         </exclusions>
>     </dependency>
> 
>     <dependency>
>         <groupId>org.json</groupId>
>         <artifactId>org.json</artifactId>
>         <version>2.0</version>
>     </dependency>
> </dependencies>
> And the exception:
> 
> java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator
> at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:434) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
> at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:60) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
> at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1122) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
> at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1109) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
> at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1261) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
> at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1125) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
> at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:369) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
> at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:320) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
> at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:206) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
> at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
> at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1513) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
> at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1107) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
> at com.lujinhong.demo.storm.kinit.stormkinitdemo.HBaseHelper.put(HBaseHelper.java:182) ~[stormjar.jar:?]
> at com.lujinhong.demo.storm.kinit.stormkinitdemo.HBaseHelper.put(HBaseHelper.java:175) ~[stormjar.jar:?]
> at com.lujinhong.demo.storm.kinit.stormkinitdemo.PrepaidFunction.execute(PrepaidFunction.java:79) ~[stormjar.jar:?]
> at storm.trident.planner.processor.EachProcessor.execute(EachProcessor.java:65) ~[storm-core-0.10.0.jar:0.10.0]
> at storm.trident.planner.SubtopologyBolt$InitialReceiver.receive(SubtopologyBolt.java:206) ~[storm-core-0.10.0.jar:0.10.0]
> at storm.trident.planner.SubtopologyBolt.execute(SubtopologyBolt.java:146) ~[storm-core-0.10.0.jar:0.10.0]
> at storm.trident.topology.TridentBoltExecutor.execute(TridentBoltExecutor.java:370) ~[storm-core-0.10.0.jar:0.10.0]
> at backtype.storm.daemon.executor$fn__5694$tuple_action_fn__5696.invoke(executor.clj:690) ~[storm-core-0.10.0.jar:0.10.0]
> at backtype.storm.daemon.executor$mk_task_receiver$fn__5615.invoke(executor.clj:436) ~[storm-core-0.10.0.jar:0.10.0]
> at backtype.storm.disruptor$clojure_handler$reify__5189.onEvent(disruptor.clj:58) ~[storm-core-0.10.0.jar:0.10.0]
> at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:132) ~[storm-core-0.10.0.jar:0.10.0]
> at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:106) ~[storm-core-0.10.0.jar:0.10.0]
> at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) ~[storm-core-0.10.0.jar:0.10.0]
> at backtype.storm.daemon.executor$fn__5694$fn__5707$fn__5758.invoke(executor.clj:819) ~[storm-core-0.10.0.jar:0.10.0]
> at backtype.storm.util$async_loop$fn__545.invoke(util.clj:479) [storm-core-0.10.0.jar:0.10.0]
> at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
> at java.lang.Thread.run(Thread.java:745) [?:1.7.0_67]
> 
> ========================
> Thanks,
> lujinhong
> 
