Flavio is right: Flink should not expose Guava at all. Make sure you build
it following this trick:
https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/building.html#dependency-shading
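
For reference, the workaround described on that page boils down to a two-step
build (commands paraphrased from the docs; with Maven < 3.3 a single build
from the root is enough):

  mvn clean install -DskipTests
  cd flink-dist
  mvn clean install

The second install, run from inside flink-dist, re-applies the shading so
that Guava ends up properly relocated in the final flink-dist jar.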

On Tue, Jan 17, 2017 at 11:18 AM, Flavio Pompermaier <pomperma...@okkam.it>
wrote:

> I had a very annoying problem deploying a Flink job for HBase 1.2 on
> Cloudera CDH 5.9.0. The problem was caused by the fact that with Maven <
> 3.3 you can build flink-dist just by running mvn clean install, while with
> Maven >= 3.3 you have to run an additional mvn clean install from the
> flink-dist directory (I still don't know why...).
> See https://ci.apache.org/projects/flink/flink-docs-release-1.1/setup/building.html#dependency-shading
> for more details.
>
> I hope this helps,
> Flavio
>
>
>
> On 17 Jan 2017 02:31, "Ted Yu" <yuzhih...@gmail.com> wrote:
>
>> Logged FLINK-5517 for upgrading the hbase version to 1.3.0.
>>
>> On Mon, Jan 16, 2017 at 5:26 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>>
>>> hbase uses Guava 12.0.1, while Flink uses 18.0, in which Stopwatch.<init>()V
>>> is no longer accessible.
>>> HBASE-14963 removes the use of Stopwatch at this location.
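>>>
>>> To illustrate the API change (a minimal sketch against Guava 18.0, not
>>> the actual HBase code):
>>>
>>> import com.google.common.base.Stopwatch;
>>>
>>> public class StopwatchDemo {
>>>     public static void main(String[] args) {
>>>         // Guava 12.0.1, which HBase 1.2 compiles against, had a public
>>>         // constructor:
>>>         //     Stopwatch watch = new Stopwatch().start();
>>>         // Guava 18.0 no longer exposes it, so HBase's old bytecode fails
>>>         // at runtime with IllegalAccessError. The replacement (available
>>>         // since Guava 15.0) is the static factory:
>>>         Stopwatch watch = Stopwatch.createStarted();
>>>         System.out.println(watch.isRunning());
>>>     }
>>> }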
>>>
>>> The hbase 1.3.0 RC has passed its voting period.
>>>
>>> Please use 1.3.0, where you won't see the IllegalAccessError.
>>>
>>> On Mon, Jan 16, 2017 at 4:50 PM, Giuliano Caliari <
>>> giuliano.cali...@gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> I'm trying to use HBase in one of my stream transformations and I'm
>>>> running into the Guava/Stopwatch dependency problem:
>>>>
>>>> java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator
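>>>>
>>>> For context, the transformation is shaped roughly like this (a
>>>> hypothetical sketch, names invented; the first table access on a fresh
>>>> connection goes through HBase's meta lookup, which is where the error
>>>> surfaces):
>>>>
>>>> import org.apache.flink.api.common.functions.RichMapFunction;
>>>> import org.apache.flink.configuration.Configuration;
>>>> import org.apache.hadoop.hbase.HBaseConfiguration;
>>>> import org.apache.hadoop.hbase.TableName;
>>>> import org.apache.hadoop.hbase.client.Connection;
>>>> import org.apache.hadoop.hbase.client.ConnectionFactory;
>>>> import org.apache.hadoop.hbase.client.Get;
>>>> import org.apache.hadoop.hbase.client.Table;
>>>> import org.apache.hadoop.hbase.util.Bytes;
>>>>
>>>> public class EnrichFromHBase extends RichMapFunction<String, String> {
>>>>     private transient Connection connection;
>>>>     private transient Table table;
>>>>
>>>>     @Override
>>>>     public void open(Configuration parameters) throws Exception {
>>>>         // Open one HBase connection per task.
>>>>         connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
>>>>         table = connection.getTable(TableName.valueOf("my_table"));
>>>>     }
>>>>
>>>>     @Override
>>>>     public String map(String key) throws Exception {
>>>>         // Region lookup for this Get runs MetaTableLocator internally,
>>>>         // which is where the Guava Stopwatch call lives.
>>>>         return table.get(new Get(Bytes.toBytes(key))).toString();
>>>>     }
>>>>
>>>>     @Override
>>>>     public void close() throws Exception {
>>>>         if (table != null) table.close();
>>>>         if (connection != null) connection.close();
>>>>     }
>>>> }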
>>>>
>>>>
>>>> Reading up on the problem, it seems there is a way to avoid it using
>>>> shading:
>>>> https://ci.apache.org/projects/flink/flink-docs-release-1.1/setup/building.html#dependency-shading
>>>>
>>>> But I can't get it to work.
>>>> I followed the documented steps and it builds, but when I try to run the
>>>> newly built version, it fails while trying to connect to the
>>>> ResourceManager:
>>>>
>>>> 2017-01-17 00:42:05,872 INFO  org.apache.flink.yarn.YarnClusterDescriptor - Using values:
>>>> 2017-01-17 00:42:05,872 INFO  org.apache.flink.yarn.YarnClusterDescriptor - TaskManager count = 4
>>>> 2017-01-17 00:42:05,873 INFO  org.apache.flink.yarn.YarnClusterDescriptor - JobManager memory = 1024
>>>> 2017-01-17 00:42:05,873 INFO  org.apache.flink.yarn.YarnClusterDescriptor - TaskManager memory = 32768
>>>> 2017-01-17 00:42:05,892 INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
>>>> 2017-01-17 00:42:07,023 INFO  org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
>>>> 2017-01-17 00:42:08,024 INFO  org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
>>>>
>>>>
>>>> I'm currently building version 1.1.4 of Flink from the GitHub repo.
>>>> Building it without shading (not executing `mvn clean install` in the
>>>> flink-dist sub-project) works fine until I try to use HBase, at which
>>>> point I get the Stopwatch exception.
>>>>
>>>> Has anyone been able to solve this?
>>>>
>>>> Thank you,
>>>>
>>>> Giuliano Caliari
>>>> --
>>>> Giuliano Caliari (+55 11 98489-8464)
>>>> Facebook <http://www.facebook.com/giuliano.caliari>
>>>> Google+ <https://plus.google.com/u/0/104857507547056767808/posts>
>>>> Twitter <https://twitter.com/gcaliari>
>>>>
>>>> Master Software Engineer from Escola Politécnica da USP
>>>> Bachelor in Computer Science from Instituto de Matemática e Estatística
>>>> da USP
>>>>
>>>>
>>>
>>
