Hi Ashish,

Are you using Flink 1.4? If so, what does the “hadoop classpath” command return 
from the command line where you’re trying to start the job?

Asking because I’d run into issues with 
https://issues.apache.org/jira/browse/FLINK-7477, where I had an old version 
of Hadoop being referenced by the “hadoop” command.
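
A quick way to check (just a sketch; the grep pattern is only an example):

    # confirm which Hadoop install and version the "hadoop" command resolves to
    which hadoop
    hadoop version
    # and which hadoop-common jar ends up on the classpath
    hadoop classpath | tr ':' '\n' | grep hadoop-common

If the version or the jars above don’t match what you expect, you’ve likely hit 
the same problem.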

— Ken


> On Mar 22, 2018, at 7:05 PM, Ashish Pokharel <ashish...@yahoo.com> wrote:
> 
> Hi All,
> 
> Looks like we are out of the woods for now (so we think) - we went with the 
> Hadoop-free version and relied on the client libraries on the edge node. 
> 
> However, I am still not very confident, as I started digging into that stack 
> as well and realized what Till pointed out (the trace leads to a class that is 
> part of 2.9). I did dig around the env variables and nothing was set. This is a 
> brand new cluster, installed a week back, and our team is literally the first 
> hands on deck. I will fish around and see if Hortonworks back-ported 
> something for HDP (the dots are still not completely connected, but nonetheless 
> we have a test session and app running in our brand new Prod).
> 
> Thanks, Ashish
> 
>> On Mar 22, 2018, at 4:47 AM, Till Rohrmann <trohrm...@apache.org> wrote:
>> 
>> Hi Ashish,
>> 
>> the class `RequestHedgingRMFailoverProxyProvider` was only introduced with 
>> Hadoop 2.9.0. My suspicion is thus that you start the client with some 
>> Hadoop 2.9.0 dependencies on the class path. Could you please check the 
>> client logs to see what's on its class path? Maybe you could also share the 
>> logs with us. Please also check whether HADOOP_CLASSPATH is set to something 
>> suspicious.
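>> 
>> For example, from the machine where you start the client, something along 
>> these lines (just a sketch; the grep pattern is only an example):
>> 
>>     # what the client will see
>>     echo $HADOOP_CLASSPATH
>>     # any 2.9.x YARN jars hiding on the hadoop classpath?
>>     hadoop classpath | tr ':' '\n' | grep hadoop-yarn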
>> 
>> Thanks a lot!
>> 
>> Cheers,
>> Till
>> 
>> On Wed, Mar 21, 2018 at 6:25 PM, ashish pok <ashish...@yahoo.com> wrote:
>> Hi Piotrek,
>> 
>> At this point we are simply trying to start a YARN session. 
>> 
>> BTW, we are on Hortonworks HDP 2.6, which is on Hadoop 2.7, in case anyone 
>> has experienced similar issues. 
>> 
>> We actually pulled the 2.6 binaries for the heck of it and ran into the same 
>> issues. 
>> 
>> I guess we are left with getting the Hadoop-free binaries and setting 
>> HADOOP_CLASSPATH then?
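>> 
>> Roughly something like this before starting the session, I assume (just a 
>> sketch):
>> 
>>     # use the cluster's Hadoop client libs with the Hadoop-free Flink dist
>>     export HADOOP_CLASSPATH=$(hadoop classpath)
>>     ./bin/yarn-session.sh -n 2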
>> 
>> -- Ashish
>> 
>> On Wed, Mar 21, 2018 at 12:03 PM, Piotr Nowojski <pi...@data-artisans.com> wrote:
>> Hi,
>> 
>> > Does a simple word count example work on the cluster after the upgrade?
>> 
>> If not, maybe your job is pulling some dependency that’s causing this 
>> version conflict?
>> 
>> Piotrek
>> 
>>> On 21 Mar 2018, at 16:52, ashish pok <ashish...@yahoo.com> wrote:
>>> 
>>> Hi Piotrek,
>>> 
>>> Yes, this is a brand new Prod environment. 2.6 was in our lab.
>>> 
>>> Thanks,
>>> 
>>> -- Ashish
>>> 
>>> On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski <pi...@data-artisans.com> wrote:
>>> Hi,
>>> 
>>> Have you replaced all of your old Flink binaries with the freshly downloaded 
>>> Hadoop 2.7 versions (https://flink.apache.org/downloads.html)? Are you sure 
>>> that nothing got mixed up in the process?
>>> 
>>> Does a simple word count example work on the cluster after the upgrade?
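>>> 
>>> For example, something along these lines (just a sketch, using the example 
>>> jar that ships with the Flink distribution):
>>> 
>>>     ./bin/flink run -m yarn-cluster -yn 1 ./examples/batch/WordCount.jar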
>>> 
>>> Piotrek
>>> 
>>>> On 21 Mar 2018, at 16:11, ashish pok <ashish...@yahoo.com> wrote:
>>>> 
>>>> Hi All,
>>>> 
>>>> We ran into a roadblock in our new Hadoop environment, migrating from 2.6 
>>>> to 2.7. It was supposed to be an easy lift to get a YARN session up, but it 
>>>> doesn't seem like it :) We are definitely using 2.7 binaries, but it looks 
>>>> like there is a call here to a private method, which screams runtime 
>>>> incompatibility. 
>>>> 
>>>> Has anyone seen this and have any pointers?
>>>> 
>>>> Thanks, Ashish
>>>> Exception in thread "main" java.lang.IllegalAccessError: tried to access 
>>>> method 
>>>> org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object;
>>>>  from class 
>>>> org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
>>>>             at 
>>>> org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider.init(RequestHedgingRMFailoverProxyProvider.java:75)
>>>>             at 
>>>> org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:163)
>>>>             at 
>>>> org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:94)
>>>>             at 
>>>> org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
>>>>             at 
>>>> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:187)
>>>>             at 
>>>> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>>>>             at 
>>>> org.apache.flink.yarn.AbstractYarnClusterDescriptor.getYarnClient(AbstractYarnClusterDescriptor.java:314)
>>>>             at 
>>>> org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:417)
>>>>             at 
>>>> org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:367)
>>>>             at 
>>>> org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:679)
>>>>             at 
>>>> org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:514)
>>>>             at 
>>>> org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:511)
>>>>             at java.security.AccessController.doPrivileged(Native Method)
>>>>             at javax.security.auth.Subject.doAs(Subject.java:422)
>>>>             at 
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>>>>             at 
>>>> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>>>>             at 
>>>> org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:511)
>>>> 
>>> 
>> 
>> 
> 

--------------------------------------------
http://about.me/kkrugler
+1 530-210-6378
