Hi Ashish,
Yeah, we also had this problem before.
It can be solved by recompiling Flink with HDP version of Hadoop
according to instruction here:
https://ci.apache.org/projects/flink/flink-docs-release-1.4/start/building.html#vendor-specific-versions
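For reference, the build command from that page looks roughly like the following. This is a sketch only: the `-Pvendor-repos` profile is the one documented for Flink 1.4, but the HDP Hadoop version string below is a placeholder assumption; substitute the exact version reported by `hadoop version` on your cluster.

```shell
# Rebuild Flink against the vendor (HDP) Hadoop artifacts.
# -Pvendor-repos enables the vendor Maven repositories (incl. Hortonworks).
# The hadoop.version value is a placeholder; use your cluster's exact version.
mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version=2.7.3.2.6.4.0-91
```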
Regards,
Kien
On 3/22/2018 12:25 AM, ashish pok wrote:
Hi Piotrek,
At this point we are simply trying to start a YARN session.
BTW, we are on Hortonworks HDP 2.6 which is on 2.7 Hadoop if anyone
has experienced similar issues.
We actually pulled 2.6 binaries for the heck of it and ran into the
same issues.
I guess we are left with getting non-hadoop binaries and set
HADOOP_CLASSPATH then?
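For what it's worth, the Hadoop-free approach would look roughly like the sketch below, assuming a Hadoop-free Flink 1.4 distribution and that the HDP installation's `hadoop` CLI is on the PATH; paths and session sizes are placeholders.

```shell
# Point Flink at the cluster's Hadoop configuration (path is an example).
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Put the cluster's own Hadoop jars on Flink's classpath,
# so there is no bundled-vs-cluster version mismatch.
export HADOOP_CLASSPATH=$(hadoop classpath)

# Start a YARN session (4 TaskManagers, example memory sizes).
./bin/yarn-session.sh -n 4 -jm 1024 -tm 4096
```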
-- Ashish
On Wed, Mar 21, 2018 at 12:03 PM, Piotr Nowojski
<pi...@data-artisans.com> wrote:
Hi,
> Does a simple word count example work on the cluster after
the upgrade?
If not, maybe your job is pulling some dependency that’s causing
this version conflict?
Piotrek
On 21 Mar 2018, at 16:52, ashish pok <ashish...@yahoo.com
<mailto:ashish...@yahoo.com>> wrote:
Hi Piotrek,
Yes, this is a brand new Prod environment. 2.6 was in our lab.
Thanks,
-- Ashish
On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski
<pi...@data-artisans.com <mailto:pi...@data-artisans.com>> wrote:
Hi,
Have you replaced all of your old Flink binaries with freshly
downloaded <https://flink.apache.org/downloads.html> Hadoop
2.7 versions? Are you sure that nothing got mixed up in the
process?
Does a simple word count example work on the cluster
after the upgrade?
Piotrek
On 21 Mar 2018, at 16:11, ashish pok <ashish...@yahoo.com
<mailto:ashish...@yahoo.com>> wrote:
Hi All,
We ran into a roadblock in our new Hadoop environment,
migrating from 2.6 to 2.7. It was supposed to be an easy
lift to get a YARN session, but it doesn't seem like it :) We
definitely are using 2.7 binaries, but it looks like there is
a call here to a private method, which screams runtime
incompatibility.
Has anyone seen this and have pointers?
Thanks, Ashish
Exception in thread "main" java.lang.IllegalAccessError: tried to access method org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object; from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
    at org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider.init(RequestHedgingRMFailoverProxyProvider.java:75)
    at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:163)
    at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:94)
    at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:187)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.flink.yarn.AbstractYarnClusterDescriptor.getYarnClient(AbstractYarnClusterDescriptor.java:314)
    at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:417)
    at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:367)
    at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:679)
    at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:514)
    at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:511)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
    at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:511)