Hi Ali,

Could you please also post the "Hadoop version" line from the task
manager log files? It looks like the task managers are running a
different Hadoop version than the job manager.
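
For example, something along these lines should pull that line out (the
log path is just a guess based on the default layout; the first command
only simulates a log entry so the grep has something to match):

```shell
# Simulate one task manager log line (in a real setup, point grep at
# log/flink-*-taskmanager-*.log inside the Flink directory instead).
printf '17:33:49,636 INFO  JobManager - Hadoop version: 2.6.0\n' > /tmp/taskmanager.log

# Extract just the Hadoop version line.
grep -o 'Hadoop version: [0-9.]*' /tmp/taskmanager.log
```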

Thanks,
Max

On Tue, Dec 22, 2015 at 4:28 PM, Kashmar, Ali <ali.kash...@emc.com> wrote:
> Hi Robert,
>
> I found the version in the job manager log file:
>
> 17:33:49,636 INFO  org.apache.flink.runtime.jobmanager.JobManager       -  Hadoop version: 2.6.0
>
> But the Hadoop installation I have is saying this:
>
> ubuntu@ubuntu-171:~/Documents/hadoop-2.6.0$ bin/hadoop version
> Hadoop 2.6.0
> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
> Compiled by jenkins on 2014-11-13T21:10Z
> Compiled with protoc 2.5.0
> From source with checksum 18e43357c8f927c0695f1e9522859d6a
> This command was run using /home/ubuntu/Documents/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar
>
>
> So one of them is lying to me? :)
>
> Ali
>
> On 2015-12-22, 10:16 AM, "Robert Metzger" <rmetz...@apache.org> wrote:
>
>>Hi Ali,
>>
>>the TaskManagers and the JobManager log the Hadoop version on
>>startup.
>>
>>On Tue, Dec 22, 2015 at 4:10 PM, Kashmar, Ali <ali.kash...@emc.com> wrote:
>>
>>> Hello,
>>>
>>> I'm trying to use HDFS as a store for Flink checkpoints, so I
>>> downloaded the Hadoop 2.6.0/Scala 2.10 version of Flink and installed
>>> it. I also downloaded Hadoop 2.6.0 separately from the Hadoop website
>>> and set up HDFS on a separate machine. When I start Flink I get the
>>> following error:
>>>
>>> 17:34:13,047 INFO  org.apache.flink.runtime.jobmanager.JobManager       - Status of job 9ba32a08bc0ec02810bf5d2710842f72 (Protocol Event Processing) changed to FAILED.
>>> java.lang.Exception: Call to registerInputOutput() of invokable failed
>>>         at org.apache.flink.runtime.taskmanager.Task.run(Task.java:529)
>>>         at java.lang.Thread.run(Thread.java:745)
>>> Caused by: java.io.IOException: The given file URI (hdfs://10.13.182.171:9000/user/flink/checkpoints) points to the HDFS NameNode at 10.13.182.171:9000, but the File System could not be initialized with that address: Server IPC version 9 cannot communicate with client version 4
>>>         at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:337)
>>>         at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:253)
>>>         at org.apache.flink.runtime.state.filesystem.FsStateBackend.<init>(FsStateBackend.java:142)
>>>         at org.apache.flink.runtime.state.filesystem.FsStateBackend.<init>(FsStateBackend.java:101)
>>>         at org.apache.flink.runtime.state.filesystem.FsStateBackendFactory.createFromConfig(FsStateBackendFactory.java:48)
>>>         at org.apache.flink.streaming.runtime.tasks.StreamTask.createStateBackend(StreamTask.java:517)
>>>         at org.apache.flink.streaming.runtime.tasks.StreamTask.registerInputOutput(StreamTask.java:171)
>>>         at org.apache.flink.runtime.taskmanager.Task.run(Task.java:526)
>>>         ... 1 more
>>> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RPC$VersionMismatch): Server IPC version 9 cannot communicate with client version 4
>>>         at org.apache.hadoop.ipc.Client.call(Client.java:1113)
>>>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>>>         at com.sun.proxy.$Proxy6.getProtocolVersion(Unknown Source)
>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>         at java.lang.reflect.Method.invoke(Method.java:497)
>>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>>>         at com.sun.proxy.$Proxy6.getProtocolVersion(Unknown Source)
>>>         at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
>>>         at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
>>>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
>>>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
>>>         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
>>>         at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:321)
>>>         ... 8 more
>>>
>>> I searched for this error online, and it indicates that the client
>>> (Flink, in this case) is running a much older Hadoop version. Is
>>> there a way to check which Hadoop version is packaged with my Flink
>>> installation?
>>>
>>> Thanks,
>>> Ali
>>>
>
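
One more data point on the error itself: "Server IPC version 9 cannot
communicate with client version 4" generally means the NameNode is
Hadoop 2.x while the client libraries speak the Hadoop 1.x wire
protocol. A quick look at the jars in Flink's lib/ directory usually
reveals which shaded Hadoop build was bundled. A minimal sketch,
assuming the default layout (the directory and jar names below are
made up for illustration):

```shell
# Simulated Flink lib/ directory -- in a real install this is <flink-dir>/lib/.
mkdir -p /tmp/flink-lib
touch /tmp/flink-lib/flink-dist_2.10-0.10.1.jar \
      /tmp/flink-lib/flink-shaded-hadoop1_2.10-0.10.1.jar

# A "hadoop1" shaded jar here would explain the "client version 4" error:
ls /tmp/flink-lib | grep -o 'hadoop[0-9]'
```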
