The 2.0.5 release was always considered alpha: some mandatory fields were
explicitly added to HDFS, and there were lots of Java-level changes too.
One of the reasons for that was to have a stable API from 2.2 onwards,
which is why 2.2 clients can work with 2.3 and 2.4 clusters today, and
later versions of Hadoop 2.x will also keep that wire compatibility as a
goal.


The compatibility policy is discussed here:
http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-common/Compatibility.html

On 6 June 2014 10:47, 张鹏 <peng.zh...@xiaomi.com> wrote:

> Also, I tested using an HDFS 2.0 client to access an HDFS 2.4 server, and
> this was also incompatible:
> "Incorrect header or version mismatch from 10.2.201.245:59310 got version
> 7 expected version 9"
>
> Does this mean that if we upgrade our production cluster from 2.0 to 2.4,
> all clients must be rebuilt?
>


Afraid so.


>
> Any suggestions on upgrading like this?
>
> --
> Thanks,
> Peng
>
>
Change your build to use the new versions. As Gordon warns, protobuf JAR
versions are a trouble spot: if you are creating your own protobuf-based
IPC protocols, updating takes some work. Otherwise, that is, for simpler
Hadoop clients, updating the dependencies will take care of most things;
handling Java API changes covers the rest.
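
If you build with Maven (a sketch, assuming the standard hadoop-client
artifact; adapt for your own build tool and whichever client modules you
actually depend on), the update is mostly a version bump:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.4.0</version>
    </dependency>

Then rebuild and fix whatever the compiler flags. Note that Hadoop 2.2
onwards builds against protobuf 2.5.0, so check that nothing else on your
classpath pins an older protobuf-java.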
