Hi Yongjun,
Thanks a lot for your reply.
Yes, it really is a N/W issue; I will raise a new jira.
Thanks and regards,
Brahma Reddy Battula
> From: yzh...@cloudera.com
> Date: Sat, 30 Jul 2016 10:22:19 -0700
> Subject: Re: Issue in handling checksum errors in write pipeline
> To: brahmareddy.batt...@huawei.
Hi Brahma,
Thanks for reporting the issue.
If your problem is really a network issue, then your proposed solution
sounds reasonable to me, and it's different than what HDFS-6937 intends to
solve. I think we can create a new jira for your issue. Here is why:
HDFS-6937's scenario is that we keep r
+1 (non-binding)
- Downloaded the tar ball
- Installed HA Cluster
- Ran basic dfs, distcp, ACL, webhdfs commands
- Ran MapReduce wordcount and pi examples
Thanks and regards,
Brahma Reddy Battula
> From: vino...@apache.org
> Subject: [VOTE] Release Apache Hadoop 2.7.3 RC0
> Date: Fri, 22 Jul 2016 19:
For more details, see
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/118/
[Jul 29, 2016 5:26:11 PM] (junping_du) YARN-5434. Add -client|server argument
for graceful decommission.
[Jul 30, 2016 2:45:12 AM] (aajisaka) MAPREDUCE-6746. Replace
org.apache.commons.io.Charsets with
Hello,
We came across an issue where a write fails even though 7 DNs are available,
due to a network fault at the one datanode which is LAST_IN_PIPELINE. It is
similar to HDFS-6937.
Scenario (DN3 has a N/W fault and min replication = 2):
Write pipeline:
DN1 -> DN2 -> DN3 => DN3 gives ERROR_CHECKSUM ack.
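To make the scenario concrete, here is a toy model of the ambiguity (this is a sketch for illustration only, not the HDFS implementation; all class and function names are made up). DN3's network fault corrupts packets it receives, so the checksum error is detected at DN3 even though the data leaving DN2 is fine, which is why replacing the last node, rather than an upstream one, is what recovers the write:

```python
import zlib

def checksum(data: bytes) -> int:
    """CRC32 stand-in for the per-packet checksum carried down the pipeline."""
    return zlib.crc32(data)

class DataNode:
    def __init__(self, name: str, corrupts_on_receive: bool = False):
        self.name = name
        # Models a NIC/network fault on this node's receive path.
        self.corrupts_on_receive = corrupts_on_receive

    def receive(self, packet: bytes, expected_crc: int):
        if self.corrupts_on_receive:
            # Flip bits in the first byte to simulate corruption in transit.
            packet = bytes([packet[0] ^ 0xFF]) + packet[1:]
        if checksum(packet) != expected_crc:
            return ("ERROR_CHECKSUM", self.name)
        return ("SUCCESS", self.name)

def write_through_pipeline(pipeline, packet: bytes):
    crc = checksum(packet)
    for dn in pipeline:
        status, name = dn.receive(packet, crc)
        if status == "ERROR_CHECKSUM":
            # The ack only tells us which node *detected* the error,
            # not where the corruption happened.
            return ("ERROR_CHECKSUM", name)
    return ("SUCCESS", None)

dn1, dn2 = DataNode("DN1"), DataNode("DN2")
dn3 = DataNode("DN3", corrupts_on_receive=True)

status, bad = write_through_pipeline([dn1, dn2, dn3], b"packet-0")
print(status, bad)   # ERROR_CHECKSUM DN3

# Replacing the node with the N/W fault lets the write succeed:
status, _ = write_through_pipeline([dn1, dn2, DataNode("DN4")], b"packet-0")
print(status)        # SUCCESS
```

The point of the toy model: when the fault is genuinely on the last node's network path (this thread's case), excluding DN3 fixes the pipeline, whereas in HDFS-6937's case the data reaching the last node is already corrupt, so replacing the last node does not help.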