HalfLegend created HDFS-12694:
---------------------------------

             Summary: Wrong data node registered in Docker
                 Key: HDFS-12694
                 URL: https://issues.apache.org/jira/browse/HDFS-12694
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 2.8.1, 2.7.4, 2.8.0
         Environment: There are three host machines, and each one runs a Docker container.
Host machines: host1, host2, host3
Docker containers: docker1, docker2, docker3
I installed HDFS 2.8.1 in each Docker container. Docker1 and docker2 run a NameNode; docker1, docker2, and docker3 all run a DataNode. The containers are on a weave network, 10.240.1.0/24. Somebody else installed another HDFS (version 2.7.0) on the host machines, outside the containers.
            Reporter: HalfLegend


Somebody else installed another HDFS on the host machines, outside the Docker containers, and a DataNode from a host machine now shows up among the Docker DataNodes:

| Node | Http Address | Capacity | Blocks | Block pool used | Version |
| docker1:50010 (10.240.1.101:50010) | docker1:50075 | 931.06 GB | 8420 | 116.64 GB (12.53%) | 2.8.1 |
| docker2:50010 (10.240.1.102:50010) | docker2:50075 | 916.77 GB | 8420 | 116.64 GB (12.72%) | 2.8.1 |
| docker3:50010 (10.240.1.64:50010) | docker3:50075 | 916.77 GB | 0 | 28 KB (0%) | 2.8.1 |

The IP address of docker3 should be 10.240.1.103, but here it is 10.240.1.64, which is the IP address of host3 (they also run a DataNode on host3). Docker3 shows 0 blocks, so no data is being added there. The replication factor is 3, and when I run fsck on the files, many replicas are reported missing; I think this is caused by the wrong IP address. Furthermore, if I stop the DataNode in docker3, its heartbeat stops, which is correct behaviour. So it is a weird bug.
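The registration mismatch and the missing replicas can be cross-checked from the command line; these are standard HDFS commands, and their exact output is not part of this report:

    # Show the IP and hostname each DataNode registered with the NameNode,
    # so the 10.240.1.64 vs 10.240.1.103 mismatch is visible directly.
    hdfs dfsadmin -report

    # Print each file's blocks and the DataNodes holding their replicas;
    # missing or under-replicated blocks are flagged in the output.
    hdfs fsck / -files -blocks -locations

A possible, unverified workaround sketch is to have the DataNodes register and be addressed by hostname instead of whatever IP the NameNode resolves. The property names below are standard HDFS settings; the chosen values, and the assumption that this sidesteps the weave/host address mix-up, are not confirmed by the report:

    <!-- hdfs-site.xml on each DataNode container (sketch; docker3 shown as an example) -->
    <property>
      <name>dfs.datanode.hostname</name>
      <value>docker3</value>  <!-- the container's own hostname on the weave network -->
    </property>
    <property>
      <name>dfs.datanode.use.datanode.hostname</name>
      <value>true</value>     <!-- DataNode-to-DataNode transfers use hostnames -->
    </property>
    <property>
      <name>dfs.client.use.datanode.hostname</name>
      <value>true</value>     <!-- clients connect to DataNodes by hostname -->
    </property>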