To test Hadoop's fault tolerance I tried the following setup:

node A -- name node and secondary name node
node B -- datanode
node C -- datanode

Replication factor set to 2.
When A, B and C are all running, I am able to make a round trip with a wav file (write it to HDFS and read it back).
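
For reference, replication is set in conf/hadoop-site.xml (hdfs-site.xml on newer releases), roughly like this -- just the relevant property, everything else left at the defaults:

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>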

Now, to test fault tolerance, I brought node B down and tried to write a file.
The write failed with the message below, even though node C was up and running.
More interestingly, the file was still listed in the name node.
I would have expected Hadoop to write the file to node C.
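
(For reference, the standard dfsadmin/fsck commands show which datanodes the name node currently considers live and where a file's blocks ended up:)

hadoop dfsadmin -report                                      # live/dead datanode report from the name node
hadoop fsck /user/hadoop/jukebox -files -blocks -locations   # block placement for files under the jukebox dir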

##############error msg###################
[had...@cancunvm1 testfiles]$ hadoop fs -copyFromLocal 9979_D4FE01E0-DD119BDE-3000CB83-EB857348.wav jukebox/9979_D4FE01E0-DD119BDE-3000CB83-EB857348_21.wav

09/01/16 01:47:09 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketTimeoutException
09/01/16 01:47:09 INFO hdfs.DFSClient: Abandoning block blk_4025795281260753088_1216
09/01/16 01:47:09 INFO hdfs.DFSClient: Waiting to find target node: 10.0.3.136:50010
09/01/16 01:47:18 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.NoRouteToHostException: No route to host
09/01/16 01:47:18 INFO hdfs.DFSClient: Abandoning block blk_-2076345051085316536_1216
09/01/16 01:47:27 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.NoRouteToHostException: No route to host
09/01/16 01:47:27 INFO hdfs.DFSClient: Abandoning block blk_2666380449580768625_1216
09/01/16 01:47:36 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.NoRouteToHostException: No route to host
09/01/16 01:47:36 INFO hdfs.DFSClient: Abandoning block blk_742770163755453348_1216
09/01/16 01:47:42 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2723)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)

09/01/16 01:47:42 WARN hdfs.DFSClient: Error Recovery for block blk_742770163755453348_1216 bad datanode[0] nodes == null
09/01/16 01:47:42 WARN hdfs.DFSClient: Could not get block locations. Aborting...
copyFromLocal: No route to host
Exception closing file /user/hadoop/jukebox/9979_D4FE01E0-DD119BDE-3000CB83-EB857348_21.wav
java.io.IOException: Filesystem closed
        at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:198)
        at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:65)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3084)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3053)
        at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:942)
        at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:210)
