[ https://issues.apache.org/jira/browse/HADOOP-14319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ravi Prakash resolved HADOOP-14319.
-----------------------------------
    Resolution: Invalid

Please send your queries to the hdfs-user mailing list: https://hadoop.apache.org/mailing_lists.html

To answer your query, please look at dfs.namenode.replication.max-streams, dfs.namenode.replication.max-streams-hard-limit, dfs.namenode.replication.work.multiplier.per.iteration, etc.

> Under replicated blocks are not getting re-replicated
> -----------------------------------------------------
>
>                 Key: HADOOP-14319
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14319
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.7.2
>            Reporter: Anil
>
> Under-replicated blocks are not getting re-replicated.
> In a production Hadoop cluster of 5 management + 5 data nodes, under-replicated blocks are not re-replicated even after 2 days.
> Here is a quick view of the relevant configuration:
> Default replication factor: 3
> Average block replication: 3.0
> Corrupt blocks: 0
> Missing replicas: 0 (0.0 %)
> Number of data-nodes: 5
> Number of racks: 1
> After bringing one of the DataNodes down, the replication factor for the blocks allocated on that DataNode became 2. Even after 2 days, the replication factor remains 2; the under-replicated blocks are not being re-replicated to other DataNodes in the cluster.
> If a DataNode goes down, HDFS should replicate the blocks from the dead DataNode to other nodes according to replication priority. Are there any configuration changes that can speed up re-replication of under-replicated blocks?
> When tested with blocks of replication factor 1, re-replication to 2 happened overnight, in around 10 hours. But blocks with replication factor 2 are not being re-replicated up to the default replication factor of 3.
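The properties named in the resolution control how aggressively the NameNode schedules re-replication work. A minimal hdfs-site.xml sketch raising them is below; the values shown are illustrative examples only, not recommendations from this thread, and the defaults noted in comments are those commonly documented for the 2.x line:

```xml
<!-- hdfs-site.xml: example values only; tune for your cluster size and load -->
<property>
  <!-- Max replication streams per DataNode for normal-priority work (default 2) -->
  <name>dfs.namenode.replication.max-streams</name>
  <value>4</value>
</property>
<property>
  <!-- Hard cap on replication streams per DataNode, including
       highest-priority (e.g. last-replica) work (default 4) -->
  <name>dfs.namenode.replication.max-streams-hard-limit</name>
  <value>8</value>
</property>
<property>
  <!-- Number of blocks scheduled per live DataNode on each
       replication-monitor iteration (default 2) -->
  <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
  <value>4</value>
</property>
```

Changes to these properties take effect on the NameNode, so a NameNode restart (or refresh, where supported) is needed after editing the file.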
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org