A running instance of HDFS does not pick up changes made to its configuration files after startup, so you need to restart it before a new value of dfs.datanode.du.reserved (or any other config value) takes effect.
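For example (a minimal sketch; the property name comes from this thread, while the 10 GB value and the restart scripts are illustrative assumptions), you could add the following to hadoop-site.xml on each DataNode:

    <property>
      <name>dfs.datanode.du.reserved</name>
      <!-- Space in bytes per volume reserved for non-HDFS use;
           10 GB here is just an example value -->
      <value>10737418240</value>
    </property>

and then restart HDFS, e.g. bin/stop-dfs.sh followed by bin/start-dfs.sh, so the DataNodes reread the file.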
> Would Hadoop realize that the HDFS was taking more than its allotted
> space, and redistribute the data automatically, or is there something
> else I'd have to do?

HDFS will not automatically redistribute blocks that already exceed the allotted limit, but it will stop assigning new blocks to that node. You can use the Balancer to redistribute blocks between the nodes, or decommission and re-add the node; there have been several threads about the latter approach.

http://hadoop.apache.org/core/docs/current/hdfs_user_guide.html#Rebalancer

--Konstantin
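For reference (the invocation is from the Rebalancer section of the guide linked above; the threshold value is just an example), the Balancer can be started with:

    # Move blocks until each DataNode's utilization is within 10
    # percentage points of the cluster average, then exit.
    bin/start-balancer.sh -threshold 10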
Roger Donahue wrote:

Hello All,

I was hoping someone could answer a question about dfs.datanode.du.reserved. Basically, I want to use this property to limit HDFS use on my nodes. This seems like it would work fine, except if I change my hadoop-site.xml on an in-service node so that HDFS's allotment is less than what HDFS is already taking up.

Would Hadoop realize that the HDFS was taking more than its allotted space, and redistribute the data automatically, or is there something else I'd have to do?

Thank you in advance!

Roger Donahue
