[ https://issues.apache.org/jira/browse/HDFS-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Weiwei Yang resolved HDFS-11917.
--------------------------------
    Resolution: Not A Problem
      Assignee: Weiwei Yang

> Why when using the hdfs nfs gateway, a file which is smaller than one block
> size required a block
> -------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-11917
>                 URL: https://issues.apache.org/jira/browse/HDFS-11917
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: nfs
>    Affects Versions: 2.8.0
>            Reporter: BINGHUI WANG
>            Assignee: Weiwei Yang
>
> I put a file into HDFS through the HDFS NFS gateway using the Linux shell.
> I found that a file smaller than one block (128 MB) still appears to occupy
> a full block (128 MB) of HDFS storage at first, but after a few minutes the
> excess storage is released.
> e.g.: If I put a 60 MB file into HDFS through the HDFS NFS gateway, it
> occupies one full block (128 MB) at first. After a few minutes the excess
> storage (68 MB) is released, and the file ends up using only 60 MB of HDFS
> storage.
> Why does this happen?

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
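The behavior above can be observed from the shell. A minimal sketch, assuming a default `dfs.blocksize` of 128 MB, an NFS gateway mounted at `/mnt/hdfs`, and a hypothetical 60 MB file `file-60mb.bin` (mount point and file names are illustrative, not from the original report):

```shell
# Mount the HDFS NFS gateway over NFSv3 (assumed mount point /mnt/hdfs):
#   mount -t nfs -o vers=3,proto=tcp,nolock localhost:/ /mnt/hdfs

# Copy a file smaller than one block in through the NFS mount:
#   cp file-60mb.bin /mnt/hdfs/tmp/

# While the gateway still holds the file open for write, check reported usage;
# re-run after a few minutes to see the excess released:
#   hdfs dfs -du -h /tmp/file-60mb.bin

# The transient "excess" the reporter observes is block size minus file size:
echo $((128 - 60))   # MB of excess reported while the file is still open
```

The `hdfs dfs -du` and `mount` invocations require a running cluster and gateway, so they are shown commented out; only the arithmetic line executes standalone.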