E. Sammer created HADOOP-8502:
---------------------------------

             Summary: Quota accounting should be calculated based on actual size rather than block size
                 Key: HADOOP-8502
                 URL: https://issues.apache.org/jira/browse/HADOOP-8502
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: E. Sammer


When calculating quotas, the block size is used rather than the actual size of the file. This limits the granularity of quota enforcement to increments of the block size, which is wasteful and limits its usefulness (i.e. it's possible to violate the quota in a way that's not at all intuitive).

{code}
[esammer@xxx ~]$ hadoop fs -count -q /user/esammer/quota-test
        none             inf         1048576         1048576            1            2                  0 hdfs://xxx/user/esammer/quota-test
[esammer@xxx ~]$ du /etc/passwd
4       /etc/passwd
[esammer@xxx ~]$ hadoop fs -put /etc/passwd /user/esammer/quota-test/
12/06/09 13:56:16 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /user/esammer/quota-test is exceeded: quota=1048576 diskspace consumed=384.0m
...
{code}

Obviously the file in question would only occupy 12KB (4KB x 3 replicas), not 384MB (apparently a full 128MB block charged per replica: 3 x 128MB), and should easily fit within the 1MB quota.
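For illustration only, here is a minimal sketch (not HDFS source; the class and method names are hypothetical) contrasting the two accounting strategies, assuming the 128MB block size and replication factor of 3 implied by the numbers above:

{code}
// Hypothetical sketch, not HDFS code: contrasts charging quota by block size
// vs. by actual file size, using the values from the example above.
public class QuotaAccountingSketch {

    static final long BLOCK_SIZE = 128L * 1024 * 1024; // assumed 128 MB default
    static final short REPLICATION = 3;                 // assumed default replication

    // Behavior described in this issue: every block a file opens is charged
    // at the full block size, however little of it is actually used.
    static long chargedByBlockSize(long fileSize) {
        long blocks = Math.max(1, (fileSize + BLOCK_SIZE - 1) / BLOCK_SIZE);
        return blocks * BLOCK_SIZE * REPLICATION;
    }

    // Behavior this issue asks for: charge the bytes the file actually occupies.
    static long chargedByActualSize(long fileSize) {
        return fileSize * REPLICATION;
    }

    public static void main(String[] args) {
        long fileSize = 4L * 1024;         // ~4 KB, as reported by du
        long quota = 1L * 1024 * 1024;     // 1 MB disk-space quota

        System.out.println("block-size accounting:  " + chargedByBlockSize(fileSize)
                + " bytes, exceeds quota: " + (chargedByBlockSize(fileSize) > quota));
        System.out.println("actual-size accounting: " + chargedByActualSize(fileSize)
                + " bytes, exceeds quota: " + (chargedByActualSize(fileSize) > quota));
    }
}
{code}

With these assumed defaults, block-size accounting charges 384MB for the 4KB file (matching the "diskspace consumed=384.0m" in the exception), while actual-size accounting would charge 12KB and stay well under the quota.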
