daimin created HDFS-16430:
-----------------------------

             Summary: Validate maximum blocks in EC group when adding an EC policy
                 Key: HDFS-16430
                 URL: https://issues.apache.org/jira/browse/HDFS-16430
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: ec, erasure-coding
    Affects Versions: 3.3.1, 3.3.0
            Reporter: daimin
            Assignee: daimin


HDFS EC uses the last 4 bits of the block ID to store the block index within the 
EC block group. The maximum number of blocks in an EC block group is therefore 
2^4 = 16, as defined by HdfsServerConstants#MAX_BLOCKS_IN_GROUP.
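
For illustration, a minimal sketch of the 4-bit packing described above. The 
constant values mirror HdfsServerConstants#MAX_BLOCKS_IN_GROUP = 16, but the 
class and helper names here are hypothetical, not the actual HDFS code:

{code:java}
// Illustration only: a striped block ID reserves its last 4 bits for the
// block index within the EC block group, so at most 2^4 = 16 blocks fit.
public class BlockIndexSketch {
  static final int MAX_BLOCKS_IN_GROUP = 16;                          // 2^4
  static final long BLOCK_GROUP_INDEX_MASK = MAX_BLOCKS_IN_GROUP - 1; // 0xF

  // Recover the block index from the low 4 bits of a block ID.
  static int blockIndex(long blockId) {
    return (int) (blockId & BLOCK_GROUP_INDEX_MASK);
  }

  public static void main(String[] args) {
    long groupBaseId = 0x12340L;                       // hypothetical group ID, low 4 bits zero
    System.out.println(blockIndex(groupBaseId));       // 0
    System.out.println(blockIndex(groupBaseId + 15));  // 15, the last representable index
    System.out.println(blockIndex(groupBaseId + 16));  // wraps to 0: index 16 cannot be stored
  }
}
{code}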

Currently there is no validation or warning when adding an EC policy with 
numDataUnits + numParityUnits > 16. The problem only surfaces later as read/write 
errors on files that use the bad policy, which is not straightforward for users 
to diagnose. We should validate the total number of blocks in the group when the 
policy is added, as sketched below.
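
A hedged sketch of the proposed check: reject a policy whose total unit count 
exceeds HdfsServerConstants#MAX_BLOCKS_IN_GROUP. Where exactly it hooks into the 
addErasureCodingPolicies path (e.g. ErasureCodingPolicyManager) and the method 
name are assumptions, not the final patch:

{code:java}
import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
import org.apache.hadoop.io.erasurecode.ECSchema;

class ECPolicyValidationSketch {
  // Reject any policy whose data + parity units cannot be indexed by the
  // 4-bit block index, i.e. exceed MAX_BLOCKS_IN_GROUP (16).
  static void checkMaxBlocksInGroup(ECSchema schema) {
    int totalUnits = schema.getNumDataUnits() + schema.getNumParityUnits();
    if (totalUnits > HdfsServerConstants.MAX_BLOCKS_IN_GROUP) {
      throw new HadoopIllegalArgumentException(
          "numDataUnits + numParityUnits = " + totalUnits
          + " exceeds the maximum of " + HdfsServerConstants.MAX_BLOCKS_IN_GROUP
          + " blocks in an EC block group");
    }
  }
}
{code}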



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
