Is there a good way to emulate or determine the blocks that would exist on
HDFS for a given file? If I have a 135 MB file and my block size is 128 MB,
does it follow that I would deterministically have 2 blocks: block A covering
bytes 0-134217727 and block B covering bytes 134217728-141557759? I am
attempting to calculate…
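Assuming the classic fixed-block-size layout (every block except possibly the last is exactly the block size), the boundaries in the question can be computed with simple arithmetic. The sketch below is illustrative, not part of any Hadoop API; the function name is made up for this example:

```python
# Sketch: deterministic block boundaries for a file under a fixed HDFS
# block size. Assumes no appends or per-file block-size overrides;
# block_boundaries is a hypothetical helper, not a Hadoop API.

def block_boundaries(file_len: int, block_size: int):
    """Return (start, end) byte offsets, inclusive, for each block."""
    blocks = []
    start = 0
    while start < file_len:
        # Every block is full-size except possibly the last one.
        end = min(start + block_size, file_len) - 1
        blocks.append((start, end))
        start = end + 1
    return blocks

MB = 1024 * 1024
print(block_boundaries(135 * MB, 128 * MB))
# [(0, 134217727), (134217728, 141557759)]
```

So a 135 MB file with a 128 MB block size yields two blocks: bytes 0-134217727 and bytes 134217728-141557759. (To see where the NameNode actually placed blocks for an existing file, `FileSystem.getFileBlockLocations()` reports the real offsets and lengths.)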
Hans Uhlig created HDFS-4220:
Summary: Augment AccessControlException to include both affected
inode and attempted operation
Key: HDFS-4220
URL: https://issues.apache.org/jira/browse/HDFS-4220
Project: Hadoop HDFS
Hans Uhlig created HDFS-4192:
Summary: LocalFileSystem does not implement getFileChecksum()
Key: HDFS-4192
URL: https://issues.apache.org/jira/browse/HDFS-4192
Project: Hadoop HDFS
Issue Type
This seems to always return null, even though it should return a checksum
for the file. Has this been disabled since the doc was written?