Abhishek Das created HDFS-16309:
---
Summary: ViewFileSystem.setVerifyChecksum should not initialize all target filesystems
Key: HDFS-16309
URL: https://issues.apache.org/jira/browse/HDFS-16309
Project: Hadoop HDFS
Hi,
Is there any way I can get the size of each compressed (gzip) block without
actually compressing it? For example, I have 200 MB of uncompressed data in
HDFS and the block size is 64 MB. I want to get the size of each of the 4
compressed blocks. The result might look like: the first block is 15 MB,
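As far as I know, the gzip output size depends on the entropy of the data, so it cannot be known exactly without running the compressor. The closest workaround is to compress each block-sized split in memory and discard the output, keeping only the byte counts. A minimal sketch (assuming the data fits in memory; a real job would stream each split):

```python
import gzip
import io

def compressed_block_sizes(data: bytes, block_size: int):
    """Gzip-compress each block-sized chunk independently and
    return the size in bytes of each compressed chunk."""
    sizes = []
    for off in range(0, len(data), block_size):
        chunk = data[off:off + block_size]
        buf = io.BytesIO()
        with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
            gz.write(chunk)
        sizes.append(buf.tell())  # compressed size of this chunk
    return sizes

# Small-scale demo: 4 "blocks" of 1 MiB instead of 64 MB.
data = (b"some repetitive payload " * 200000)[: 4 * 1024 * 1024]
print(compressed_block_sizes(data, 1024 * 1024))
```

Note that compressing each block independently gives slightly larger totals than compressing the whole file in one stream, since the compressor's dictionary resets at every block boundary.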
Hi,
I am trying to install libhdfs from the source. Can anyone point me to the
proper instructions? I am using Hadoop 2.3.
Regards,
Abhishek
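libhdfs ships inside the Hadoop source tree and is built as part of the native Maven profile. A sketch of the usual build, assuming the Apache 2.3.0 source tarball; the authoritative prerequisites and profile names are in BUILDING.txt at the root of the source tree:

```shell
# Build Hadoop's native code (includes libhdfs) from the source tarball.
# Prerequisites per BUILDING.txt: JDK, Maven, cmake, zlib headers,
# and protobuf 2.5.0.
tar xzf hadoop-2.3.0-src.tar.gz
cd hadoop-2.3.0-src
mvn package -Pdist,native -DskipTests -Dtar
# The shared library (libhdfs.so) should land under
# hadoop-dist/target/hadoop-2.3.0/lib/native/
```

The exact output path can vary between releases, so check the `lib/native` directory of whatever distribution directory the build produces.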
On Tue, Feb 17, 2015 at 2:25 AM, Vinayakumar B wrote:
> Hi abhishek,
> Are your partitions all the same size? If yes, then you can set that as the
> block size.
>
> If not, you can use the latest feature, variable block size, to verify your
> use case.
I want to create the blocks such that each block holds the data of one
partition of the file. Is it possible to introduce a new policy? If yes, what
would be the starting point in the code I should look at?
Regards,
Abhishek Das
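On the question of plugging in a new policy: block *placement* (which datanodes receive each block replica) is pluggable in HDFS. The usual starting point in the code is org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and its default implementation, BlockPlacementPolicyDefault; the NameNode is pointed at a custom subclass through configuration. Note that this controls where block replicas go, not how a file's bytes are split into blocks; the split is governed by the block size. A hypothetical hdfs-site.xml fragment (the class name here is made up for illustration):

```xml
<property>
  <!-- hypothetical custom policy class; the default is
       BlockPlacementPolicyDefault -->
  <name>dfs.block.replicator.classname</name>
  <value>com.example.PartitionAwarePlacementPolicy</value>
</property>
```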