I have dfs.datanode.data.dir=/var/vcap/store/hadoop/hdfs/data and
dfs.datanode.failed.volumes.tolerated=0. [I don't have
dfs.datanode.du.reserved set; thanks for mentioning it, I'll set it to 10G
going forward.]
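For reference, a sketch of the property I'd add to hdfs-site.xml for that (10G expressed in bytes; the exact byte value here is my arithmetic, not something already in the config):

```xml
<property>
  <!-- Reserve ~10 GB per volume for non-DFS use: 10 * 1024^3 bytes -->
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value>
</property>
```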
The CF has compression=SNAPPY. I have only hbase.cluster.distributed=true,
hbase.rootdir=hdfs://n.n.n.n/hbase & hbase.zookeeper.quorum=n.n.n.n in
hbase-site.xml (nothing else).
Here's the tune2fs output, hdfs-site.xml & hbase-site.xml. (The /var/vcap
disk is shared with other processes, so I can't use mount options like
noatime, etc.)
-sathya
tune2fs 1.42.9 (4-Feb-2014)
Filesystem volume name: <none>
Last mounted on: /var/vcap
Filesystem UUID: 7eca2b17-c0df-4e55-811f-ed549a2c1663
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index
filetype needs_recovery extent flex_bg sparse_super large_file huge_file
uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 16384000
Block count: 65535744
Reserved block count: 3276787
Free blocks: 63203358
Free inodes: 16381182
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 1008
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Tue Jan 6 12:34:14 2015
Last mount time: Thu Jan 8 22:04:49 2015
Last write time: Thu Jan 8 22:04:49 2015
Mount count: 2
Maximum mount count: -1
Last checked: Tue Jan 6 12:34:14 2015
Check interval: 0 (<none>)
Lifetime writes: 31 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 37e6cced-a9fb-465d-bcc2-5c54d5c6533e
Journal backup: inode blocks
===
hbase(main):002:0> describe 'tsdb'
'tsdb', {NAME => 't', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
REPLICATION_SCOPE => '0', COMPRESSION => 'SNAPPY', VERSIONS => '1', TTL =>
'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE =>
'65536', IN_MEMORY => 'false',
BLOCKCACHE => 'true'}
1 row(s) in 3.5200 seconds
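(As an aside, since the CF uses SNAPPY: if there's any doubt whether the native Snappy libraries are actually loadable on a given node, HBase's CompressionTest utility can check it; a sketch, with the file path being an arbitrary scratch location:)

```shell
# Writes and re-reads a test file using the named codec; fails loudly
# if the snappy native libs are missing on this node.
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-check snappy
```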
===
hdfs-site.xml
===
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/var/vcap/store/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/var/vcap/store/hadoop/hdfs/data</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>0.0.0.0:50070</value>
</property>
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>/var/vcap/store/hadoop/hdfs/secondarynn</value>
</property>
<property>
<name>dfs.datanode.failed.volumes.tolerated</name>
<value>0</value>
</property>
</configuration>
===
hbase-site.xml
===
<configuration>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://n.n.n.n/hbase</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>n.n.n.n</value>
</property>
</configuration>
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/HBase-with-opentsdb-creates-huge-tmp-file-runs-out-of-hdfs-space-tp4067577p4067600.html
Sent from the HBase User mailing list archive at Nabble.com.