Hello sathya,

Those files under .tmp are created as part of normal HBase operation: a
compaction rewrites the existing store files into a single new, larger
file, which is staged under .tmp until it is complete. From your
description it sounds like your VM doesn't have enough space for HDFS.
Have you tried increasing the space allocated to HDFS first? You can
check current usage with `hdfs dfs -df -h`.
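For context: during a compaction HBase first writes the merged file under
.tmp and only deletes the input store files once the new file is finished,
so you transiently need free space for both. A rough sketch of that
arithmetic (a hypothetical helper, not HBase code; two of the three input
sizes below are assumed so they sum to the 300.4 M in your log, and the
replication factor is the HDFS default of 3 — a single-node VM may use 1):

```python
def compaction_fits(input_sizes_mb, free_mb, replication=3):
    """Return True if a compaction of the given store files can finish.

    The merged output is at most the sum of the inputs (usually smaller,
    since deleted/expired cells are dropped), and every HDFS block of the
    new file is stored `replication` times, on top of the space the old
    files still occupy until the compaction commits.
    """
    worst_case_output_mb = sum(input_sizes_mb) * replication
    return free_mb >= worst_case_output_mb

# 3 files totalling ~300.4 M, as in the log above (24.8 M is from the
# log; the other two sizes are made up to match the total):
print(compaction_fits([24.8, 140.0, 135.6], free_mb=500))   # → False
print(compaction_fits([24.8, 140.0, 135.6], free_mb=1500))  # → True
```

So with replication 3, a 300 M compaction can transiently need on the
order of 900 M free in HDFS, which is easy to run out of on a small VM.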

regards,
esteban.


--
Cloudera, Inc.


On Mon, Jan 12, 2015 at 10:02 PM, sathyafmt <[email protected]> wrote:

> CDH5.1.2 (hbase 0.98.1) (running on a vm - vmware esx)
>
> We use opentsdb(2.1.0RC) with hbase & after ingesting 200-300MB of data,
> hbase tries to compact the table, and ends up creating a .tmp file which
> grows to fill up the entire hdfs space.. and dies eventually. I tried to
> remove this .tmp file & restart hbase, but it goes back to creating this
> gigantic .tmp file & ends up dying again...
>
> Any ideas, what could be causing this? I thought this was a one-off thing &
> tried again by clearing out hbase & starting over again. But it ends up
> back
> in this same state after ingesting 200-300MB of data.
>
> Thanks,
> sathya
>
> ==
> 2015-01-07 16:22:27,855 INFO
> [regionserver60020-smallCompactions-1420676547848] regionserver.HStore:
> Starting compaction of 3 file(s) in t of
> tsdb,,1419996612897.b881b798f5a7a766932a1e59bc2bd738. into
>
> tmpdir=hdfs://localhost/hbase/data/default/tsdb/b881b798f5a7a766932a1e59bc2bd738/.tmp,
> totalSize=300.4 M
> 2015-01-07 16:22:27,856 DEBUG [RS_OPEN_REGION-XXXX:60020-1]
> zookeeper.ZKAssign: regionserver:60020-0x14ac6e25f5c0003,
> quorum=localhost:2181, baseZNode=/hbase Transitioned node
> 15abb3e48aeb8b0b43e8cabb4db459bf from M_ZK_REGION_OFFLINE to
> RS_ZK_REGION_OPENING
> 2015-01-07 16:22:27,857 DEBUG
> [regionserver60020-smallCompactions-1420676547848]
> compactions.Compactor: Compacting
>
> hdfs://localhost/hbase/data/default/tsdb/b881b798f5a7a766932a1e59bc2bd738/t/19fb0a1e6fb54b868c0845ce6589ca05,
> keycount=75, bloomtype=ROW, size=24.8 M, encoding=NONE, seqNum=98187,
> earliestPutTs=1420224086959
>
>
>
>
> --
> View this message in context:
> http://apache-hbase.679495.n3.nabble.com/HBase-with-opentsdb-creates-huge-tmp-file-runs-out-of-hdfs-space-tp4067577.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
