You didn't tell us which version of HBase.
HBase is pretty smart about how long it needs to hold locks.
For example, the flush to the WAL is done without the row lock held.
The row lock is only held to create the WAL edit and to add the edit to
the memstore; then it is released.
After that we sync the edit to the WAL.
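To make the ordering concrete, here is a toy model in Java of the sequence
described above. This is not the actual HRegion code; all names are
illustrative, and the only point is that the expensive WAL sync happens
outside the row-lock critical section.

import java.util.concurrent.locks.ReentrantLock;

// Toy model of the write-path ordering described above -- not HBase code.
public class WritePathSketch {
  private final ReentrantLock rowLock = new ReentrantLock();
  private final StringBuilder memstore = new StringBuilder();
  private final StringBuilder wal = new StringBuilder();

  void put(String row, String value) {
    rowLock.lock();
    try {
      String walEdit = row + "=" + value;          // 1. create the WAL edit
      wal.append(walEdit).append('\n');            // 2. append the edit (not yet durable)
      memstore.append(walEdit).append('\n');       // 3. apply the edit to the memstore
    } finally {
      rowLock.unlock();                            // 4. release the row lock...
    }
    syncWal();                                     // 5. ...before the expensive WAL sync
  }

  private void syncWal() {
    // stand-in for an fsync to disk; deliberately outside the lock
  }

  public static void main(String[] args) {
    new WritePathSketch().put("row1", "value1");
  }
}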
Hello everyone,
I have been running some experiments to see the effect of locking overhead
in HBase. For this, I am comparing the throughput difference between these
two schemas.
Schema1:
rowkey->
columnkey-> cf:
value-> blob of 2000 characters
Schema2:
rowkey->:
columnkey-> cf:order
value-> blob
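For what it's worth, a benchmark like this usually boils down to a tight put
loop. Below is a minimal sketch using the standard HBase 1.x client API; the
table name "bench", the row key format, and the cf:order column are
assumptions standing in for the two schemas above.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutBenchmark {
  public static void main(String[] args) throws Exception {
    byte[] blob = new byte[2000];                  // 2000-character value, as in Schema1
    java.util.Arrays.fill(blob, (byte) 'x');

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("bench"))) {  // table name assumed
      long start = System.currentTimeMillis();
      for (int i = 0; i < 100_000; i++) {
        Put put = new Put(Bytes.toBytes("row-" + i));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("order"), blob);
        table.put(put);
      }
      System.out.println("elapsed ms: " + (System.currentTimeMillis() - start));
    }
  }
}

Note that single puts pay a round trip each; batching (or the BufferedMutator)
would change the numbers, so keep the write pattern identical across both schemas.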
Hi,
We have a use case that involves ETL on data coming from several different
sources using pig.
We plan to store the final output table in HBase.
What will be the performance impact if we do a join with an external CSV
table using Pig?
Regards,
Krishna
Do any map-reduce jobs work on this cluster?
On Fri, Sep 26, 2014 at 5:13 PM, iain wright wrote:
> Forgot to include the actual copy table command:
> [hbase@master2 bin]$ ./hbase org.apache.hadoop.hbase.mapreduce.CopyTable
> --starttime=1409128964 --peer.adr=master0.hbasex1.test.cloud.domain.tv
Forgot to include the actual copy table command:
[hbase@master2 bin]$ ./hbase org.apache.hadoop.hbase.mapreduce.CopyTable
--starttime=1409128964 --peer.adr=master0.hbasex1.test.cloud.domain.tv,
master1.hbasex1.test.cloud.v.tv,master2.hbasex1.test.cloud.domain.tv:2181:/hbase
Hi folks,
I'm having trouble using copyTable to seed an existing table's data to a
replication peer. Surely it's an oversight in configuration on our part, but
I've scoured the web and docs for a couple of days now.
We have been able to run these jobs with success (perhaps they don't
require localiza
Hi all,
I'm happy to announce the 0.3.0 release of Tephra.
This release is a renaming of the project from Continuuity Tephra to
Cask Tephra, and includes the following changes:
* All packages have changed from com.continuuity.tephra to co.cask.tephra
* The Maven group ID has changed from com.continuuity.tephra to co.cask.tephra
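For users migrating, the change should mostly be a matter of updating imports
and Maven coordinates. A minimal illustration (TransactionContext is used here
only as an example class; check your own imports):

// Before (Continuuity Tephra):
//   import com.continuuity.tephra.TransactionContext;
// After (Cask Tephra 0.3.0):
import co.cask.tephra.TransactionContext;

public class MigrationExample {
  public static void main(String[] args) {
    // Referenced only to show the new package; see the Tephra docs
    // for actual transaction usage.
    System.out.println(TransactionContext.class.getName());
  }
}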
Prior to major compaction, a region server may need to read HFiles written
by other region servers.
This could be due to region movement, among other causes.
Cheers
On Fri, Sep 26, 2014 at 11:43 AM, Thomas Kwan wrote:
> Thanks Ted for the pointer. A follow-up question.
>
> Will the region server always write data to its local HDFS?
Thanks Ted for the pointer. A follow-up question.
Will the region server always write data to its local HDFS? I am
seeing logs saying that the region server is trying to get data from
other data nodes. Under what scenario does a region server need to
get data from a non-local data node?
thanks
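One way to investigate this is to look at where the blocks of a region's
HFiles actually live. A sketch using the plain HDFS client API; the path
below is an assumption and must be adjusted to an actual region directory:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HFileLocality {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Example path; point it at a real region's column-family directory.
    Path cfDir = new Path("/hbase/data/default/mytable/<region>/cf");
    for (FileStatus hfile : fs.listStatus(cfDir)) {
      for (BlockLocation loc : fs.getFileBlockLocations(hfile, 0, hfile.getLen())) {
        System.out.println(hfile.getPath().getName() + " -> "
            + String.join(",", loc.getHosts()));
      }
    }
  }
}

If the hosts printed for a region's files don't include the server hosting
that region, reads go over the network until a major compaction rewrites the
files locally.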
This is a good writeup that should probably go to refguide.
bq. example would be password reset attempts
In some systems such information would have a long retention period (perhaps
to conform to certain regulations).
Cheers
On Fri, Sep 26, 2014 at 9:10 AM, Wilm Schumacher wrote:
> Hi,
>
> your mail got me thinking about a general answer.
Hi,
your mail got me thinking about a general answer.
I think a good answer would be: all data that are only useful for a
specific time AND are possibly generated infinitely for a finite number
of users should have a TTL. OR when the space is very small compared to
the number of users.
An example would be password reset attempts.
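For the mechanics: in HBase the TTL is set per column family, in seconds.
A minimal sketch with the 1.x admin API; the table name "sessions" and
family "s" are just examples:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateTtlTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      HTableDescriptor table = new HTableDescriptor(TableName.valueOf("sessions")); // name assumed
      HColumnDescriptor cf = new HColumnDescriptor("s");
      cf.setTimeToLive(30 * 60);        // cells expire 30 minutes after their timestamp
      table.addFamily(cf);
      admin.createTable(table);
    }
  }
}

Once the TTL elapses (relative to each cell's timestamp), expired cells are no
longer returned by reads and are physically removed at the next major compaction.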
I wrote a cookie store for node.js using HBase. With this method,
sessions are deleted "regularly": after a specific time in which nothing
happens on a session, it expires.
Am 26.09.2014 um 17:20 schrieb yonghu:
> Hello,
>
> Can anyone give me a concrete use case for TTL deletions? I mean, in which
> situations should we set the TTL property?
Hello,
Can anyone give me a concrete use case for TTL deletions? I mean, in which
situations should we set the TTL property?
regards!
Yong
A region server should be installed on a server that is also a data node.
The region server count may be lower than the data node count, so some
data nodes would not have a region server running.
See http://hbase.apache.org/book.html#regions.arch.locality
Cheers
On Fri, Sep 26, 2014 at 7:02 AM, Thomas Kwan wrote:
Hi there,
To get good read performance, should the region server be installed on
every data node? (Thinking about data locality here)
thanks
thomas