Hello,
My HBase version is 0.92.0, and I find that when I use minor
compaction and major compaction to compact a table, there is no
difference. The minor compaction removes the deleted cells and
discards the excess data versions, which should be the task of the
major compaction. I wonder
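
For reference, a minimal sketch of requesting each compaction type through the 0.92 Java client; the table name "mytable" is a placeholder. (One possible explanation for seeing no difference: a minor compaction that happens to select every store file can be promoted to a major one.)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class CompactionSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        // Request a minor compaction: merges a subset of store files; if it
        // ends up selecting all of them, it may be promoted to a major one.
        admin.compact("mytable");
        // Request a major compaction: rewrites all store files per store,
        // dropping deleted cells and versions beyond the configured maximum.
        admin.majorCompact("mytable");
      }
    }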
We use EC2 and CDH as well and have around 80 Hadoop/HBase nodes deployed
across a few different clusters. We use a combination of Puppet for package
management and Fabric scripts for pushing configs and managing services.
Our base AMI is a pretty bare CentOS 6 install and Puppet handles most of
Hi,
as far as I know, TTL as well as deletions only take effect on major
compaction (see http://hbase.apache.org/book.html#regions.arch ->
8.7.5.5).
Regards,
Christian
From: ajay.bhosle
To: user@hbase.apache.org
Sent: Wednesday, April 25, 2012, 14:33
Subject: Regions not cleared
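
To illustrate Christian's point, a minimal sketch of setting a TTL with the 0.92 Java client; the table name "ttl_demo" and family "d" are placeholders. Expired cells stop showing up in reads immediately, but the files on disk only shrink once a major compaction rewrites them.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class TtlSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor htd = new HTableDescriptor("ttl_demo");
        HColumnDescriptor hcd = new HColumnDescriptor("d");
        hcd.setTimeToLive(24 * 60 * 60); // TTL is given in seconds (one day here)
        htd.addFamily(hcd);
        admin.createTable(htd);
        // Expired cells are filtered from reads right away, but are only
        // physically removed when a major compaction rewrites the store files.
        admin.majorCompact("ttl_demo");
      }
    }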
Hi,
On 2012-04-14 at 21:07, Rob Verkuylen wrote:
> As far as I understand, sequential keys with a time-range scan have the best
> read performance possible, because of the HFile metadata, just as N
> indicates. Maybe adding Bloom filters can further improve performance.
As far as I understand it, Bloom
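
For reference, a rough sketch of both knobs in the 0.92 Java API; table and family names are made up. The time range lets whole HFiles be skipped via their metadata, while a row-level Bloom filter mainly helps point Gets on missing rows rather than range scans.

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.regionserver.StoreFile;

    public class TimeRangeSketch {
      public static void main(String[] args) throws Exception {
        // Restrict a scan to a time range; store files whose metadata shows
        // no cells inside the range can be skipped entirely.
        Scan scan = new Scan();
        scan.setTimeRange(1335000000000L, 1335500000000L);

        // Enable a row-level Bloom filter on a hypothetical family "d".
        HColumnDescriptor hcd = new HColumnDescriptor("d");
        hcd.setBloomFilterType(StoreFile.BloomType.ROW);
      }
    }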
On Wed, Apr 25, 2012 at 11:14 AM, David Charle wrote:
> As per the docs, it looks painless to upgrade from 0.20.3 to 0.90
> (you only need to run the upgrade script if upgrading to 0.92).
> http://hbase.apache.org/book/upgrading.html#upgrade0.90
>
> Does anyone have experience in upgrading from 0.20 to 0.9
Check this out too, it seems to make it work; do what Tariq has suggested as well:
http://ria101.wordpress.com/2010/01/28/setup-hbase-in-pseudo-distributed-mode-and-connect-java-client/
On Thu, Apr 26, 2012 at 1:05 AM, Mohammad Tariq wrote:
> Change 127.0.1.1 in your /etc/hosts file to 127.0.0.1...als
Change 127.0.1.1 in your /etc/hosts file to 127.0.0.1... also add the
hadoop-core.jar from the hadoop folder and commons-configuration.jar from the
hadoop/lib to the hbase/lib folder.
On Apr 25, 2012 11:59 PM, "shashwat shriparv" wrote:
> just follow this
>
> http://hbase.apache.org/book/standalone_dist.
Just follow this:
http://hbase.apache.org/book/standalone_dist.html
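
A quick way to check whether the 127.0.1.1 entry is the culprit: ask Java what the local hostname resolves to, since the HBase master and ZooKeeper bind based on the same lookup. This is only a diagnostic sketch.

    import java.net.InetAddress;

    public class CheckLoopback {
      public static void main(String[] args) throws Exception {
        InetAddress addr = InetAddress.getLocalHost();
        // On a misconfigured Debian/Ubuntu box this often prints 127.0.1.1,
        // which confuses the master and ZooKeeper in pseudo-distributed mode.
        System.out.println(addr.getHostName() + " -> " + addr.getHostAddress());
      }
    }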
On Wed, Apr 25, 2012 at 7:05 PM, Nitin Pawar wrote:
> any error msg?
>
> On Wed, Apr 25, 2012 at 7:02 PM, shehreen wrote:
>
> >
> > Hi
> >
> > I am new to HBase and Hadoop. I want to install HBase and to work with
> > HBase, writi
As per the docs, it looks painless to upgrade from 0.20.3 to 0.90
(you only need to run the upgrade script if upgrading to 0.92).
http://hbase.apache.org/book/upgrading.html#upgrade0.90
Does anyone have experience in upgrading from 0.20 to 0.90, or something similar
with a major upgrade? Do we need to upgrad
Thanks yonghu.
That is HBASE-4241.
One small point: The deleted rows are not deleted from the memstore, but rather
not included when the memstore is flushed to disk.
-- Lars
- Original Message -
From: yonghu
To: user@hbase.apache.org; lars hofhansl
Cc:
Sent: Wednesday, April 25,
Thank you, Gary! Now I understand the actual method.
On Wed, Apr 25, 2012 at 11:36 AM, Gary Helmling wrote:
> Hi Vamshi,
>
> See the ConstraintProcessor coprocessor that was added for just this
> kind of case:
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/constraint/package-summary.
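
To make the pointer concrete, a rough sketch of what such a constraint might look like, assuming the org.apache.hadoop.hbase.constraint package from the link above; the class and column names are hypothetical.

    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.constraint.BaseConstraint;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.util.Bytes;

    // Hypothetical constraint: reject any Put that lacks a d:required cell.
    public class RequiredColumnConstraint extends BaseConstraint {
      @Override
      public void check(Put put) throws ConstraintException {
        if (!put.has(Bytes.toBytes("d"), Bytes.toBytes("required"))) {
          throw new ConstraintException("Put is missing d:required");
        }
      }
    }
    // It would be attached to a table with Constraints.add(tableDescriptor,
    // RequiredColumnConstraint.class) before the table is created or altered.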
Hi there-
In addition to what was said about GC, you might want to double-check
this...
http://hbase.apache.org/book.html#performance
... as well as this case study for performance troubleshooting:
http://hbase.apache.org/book.html#casestudies.perftroub
On 4/24/12 9:58 PM, "Michael Segel"
any error msg?
On Wed, Apr 25, 2012 at 7:02 PM, shehreen wrote:
>
> Hi
>
> I am new to HBase and Hadoop. I want to install HBase and to work with HBase,
> writing MapReduce jobs for data in HBase. I installed HBase. It works well
> in standalone mode but the master and ZooKeeper don't start properly in
>
Hi
I am new to HBase and Hadoop. I want to install HBase and to work with HBase,
writing MapReduce jobs for data in HBase. I installed HBase. It works well
in standalone mode but the master and ZooKeeper don't start properly in
pseudo-distributed mode.
Kindly help me resolve this problem.
Thanks
--
Hi,
I have set a TTL on an HBase table, due to which the data is cleared after the
specified time, but the regions are not removed even though the data inside
them has been cleared. Can someone please let me know if I am missing
anything?
Thanks
Ajay
Uhm... Not exactly Lars...
Just my $0.02 ...
While I don't disagree with Lars, I think the question you have to ask is why
the timestamp is important.
Is it an element of the data or is it an artifact?
This kind of gets into your schema design and taking shortcuts. You may want
to instead create
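
One way to picture the distinction being drawn, sketched against the 0.92 client API with made-up names: the timestamp can ride along as the cell's version dimension, or be modeled as an explicit column of the data.

    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TimestampSketch {
      public static void main(String[] args) {
        long eventTime = System.currentTimeMillis();
        Put p = new Put(Bytes.toBytes("row1"));
        // Timestamp as an artifact: carried in HBase's version dimension.
        p.add(Bytes.toBytes("d"), Bytes.toBytes("value"), eventTime, Bytes.toBytes("v"));
        // Timestamp as an element of the data: stored as an ordinary column.
        p.add(Bytes.toBytes("d"), Bytes.toBytes("event_ts"), Bytes.toBytes(eventTime));
      }
    }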
I guess Sesame Street isn't global... ;-) Oh, and of course I f'd the joke by
saying Grover and not Oscar, so it's my bad. :-( [Google "Oscar the Grouch" and
you'll understand the joke that I botched.]
It's most likely GC and a mistuned cluster.
The OP doesn't really get into detail, except to sa
As Lars mentioned, the row is not physically deleted. What HBase does is
insert a cell called a "tombstone", which is used to mask the deleted value,
but the value is still there (if the deleted value is in the same memstore
as the tombstone, it will be deleted in the memstore, so you will not f
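
For completeness, a small sketch of the operation being described, against a hypothetical table "t" with family "d". The Delete only writes a tombstone marker; the masked cells survive on disk until a major compaction drops them.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TombstoneSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "t");
        // This writes a tombstone cell; the deleted values are merely masked
        // and remain in the HFiles until a major compaction removes them.
        table.delete(new Delete(Bytes.toBytes("row1")));
        table.close();
      }
    }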