Thanks.
I think this kind of limiting will not perform well for my millions of records, and it
is better to change my design.
Don't worry about the language ;)
I don't think there is any mechanism today to limit the number of columns
in a column family.
There might be multiple options, but they will all have some drawbacks.
One option is to have a daily MapReduce job looking at each row and doing
the cleanup. This can work
Any cell in the same row.
Sorry about my poor language!
On Thu, Sep 19, 2013 at 9:28 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
Hi MBE,
When you say "cells with least timestamp being removed", do you mean
versions of the same cell, or any cell in the same row/CF?
JM
2013/9/18 M. BagherEsmaeily
Hi,
I have a column family for which I want the number of columns to have a
specific limit; when the count exceeds the limit, the cells with the least
(oldest) timestamps should be removed, like a TTL on count instead of time.
Please guide me to the best optimized way.
Thanks.
MBE
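There is no built-in "TTL on count" in HBase (MAX_VERSIONS caps versions per cell, not columns per row), so the eviction has to happen client-side or in a cleanup job like the one discussed above. A minimal sketch of the trimming decision, using plain Java collections to stand in for a row's cells (the column names, timestamps, and limit below are made-up illustrations, not from the thread):

```java
import java.util.*;

public class CountTtl {
    // Given column -> write timestamp for one row, return the set of columns
    // to delete so that only the `limit` most recently written columns survive.
    static Set<String> columnsToEvict(Map<String, Long> colTimestamps, int limit) {
        if (colTimestamps.size() <= limit) return Collections.emptySet();
        List<Map.Entry<String, Long>> cols = new ArrayList<>(colTimestamps.entrySet());
        // Sort oldest first, so the head of the list is what we evict.
        cols.sort(Map.Entry.comparingByValue());
        Set<String> evict = new LinkedHashSet<>();
        for (int i = 0; i < cols.size() - limit; i++) {
            evict.add(cols.get(i).getKey());
        }
        return evict;
    }

    public static void main(String[] args) {
        Map<String, Long> row = new HashMap<>();
        row.put("a", 100L);
        row.put("b", 300L);
        row.put("c", 200L);
        // Keep the 2 newest columns: only "a" (the oldest) is evicted.
        System.out.println(columnsToEvict(row, 2)); // prints [a]
    }
}
```

In a real cluster this decision would run inside the daily MapReduce job JM describes, issuing a Delete for each evicted qualifier.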
In http://hbase.apache.org/book/hbase_metrics.html we see
"""
15.4.2. Warning To Ganglia Users
Warning to Ganglia Users: by default, HBase will emit a LOT of metrics per
RegionServer which may swamp your installation. Options include either
increasing Ganglia server capacity, or configuring HBase
But...
if you can't upgrade, then you will have to check out the 0.94.3 version from
SVN, apply the patch manually, build, and re-deploy. The patch might be pretty
easy to apply.
JM
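For reference, in the 0.94 era the Ganglia wiring the book refers to lived in hadoop-metrics.properties. A sketch of the relevant context entries (hostname, port, and period are placeholder assumptions; this shows where the configuration lives, not a tuned setup):

```
# conf/hadoop-metrics.properties on each RegionServer
# Send HBase metrics to a Ganglia 3.1+ daemon; host:port is an example.
hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
hbase.period=10
hbase.servers=ganglia-host.example.com:8649
```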
2013/9/18 Ted Yu
The fix is in 0.94.4
It would be easier for you to upgrade to a newer release, since rolling
restart is supported.
Cheers
On Wed, Sep 18, 2013 at 12:24 PM, Jason Huang wrote:
> Hello,
>
> We are using hadoop 1.1.2 and HBase 0.94.3 and we found the following
> entries appear every minute in namen
Did you ever find a resolution to this issue?
Thanks,
Nick
On Thu, Apr 4, 2013 at 12:56 AM, vbogdanovsky wrote:
> I have HFiles after an MR job, and when I import them to my table I often get
> exceptions like this:
> ====
> hadoop jar /usr/lib/hbase/hbase
How are vendor-specific versions of HBase running on YARN? Are they using Hoya?
On Sep 18, 2013, at 4:21 PM, Steve Loughran wrote:
Right now you are going to have to run HBase outside YARN, and on those
nodes with HBase configure YARN to offer less capacity (CPU and RAM) than
your (static) HBase demands will be.
The Hoya stuff is still immature, which currently offers the advantage that
I can make big changes to bits of the
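The "offer less capacity" part is done per NodeManager. A sketch of the yarn-site.xml properties involved (the values are placeholder assumptions; subtract what your RegionServer needs from the machine's totals):

```xml
<!-- yarn-site.xml on nodes that also run a RegionServer -->
<property>
  <!-- Memory YARN may hand out, leaving headroom for the RegionServer heap -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
<property>
  <!-- vcores YARN may schedule; the rest stay with HBase -->
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>6</value>
</property>
```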
Hello,
We are using hadoop 1.1.2 and HBase 0.94.3 and we found the following
entries appear every minute in namenode's log:
2013-09-17 14:00:25,710 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 5 on 54310, call delete(/hbase/.archive/mytable, false)
from **.**.**.**:42912 error: java.io.I
Don't forget to look at this section for HBase schema design examples.
http://hbase.apache.org/book.html#schema.casestudies
On 9/17/13 1:52 PM, "Adrian CAPDEFIER" wrote:
>Thanks for the tip. In the data warehousing world I used to call them
>surrogate keys - I wonder if there's any dif
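The case studies chapter also covers salted/hashed row keys, which are close cousins of the surrogate keys mentioned above. A self-contained sketch of one such scheme (the bucket count, key format, and example key are illustrative assumptions, not from the thread):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class SaltedKey {
    // Prefix the natural key with a stable hash bucket so sequential keys
    // spread across regions instead of hot-spotting a single one.
    static String saltedRowKey(String naturalKey, int buckets) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(naturalKey.getBytes(StandardCharsets.UTF_8));
            int bucket = (digest[0] & 0xFF) % buckets;
            return String.format("%02d-%s", bucket, naturalKey);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 ships with every JDK
        }
    }

    public static void main(String[] args) {
        // The same key always lands in the same bucket, so reads can recompute it.
        System.out.println(saltedRowKey("user123", 16));
    }
}
```

The bucket is deterministic, so point reads recompute the prefix instead of storing a lookup table; scans over the natural key order, however, become `buckets` parallel scans.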
Hi Pedro,
Many thanks for the suggestion. It's working for me now.
-Michael
From: Pedro Assis
To: user@hbase.apache.org; Michael Kintzer
Sent: Tuesday, September 17, 2013 10:28 AM
Subject: Re: HBase and Hive
Hi Michael,
One way to solve that is creating
Unlike in an RDBMS, data in HBase is stored as key-value pairs in
HDFS. Hence, the row key is repeated for every data version in a cell.
On Tue, Sep 17, 2013 at 7:53 PM, Ted Yu wrote:
> w.r.t. Data Block Encoding, you can find some performance numbers here:
>
>
> https://issues.apache
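The row-key repetition described above is what Data Block Encoding targets. A rough, simplified model of the flattened layout (my own illustration; real KeyValues also carry length fields, a type byte, and more):

```java
public class KvLayout {
    // Simplified flattened cell: every version stores its full coordinates.
    static String keyValue(String row, String cf, String qual, long ts) {
        return row + "/" + cf + ":" + qual + "/" + ts;
    }

    public static void main(String[] args) {
        // Three versions of one cell: the row key "user123" is written 3 times.
        for (long ts = 1; ts <= 3; ts++) {
            System.out.println(keyValue("user123", "d", "name", ts));
        }
    }
}
```

Prefix-style encodings store only the delta from the previous key in the block, which is why tall or wide tables see the largest savings in the numbers Ted links.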