The second HBase 0.92.0 release candidate is available for download:
http://people.apache.org/~stack/hbase-0.92.0-candidate-1/
I've posted a secure and an insecure tarball.
As said previously, HBase 0.92.0 includes a raft of new features
including: coprocessors, security, a new (self-migrating) f…
Hi,
I solved my problem above related to zookeeper.KeeperException and the other
errors.
The solution: I added zookeeper-3.3.2.jar and log4j-1.2.5.jar to the
classpath of HBase, i.e. I set HBASE_CLASSPATH in the
{HBASE_HOME}/conf/hbase-env.sh file to include the above two jars. That solved
my problem. I did…
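For anyone hitting the same errors, the relevant lines in
{HBASE_HOME}/conf/hbase-env.sh would look something like this (the jar
locations are placeholders, not the poster's actual paths):

    # Append the ZooKeeper and log4j jars to HBase's classpath:
    export HBASE_CLASSPATH="${HBASE_CLASSPATH}:/path/to/zookeeper-3.3.2.jar"
    export HBASE_CLASSPATH="${HBASE_CLASSPATH}:/path/to/log4j-1.2.5.jar"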
I believe it's the same amount of work.
On Wed, Dec 14, 2011 at 3:37 PM, Stuart Smith wrote:
> Ah, thanks for clarifying my wrong answer!
>
> The only time I had to deal with timestamps, I had to go through the Thrift
> API...
> I never noticed setTimeRange in the Scan() Java API :)
>
> So…
Sorry to join late...
SPoF is a real problem if you're planning to serve data in real time from your
cluster.
(Yes, you can do this with HBase...)
Then, regardless of data loss, you have to bring the cluster back up.
Downtime can be significant enough to kill your business, depending on your
use case.
Sure…
Ah, thanks for clarifying my wrong answer!
The only time I had to deal with timestamps, I had to go through the Thrift
API...
I never noticed setTimeRange in the Scan() Java API :)
So now I'm curious: if I use this and it can't skip HFiles, is there any
performance gain from doing this v…
That is an interesting comment. How would you enforce this in practice?
Can you give more details?
On Wed, Dec 14, 2011 at 10:29 AM, Carson Hoffacker wrote:
> The timerange scan is able to leverage metadata in each of the HFiles. Each
> HFile should store information about the timerange associat…
The timerange scan is able to leverage metadata in each of the HFiles. Each
HFile should store information about the time range associated with the data
within it. If the time range associated with the HFile is different from the
time range you are interested in, that HFile will be skipped.
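To make that concrete, here is a minimal sketch of a time-bounded scan via
the Java client API; the table and column family names are made up for
illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TimeRangeScanExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");  // hypothetical table
        try {
          Scan scan = new Scan();
          scan.addFamily(Bytes.toBytes("cf"));       // hypothetical family
          long now = System.currentTimeMillis();
          // Only return cells written in the last hour. An HFile whose
          // recorded time range lies entirely outside [min, max) can be
          // skipped, as described above.
          scan.setTimeRange(now - 3600 * 1000L, now);
          ResultScanner scanner = table.getScanner(scan);
          try {
            for (Result r : scanner) {
              System.out.println(r);
            }
          } finally {
            scanner.close();
          }
        } finally {
          table.close();
        }
      }
    }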
Hello Thomas,
Someone here could probably provide more help, but to start you off, the
only way I've filtered timestamps is to do a scan, and just filter out rows one
by one. This definitely sounds like something coprocessors could help with, but
I don't really understand those yet, so someo…
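For what it's worth, the scan-and-filter approach above looks roughly like
this; it assumes an already-open HTable named table, long bounds
minTs/maxTs, and imports from org.apache.hadoop.hbase.client plus
org.apache.hadoop.hbase.KeyValue. Scan.setTimeRange, mentioned elsewhere in
the thread, pushes the same check server-side and is the better option:

    // Client-side timestamp filtering: scan everything and keep only
    // cells whose timestamp falls in [minTs, maxTs).
    Scan scan = new Scan();
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result r : scanner) {
        for (KeyValue kv : r.raw()) {
          if (kv.getTimestamp() >= minTs && kv.getTimestamp() < maxTs) {
            // process the matching cell here
          }
        }
      }
    } finally {
      scanner.close();
    }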
Hello,
How did you query HBase via a statement object? Are you using Hive?
Or is this some new interface I don't know about? I always had to use Get() or
Scan().
And HBase stores everything as bytes, not strings; unlike C, in Java there is
a difference ;)
Take care,
-stu
Sorry for coming to this thread late.
Isn't it also an issue to set -XX:CMSInitiatingOccupancyFraction=60 with a
block cache of 0.5?
I was under the impression that this means a GC is triggered at 60% heap
occupancy, so as soon as the cache fills up it would constantly be triggering
GCs.
We are using a lower block cache of…
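For reference, the two settings under discussion live in different places;
the values below are just the ones from this thread, not recommendations:

    # hbase-env.sh: CMS starts collecting at 60% old-gen occupancy
    export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60"

    <!-- hbase-site.xml: fraction of heap given to the block cache -->
    <property>
      <name>hfile.block.cache.size</name>
      <value>0.5</value>
    </property>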
Thank you, that helped. So HBase actually does not need to be the owner of
the files; it just needs write access to them.
On 13.12.2011 12:56, Paul Mackles wrote:
If you can chmod a+w the directory /user/dorner/bulkload/output/Tsp, hbase
should be able to do what it needs to do (I am assuming the error is…
Thank you Lars.
STOPROW did work in my hbase shell as you suggested.
- Original Message -
From: lars hofhansl
To: "user@hbase.apache.org" ; Sreeram K
Cc:
Sent: Tuesday, December 13, 2011 3:56 PM
Subject: Re: HBase- Scan with wildcard character
The shell only lets you do that much.
HB…
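For anything beyond what the shell offers, the Java API equivalent of the
STARTROW/STOPROW prefix trick would be along these lines (assuming an open
HTable named table; the prefix is made up):

    // Match every row starting with "abc": the stop row is the prefix
    // with its last byte incremented, so the scan covers ["abc", "abd").
    Scan scan = new Scan();
    scan.setStartRow(Bytes.toBytes("abc"));
    scan.setStopRow(Bytes.toBytes("abd"));
    ResultScanner scanner = table.getScanner(scan);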
Hi, thank you. All these days I have been coding in Eclipse and trying to run
the program from Eclipse only, but I never saw the program run on the
cluster; it only runs in the LocalJobRunner, even though I set
config.set("mapred.job.tracker", "jthost:port");
Now I realized one thing: just co…
re: "Though hbase stores everything as string"
Not so. HBase stores everything as bytes, and you are responsible for
conversion.
http://hbase.apache.org/book.html#supported.datatypes
http://hbase.apache.org/book.html#data_model_operations
On 12/14/11 6:26 AM, "neuron005" wrote:
>
>Hii the…
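A minimal round trip with the Bytes utility shows what "responsible for
conversion" means in practice:

    import org.apache.hadoop.hbase.util.Bytes;

    public class BytesRoundTrip {
      public static void main(String[] args) {
        byte[] raw = Bytes.toBytes(1234);         // int -> 4 big-endian bytes
        System.out.println(Bytes.toInt(raw));     // 1234: matching decode
        System.out.println(Bytes.toString(raw));  // garbage: not UTF-8 text
      }
    }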
Otis,
You could co-locate RSs with TTs and DNs for the most part, as long as you are
not really serving "real time" requests. Just tweak your task configs and
give HBase enough RAM. You get the benefit of data locality, and that could
improve performance. But you should definitely try out your approac…
Hi Otis,
Perhaps I am getting this totally wrong, but here's how I look at it.
Let's say your problem as a whole needs X spindles + Y CPU cores + Z amount of
RAM to make everything work out. Then, would it matter whether you divide that
amount of resources (XYZ) over heterogeneous or homogeneou…
Hi there,
I just created a simple HBase table from my Java program and inserted a value
into it. But I ran into an issue: whenever I store an integer value in HBase
and retrieve it from my Java program, it gives a different value.
For example:
I inserted an int value '1234' into my HBase table.
When I queri…
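A common cause of that symptom, sketched with made-up row/family/qualifier
names and assuming an open HTable named table: the value is written as one
type and read back as another.

    // Write the int with Bytes.toBytes(int) and read it back with
    // Bytes.toInt; decoding the same bytes with a different method
    // (e.g. Bytes.toString) produces a "different value".
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(1234));
    table.put(put);

    Get get = new Get(Bytes.toBytes("row1"));
    byte[] v = table.get(get).getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
    System.out.println(Bytes.toInt(v));  // prints 1234 again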
What do you actually want to do? Do you want to run your Java program and
execute queries from there, or what? Can you expand on your question a bit?
silvia90 wrote:
>
> Hi,
> I'm trying to connect Eclipse with HBase on Ubuntu, but I can't find any
> guide for doing it.
> Can someone explain to me how I…
Hi Andy,
Thanks for the pointer to the 0.20.205.0 release.
We are constrained to use Scientific Linux for our cluster, and although SciLin
is close to Red Hat, in our experience CDH does not just drop into place. We've
gotten used to managing Hadoop/HBase for our cluster, so I'd like to avoid CDH3
fo…
Hello,
Can anybody share some insights on how timerange/timestamp filters are
processed?
Basically, we intend to use timerange/timestamp filters to process relatively
new data, from an insertion-timestamp point of view:
- How does the process of skipping records and/or regions work if one
uses timerange filters?