Hi all, I am facing many problems configuring Eclipse with Hadoop. I
downloaded Hadoop (0.20.203) and Eclipse Helios. It worked well with Michael
Noll's steps without using Eclipse. But when I started with Eclipse, during
building I am getting many errors with the Eclipse plugin jar. So many ti
First off, thanks for your response. 26 seconds seems a bit short for a
timeout, so what are some more reasonable timeouts I should set?
This is probably the root cause since my job was pretty hefty.
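For reference, these timeouts usually live in hbase-site.xml. A sketch of the two most relevant knobs; the values below are only illustrative, not recommendations from this thread:

```xml
<!-- hbase-site.xml: illustrative timeout settings (example values only) -->
<property>
  <name>zookeeper.session.timeout</name>
  <!-- ms; how long ZooKeeper waits before declaring a RegionServer dead -->
  <value>180000</value>
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <!-- ms; client-side RPC timeout -->
  <value>60000</value>
</property>
```

Note the effective ZooKeeper session timeout is also bounded by the ZooKeeper server's own maxSessionTimeout, so raising only the HBase side may not be enough.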
Make sure you are not CPU-starving the RegionServer thread. For example, if
you are running a MapR
> Are there performance hits for running at
> INFO/DEBUG? What do most people suggest?
DEBUG until you get your HBase config under control
>> 5 of our HBase region servers were killed. First off, when this happens and
>> there are only 2 servers is there a possibility of data corruption and/or
>
As a side note, I obviously never changed the logger level from the
default Cloudera installation. Are there performance hits for running at
INFO/DEBUG? What do most people suggest?
Thanks
On 8/24/11 5:19 PM, Mark wrote:
I noticed that after running some hefty jobs on our cluster, 3 out
of 5 of our HBase region servers were killed. First off, when this
happens and there are only 2 servers, is there a possibility of data
corruption and/or loss? Secondly, and more importantly, why does this
happen and how can I
On Aug 24, 2011, at 11:37 AM, Sujee Maniyam wrote:
> sounds like even if I created an HTablePool and shared it among threads (which
> seems safe to do, as pointed out here), I won't see much improvement when
> accessing the SAME table from multiple threads.
>
> correct?
That depends on how many reg
Hi Jimson,
I only did some experiments with bulk-loading data via MapReduce jobs; my
tests write HFiles directly, though (not doing live puts to an HBase cluster,
and you are interested in get performance anyway).
If you see extremely bad performance you might have another problem. You might
sounds like even if I created an HTablePool and shared it among threads (which
seems safe to do, as pointed out here), I won't see much improvement when
accessing the SAME table from multiple threads.
correct?
http://sujee.net
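For context, a minimal sketch of the pooled-table pattern being discussed (HBase 0.90-era client API; the table, family, and qualifier names are hypothetical, and this needs a running cluster, so treat it as a compile-only sketch):

```java
// Sketch: one HTablePool shared across threads; each thread borrows its own
// HTableInterface per operation. "mytable", "cf", "q" are hypothetical names.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.util.Bytes;

public class PooledGet {
    private static final Configuration conf = HBaseConfiguration.create();
    // One pool for the whole process; max 10 cached table handles.
    private static final HTablePool pool = new HTablePool(conf, 10);

    static byte[] fetch(String row) throws Exception {
        HTableInterface table = pool.getTable("mytable");
        try {
            return table.get(new Get(Bytes.toBytes(row)))
                        .getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
        } finally {
            pool.putTable(table); // always return the handle to the pool
        }
    }
}
```

The pool removes contention on the HTable instance itself, but all handles for the same table still talk to the same regions, which is why sharing a pool does not by itself parallelize access to one table.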
Hi Lars,
Thanks for your info. Our data is dense and no compression is used.
We saw a blog on HBase architecture at
http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html.
It looks like 'hbase org.apache.hadoop.hbase.io.hfile.HFile' can provide
more detailed info for each
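For anyone else looking, the HFile tool is invoked roughly like this (the path under /hbase is hypothetical; run it against a real store file from your table):

```shell
# Inspect a single HFile; the file path below is a hypothetical example.
# -f: file to inspect, -m: print the meta block, -v: verbose output
hbase org.apache.hadoop.hbase.io.hfile.HFile \
  -f /hbase/mytable/1028785192/cf/4585508389 \
  -m -v
```

The meta block output includes per-file details such as entry counts and average key/value sizes, which is useful when reasoning about index and block overhead.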
Hi,
I want to do bulk loads by following http://hbase.apache.org/bulk-loads.html to
create HFiles, and then use LoadIncrementalHFiles to load the data into a
table. Suppose the data I'm loading is being written to a new column that
hasn't been used, and the rows are a superset of the rows al
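For what it's worth, the second step described there looks roughly like this on the command line (the output directory and table name are hypothetical):

```shell
# Step 1 (not shown): a MapReduce job using HFileOutputFormat writes HFiles
# into an HDFS output directory instead of doing live puts.
# Step 2: hand those HFiles to the cluster; this moves them into the
# table's regions without going through the write path.
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
  /user/me/bulkload-output mytable
```

Because the files are moved into place rather than replayed as puts, the load itself is fast and puts little pressure on the MemStore.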
Hi ,
Anyone has any idea about this error ?
-----Original Message-----
From: Stuti Awasthi
Sent: Wednesday, August 24, 2011 9:42 AM
To: user@hbase.apache.org
Subject: RE: java.lang.RuntimeException:
org.apache.hadoop.hbase.TableNotFoundException: api
Hi,
My ruby code looks like this :
...
We had a similar OOME problem, and we solved it by allocating more heap
space. The underlying cause for us was that as the table grew, the
StoreFileIndex grew, taking up a larger and larger chunk of heap.
What made this a problem is that the MemStore grows rapidly during inserts
and its size
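The heap bump mentioned above is typically done in conf/hbase-env.sh; the value here is only an example, not a recommendation:

```shell
# conf/hbase-env.sh: give the HBase JVMs more heap.
# Value is in MB; 8000 is an example only -- size it to your hardware.
export HBASE_HEAPSIZE=8000
```

After changing it, restart the RegionServers and watch whether the StoreFileIndex plus MemStore still approach the new limit.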
All
I have set the following property in my hbase-site.xml:

  <property>
    <name>hbase.tmp.dir</name>
    <value>/Users/me/deployments/current/data/hbase</value>
    <description>The directory shared by RegionServers.</description>
  </property>

But when hbase starts up, in the log I see the following
2
Hi Lars, Li Pi,
Thank you for the response.
Well, I was wondering what MapReduce could do here.
Can anyone give me some insight into using MapReduce for doing parallel get
operations, so that we can avoid the partial serialization?
In general, what is your opinion about using MapReduce for bu
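As a middle ground before reaching for MapReduce, the client API can already batch gets, grouping them per region server in a single round trip. A sketch (table and row names are hypothetical; this needs a live cluster):

```java
// Sketch: HTable.get(List<Get>) batches many gets into one round trip per
// region server, avoiding one RPC per row. "mytable" is a hypothetical name.
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchGet {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(HBaseConfiguration.create(), "mytable");
        List<Get> gets = new ArrayList<Get>();
        for (String row : args) {
            gets.add(new Get(Bytes.toBytes(row)));
        }
        // One batched call; results come back in the same order as the gets.
        Result[] results = table.get(gets);
        for (Result r : results) {
            System.out.println(Bytes.toString(r.getRow()));
        }
        table.close();
    }
}
```

A MapReduce job only starts to pay off when the key set is large enough to amortize job-launch overhead; for moderate batches, the client-side batching above is usually simpler and faster.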
Thanks Lars,
This certainly helps. I will try the solution.
-----Original Message-----
From: lars hofhansl [mailto:lhofha...@yahoo.com]
Sent: Wednesday, August 24, 2011 11:46 AM
To: user@hbase.apache.org
Subject: Re: Search query in Hbase
Hi Stuti,
one of the main design tasks in HBase is to stru
Thanks for your feedback.
The point is that once we restart HBase, the memory footprint is far below 4 GB.
The system runs well for a couple of days and then the heap reaches 4 GB, which
causes the region server to crash.
This may indicate a memory leak, since once we restart HBase the problem is
solved (or mayb