Must be a deadlock if the dumb JVM can figure it out. What version of
HBase, please, so I can dig into the source code?
Thanks,
St.Ack
On Wed, Apr 27, 2011 at 9:08 PM, Zhoushuaifeng wrote:
> Logs are below; is it a deadlock in HBase? How does it happen, and how can it be avoided?
>
> Found one Java-level deadlock:
Logs are below; is it a deadlock in HBase? How does it happen, and how can it be avoided?
Found one Java-level deadlock:
=============================
"IPC Server handler 9 on 60020":
waiting to lock monitor 0x409f3908 (object 0x7fe7cbacbd48, a
org.apache.hadoop.hbase.regionserver.MemStoreFlusher)
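For reference, the report above is the kind the JVM's own deadlock detector prints; a full dump can be grabbed by hand from the running region server. A minimal sketch, assuming the pid printed by jps is 12345 (a placeholder):

$ jps | grep HRegionServer
12345 HRegionServer
$ jstack 12345 > rs-threaddump.txt
# the "Found one Java-level deadlock" section appears at the end of the dump

Both tools ship with the JDK.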
Thanks J-D, I have opened a jira here:
https://issues.apache.org/jira/browse/HBASE-3826
On Thu, Apr 28, 2011 at 12:55 AM, Jean-Daniel Cryans wrote:
> That makes sense, would you mind opening a jira?
>
> Thx,
>
> J-D
>
> On Tue, Apr 26, 2011 at 8:52 PM, Schubert Zhang wrote:
> > After more testing,
On Wed, Apr 27, 2011 at 8:13 PM, Jean-Daniel Cryans wrote:
> I have a hard time digesting this... You ran the script, didn't change
> anything else, ran the test and everything was back to normal, right?
> Did you restart HBase or move .META. around? The reason I'm asking is
> that this script do
I have a hard time digesting this... You ran the script, didn't change
anything else, ran the test and everything was back to normal, right?
Did you restart HBase or move .META. around? The reason I'm asking is
that this script doesn't have any effect until .META. is reopened so I
would be quite f
I am using CDH3U0. It is HBase 0.90.1, I think.
Thanks
Weihua
2011/4/28 Stack :
> On Tue, Apr 26, 2011 at 6:02 PM, Weihua JIANG wrote:
>> I tried to enable HPROF on RS, but failed. If I added the HPROF agent
>> in hbase-env.sh, RS startup reports an error saying HPROF can't be
>> loaded twice. But
Note that we have a compaction recursive enqueue patch on our internal
branches, but we wanted to give it some run time before contributing back
to make sure it was safe. I'll port that to trunk.
On 4/27/11 9:55 AM, "Jean-Daniel Cryans" wrote:
>That makes sense, would you mind opening a jira?
>
On Wed, Apr 27, 2011 at 12:04 PM, Eric Ross wrote:
> I'm not running it on a cluster but on my local machine in pseudo
> distributed mode.
>
> The jobtracker address in mapred-site.xml is set to localhost and changing
> it to my system's ip didn't make any difference.
>
The importtsv program does
I'm not running it on a cluster but on my local machine in pseudo distributed
mode.
The jobtracker address in mapred-site.xml is set to localhost and changing it
to my system's ip didn't make any difference.
Do you have suggestions for any other features/options that I should check?
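For reference, where the job actually runs is controlled by mapred.job.tracker in mapred-site.xml; a minimal sketch, assuming a Hadoop 0.20-era setup with the JobTracker on port 9001 (the port here is illustrative):

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>

If that property is left at its default of "local", the job runs inside the submitting JVM via LocalJobRunner rather than on the cluster.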
Hi Alex,
Before answering I made sure it was working for me and it does. In
your master log after killing the -ROOT- region server you should see
lines like this:
INFO org.apache.hadoop.hbase.zookeeper.RegionServerTracker:
RegionServer ephemeral node deleted, processing expiration
[servername]
On Wed, Apr 27, 2011 at 11:00 AM, Joe Pallas wrote:
>
> On Apr 26, 2011, at 11:54 PM, Himanshu Vashishtha wrote:
>
> > HBase uses utf-8 encoding to store the row keys, so it can store
> > non-ascii characters too (yes they will be larger than 1 byte).
>
> That statement may be misleading. HBas
On Wed, Apr 27, 2011 at 11:08 AM, Dave Latham wrote:
> The HBase book ( http://hbase.apache.org/book/upgrading.html ) states,
>
>> This version of 0.90.x HBase can be started on data written by HBase 0.20.x
>> or HBase 0.89.x. There is no
>> need of a migration step. HBase 0.89.x and 0.90.x do
The HBase book ( http://hbase.apache.org/book/upgrading.html ) states,
> This version of 0.90.x HBase can be started on data written by HBase 0.20.x
> or HBase 0.89.x. There is no
> need of a migration step. HBase 0.89.x and 0.90.x do write out the name of
> region directories differently --
>
On Tue, Apr 26, 2011 at 8:43 PM, Pete Tyler wrote:
> P.S. This is a pure development system; I have no interest in any security
> features. If 'chmod 777 *' will fix this issue then that is great for me.
> Thanks.
>
Try it.
St.Ack
On Tue, Apr 26, 2011 at 8:34 PM, Pete Tyler wrote:
> "When you upgrade to CDH3, two new Unix user accounts called hdfs and
> mapred are automatically created to support security:"
>
> makes no sense to me as my entire installation process involved unzipping
> your tar ball.
>
OK. So, the state
On Apr 26, 2011, at 11:54 PM, Himanshu Vashishtha wrote:
> HBase uses utf-8 encoding to store the row keys, so it can store non-ascii
> characters too (yes they will be larger than 1 byte).
That statement may be misleading. HBase doesn't use any encoding at all,
because row keys are simply arrays of bytes.
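To make the point concrete, a minimal sketch using the client helper class (the values are made up; Bytes.toBytes(String) happens to encode Strings as UTF-8, but HBase itself only ever sees the raw bytes):

import org.apache.hadoop.hbase.util.Bytes;

public class RowKeyBytes {
  public static void main(String[] args) {
    byte[] ascii = Bytes.toBytes("row1");   // 4 bytes
    byte[] nonAscii = Bytes.toBytes("行1"); // 4 bytes: 3 for 行 in UTF-8, 1 for '1'
    System.out.println(ascii.length);
    System.out.println(nonAscii.length);
    // HBase never interprets these; rows sort by unsigned
    // lexicographic comparison of the raw byte arrays.
    System.out.println(Bytes.compareTo(ascii, nonAscii) < 0); // true
  }
}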
You might want to bring this issue to Cloudera since, for the moment, they
have the only Hadoop release that supports security.
J-D
On Tue, Apr 26, 2011 at 8:34 PM, Pete Tyler wrote:
> Thanks for the links but I'm having trouble applying them. I'm trying to
> upgrade my OS X single node pseudo distri
That makes sense, would you mind opening a jira?
Thx,
J-D
On Tue, Apr 26, 2011 at 8:52 PM, Schubert Zhang wrote:
> After more testing, an obvious issue/problem is that the completion of a minor
> compaction does not check whether the current storefiles need further minor compaction.
>
> I think this may be a bug or l
On Tue, Apr 26, 2011 at 6:02 PM, Weihua JIANG wrote:
> I tried to enable HPROF on RS, but failed. If I added the HPROF agent
>> in hbase-env.sh, RS startup reports an error saying HPROF can't be
> loaded twice. But, I am sure I only enabled it once. I don't know
> where the problem is.
>
This sounds
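One possible cause, offered as an assumption since the exact hbase-env.sh edit isn't shown: the env file can be sourced more than once during startup, so a self-appending line such as HBASE_OPTS="$HBASE_OPTS -agentlib:hprof=..." puts the agent on the command line twice, and the JVM refuses to load HPROF a second time. A sketch that sidesteps that, scoping the agent to the region server only (the sampling options are illustrative, not a recommendation):

# conf/hbase-env.sh
# plain assignment, not self-appending, so double-sourcing can't duplicate it
export HBASE_REGIONSERVER_OPTS="-agentlib:hprof=cpu=samples,interval=20,depth=8"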
Seems like a pretty basic "I can't find a config" issue. See
http://hbase.apache.org/xref/org/apache/hadoop/hbase/zookeeper/ZKConfig.html#219.
Seems like when it's being run it's not finding the hbase-default.xml
(as you surmise).
I checked the 0.90.1 and 0.90.2 jars and both seem to have
hbase-default.xml.
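That check is easy to reproduce locally; a sketch, with the jar path adjusted to wherever your tarball unpacked:

$ jar tf hbase-0.90.2.jar | grep hbase-default.xml
hbase-default.xml

If the file lists but the error persists, the classpath the tool is launched with is the next thing to inspect.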
I don't remember ever seeing this :|
Was your secondary namenode running on a different host or storing its
data in a different folder? Was that wiped out too?
J-D
On Wed, Apr 27, 2011 at 8:28 AM, Jonathan Bender
wrote:
> So it's definitely a case of HDFS not being able to recover the image.
>
On Wed, Apr 27, 2011 at 2:30 AM, Stan Barton wrote:
>
> Hi,
>
> what do you mean by increase? I checked on the client machines and the nproc limit is
> around 26k, which seems to be sufficient. The same limit applies on the db
> machines...
>
The nproc and ulimits are 26k for the user who is running the
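Both limits are easy to verify as the user that actually launches the daemons; a quick sketch:

$ ulimit -u   # max user processes (nproc)
$ ulimit -n   # max open files (nofile) -- worth checking too, since HBase is known to exhaust low nofile limits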
No, I do not manage my own zookeeper. I use the default zookeeper that comes
with HBase. I work in pseudo-distributed mode so far, so I don't have a zoo.cfg.
However, this works in HBase 0.90.1.
Although I don't set the zookeeper port in any configuration file, it is set in
the hbase-default.xml included in the HBase jar.
I took a quick look through the 0.90.2 list of issues and only the one below
explicitly references completebulkload:
HBASE-3591 completebulkload doesn't honor generic -D options
Seems unrelated though.
Do you manage your own zk ensemble? Do you have a zoo.cfg going on or
are you using hbase configs?
St.Ack
I downloaded HBase 0.90.1 again and it works perfectly.
Is there anything wrong with HBase 0.90.2 and completebulkload?
> From: antonopoulos...@hotmail.com
> To: user@hbase.apache.org
> Subject: Problem with zookeeper port while using completebulkupload
> Date: Wed, 27 Apr 2011 18:51:38 +0300
>
Hello,
I am trying to use completebulkload in HBase 0.90.2 and I get the following
exception:
11/04/27 18:45:51 ERROR zookeeper.ZKConfig: no clientPort found in zoo.cfg
Exception in thread "main"
org.apache.hadoop.hbase.ZooKeeperConnectionException: java.io.IOException:
Unable to determine Zoo
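One workaround worth trying while this is tracked down (an assumption on my part that the tool reads hbase-site.xml from its classpath): pin the port explicitly so ZKConfig never needs to consult zoo.cfg:

<!-- hbase-site.xml -->
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>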
Since the attachment didn't make it, here it is again:
http://shortText.com/jp73moaesx
-eran
On Wed, Apr 27, 2011 at 16:51, Eran Kutner wrote:
> Hi Josh,
>
> The connection pooling code is attached AS IS (with all the usual legal
> disclaimers), note that you will have to modify it a bit to ge
So it's definitely a case of HDFS not being able to recover the image.
Maybe this is better directed toward another list, but has anyone had
issues with this, or any suggestions for trying to eradicate this?
2011-04-26 17:15:56,898 INFO org.apache.hadoop.hdfs.server.common.Storage:
Recovering
Hi,
I am trying failover cases on a small 3-node fully-distributed cluster
of the following topology:
- master node - NameNode, JobTracker, QuorumPeerMain, HMaster;
- slave nodes - DataNode, TaskTracker, QuorumPeerMain, HRegionServer.
ROOT and META are initially served by two different nodes.
I
Hi Josh,
The connection pooling code is attached AS IS (with all the usual legal
disclaimers); note that you will have to modify it a bit to get it to
compile because it depends on some internal libraries we use. In particular,
DynamicAppSettings and Log are two internal classes that do what their
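For anyone blocked by those internal dependencies: the stock 0.90 client already ships a basic pool, which may be enough as a starting point. A minimal sketch (the table name and row key are made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.util.Bytes;

public class PoolExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Cache at most 10 HTable instances per table name.
    HTablePool pool = new HTablePool(conf, 10);
    HTableInterface table = pool.getTable("mytable");
    try {
      table.get(new Get(Bytes.toBytes("row1")));
    } finally {
      pool.putTable(table); // return the instance to the pool for reuse
    }
  }
}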
I must say the more I play with it the more baffled I am by the
results. I ran the read test again today after not touching the
cluster for a couple of days and now I'm getting the same high read
numbers (10-11K reads/sec per server, with some servers reaching even
15K r/s) whether I read 1, 10, 100 or
Hi,
what do you mean by increase? I checked on the client machines and the nproc limit is
around 26k, which seems to be sufficient. The same limit applies on the db
machines...
Stan
ajay.gov wrote:
>
> Hi,
>
> I posted the same message on the user@hbase.apache.org mailing list and
> Jean-Daniel Cr