Hi Stack,
The NPE is this:
10/12/18 15:39:07 WARN hdfs.StateChange: DIR* FSDirectory.unprotectedSetTimes:
failed to setTimes
/hbase/inrdb_ris_update_rrc00/fe5090c366e326cf2b123502e2d4bcce/data/1350525083587292896
because source does not exist
10/12/18 15:39:07 WARN hdfs.StateChange: DIR* FSDire
Hmm ... that is strange. I see that you are using Gentoo ... I'm not that
familiar with the Gentoo way of setting up java-config etc., so I hope someone
else can suggest Gentoo-specific troubleshooting of your Java environment.
Some additional things you can try are:
1) Switch JDK back to IcedTea and see if
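Switching the system VM on Gentoo goes through java-config; a rough sketch of
what that looks like (the IcedTea VM name is just an example, and the exact
flag spelling is worth confirming against java-config --help):

# List the VMs Gentoo knows about (same command Shuaifeng runs below):
$ java-config -L
# Point the system VM back at IcedTea using the name/number from that list
# (the name here is an example; use whatever -L actually printed):
$ sudo java-config --set-system-vm icedtea6-bin
# Refresh the environment and confirm what HBase will now pick up:
$ sudo env-update && source /etc/profile
$ java -version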
Hi Shuaifeng
What about your HDFS version?
Maybe this can solve the problem:
This is the current list of patches we recommend you apply to your running
Hadoop cluster:
- HDFS-630: "In DFSOutputStream.nextBlockOutputStream(), the client can
exclude specific datanodes when locating the next block."
Hi,
I have a cluster of 8 HDFS datanodes and 8 HBase regionservers. When I
shut down one node (a PC with one datanode and one regionserver running),
all HBase regionservers shut down after a while.
The other 7 HDFS datanodes are OK.
I think that's not reasonable. HBase is a distributed system that
Hi Suraj
Thank you!
hbase-env.sh is as follows:
# The java implementation to use. Java 1.6 required.
# export JAVA_HOME=/usr/java/jdk1.6.0/
export JAVA_HOME=/etc/java-config-2/current-system-vm
In addition,
$ java-config -L
The following VMs are available for generation-2:
1) IcedTea6-bi
Can you check if your conf/hbase-env.sh is explicitly setting the JAVA_HOME
to the below 1.6.0_20 JRE? This overrides anything in the environment.
ClassFormatError suggests either that an older JRE is trying to read a
.class file compiled by a newer JDK, or that your library jar is somehow
corrupt.
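A quick way to test both of Suraj's hypotheses, reusing the paths quoted
elsewhere in this thread (the class pulled out of the jar is just one example
of a class that should be in there):

# Which JVM does the JAVA_HOME from hbase-env.sh actually resolve to?
$ /etc/java-config-2/current-system-vm/bin/java -version
# Extract one class from the HBase jar and check its bytecode level;
# "major version: 50" means it was compiled for Java 6.
$ cd /tmp
$ unzip -o /app/cloud/hbase/hbase-0.20.6.jar org/apache/hadoop/hbase/HConstants.class
$ javap -classpath . -verbose org.apache.hadoop.hbase.HConstants | grep 'major version'
# A corrupt jar will usually fail a plain integrity test outright:
$ unzip -t /app/cloud/hbase/hbase-0.20.6.jar | tail -1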
Hi ryan
There is only one copy of hbase*.jar, and the md5sum is:
$ md5sum /app/cloud/hbase/hbase-0.20.6.jar
cb1af1f5df2e93ae58099a1608b22876 /app/cloud/hbase/hbase-0.20.6.jar
In addition, I wonder whether the problem could be that I am using
zookeeper-3.3.2.jar.
On Mon, Dec 20, 2010 at 9:25 AM, Ryan
If you are getting something like "java.lang.NoSuchMethodError",
usually this is due to multiple jars or a misdeployed HBase.
Check your hbase dir again; there should only be 1 copy of hbase*.jar.
If there are 2 and they are different (you can use md5sum to compare),
that could be the cause of the iss
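A sketch of the check Ryan describes, using the install path from Shuaifeng's
message (adjust to your own layout):

# Look for more than one copy of the HBase jar anywhere under the install:
$ find /app/cloud/hbase -name 'hbase*.jar'
# If several turn up, compare checksums; differing sums mean a mixed deploy:
$ find /app/cloud/hbase -name 'hbase*.jar' -exec md5sum {} \;
# Also worth a wider sweep in case a stray copy sits on the Hadoop classpath
# (assuming everything lives under /app/cloud, as the posted path suggests):
$ find /app/cloud -name 'hbase*.jar' 2>/dev/null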
On Fri, Dec 17, 2010 at 7:37 PM, Sandy Pratt wrote:
> -XX:MaxDirectMemorySize=100m
Yep, I always leave that at the default, whatever that might be.
-Todd
--
Todd Lipcon
Software Engineer, Cloudera
Thanks for the info there. I thought it was something like
that... sweet.
Dean
-Original Message-
From: Ted Dunning [mailto:tdunn...@maprtech.com]
Sent: Sunday, December 19, 2010 12:54 PM
To: user@hbase.apache.org
Subject: Re: partitioning and map/reduce &hbase hashcodes
One of the key motivators for this strategy is to allow range queries to be
fast.
One of the key motivators for this strategy is to allow range queries to be
fast.
On Sun, Dec 19, 2010 at 11:33 AM, Jonathan Gray wrote:
> HBase doesn't hashcode anything. It does strict lexicographical ordering
> of the row keys themselves. So yes, keys with similar prefixes may be in
> the same partition / next to each other.
HBase doesn't hashcode anything. It does strict lexicographical ordering of
the row keys themselves. So yes, keys with similar prefixes may be in the same
partition / next to each other.
Rather than using a hashcode modulo some number, we use the META table to
determine which partition (regio
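A way to see this from the command line: every row of the META table describes
a region with a start and end key, and a given row key is routed to the region
whose [startKey, endKey) range contains it lexicographically. A rough sketch
against an 0.20.x install (shell syntax from that era; adjust as needed):

# Dump the region boundaries HBase uses instead of "hashcode % partitions":
$ echo "scan '.META.', {COLUMNS => ['info:regioninfo']}" | hbase shell
# Each regioninfo line carries a startKey/endKey pair. A row key such as
# 'user123' lands in the region where startKey <= 'user123' < endKey, so
# keys sharing a prefix stay adjacent and range scans stay cheap.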
On Sun, Dec 19, 2010 at 1:23 AM, Friso van Vollenhoven
wrote:
> Right now, however, I am in the unpleasant situation that my NN won't come up
> anymore after a restart (throws NPE), so I need to get that fixed first
> (without formatting, because I am not very keen on running the 6 day job
> ag
That was a great thread Todd... it almost got somewhere. Looks like
you were owed a response by the hotspot-gc crew.
Friso, I wonder if u23 is better? There are a bunch of G1 fixes in
it: http://www.oracle.com/technetwork/java/javase/2col/6u23bugfixes-191074.html
St.Ack
On Sat, Dec 18, 2010 at
We happen to be looking at gigaspaces and hbase/hadoop. I read this in
the gigaspaces documentation...
Target partition space ID = hashcode % (# of partitions)
Is it me or isn't that bad unless you write a special String hashcode
that not only hashcodes it but makes sure the Strings hashco
Swappiness is set to zero, but swap is not disabled altogether, so when RAM
gets over-utilized the boxes will start swapping.
I am running G1 with defaults (no max pause given) on JVM 1.6.0_21. When not
swapping, it shows pauses of around 10s for full GCs on a 16GB heap, which do
not happen v
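For what it's worth, two checks that usually separate real full-GC pauses from
swap stalls (the GC flags are standard on the 1.6.0_2x JVMs discussed here;
the log path is just an example):

# Confirm the swappiness setting and watch the si/so columns while a pause
# happens; anything non-zero there means the box really is swapping:
$ cat /proc/sys/vm/swappiness
$ vmstat 5
# Turn on GC logging via hbase-env.sh so pause times come from the JVM itself:
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/hbase/gc.log"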