Re: ZK-related issue when updating from 0.94.6 to 0.94.8

2013-07-12 Thread Ted Yu
w.r.t. the strange error mentioned at the bottom of the email, it came from connectionEvent():

if (this.recoverableZooKeeper == null) {
  LOG.error("ZK is null on connection event -- see stack trace " +
      "for the stack trace when constructor was called on this zkw",

Re: small hbase doubt

2013-07-12 Thread Ted Yu
bq. Do you think prefix compression can also be utilized here? In your use case, prefix compression would help in reducing bandwidth consumption. On Thu, Jul 11, 2013 at 9:11 PM, Asaf Mesika wrote: > Do you think prefix compression can also be utilized here? In our use case > we sent a list of

Re: HBase issues since upgrade from 0.92.4 to 0.94.6

2013-07-12 Thread Azuryy Yu
David, you can set -Xmx1g if your JDK is 6 or above; you don't need to specify the exact number of bytes. On Jul 13, 2013 12:16 AM, "David Koch" wrote: > Hello, > > This is the command that is used to launch the region servers: > > /usr/java/jdk1.7.0_25/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx1000m > -Djava.

ZK-related issue when updating from 0.94.6 to 0.94.8

2013-07-12 Thread Adrien Mogenet
Hi there, I'm trying to upgrade from 0.94.6 (distributed mode) to 0.94.8 and I'm seeing strange WARN messages leading to region-less regionservers once updated. Here is the kind of line I can find: > WARN org.apache.hadoop.hbase.zookeeper.ZKAssign: regionserver:60020-0x23d207e751d20c4 Attempt to

Re: problem in testing coprocessor endpoint

2013-07-12 Thread Gary Helmling
Kim, Asaf, I don't know where this conception comes from that endpoint coprocessors must be loaded globally, but it is simply not true. If you would like to see how endpoints are registered, see RegionCoprocessorHost.java:

@Override
public RegionEnvironment createEnvironment(Class implClass,

Re: problem in testing coprocessor endpoint

2013-07-12 Thread Kim Chew
No, endpoint coprocessors can be deployed via configuration only. In hbase-site.xml, there should be an entry like this:

hbase.coprocessor.region.classes = myEndpointImpl

Also, you have to let HBase know where to find your class, so in hbase-env.sh:

export HBASE_CLASSPATH=${HBASE_HOME}/lib/A
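For reference, this global registration style would look roughly like the following in hbase-site.xml (the class and jar names are placeholders carried over from the post above):

```xml
<!-- hbase-site.xml: registers the endpoint for every region on the cluster.
     "com.example.myEndpointImpl" is a placeholder class name. -->
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>com.example.myEndpointImpl</value>
</property>
```

Note that, as Gary points out elsewhere in this thread, global configuration is not the only option; the same endpoint can also be attached per table.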

Re: problem in testing coprocessor endpoint

2013-07-12 Thread Gary Helmling
Endpoint coprocessors can be loaded on a single table. They are no different from RegionObservers in this regard. Both are instantiated per region by RegionCoprocessorHost. You should be able to load the coprocessor by setting it as a table attribute. If it doesn't seem to be loading, check the
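A sketch of the table-attribute approach Gary describes, via the 0.94-era HBase shell (table name, jar path, class name, and priority value are all hypothetical):

```shell
# HBase shell: attach a coprocessor to a single table as a table attribute.
# The value format is 'hdfs-path|fully-qualified-class|priority|args'.
hbase> disable 'mytable'
hbase> alter 'mytable', METHOD => 'table_att',
       'coprocessor' => 'hdfs:///user/me/myendpoint.jar|com.example.MyEndpointImpl|1001|'
hbase> enable 'mytable'
```

If it does not seem to load, the region server log should show whether the class or jar could be resolved.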

Re: HBase issues since upgrade from 0.92.4 to 0.94.6

2013-07-12 Thread David Koch
Hello, This is the command that is used to launch the region servers: /usr/java/jdk1.7.0_25/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx1000m -Djava.net.preferIPv4Stack=true -Xmx1073741824 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:-CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=7

Scanner problem after bulk load hfile

2013-07-12 Thread Rohit Kelkar
I am having problems while scanning a table created using HFile. This is what I am doing: once the HFile is created, I use the following code to bulk load:

LoadIncrementalHFiles loadTool = new LoadIncrementalHFiles(conf);
HTable myTable = new HTable(conf, mytablename.getBytes());
loadTool.doBulkLoad(new Pa
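For comparison, the same bulk load can also be run with the tool's command-line entry point (paths and table name below are placeholders); the HFile directory must contain one subdirectory per column family:

```shell
# Command-line equivalent of LoadIncrementalHFiles.doBulkLoad (a.k.a. completebulkload).
# /tmp/hfiles is expected to hold one subdirectory per column family.
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/hfiles mytable
```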

Re: MapReduce job with mixed data sources: HBase table and HDFS files

2013-07-12 Thread S. Zhou
Ted & Azurry, after some investigation on log files, I figured out why it happens. In the code, I use the same path "inputPath1" for two inputs (see below) since I thought the input path is not effective for HBase table. But it turns out that the input path of HBase table could affect the input

Re: HBase issues since upgrade from 0.92.4 to 0.94.6

2013-07-12 Thread Azuryy Yu
I do think your JVM on the RS crashed. Do you have a GC log? Do you set mapred.map.tasks.speculative.execution=false when you use map jobs to read or write HBase? And if you have a heavy read/write load, how did you tune HBase, e.g. block cache size, compaction, memstore, etc.? On F
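The tuning knobs Azuryy mentions live in hbase-site.xml; a few of the relevant 0.94-era property names, with purely illustrative values, for orientation:

```xml
<!-- Illustrative values only; appropriate settings depend on the deployment. -->
<property>
  <name>hfile.block.cache.size</name>  <!-- fraction of heap for the block cache -->
  <value>0.25</value>
</property>
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>  <!-- heap fraction for memstores -->
  <value>0.4</value>
</property>
<property>
  <name>hbase.hstore.compactionThreshold</name>  <!-- store files before a minor compaction -->
  <value>3</value>
</property>
```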

Re: HBase issues since upgrade from 0.92.4 to 0.94.6

2013-07-12 Thread David Koch
Thank you for your responses. With respect to the version of Java, I found that Cloudera recommends 1.7.x for CDH4.3. On Fri, Jul 12, 2013 at 1:32 PM, Jean-Marc S

Re: HBase issues since upgrade from 0.92.4 to 0.94.6

2013-07-12 Thread Azuryy Yu
David, java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194): for this error, generally the client is asking for bytes from the stream but the server has shut down, so there may be a network issue, a JVM crash, or something else. I don't think this

Re: if i write my own endpoint, how can i load it onto the server?

2013-07-12 Thread Asaf Mesika
Drop the jar into the lib directory of HBase, on every region server. On Friday, July 12, 2013, ch huang wrote: > ATT >

Re: HBase issues since upgrade from 0.92.4 to 0.94.6

2013-07-12 Thread Jean-Marc Spaggiari
Might want to run memtest also, just to be sure there is no memory issue. It should not be that, since it was working fine with 0.92.4, but it costs nothing... The last version of Java 6 is update 45; might also be worth giving it a try if you are running 1.6. 2013/7/12 Asaf Mesika > You need to see the jvm

Re: problem in testing coprocessor endpoint

2013-07-12 Thread Asaf Mesika
You can't register an endpoint just for one table. It's like a stored procedure: you choose to run it and pass parameters to it. On Friday, July 12, 2013, ch huang wrote: > what your describe is how to load endpoint coprocessor for every region in > the hbase, what i want to do is just load it

Re: org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching handler for protocol MyTestProtocol in region

2013-07-12 Thread Ted Yu
You used createTable() API but the log said: Table already exists On Jul 12, 2013, at 2:22 AM, ch huang wrote: > hi,all: > i spend all day for the problem ,and now totally exhausted,hope > anyone can help me > > i code myself endpoint ,the logic is sample run the scan in some region >

Re: HBase issues since upgrade from 0.92.4 to 0.94.6

2013-07-12 Thread Asaf Mesika
You need to see the JVM crash in the .out log file and check whether it is the .so native Hadoop code that is causing the problem. In our case we downgraded from JVM 1.6.0-37 to 33 and it solved the issue. On Friday, July 12, 2013, David Koch wrote: > Hello, > > NOTE: I posted the same message in the C

Re: HBase issues since upgrade from 0.92.4 to 0.94.6

2013-07-12 Thread Jean-Marc Spaggiari
Hi David, I would recommend you run:
- FSCK from your OS (fsck.ext4) on this node;
- FSCK from Hadoop on your HDFS;
- HBCK from HBase.
It seems your node has some trouble reading something; I just want to see if there are related issues. JM 2013/7/12 David Koch > Hello, > > NOTE: I posted the same
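The three checks JM suggests can be run roughly as follows (the device name is a placeholder, and the OS-level fsck is shown in read-only mode so it is safe on a mounted disk):

```shell
fsck.ext4 -n /dev/sdX1   # OS filesystem check, read-only (-n): replace /dev/sdX1 with the data disk
hadoop fsck /            # HDFS consistency check, reports missing/corrupt blocks
hbase hbck               # HBase consistency check of tables, regions and META
```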

HBase issues since upgrade from 0.92.4 to 0.94.6

2013-07-12 Thread David Koch
Hello, NOTE: I posted the same message in the Cloudera group. Since upgrading from CDH 4.0.1 (HBase 0.92.4) to 4.3.0 (HBase 0.94.6) we have systematically experienced problems with region servers crashing silently under workloads which used to pass without problems. More specifically, we run about

how to add hdfs path into hbase table attribute?

2013-07-12 Thread ch huang
i want to set an hdfs path and add the path into hbase; here is my code:

Path path = new Path("hdfs:192.168.10.22:9000/alex/test.jar");
System.out.println(": " + path.toString() + "|" + TestMyCo.class.getCanonicalName() + "|" + Coprocessor.PRIORITY_USER);
htd.setValue("COPROCESSOR$1", path.toString() + "|"
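Two things worth checking in the snippet above: the HDFS URI is missing its "//" (the scheme should read hdfs://host:port/path), and the COPROCESSOR$1 attribute value takes the pipe-separated form path|class|priority. A small runnable sketch of building that value (the host, jar path, class name, and priority below are hypothetical):

```java
public class CoprocessorAttr {
    // Builds the value for a COPROCESSOR$N table attribute:
    // "hdfs-path|fully-qualified-class|priority". All names are placeholders.
    static String coprocessorValue(String hdfsJarPath, String className, int priority) {
        return hdfsJarPath + "|" + className + "|" + priority;
    }

    public static void main(String[] args) {
        // Note the scheme: hdfs://host:port/path, not hdfs:host:port/path.
        String v = coprocessorValue("hdfs://192.168.10.22:9000/alex/test.jar",
                                    "com.example.TestMyCo", 1001);
        System.out.println(v);
        // prints hdfs://192.168.10.22:9000/alex/test.jar|com.example.TestMyCo|1001
    }
}
```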

org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching handler for protocol MyTestProtocol in region

2013-07-12 Thread ch huang
hi, all: i spent all day on this problem and am now totally exhausted; hope anyone can help me. i wrote my own endpoint; the logic is simple: run a scan in some region with a filter and count the found records. i do not want my endpoint to work for each region, i just need it to work for my test tabl

if i write my own endpoint, how can i load it onto the server?

2013-07-12 Thread ch huang
ATT