My command:
hadoop jar $HBASE_HOME/hbase-0.90.4-cdh3u2.jar importtsv
-Dimporttsv.separator=, -Dimporttsv.bulk.output=/tmp/output
-Dimporttsv.columns=HBASE_ROW_KEY,e:a,e:b,e:c t1 /tmp/1
Usage: importtsv -Dimporttsv.columns=a,b,c
Imports the given input directory of TSV data into the specified table.
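For reference, a sketch of the full bulk-load flow this command implies (table name, column family, and paths taken from the thread; the `completebulkload` step is the usual companion to `-Dimporttsv.bulk.output` and is assumed here). Note that even with bulk.output, the target table must already exist, because importtsv reads its region boundaries to partition the HFiles:

```shell
# 1. importtsv with bulk.output only writes HFiles; the target table must
#    already exist, so create it (column family 'e') first:
echo "create 't1', 'e'" | hbase shell

# 2. Generate HFiles under /tmp/output instead of writing to the table:
hadoop jar $HBASE_HOME/hbase-0.90.4-cdh3u2.jar importtsv \
  -Dimporttsv.separator=, -Dimporttsv.bulk.output=/tmp/output \
  -Dimporttsv.columns=HBASE_ROW_KEY,e:a,e:b,e:c t1 /tmp/1

# 3. Move the generated HFiles into the table's regions:
hadoop jar $HBASE_HOME/hbase-0.90.4-cdh3u2.jar completebulkload /tmp/output t1
```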
On Thu, Dec 8, 2011 at 9:32 PM, Dou Xiaofeng wrote:
> The table t1 does not exist.
> If I create it manually with the hbase client, importtsv does not throw an
> error. But since I specify bulk.output in the command, it should not need to
> create the table.
>
Sorry, I don't follow the last bit of the sentence.
Hi harsh,
Yes, no jobs are seen on that JobTracker page: under RUNNING JOBS there are
none, under FINISHED JOBS none, and under FAILED JOBS none. It is as if no
job is running at all. In Eclipse I can see it while the MapReduce program is
running; as you said, "LocalJobRunner", maybe Eclipse is merely launchin
The table t1 does not exist.
If I create it manually with the hbase client, importtsv does not throw an
error. But since I specify bulk.output in the command, it should not need to
create the table.
-Original Message-
From: saint@gmail.com [mailto:saint@gmail.com] on behalf of Stack
Sent: December 9, 2011 12:29
To: user
2011/12/8 Dou Xiaofeng :
> Hi All:
> I am a newbie testing the Hadoop suite. When I was testing the hbase
> importtsv command,
>
> hadoop jar $HBASE_HOME/hbase-0.90.4-cdh3u2.jar importtsv
> -Dimporttsv.separator=, -Dimporttsv.bulk.output=/tmp/output
> -Dimporttsv.columns=HBASE_ROW_KEY,e:
Hi
Did you check your ZooKeeper? Check the ZooKeeper logs; maybe there is some
network fluctuation between your HBase cluster and ZooKeeper.
Also read the troubleshooting section of the HBase documentation w.r.t.
ZooKeeper. Maybe the load on ZK is too heavy and some long full GC is happening.
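A quick way to probe ZooKeeper health from the shell (the hostname below is a placeholder; `ruok` and `stat` are ZooKeeper's standard four-letter-word commands):

```shell
# Ask ZooKeeper whether it is up and serving; a healthy server replies 'imok'.
echo ruok | nc zk-host.example.com 2181

# Dump server stats: latency figures, outstanding requests, connection count.
# Long full GCs on the ZK host typically show up as high max latency here.
echo stat | nc zk-host.example.com 2181
```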
Regards
Ram
-Original Message-
Hi All:
I am a newbie testing the Hadoop suite. When I was testing the hbase
importtsv command,
hadoop jar $HBASE_HOME/hbase-0.90.4-cdh3u2.jar importtsv
-Dimporttsv.separator=, -Dimporttsv.bulk.output=/tmp/output
-Dimporttsv.columns=HBASE_ROW_KEY,e:a,e:b,e:c t1 /tmp/1
I got the below error:
On Thu, Dec 8, 2011 at 4:05 PM, Ben West wrote:
> We have a cluster with four region servers and about 2,000 regions. We're
> using the REST server, and we've noticed that whatever region is hosting META
> gets 3-5x the number of requests that the other regions do.
>
> It's my understanding that
Hey all,
We have a cluster with four region servers and about 2,000 regions. We're using
the REST server, and we've noticed that whatever region is hosting META gets
3-5x the number of requests that the other regions do.
It's my understanding that the client should cache the row start/end locations
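The caching behaviour described above can be illustrated with a toy model (all names here are hypothetical, not the actual HBase client code): a client that caches region locations only pays a META round trip on a cache miss, whereas a gateway that starts with a cold cache for every request sends a disproportionate share of its traffic to whichever server hosts META.

```python
class ToyClient:
    """Toy model of HBase client-side region-location caching (illustrative)."""

    def __init__(self):
        self.cache = {}        # region key -> region server location
        self.meta_lookups = 0  # how many times we had to ask META

    def locate(self, row):
        region = row[0]  # pretend the first character identifies the region
        if region not in self.cache:
            self.meta_lookups += 1               # cache miss: one META round trip
            self.cache[region] = f"rs-{region}"  # hypothetical region server name
        return self.cache[region]

client = ToyClient()
for row in ["a1", "a2", "a3", "b1", "b2"]:
    client.locate(row)
# Five requests touch only two regions, so a warm cache needs just two
# META lookups; a client with no cache would have needed five.
print(client.meta_lookups)
```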
Just committed HBASE-4682.
Should be able to make this work now with deletes.
From: lars hofhansl
To: "d...@hbase.apache.org" ; "user@hbase.apache.org"
Sent: Wednesday, December 7, 2011 10:38 AM
Subject: Re: Backup HBase to S3
From the blog:
"However, i
My HBase cluster completely stopped working this morning. When I looked at
the log files, I saw the below. I am wondering why this happened and what
can be done to avoid this in the future. I restarted the master and
regionserver and things look OK now, but I don't know how much data I must
have lost i
Jorn,
Did you restart mapreduce (the task trackers in particular) after the
change? Hopefully when you restart you can check the TT's to make sure
that /etc/zookeeper/* is not in its class path.
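One way to sanity-check a TaskTracker's classpath for the stale entry is to split it on `:` and look for an exact `/etc/zookeeper` component (the `CP` value below is an illustrative example, not read from a live TaskTracker):

```shell
# Example classpath; in practice take this from `ps` output for the TT process
# or from `hadoop classpath`.
CP="/etc/hadoop-0.20/conf:/usr/lib/hadoop-0.20/*:/usr/lib/zookeeper/*"

# Split on ':' and match the whole component exactly (-x), quietly (-q).
if echo "$CP" | tr ':' '\n' | grep -qx '/etc/zookeeper'; then
  echo "stale /etc/zookeeper entry present"
else
  echo "classpath clean"
fi
```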
Jon.
On Thu, Dec 8, 2011 at 12:59 AM, Jorn Argelo - Ephorus <
jorn.arg...@ephorus.com> wrote:
> Hi
Hi all,
To follow up on this,
org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication exhibits
exactly the same behaviour as CopyTable.
Jorn
-Original Message-
From: Jorn Argelo - Ephorus [mailto:jorn.arg...@ephorus.com]
Sent: Thursday, December 8, 2011 9:59
To: user
Hi,
I have also tried CopyTable with different clusters, and it worked fine for
me. I set the hbase.zookeeper.quorum property in the HBase conf file. I used
Hadoop-0.20.2.
Thanks
-Original Message-
From: Jorn Argelo - Ephorus [mailto:jorn.arg...@ephorus.com]
Sent: Thursday, December 08, 2011 2:
Hi Jon / J-D,
Yeah, I had a bunch of additional stuff in my classpath which we needed
for other M/R jobs:
/etc/zookeeper:/etc/hadoop-0.20/conf:/usr/lib/hadoop-0.20/*:/usr/lib/hadoop-0.20/lib/*:/usr/lib/zookeeper/*:/usr/lib/zookeeper/lib/*
I tried just removing /etc/zookeeper from the classpath b