Hi Gautham,
Well, there are a few options to get around that OS limitation. One is, as you
mentioned, to modify ImportTSV to accept a mappings file. The second option,
which requires no changes to the HBase code, is to split your CSV input into
multiple files and keep the same key column in all of the splits.
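For what it's worth, a minimal sketch of that second option, assuming a comma-separated file named wide.csv whose first field is the row key (file names and column ranges here are made up):

# keep column 1 (the row key) in every piece, and give each piece a manageable slice of the remaining columns
cut -d',' -f1,2-25000      wide.csv > part1.csv
cut -d',' -f1,25001-50000  wide.csv > part2.csv
# each part is then loaded with its own, much shorter, -Dimporttsv.columns list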
I'm trying to use the ImportTSV utility to generate HFiles and move them into
an instance using the CompleteBulkLoad tool.
Right now the error I'm running into is that the arg list is too long: I have
over 50,000 columns to specify in my CSV, and the bash shell throws an error. Is
the
I am trying to import data into an HBase table from a CSV file.
The version of HBase is 1.2.6.
This used to work in an older version of HBase:
$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv
-Dimporttsv.separator=','
-Dimporttsv.columns="HBASE_ROW_KEY,tock_daily:stock,stock_daily:
I am getting this error when the table name includes a namespace!
org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=','
-Dimporttsv.columns="HBASE_ROW_KEY,price_info:ticker,price_info:timecreated,price_info:price"
"tradeData:marketDataHbaseBatch"
Error: org.apache.hadoop.hbase.client.Ret
It is based on the number of live regions.
Jerry
On Fri, Oct 21, 2016 at 7:50 AM, Vadim Vararu
wrote:
> Hi guys,
>
> I'm trying to run the importTSV job and to write the result into a remote
> HDFS. Isn't it supposed to write data concurrently? Asking cause i get the
Hi guys,
I'm trying to run the importTSV job and to write the result into a
remote HDFS. Isn't it supposed to write data concurrently? I am asking because
I get the same time with 2 and 4 nodes and I can see that there is only
1 reducer running.
Where is the bottleneck?
Thanks, Vadim.
Hi Mahesha,
1.) HBase stores all values as byte arrays, so there's no typing to speak
of. ImportTsv is simply ingesting what it sees, quotes included (or not).
2.) ImportTsv doesn't support escaping, if I'm reading the code correctly. (
https://github.com/apache/hbase/blob/mast
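For what it's worth, a quick way to see point 1 in action; this is only a sketch, and it assumes a table named 'demo' with column family 'f' already exists:

printf 'row1\t"hello"\n' > /tmp/demo.tsv
hadoop fs -put /tmp/demo.tsv /tmp/demo.tsv
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,f:q demo /tmp/demo.tsv
# a scan of 'demo' should then show value="hello", with the double quotes stored as part of the cell bytes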
I am using the importtsv tool to ingest data. I have some doubts. I am using
HBase 1.1.5.
First, does it ingest non-string/numeric values? I was referring to this link
<http://blog.cloudera.com/blog/2013/09/how-to-use-hbase-bulk-loading-and-why/>
detailing importtsv in the Cloudera distribution. I
usually happens
> when number of job's tasks exceeds capacity of a cluster.
>
> -Vlad
>
> On Thu, Mar 5, 2015 at 3:03 PM, Siva wrote:
>
> > Hi All,
> >
> >
> >
> > I’m loading data to Hbase by using Hbase ImportTsv utility. When I kick
>
> I’m loading data to Hbase by using Hbase ImportTsv utility. When I kick off
> this process simultaneously for different tables in different sessions,
> both the process starts in parallel till it reaches the map reduce program.
> Once one of the process kicks off map reduce job for on
Hi All,
I’m loading data into HBase using the HBase ImportTsv utility. When I kick off
this process simultaneously for different tables in different sessions,
both processes start in parallel until they reach the MapReduce stage.
Once one of the processes kicks off a MapReduce job for one table
On Wed, Dec 3, 2014 at 10:00 PM, jackie wrote:
> dear all!
> I used the importtsv tool to translate the txt file to HFiles
> (hadoop 2.2.0, hbase 0.96.2). The map phase has many concurrent
> tasks, but the reduce phase always has only 1 task. How do I improve the
> reduc
Dear all,
I used the importtsv tool to translate the txt file to HFiles (Hadoop
2.2.0, HBase 0.96.2). The map phase has many concurrent tasks, but the reduce
phase always has only 1 task. How do I improve the concurrency of the reduce
phase?
Thank you very much!
Best regards
Ch:
HFileV2 is the file format used by 0.96
It is not a bug w.r.t. HFileV1 support.
Cheers
On Jul 22, 2014, at 1:27 AM, Esteban Gutierrez wrote:
> Hi,
>
> Are you getting any kind of error? even if ImportTsv
> uses HFileOutputFormat.configureIncrementalLoad() method, in
Hi,
Are you getting any kind of error? Even though ImportTsv
uses the HFileOutputFormat.configureIncrementalLoad() method, internally
HFileOutputFormat2 is used:
https://github.com/apache/hbase/blob/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java#L360
hi,maillist:
I looked at the code of importTsv in HBase 0.96: when it does bulk output
it still uses HFileOutputFormat, not HFileOutputFormat2. However, HBase
0.96 does not support HFileV1.
Is this a bug in importTsv?
Hello,
Seems that you have a typo in the command line: -Dmporttsv.separator; it
should be -Dimporttsv.separator.
cheers,
esteban.
--
Cloudera, Inc.
On Tue, Jul 22, 2014 at 1:01 AM, ch huang wrote:
> hi,maillist:
>
> i test hbase 0.96.1.1 importtsv tool ,find it do not work wit
hi,maillist:
I tested the HBase 0.96.1.1 importtsv tool and found it does not work with a non-tab
field separator.
# sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.ImportTsv
-Dimporttsv.columns=HBASE_ROW_KEY,myco1,mycol2
"-Dmporttsv.separator=|" alex:mymy2 /tmp/alex_test
2014-07-22 15:55:5
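For reference, here is the same command with the separator property spelled the way Esteban points out; everything else is left exactly as posted:

sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,myco1,mycol2 \
  "-Dimporttsv.separator=|" alex:mymy2 /tmp/alex_test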
The solution to the ".../staging/job.jar not found" error was to pass the
-libjars option to the hadoop command. In this case:
-libjars $HBASE_HOME/lib/hbase-server-0.96.0.jar
This is the job.jar, containing
org.apache.hadoop.hbase.mapreduce.
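Pieced together, the invocation being described probably looks something like the sketch below; the column list, table name, and input path are placeholders, and only the jar path is taken from the message above:

HADOOP_CLASSPATH=`$HBASE_HOME/bin/hbase classpath` hadoop jar \
  $HBASE_HOME/lib/hbase-server-0.96.0.jar importtsv \
  -libjars $HBASE_HOME/lib/hbase-server-0.96.0.jar \
  -Dimporttsv.columns=HBASE_ROW_KEY,f:q mytable /user/me/input.tsv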
Hello, I am trying to run HBase's ImportTsv against Yarn (Hadoop 2.2.0).
I can run the Hadoop TestDFSIO Yarn job with no problems:
hadoop jar
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar
TestDFSIO -write -nrFiles 20 -fileSize 10
I cannot run the
Hi all
version HBase 0.94.11
Can I use the importtsv tool to import a data file that is LZO
compressed (file.txt.lzo) from HDFS into HBase? I have enabled the LZO
compression algorithm in HBase.
--
In the Hadoop world, I am just a novice exploring the entire Hadoop
ecosystem; I hope one day I can
I followed the importtsv reference and can successfully load data from HDFS to HBase. The problem
is that when I debug importtsv the arguments have no effect. Thanks, Ted Yu
2013/8/14 Ted Yu
> Please refer to http://hbase.apache.org/book.html#importtsv
>
> On Tue, Aug 13, 2013 at 6:52 PM, 闫昆 wrote:
>
> >
Please refer to http://hbase.apache.org/book.html#importtsv
On Tue, Aug 13, 2013 at 6:52 PM, 闫昆 wrote:
> Hi all
> I use maven compile hbase source and import to eclipse (remote java
> application) to debug hbase ,when debug hbase importtsv I input argument
> like this format
Hi all
I used Maven to compile the HBase source and imported it into Eclipse (remote Java
application) to debug HBase. When debugging the HBase importtsv I pass arguments
in this format:
hadoop jar hbase.jar importtsv -Dimporttsv.columns=some columns
-Dimporttsv.separator=,
but when running to here
// Make sure
t; > > Did you happen to take snapshot before the loading ?
> > >
> > > Was the table empty before loading ?
> > >
> > > Cheers
> > >
> > > On Thu, Aug 1, 2013 at 5:54 PM, 闫昆 wrote:
> > >
> > > > Hi all
> > > >
was taken after the loading, right ?
> >
> > Did you happen to take snapshot before the loading ?
> >
> > Was the table empty before loading ?
> >
> > Cheers
> >
> > On Thu, Aug 1, 2013 at 5:54 PM, 闫昆 wrote:
> >
> > > Hi
The following snapshot was taken after the loading, right ?
>
> Did you happen to take snapshot before the loading ?
>
> Was the table empty before loading ?
>
> Cheers
>
> On Thu, Aug 1, 2013 at 5:54 PM, 闫昆 wrote:
>
> > Hi all
> > I use importtsv tools load data t
The following snapshot was taken after the loading, right ?
Did you happen to take snapshot before the loading ?
Was the table empty before loading ?
Cheers
On Thu, Aug 1, 2013 at 5:54 PM, 闫昆 wrote:
> Hi all
> I use importtsv tools load data to HBase ,but I just load data of about 5GB
Hi all
I used the importtsv tool to load data into HBase, but I only loaded about 5GB
of data, and HDFS shows this:
[Header row of the datanode table from the HDFS web UI: Node, Last Contact, Admin State, Configured Capacity (GB), Used (GB), Non DFS Used (GB), Remaining (GB), Used (%), Remaining (%), Blocks, Block Pool Used (GB), Block Pool Used (%), Failed ...]
Hi Anoop,
Actually, I got confused after reading the doc - I thought a simple importtsv
command (which also takes the table name as an argument) would suffice. But as you
pointed out, completebulkload is required.
HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop
jar
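For anyone landing on this thread later, the two-step flow being discussed is roughly the sketch below; the jar version, column list, table name, and paths are placeholders:

# step 1: have ImportTsv write HFiles instead of doing Puts against the table
HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar \
  ${HBASE_HOME}/hbase-VERSION.jar importtsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:col1 \
  -Dimporttsv.bulk.output=/user/me/bulk_out mytable /user/me/input.tsv
# step 2: hand the generated HFiles to the table's regions (this is what LoadIncrementalHFiles does)
HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar \
  ${HBASE_HOME}/hbase-VERSION.jar completebulkload /user/me/bulk_out mytable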
Hi
Have you used the tool, LoadIncrementalHFiles after the
ImportTSV?
-Anoop-
From: Omkar Joshi [omkar.jo...@lntinfotech.com]
Sent: Tuesday, April 16, 2013 12:01 PM
To: user@hbase.apache.org
Subject: Data not loaded in table via ImportTSV
Hi,
The background thread is this :
http://mail-archives.apache.org/mod_mbox/hbase-user/201304.mbox/%3ce689a42b73c5a545ad77332a4fc75d8c1efbd80...@vshinmsmbx01.vshodc.lntinfotech.com%3E
I'm referring to the HBase doc.
http://hbase.apache.org/book/ops_mgt.html#importtsv
Accordingly, my co
>
> http://ccp.cloudera.com/display/KB/Importing+Bulk+Data+from+HDFS+to+HBase+using+importtsv
>
> $ bin/hbase org.apache.hadoop.hbase.mapreduce.Import
Please let me know if this helps you out:
http://ccp.cloudera.com/display/KB/Importing+Bulk+Data+from+HDFS+to+HBase+using+importtsv
On Thu, Jul 26, 2012 at 2:58 AM, iwannaplay games <
funnlearnfork...@gmail.com> wrote:
> how to use importtsv and complete bulk load
>
> Does anyb
Have you changed the owner of your directory to hduser???
On Friday, July 20, 2012, iwannaplay games
wrote:
> I ran this command
>
> ./hbase org.apache.hadoop.hbase.mapreduce.ImportTsv
> -Dimporttsv.columns=HBASE_ROW_KEY,startip,endip,countryname IPData
> /usr/ipdata.txt
>
>
> It says :
>
> INFO
importtsv runs as an M/R job, so the file needs to exist in HDFS (unless you're
running in local mode, in which case you can try to use a file URL:
file:///usr/ipdata.txt, although I have not tried that).
See here: http://hadoop.apache.org/common/docs/r0.17.2/hdfs_shell.html
specifi
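In other words, something like the sketch below; the HDFS destination directory is a placeholder, and the column list and table name are copied from the command in the next message:

hadoop fs -put /usr/ipdata.txt /user/hduser/ipdata.txt
./hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,startip,endip,countryname IPData /user/hduser/ipdata.txt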
I ran this command
./hbase org.apache.hadoop.hbase.mapreduce.ImportTsv
-Dimporttsv.columns=HBASE_ROW_KEY,startip,endip,countryname IPData
/usr/ipdata.txt
It says :
INFO mapred.JobClient: Cleaning up the staging area
hdfs://master:54310/app/hadoop/tmp/hadoop/mapred/staging/hduser/.staging/job_20
mpt_201207031124_0022_m_02_0, Status : FAILED
> java.lang.RuntimeException: java.net.UnknownHostException: unknown host:
> honeywel-4a7632
>
> I had same issue when I ran HBase client API code from my laptop. I added
> this hostname in my hosts file. Then I could run client code
java.lang.RuntimeException: java.net.UnknownHostException: unknown host:
honeywel-4a7632
I had the same issue when I ran HBase client API code from my laptop. I added
this hostname to my hosts file. Then I could run the client code and retrieve
data.
Still, the importtsv map reduce job alone fails. I added an entry for
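For completeness, the hosts-file entry being described is just a line of the form below (the IP address here is a placeholder). For the map-reduce job, the hostname also has to resolve on the nodes that run the tasks, not only on the machine submitting the job:

# /etc/hosts entry on every machine that needs to reach the region server
203.0.113.25   honeywel-4a7632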
der.
>
> ./hadoop jar /usr/local/hbase-0.92.1-security/hbase-0.92.1-security.jar
> importtsv -Dimporttsv.columns=HBASE_ROW_KEY,report:path,report:time
> tempptmd hdfs://:9000/user/hadoop/temp/cbm/XYZ.tsv
>
> My TSV file has three columns. I want first column to be used as ro
hi,
I am running the following command from the hadoop bin folder.
./hadoop jar /usr/local/hbase-0.92.1-security/hbase-0.92.1-security.jar
importtsv -Dimporttsv.columns=HBASE_ROW_KEY,report:path,report:time
tempptmd hdfs://:9000/user/hadoop/temp/cbm/XYZ.tsv
My TSV file has three columns. I want
Hi,
Can you also share how exactly you invoke the import-tsv command?
On Tue, Jun 19, 2012 at 9:02 PM, AnandaVelMurugan Chandra Mohan
wrote:
> Hi,
>
> I am trying to use importtsv map-reduce job to load data into HBase.
>
> I am creating TSV file after fetching data from MyS
Thanks Yifeng. Well-thought-out input :) and it works.
On Sun, Apr 29, 2012 at 1:43 PM, Yifeng Jiang wrote:
> Hi Sambit,
>
> Are you specifying a local file system path on the command line?
> Before invoking importtsv, you will need to copy your tsv files to HDFS at
> first.
>
Hi Sambit,
Are you specifying a local file system path on the command line?
Before invoking importtsv, you will need to copy your tsv files to HDFS at
first.
-Yifeng
On Apr 27, 2012, at 6:08 PM, Sambit Tripathy wrote:
> I am able to run this command but it goes on forever. I don't
am able to run this.
>
> HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` hadoop jar
> ${HBASE_HOME}/hbase-0.92.1.jar importtsv
> -Dimporttsv.bulk.output=/user/hadoop/input/bulk
> -Dimporttsv.columns=HBASE_ROW_KEY,ns: -Dimporttsv.separator=, testTable
> /opt/hadoop/raw
>
Thanks all for the reply.
I am able to run this.
HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` hadoop jar
${HBASE_HOME}/hbase-0.92.1.jar importtsv
-Dimporttsv.bulk.output=/user/hadoop/input/bulk
-Dimporttsv.columns=HBASE_ROW_KEY,ns: -Dimporttsv.separator=, testTable
/opt/hadoop/raw
> On Thu, Apr 26, 2012 at 2:25 PM, slim tebourbi wrote:
>
>> Hi Sambit,
>> I think that you should add google guava jar to your job classpath.
>>
>> Slim.
>>
>> On 26 April 2012 at 10:50, Sambit Tripathy wrote:
>>
>> > Hi All,
>> >
>
As you use an HBase client in the importer you need the ZooKeeper
dependency.
So add it to the job classpath.
I think that you should also add the hbase/zookeeper configs to your
classpath.
As for your question on guava, it's used in the parser (the guava Splitter).
Slim.
On 26 April 2012 at 11:
On Thu, Apr 26, 2012 at 10:40 AM, Sambit Tripathy wrote:
> Slim,
>
>
> That exception is gone now after adding guava jar. (I wonder why do we need
> a Google Data Java Client !!!)
>
> Well there is something more, I am getting the following exception now.
>
> Exception in thread "main" java.lang.r
ying to import data from csv files into HBase.
> >
> > As per my understanding the process is
> >
> > 1. Import as HFile using *importtsv *tool provided by HBase
> > 2. Bulkupload the data from those HFiles into HBase using
> > *completebulkupload
> > *too
rstanding the process is
>
> 1. Import as HFile using *importtsv *tool provided by HBase
> 2. Bulkupload the data from those HFiles into HBase using
> *completebulkupload
> *tool.
>
> However when I issue the following command, I encounter exception.
>
> hadoop@srtidev0
The stack trace you sent has:
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
Which means it's not using your JobTracker. It means one of two things:
- you don't have one, in which case you need one
- you have one but you run importtsv via HBase
: user@hbase.apache.org
Subject: Re: importtsv bulk upload fail
Same answer as last time this was asked: http://search-hadoop.com/m/rUV9on6kWA1
You can't do this without a fully distributed setup.
J-D
On Tue, Nov 22, 2011 at 10:33 AM, Ales Penkava
wrote:
> Hello, I am on CDH3 trying to perf
Same answer as last time this was asked: http://search-hadoop.com/m/rUV9on6kWA1
You can't do this without a fully distributed setup.
J-D
On Tue, Nov 22, 2011 at 10:33 AM, Ales Penkava
wrote:
> Hello, I am on CDH3 trying to perform bulk upload but following error occurs
> each time
>
> WARN map
> On Thu, Nov 17, 2011 at 12:23 PM, Denis Kreis wrote:
>>
>> > Hi,
>> >
>> > i'm getting this error when trying to use the importtsv tool with
>> > hadoop-0.20.205.0 and hbase-0.92.0
>> >
>> > hadoop jar ../../hbase-0.92.0-SNAPSHOT/
tting this error when trying to use the importtsv tool with
> > hadoop-0.20.205.0 and hbase-0.92.0
> >
> > hadoop jar ../../hbase-0.92.0-SNAPSHOT/hbase-0.92.0-SNAPSHOT.jar
> importtsv
> > Exception in thread "main" java.
Shouldn't "hadoop jar hbase-version.jar importtsv" execute OK (displaying
the usage of the tool) irrespective of whether the classpath is set correctly or
not? At least that used to be the behavior some time back. No?
I spent some time going through recent changes in LoadIncrementalHFile
Make sure guava.jar is in your classpath.
On Thu, Nov 17, 2011 at 12:23 PM, Denis Kreis wrote:
> Hi,
>
> i'm getting this error when trying to use the importtsv tool with
> hadoop-0.20.205.0 and hbase-0.92.0
>
> hadoop jar ../../hbase-0.92.0-SNAPSHOT/hbase-0.92.0
Hi,
i'm getting this error when trying to use the importtsv tool with
hadoop-0.20.205.0 and hbase-0.92.0
hadoop jar ../../hbase-0.92.0-SNAPSHOT/hbase-0.92.0-SNAPSHOT.jar importtsv
Exception in thread "main" java.lang.NoClassDefFoundError:
com/google/common/coll
On Thu, Nov 3, 2011 at 3:29 PM, ChongQing Xiao wrote:
> Does anyone know if ImportTsv supports importing binary data?
> From the source code, it seems it only support text in the .tsv file for the
> column value.
>
If I were to guess, a tool named importtsv probably doesn't
Hi,
Does anyone know if ImportTsv supports importing binary data?
From the source code, it seems it only supports text in the .tsv file for the
column value.
It seems pretty simple to add encoding (say using BASE64) to support binary row
key or binary data.
Thanks
Chong
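Short of patching ImportTsv, one workaround along the lines Chong suggests is to Base64-encode the binary fields while producing the .tsv and decode them again in the reading application; ImportTsv itself just stores the encoded text verbatim. A sketch, assuming GNU coreutils base64 and a made-up file name:

# second field is an arbitrary binary blob, encoded so it survives the text-only TSV format
printf '%s\t%s\n' "row1" "$(base64 -w0 < blob.bin)" >> data.tsv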
This worked, thanks!
- Original Message -
From: Ravikumar MAV
To: user@hbase.apache.org; Ben West
Cc:
Sent: Monday, October 24, 2011 2:51 PM
Subject: Re: Importtsv error
You can try
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:`hbase classpath`
in your shell
On Mon, Oct 24, 2011 at
You can try
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:`hbase classpath`
in your shell
On Mon, Oct 24, 2011 at 11:24 AM, Ben West wrote:
> Hey all,
>
> I'm getting this error:
>
> $ hadoop jar /usr/lib/hbase/hbase-0.90.3-cdh3u1.jar importtsv -libjars
> /usr/lib/
Hey all,
I'm getting this error:
$ hadoop jar /usr/lib/hbase/hbase-0.90.3-cdh3u1.jar importtsv -libjars
/usr/lib/hbase/lib/guava-r06.jar
Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/zookeeper/KeeperException
...
I found a few threads [1,2] which seem
amohan
> wrote:
> > ImportTSV internally uses HFileOutputFormat.configureIncrementalLoad(job,
> > table);
> >
> > However, for newly created tables there would not be any keys available.
> > Hence, it launches 1 reducer by default.
> >
> > Is there a wa
Do you know your keyspace roughly? Try creating a pre-split table
with as many regions as you want reducers.
St.Ack
On Wed, Sep 14, 2011 at 8:25 PM, rajesh balamohan
wrote:
> ImportTSV internally uses HFileOutputFormat.configureIncrementalLoad(job,
> table);
>
> However, for n
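A minimal sketch of St.Ack's pre-split suggestion above (table name, column family, and split points are made up). Since HFileOutputFormat.configureIncrementalLoad() launches one reducer per region of the target table, a table created like this would get four reducers:

echo "create 'mytable', 'cf', SPLITS => ['row1000000', 'row2000000', 'row3000000']" | hbase shell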
ImportTSV internally uses HFileOutputFormat.configureIncrementalLoad(job,
table);
However, for newly created tables there would not be any keys available.
Hence, it launches 1 reducer by default.
Is there a way to increase the number of reducers for high volume imports
like 500+ GB.
~Rajesh.B
Hi All,
ImportTSV is a great tool for bulk loading data into HBase.
I have close to 500+ GB of raw data which I would like to import into a newly
created HTable. If I go ahead with ImportTSV, it creates only one reducer,
which is a bottleneck in terms of sorting and shuffling.
Are there any
HBase is a bit greedy and expects to own all files it touches. What
would be a better behavior in your opinion for this case?
J-D
On Mon, Sep 12, 2011 at 3:56 PM, Sateesh Lakkarsu wrote:
> I used importTsv to create the hfiles, which say end up in:
> - /user/slakkarsu/table/F1 with perm
I used importTsv to create the hfiles, which say end up in:
- /user/slakkarsu/table/F1 with perms rwxr-xr-x slakkarsu supergroup
time hadoop jar /usr/lib/hbase/hbase-0.90.1-cdh3u0.jar importtsv
-Dimporttsv.columns=F1:C1,F1:C2,HBASE_ROW_KEY,F1:C4
-Dimporttsv.bulk.output=/tmp/hbase/bulk_out
Hi Seigal,
The importtsv tool is not applicable to your case. For advanced usage of
bulkload, please dig into ImportTsv.java and check the JavaDoc for
HFileOutputFormat. And
https://issues.apache.org/jira/browse/HBASE-1861 is helpful if
multi-family support is required.
On Wed, Aug 3, 2011 at 8:13 AM
I just noticed that all KeyValues written from a single map instance for
importtsv have the same version timestamp. This, I think, will not produce
multiple versions if the same row keys are located in the same mapper chunk.
Why not use a new version timestamp for every put? Is there a specific
Hi All,
I am using the importtsv tool to load some data into an hbase cluster. Some
of the row keys + cf:qualifier might occur more than once with a different
value in the files I have generated. I would expect this to just create two
versions of the record with the different values. However, I
t 12:13 PM, Prashant Sharma
> wrote:
>>
>> Did you write the table name ? and remove an extra space after hbase row
>> key. I think that must be th reason
>> ( I am not an expert , but have struggled alot with it. )
>> Thanks,
>> Prashant
>&
think that must be th reason
>> ( I am not an expert , but have struggled alot with it. )
>> Thanks,
>> Prashant
>> On Wed, Jun 15, 2011 at 11:59 AM, James Ram wrote:
>>
>> > Hi,
>> >
>> > I'm having trouble with using the i
table name ? and remove an extra space after hbase row
> key. I think that must be th reason
> ( I am not an expert , but have struggled alot with it. )
> Thanks,
> Prashant
> On Wed, Jun 15, 2011 at 11:59 AM, James Ram wrote:
>
> > Hi,
> >
> > I'm havi
Put) this.table.put(new Put((Put)value));
> else if (value instanceof Delete) this.table.delete(new
> Delete((Delete)value));
> else throw new IOException("Pass a Delete or a Put");
> }
> .
> CommandLine:
> bin/hadoop jar ../../hbase/hbase-0.90.
IOException("Pass a Delete or a Put");
}
.
CommandLine:
bin/hadoop jar ../../hbase/hbase-0.90.3/hbase-0.90.3.jar importtsv
-Dimporttsv.columns=HBASE_ROW_KEY,year,name movies /user/hadoop/movies/
Thanks,
Prashant
On Wed, Jun 15, 2011 at 6:55 AM, Todd Lipcon wrote:
> Plus, I'
Did you write the table name? And did you remove the extra space after the HBase row
key? I think that must be the reason.
(I am not an expert, but have struggled a lot with it.)
Thanks,
Prashant
On Wed, Jun 15, 2011 at 11:59 AM, James Ram wrote:
> Hi,
>
> I'm having trouble with using the
>
> > Hi,
> >
> > I'm having trouble with using the importtsv tool.
> > I ran the following command:
> >
> > hadoop jar hadoop_sws/hbase-0.90.0/hbase-0.90.0.jar importtsv
> > -Dimporttsv.columns=HBASE_ROW_KEY ,b_info:name, b_info:contactNo,
> >
Try removing the spaces in the column list, i.e. commas only.
On Tue, Jun 14, 2011 at 11:29 PM, James Ram wrote:
> Hi,
>
> I'm having trouble with using the importtsv tool.
> I ran the following command:
>
> hadoop jar hadoop_sws/hbase-0.90.0/hbase-0.90.0.jar importts
Hi,
I'm having trouble with using the importtsv tool.
I ran the following command:
hadoop jar hadoop_sws/hbase-0.90.0/hbase-0.90.0.jar importtsv
-Dimporttsv.columns=HBASE_ROW_KEY ,b_info:name, b_info:contactNo,
b_info:dob, b_info:email, b_info:marital_status, b_info:p_address,
b_info:
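Applying both suggestions from the replies above (table name present, no spaces in the column list), the visible part of the command would look like the sketch below; the rest of the column list, the table name, and the input path are cut off in the archive and shown only as placeholders:

hadoop jar hadoop_sws/hbase-0.90.0/hbase-0.90.0.jar importtsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,b_info:name,b_info:contactNo,b_info:dob,b_info:email,b_info:marital_status,b_info:p_address,... \
  TABLENAME /path/in/hdfs/input.tsv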
riginal Message-
> From: Prashant Sharma [mailto:meetprashant...@gmail.com]
> Sent: Tuesday, June 14, 2011 10:39 AM
> To: user@hbase.apache.org
> Subject: Re: Problem with importtsv on trsnferring data from HDFS to hbase
> table:
>
> My input file is a CSV with 3 fields..
>
Maybe because you misspelled an input parameter: importtsv.columns
-Original Message-
From: Prashant Sharma [mailto:meetprashant...@gmail.com]
Sent: Tuesday, June 14, 2011 10:39 AM
To: user@hbase.apache.org
Subject: Re: Problem with importtsv on trsnferring data from HDFS to hbase
Hi Prashant,
You should try to analyze the error message first. It says no delimiter found
because importtsv expects a TSV file (tab-separated values) and not a CSV
(comma-separated values), so if you just replace the commas with tabs in the
file and then try, it should work.
Exceptions always
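One way to do that replacement is the one-liner below; this is only a sketch and assumes no field itself contains a comma. Alternatively, several commands elsewhere in this archive simply pass the comma as the separator with -Dimporttsv.separator=, instead of converting the file:

tr ',' '\t' < input.csv > input.tsv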
47483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}
> 1 row(s) in 0.1820 seconds
>
>
> hbase(main):006:0> scan 'movies'
> ROW COLUMN+CELL
> 1c
', BLOCKCACHE => 'true'}]}
1 row(s) in 0.1820 seconds
hbase(main):006:0> scan 'movies'
ROW                  COLUMN+CELL
 1                   column=name:, timestamp=1308044917482, value=new
 1                   column=year:, timestamp=1308044926957, value=2055
1 row
On Wed, Apr 27, 2011 at 12:04 PM, Eric Ross wrote:
> I'm not running it on a cluster but on my local machine in pseudo
> distributed mode.
>
> The jobtracker address in mapred-site.xml is set to localhost and changing
> it to my system's ip didn't make any differ
check?
--- On Mon, 4/25/11, Todd Lipcon wrote:
> From: Todd Lipcon
> Subject: Re: importtsv
> To: user@hbase.apache.org, ericdross_2...@yahoo.com
> Date: Monday, April 25, 2011, 12:42 PM
> Hi Eric,
>
> Unfortunately, the LocalJobRunner is missing a feature that
> is
wrote:
> Hi all,
>
> I'm having some trouble running the importtsv tool on CDH3B4 configured in
> pseudo distributed mode.
> The tool works fine unless I add the option importtsv.bulk.output.
>
> Does importtsv with the option importtsv.bulk.output work in pseudo
> d
Hi all,
I'm having some trouble running the importtsv tool on CDH3B4 configured in
pseudo distributed mode.
The tool works fine unless I add the option importtsv.bulk.output.
Does importtsv with the option importtsv.bulk.output work in pseudo distributed
mode or do I maybe have some
> >> profiling with ten inputs or one million? Is this on a single node or
> >> a thousand node cluster? What version of HBase?
> >>
> >> Thank you,
> >> St.Ack
> >>
> >> On Wed, Apr 6, 2011 at 7:54 PM, Gan, Xiyun wrote:
> >
of HBase?
>>
>> Thank you,
>> St.Ack
>>
>> On Wed, Apr 6, 2011 at 7:54 PM, Gan, Xiyun wrote:
>> > Hi,
>> > I need to use bulk load functionality in HBase. I have read the
>> > documentation on HBase wiki page, but the ImportTsv tool does not
r
> a thousand node cluster? What version of HBase?
>
> Thank you,
> St.Ack
>
> On Wed, Apr 6, 2011 at 7:54 PM, Gan, Xiyun wrote:
> > Hi,
> > I need to use bulk load functionality in HBase. I have read the
> > documentation on HBase wiki page, but the
k load functionality in HBase. I have read the
> documentation on HBase wiki page, but the ImportTsv tool does not meet my
> need, so I added some code to the map() function in ImportTsv.java.
> Originally, that map() function writes only one key/value pair to the
> context. In my modified code, t
Hi,
I need to use bulk load functionality in HBase. I have read the
documentation on HBase wiki page, but the ImportTsv tool does not meet my
need, so I added some code to the map() function in ImportTsv.java.
Originally, that map() function writes only one key/value pair to the
context. In my