Make sure there are no primary key clashes. HBase will overwrite the row if you
upload data with the same primary key. That's one reason you could end up with
fewer rows than you uploaded.
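A quick way to check the source file for duplicate keys before loading (a sketch,
assuming a comma separator and the row key in the first column):

cut -d',' -f1 excel.csv | sort | uniq -d | wc -l

If that prints anything other than 0, some rows share a key and will collapse
into a single HBase row.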
Sent from my mobile device, please excuse the typos
> On May 1, 2014, at 3:34 PM, "Kennedy, Sean C." wrote:
I ran the following command to import an excel.csv file into HBase. Everything
looked OK; however, when I ran a scan on the table in HBase I did not see as many
rows as there were in the excel.csv file.
Any help appreciated
/hd/hadoop/bin/hadoop jar /hbase/hbase-0.94.15/hbase-0.94.15.jar importtsv
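If it helps to compare counts directly, you can tally rows from the HBase shell
(a sketch; 'mytable' is a placeholder for your actual table name, and the path
assumes HBase is installed under /hbase/hbase-0.94.15):

/hbase/hbase-0.94.15/bin/hbase shell
hbase> count 'mytable'

and compare that against the line count of the source file with wc -l excel.csv.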
Matouk,
Thank you very much, I had success!
Have a great day ...
Sincerely,
Sean
From: Matouk IFTISSEN [mailto:matouk.iftis...@ysance.com]
Sent: Friday, February 14, 2014 6:52 PM
To: user@hive.apache.org
Subject: Re: hbase importtsv
On Feb 14, 2014, at 3:51 PM, Matouk IFTISSEN wrote:
> Hello,
> You can use bulkload in two phases; in the MapR distribution we use this:
>
> 2014-02-14 16:59 GMT+01:00 Kennedy, Sean C.:
> I am trying to load 1.4M records in a 7-column CSV file into HBase.
>
>
Sounds like there is a lim
Hello,
You can use bulkload in two phases; in the MapR distribution we use this:
1. First phase: map the data to HFiles for the HBase table
hadoop jar /opt/mapr/hbase/hbase-0.94.5/hbase-0.94.5-mapr.jar importtsv \
  -Dimporttsv.separator=';' \
  -Dimporttsv.bulk.output=folder_bulk_local \
  -Dimporttsv.columns=HBASE_ROW_KEY
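2. Second phase (the step that appears cut off above): load the generated HFiles
into the table with completebulkload. A sketch, assuming the table is named
my_table:

hadoop jar /opt/mapr/hbase/hbase-0.94.5/hbase-0.94.5-mapr.jar completebulkload folder_bulk_local my_table

This moves the HFiles written to folder_bulk_local into the regions of my_table,
bypassing the normal write path.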
I am trying to load 1.4M records in a 7-column CSV file into HBase.
Question 1: Is this feasible?
Question 2: What type of tuning on HBase and/or HDFS would be needed?
I am using Apache HBase 0.94.15 and Apache Hadoop 1.2.1.
Here is my command string:
/hd/hadoop/bin/hadoop jar /hbase/hbase