I am trying to load 1.4M records from a 7-column CSV file into HBase.

Question 1: Is this feasible?

Question 2: What type of tuning on HBase and/or HDFS would be needed?


I am using Apache HBase 0.94.15 and Apache Hadoop 1.2.1.

Here is my command string:

/hd/hadoop/bin/hadoop jar /hbase/hbase-0.94.15/hbase-0.94.15.jar importtsv \
  '-Dimporttsv.separator=,' \
  -Dimporttsv.columns=HBASE_ROW_KEY,BATCH_ID,B_ITEM_NO,B_ITEM_DESCRIPTION,CONS_BATCH_ID,C_ITEM_NO,C_ITEM_DESC,QTY_ISSUED \
  MIIBIG /md/test_hdfs_input/large.csv
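One thing worth checking before tuning anything: importtsv expects every input line to have exactly as many fields as there are names in -Dimporttsv.columns, and rows that don't match are skipped as bad lines. The command above lists eight names (HBASE_ROW_KEY plus seven data columns) against a file described as seven columns, so it may help to sanity-check the file first. Below is a small, hypothetical Python helper (not part of HBase) that samples the CSV and reports rows whose field count differs from the column spec:

```python
import csv

# Column spec copied from the importtsv command above:
# HBASE_ROW_KEY plus seven data columns.
IMPORTTSV_COLUMNS = (
    "HBASE_ROW_KEY,BATCH_ID,B_ITEM_NO,B_ITEM_DESCRIPTION,"
    "CONS_BATCH_ID,C_ITEM_NO,C_ITEM_DESC,QTY_ISSUED"
).split(",")

def check_csv_fields(path, expected=len(IMPORTTSV_COLUMNS), sample=1000):
    """Return 1-based line numbers of sampled rows whose field count
    differs from the importtsv column spec; importtsv would skip
    such rows as bad lines rather than loading them."""
    bad = []
    with open(path, newline="") as f:
        for i, row in enumerate(csv.reader(f), start=1):
            if len(row) != expected:
                bad.append(i)
            if i >= sample:
                break
    return bad
```

If this reports most or all rows as mismatched, the fix is to make the -Dimporttsv.columns list line up with the actual fields per line (including the row key) rather than to tune the cluster.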

Any help appreciated.

Sincerely,
Sean
