Thanks, Franke! For now, I probably don't want to move the data directly into
Hive. My SQL database contains a table 'test' with two columns (file_name
char(100), file_data longblob). The 'file_data' column may contain XML-formatted
data or pipe-delimited data, and it is a huge amount of data. Right now I am
considering loading the data into HDFS under some directory structure using
Sqoop. I just want to make sure whether there is any possibility of data loss,
and whether there are any best practices that need to be followed. Thanks for
your time.
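
For what it's worth, below is roughly the Sqoop command I am planning to try.
It is untested, and the JDBC URL, credentials and target directory are just
placeholders for my environment:

    sqoop import \
      --connect jdbc:mysql://dbhost/mydb \
      --username myuser -P \
      --table test \
      --target-dir /data/raw/test \
      --map-column-java file_data=String \
      --fields-terminated-by '\001' \
      --num-mappers 1

Since file_data is really text (XML or pipe-delimited) stored in a longblob, I
am assuming --map-column-java file_data=String is the right way to bring it
across, and I picked the Ctrl-A field delimiter so it does not collide with the
pipes inside the payload. If the XML can contain embedded newlines, I would
probably import --as-avrodatafile or --as-sequencefile instead of plain text.
My plan for catching data loss is simply to compare source and HDFS record
counts after the import. Does that sound reasonable?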

On Wednesday, April 27, 2016, Jörn Franke <jornfra...@gmail.com> wrote:

> You could try as binary. Is it just for storing the blobs or for doing
> analyses on them? In the first case you may think about storing them as
> files in HDFS and including in Hive just a string containing the file name
> (to make analysis on the other data faster). In the latter case you should
> think about an optimal analysis format in Hive.
>
> > On 27 Apr 2016, at 22:13, Ajay Chander <hadoopde...@gmail.com> wrote:
> >
> > Hi Everyone,
> >
> > I have a table which has a few columns of blob type, with a huge amount
> of data. Is there any best way to 'sqoop import' it into Hive tables
> without losing any data? Any help is highly appreciated.
> >
> > Thank you!
>
