A SerDe will allow you to create custom data from your sequence file:
https://cwiki.apache.org/confluence/display/Hive/SerDe
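
For the key problem specifically, one option is a custom InputFormat whose RecordReader folds the sequence file key into the value that Hive reads. The sketch below is untested, the class names are made up, and it assumes Text keys and Text values; note that SequenceFileRecordReader in the old org.apache.hadoop.mapred API has no no-arg constructor (that is what the compile error further down complains about), so the wrapper has to pass (conf, split) through to it:

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.SequenceFileRecordReader;

// Hypothetical class name; exposes the sequence file key as part of the value.
public class KeyAsValueSequenceFileInputFormat extends SequenceFileInputFormat<Text, Text> {

    @Override
    public RecordReader<Text, Text> getRecordReader(InputSplit split, JobConf job,
                                                    Reporter reporter) throws IOException {
        return new KeyAsValueRecordReader(job, (FileSplit) split);
    }

    public static class KeyAsValueRecordReader implements RecordReader<Text, Text> {
        private final SequenceFileRecordReader<Text, Text> reader;

        // SequenceFileRecordReader has no no-arg constructor; forward conf and split.
        public KeyAsValueRecordReader(JobConf conf, FileSplit split) throws IOException {
            reader = new SequenceFileRecordReader<Text, Text>(conf, split);
        }

        public boolean next(Text key, Text value) throws IOException {
            Text realKey = reader.createKey();
            Text realValue = reader.createValue();
            if (!reader.next(realKey, realValue)) {
                return false;
            }
            key.set(realKey);
            // Prepend the key so Hive sees it as the first tab-delimited column.
            value.set(realKey.toString() + "\t" + realValue.toString());
            return true;
        }

        public Text createKey()   { return new Text(); }
        public Text createValue() { return new Text(); }
        public long getPos() throws IOException { return reader.getPos(); }
        public float getProgress() throws IOException { return reader.getProgress(); }
        public void close() throws IOException { reader.close(); }
    }
}

The table would then be created with STORED AS INPUTFORMAT pointing at that class (plus a matching OUTPUTFORMAT such as org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat), and the key shows up as the first column. Treat it as a starting point rather than a tested solution.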

On Thu, Apr 19, 2012 at 3:37 PM, Ruben de Vries <ruben.devr...@hyves.nl> wrote:

> I’m trying to migrate a part of our current Hadoop jobs from normal
> MapReduce jobs to Hive.
>
> Previously the data was stored in sequence files, with the keys containing
> valuable data!
>
> However, if I load the data into a table I lose that key data (or at least
> I can’t access it with Hive). I want to somehow use the key from the
> sequence file in Hive.
>
> I know this has come up before, since I can find some hints of people
> needing it, but I can’t seem to find a working solution, and since I’m not
> very good with Java I really can’t get it done myself.
>
> Does anyone have a snippet of something like this working?
>
> I get errors like:
>
> ../hive/mapred/CustomSeqRecordReader.java:14: cannot find symbol
>     [javac] symbol  : constructor SequenceFileRecordReader()
>     [javac] location: class org.apache.hadoop.mapred.SequenceFileRecordReader<K,V>
>     [javac] public class CustomSeqRecordReader<K, V> extends SequenceFileRecordReader<K, V> implements RecordReader<K, V> {
>
> Hope someone has a snippet or can help me out; I would really love to be able
> to switch part of our jobs to Hive.
>
> Ruben de Vries
>



-- 
https://github.com/zinnia-phatak-dev/Nectar
