I'm trying to migrate part of our current Hadoop jobs from plain MapReduce 
jobs to Hive.
The data is stored in SequenceFiles, and the keys contain valuable data.
However, when I load the data into a Hive table I lose that key data (or at 
least I can't access it from Hive). I want to somehow use the key from the 
sequence file in Hive.

I know this has come up before, since I can find hints of other people needing 
it, but I can't seem to find a working solution, and since I'm not very good 
with Java I really can't get it done myself :(.
Does anyone have a snippet of something like this working?
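For concreteness, here is roughly what I imagine it would look like: a custom InputFormat that wraps SequenceFileRecordReader and prepends the key to the value, so Hive sees the key as the first column. All class names here are my own invention, and I'm assuming Text keys and Text values (our actual types may differ) -- this is just a sketch, not something I have working:

```java
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.SequenceFileRecordReader;

/**
 * Hypothetical InputFormat that prepends the SequenceFile key to the
 * value (tab-separated), so both are visible to Hive as columns.
 * Assumes Text keys and Text values.
 */
public class KeyAsColumnSeqFileInputFormat
        extends SequenceFileInputFormat<Text, Text> {

    @Override
    public RecordReader<Text, Text> getRecordReader(
            InputSplit split, JobConf job, Reporter reporter)
            throws IOException {
        return new KeyAsColumnRecordReader(job, (FileSplit) split);
    }

    public static class KeyAsColumnRecordReader
            implements RecordReader<Text, Text> {

        private final SequenceFileRecordReader<Text, Text> inner;
        private final Text innerKey;
        private final Text innerValue;

        public KeyAsColumnRecordReader(JobConf job, FileSplit split)
                throws IOException {
            // SequenceFileRecordReader has no no-arg constructor,
            // so it must be built with (conf, split).
            inner = new SequenceFileRecordReader<Text, Text>(job, split);
            innerKey = inner.createKey();
            innerValue = inner.createValue();
        }

        @Override
        public boolean next(Text key, Text value) throws IOException {
            if (!inner.next(innerKey, innerValue)) {
                return false;
            }
            key.set(innerKey);
            // Prepend the key so it shows up as the first Hive column.
            value.set(innerKey.toString() + "\t" + innerValue.toString());
            return true;
        }

        @Override public Text createKey() { return new Text(); }
        @Override public Text createValue() { return new Text(); }
        @Override public long getPos() throws IOException { return inner.getPos(); }
        @Override public float getProgress() throws IOException { return inner.getProgress(); }
        @Override public void close() throws IOException { inner.close(); }
    }
}
```

I'd then presumably point the table at it with STORED AS INPUTFORMAT 'KeyAsColumnSeqFileInputFormat' (plus an OUTPUTFORMAT) in the CREATE TABLE, though I haven't gotten that far yet.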

I get errors like:

../hive/mapred/CustomSeqRecordReader.java:14: cannot find symbol
    [javac] symbol  : constructor SequenceFileRecordReader()
    [javac] location: class org.apache.hadoop.mapred.SequenceFileRecordReader<K,V>
    [javac] public class CustomSeqRecordReader<K, V> extends SequenceFileRecordReader<K, V> implements RecordReader<K, V> {
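From what I can tell, org.apache.hadoop.mapred.SequenceFileRecordReader only has a SequenceFileRecordReader(Configuration conf, FileSplit split) constructor and no no-arg one, so the compiler can't generate a default constructor for my subclass. I'm guessing the subclass needs an explicit constructor that chains to it, something like:

```java
// Sketch of the constructor I think CustomSeqRecordReader needs:
// the superclass must be initialised with the job conf and the split,
// since it has no no-arg constructor.
public CustomSeqRecordReader(Configuration conf, FileSplit split)
        throws IOException {
    super(conf, split);
}
```

But I'm not sure how to wire that up, since whatever InputFormat Hive calls would have to pass the conf and split along when it creates the reader.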


I hope someone has a snippet or can help me out; I would really love to be able 
to switch part of our jobs to Hive.


Ruben de Vries
