Does that mean you would like to read the POJO objects using Hive? Is your
POJO a custom Writable?
LazyBinarySerDe, in my opinion, is a SerDe that converts a BytesWritable into
columns. Your RecordReader would return a BytesWritable, and the SerDe,
together with an ObjectInspector, would convert it into typed columns. So ...
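To make that concrete, here is a rough, untested sketch of the read path
using the serde2 API; the two-column schema (id int, name string) and the
class name are made up for illustration:

    import java.util.Properties;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe;
    import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;
    import org.apache.hadoop.hive.serde2.objectinspector.StructField;
    import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
    import org.apache.hadoop.io.BytesWritable;

    public class LazyBinaryReadSketch {
      // Turns the BytesWritable a RecordReader returned into typed columns.
      public static void dumpRow(BytesWritable value) throws Exception {
        Properties tbl = new Properties();
        tbl.setProperty("columns", "id,name");          // hypothetical schema
        tbl.setProperty("columns.types", "int,string");

        LazyBinarySerDe serde = new LazyBinarySerDe();
        serde.initialize(new Configuration(), tbl);

        Object row = serde.deserialize(value);          // lazy struct over the raw bytes
        StructObjectInspector soi =
            (StructObjectInspector) serde.getObjectInspector();
        for (StructField f : soi.getAllStructFieldRefs()) {
          // Assuming primitive columns here; the field OI unwraps each value.
          PrimitiveObjectInspector poi =
              (PrimitiveObjectInspector) f.getFieldObjectInspector();
          Object col = poi.getPrimitiveJavaObject(soi.getStructFieldData(row, f));
          System.out.println(f.getFieldName() + " = " + col);
        }
      }
    }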
Hi Aniket,
I am looking to run some data through a MapReduce job, and I want the output
sequence files to be compatible with a block-compressed, partitioned
LazyBinarySerDe table so I can map external tables to it. The current job
uses a POJO that implements Writable to serialize to disk; this is easy to
read back.
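To frame the question, something like the following is what I am aiming at
(an untested sketch only; the schema and class names are placeholders). The
reducer packs each row with LazyBinarySerDe and emits the resulting
BytesWritable as the value, since as far as I can tell Hive ignores the key
in sequence files, and the driver turns on block compression:

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Properties;

    import org.apache.hadoop.hive.serde2.SerDeException;
    import org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe;
    import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
    import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
    import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
    import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

    public class LazyBinaryPackReducer
        extends Reducer<Text, Text, NullWritable, BytesWritable> {

      private LazyBinarySerDe serde;
      private StructObjectInspector rowOI;

      @Override
      protected void setup(Context ctx) throws IOException {
        try {
          Properties tbl = new Properties();
          tbl.setProperty("columns", "id,name");          // must match the table DDL
          tbl.setProperty("columns.types", "int,string");
          serde = new LazyBinarySerDe();
          serde.initialize(ctx.getConfiguration(), tbl);
          rowOI = ObjectInspectorFactory.getStandardStructObjectInspector(
              Arrays.asList("id", "name"),
              Arrays.<ObjectInspector>asList(
                  PrimitiveObjectInspectorFactory.javaIntObjectInspector,
                  PrimitiveObjectInspectorFactory.javaStringObjectInspector));
        } catch (SerDeException e) {
          throw new IOException(e);
        }
      }

      @Override
      protected void reduce(Text key, Iterable<Text> values, Context ctx)
          throws IOException, InterruptedException {
        try {
          // Build the typed row (in the real job, from the pojo's fields).
          List<Object> row = Arrays.<Object>asList(42, key.toString());
          BytesWritable packed = (BytesWritable) serde.serialize(row, rowOI);
          ctx.write(NullWritable.get(), packed);          // key is ignored by Hive
        } catch (SerDeException e) {
          throw new IOException(e);
        }
      }

      // Driver-side knobs: block-compressed SequenceFile output.
      public static void configureOutput(Job job) {
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(BytesWritable.class);
        SequenceFileOutputFormat.setCompressOutput(job, true);
        SequenceFileOutputFormat.setOutputCompressionType(
            job, SequenceFile.CompressionType.BLOCK);
      }
    }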
Hi Hans,
Can you please elaborate on the use case a bit more? Is your data already in
a binary format readable by LazyBinarySerDe (i.e., if you mounted a table
with that SerDe in Hive, could it read the data)?
OR
are you trying to write data using MapReduce (Java) into a location that can
then be read by a table that is declared to use that SerDe?
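To make the second option concrete, mounting a table on that SerDe would
look something like this; the schema, partition column, and location are
just placeholders:

    CREATE EXTERNAL TABLE my_table (id INT, name STRING)
    PARTITIONED BY (dt STRING)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe'
    STORED AS SEQUENCEFILE
    LOCATION '/path/to/mapreduce/output';

    ALTER TABLE my_table ADD PARTITION (dt='2012-01-01')
    LOCATION '/path/to/mapreduce/output/dt=2012-01-01';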
I am attempting to use LazyBinarySerDe to read sequence files output by a
MapReduce job. Is there an example of how the data needs to be packed by the
final reducer, and how the tables are set up so they can read the output?