Hi Aniket,

I am looking to run some data through a MapReduce job, and I want the output
SequenceFiles to be compatible with block-compressed, partitioned
LazyBinarySerDe so that I can map external tables to them. The current job
uses a POJO that implements Writable to serialize to disk. That is easy to
read back in for MapReduce, but I am not sure how to read it with Hive. Do I
need to define the row as a single struct column, or just as normal fields
with the row format set to LazyBinarySerDe?
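
To make the question concrete, here is roughly what I am picturing on both
sides. The table name, column names, and paths below are placeholders, and I
have not verified any of this end to end, so treat it as a sketch rather than
a working setup.

On the Hive side, mounting the job output as an external table:

    CREATE EXTERNAL TABLE events (
      user_id    BIGINT,
      event_name STRING,
      event_time BIGINT
    )
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe'
    STORED AS SEQUENCEFILE
    LOCATION '/user/huhlig/events';

And on the MapReduce side, replacing my Writable POJO with rows serialized by
the SerDe itself, with block compression turned on in the driver:

    // Driver: block-compressed SequenceFile output. Since Hive ignores the
    // key, a NullWritable key + BytesWritable value seems like the right shape.
    job.setOutputFormatClass(SequenceFileOutputFormat.class);
    job.setOutputKeyClass(NullWritable.class);
    job.setOutputValueClass(BytesWritable.class);
    SequenceFileOutputFormat.setCompressOutput(job, true);
    SequenceFileOutputFormat.setOutputCompressionType(job,
        SequenceFile.CompressionType.BLOCK);
    SequenceFileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);

    // Reducer setup: initialize the SerDe with the same schema as the table,
    // plus an ObjectInspector describing one row as a struct.
    LazyBinarySerDe serde = new LazyBinarySerDe();
    Properties props = new Properties();
    props.setProperty("columns", "user_id,event_name,event_time");
    props.setProperty("columns.types", "bigint,string,bigint");
    serde.initialize(context.getConfiguration(), props);

    List<ObjectInspector> fieldOIs = Arrays.<ObjectInspector>asList(
        PrimitiveObjectInspectorFactory.javaLongObjectInspector,
        PrimitiveObjectInspectorFactory.javaStringObjectInspector,
        PrimitiveObjectInspectorFactory.javaLongObjectInspector);
    ObjectInspector rowOI =
        ObjectInspectorFactory.getStandardStructObjectInspector(
            Arrays.asList("user_id", "event_name", "event_time"), fieldOIs);

    // Per output row: build a List matching the struct, serialize, emit.
    List<Object> row = Arrays.<Object>asList(userId, eventName, eventTime);
    BytesWritable value = (BytesWritable) serde.serialize(row, rowOI);
    context.write(NullWritable.get(), value);

Does that look like the right general shape, or is there a simpler way to
pack the rows in the final reduce?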

On Sun, Jan 22, 2012 at 5:41 PM, Aniket Mokashi <aniket...@gmail.com> wrote:

> Hi Hans,
>
> Can you please elaborate on the use case? Is your data already in a
> binary format readable by LazyBinarySerDe (i.e., would it work if you
> mounted a table with that SerDe in Hive)?
> OR
> are you trying to write data using MapReduce (Java) into a location that
> can then be read by a table declared to use LazyBinarySerDe?
>
> Please elaborate more.
>
> Thanks,
> Aniket
>
> On Sun, Jan 22, 2012 at 10:23 AM, Hans Uhlig <huh...@uhlisys.com> wrote:
>
>> I am attempting to use LazyBinarySerDe to read SequenceFiles output by a
>> MapReduce job. Is there an example of how the data needs to be packed by
>> the final reduce, and how the tables are set up so they can read the
>> output?
>
> --
> "...:::Aniket:::... Quetzalco@tl"
>
