Shawn,
   Can you provide the stack trace that you get with the OOM?

Thanks,
   Owen

On Mon, Sep 17, 2018 at 9:27 AM Prasanth Jayachandran <
pjayachand...@hortonworks.com> wrote:

> Hi Shawn,
>
> You might be running into issues related to huge protobuf objects from
> huge string columns. Without
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/ORC-203 there
> isn’t an option other than providing sufficiently large memory. If you
> can reload the data with the binary type, that should help avoid this
> issue.
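>
> A rough sketch of the kind of reload I mean (the table and column names
> below are made up for illustration; the real DDL would have to match
> your schema and bucketing):
>
>     -- hypothetical target table storing the XML payload as BINARY
>     CREATE TABLE events_binary (
>       id BIGINT,
>       payload BINARY
>     ) STORED AS ORC;
>
>     -- copy the data, encoding the string column as UTF-8 bytes
>     INSERT OVERWRITE TABLE events_binary
>     SELECT id, encode(payload_xml, 'UTF-8')
>     FROM events_string;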
>
> Thanks
> Prasanth
>
>
>
> On Mon, Sep 17, 2018 at 9:10 AM -0700, "Shawn Weeks" <
> swe...@weeksconsulting.us> wrote:
>
>> Let me start off by saying I've backed myself into a corner and would
>> rather not reprocess the data if possible. I have a Hive transactional
>> table in Hive 1.2.1 that was loaded via NiFi Hive Streaming with a
>> fairly large string column containing XML documents. Awful, I know,
>> and I'm working on changing how the data gets loaded. But I've got
>> this table with so many deltas that the Hive compaction runs out of
>> memory, and so does any query on the table. Any ideas on how I might
>> get the data out of the table and split it into more buckets or
>> something?
>>
>>
>> Thanks
>>
>> Shawn Weeks
>>
>