Wataru Yukawa created HIVE-13374:
Summary: HiveServer2 hangs up if a query scanning too many partitions is submitted
Key: HIVE-13374
URL: https://issues.apache.org/jira/browse/HIVE-13374
Project: Hive
Thanks Sanjeev for your help.
BTW, I tried increasing the heap size of HS2 but I am seeing the same
exception. Judging from where this exception originated, it looks like it
came from the Thrift client. Any idea what operation it is doing, given the
stack?
Local Variable: org.apache.thrift.TByteArrayOu
Sanjeev,
I am going off this exception in the stacktrace that you posted:
"at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)"
which definitely indicates that it's not very happy memory-wise. I would
definitely recommend bumping up the memory and seeing if it helps. If not,
we can debug further from there.
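For reference, a minimal sketch of what "bumping up the memory" usually means for HS2 (assuming the standard hive-env.sh launch path; the 8192 value is only illustrative):

```shell
# hive-env.sh (sketch; the exact file location depends on your distribution).
# HADOOP_HEAPSIZE is picked up by the Hive launch scripts and sets the JVM
# maximum heap (in MB) for HiveServer2; 8192 is an illustrative value.
export HADOOP_HEAPSIZE=8192

# Equivalently, the flag can be passed directly (illustrative):
# export HADOOP_OPTS="$HADOOP_OPTS -Xmx8g"
```

After restarting HS2, the effective -Xmx can be confirmed in the process arguments (e.g. `ps -ef | grep HiveServer2`).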
What does this exception imply here? How do I identify the problem?
Thanks
On Tue, Sep 8, 2015 at 10:44 PM, Sanjeev Verma
wrote:
> We have 8GB HS2 java heap, we have not tried any bumping.
>
We have 8GB HS2 java heap, we have not tried any bumping.
On Tue, Sep 8, 2015 at 8:14 PM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> How much memory have you currently provided to HS2? Have you tried bumping
> that up?
>
How much memory have you currently provided to HS2? Have you tried bumping
that up?
On Mon, Sep 7, 2015 at 1:09 AM, Sanjeev Verma
wrote:
*I am getting the following exception when HS2 crashes, any idea why it is
happening*
"pool-1-thread-121" prio=4 tid=19283 RUNNABLE
at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)
at java.util.Arrays.copyOf(Arrays.java:2271)
Local Variable: byte[]#1
at java.io.ByteArrayOutputStre
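Those frames (Arrays.copyOf reached from a ByteArrayOutputStream) are the classic signature of a byte buffer outgrowing the heap: ByteArrayOutputStream doubles its backing array on every expansion via Arrays.copyOf, so buffering a large Thrift response transiently needs roughly twice its final size in free heap. A minimal sketch of that growth pattern using the plain JDK (not Hive's actual code path):

```java
import java.io.ByteArrayOutputStream;

public class BufferGrowth {
    public static void main(String[] args) {
        // Start tiny so the buffer must expand many times; each expansion
        // calls Arrays.copyOf, allocating the new array while the old one
        // is still live -- the same frames seen in the HS2 stack trace.
        ByteArrayOutputStream out = new ByteArrayOutputStream(32);
        byte[] chunk = new byte[1024];
        for (int i = 0; i < 1024; i++) {
            out.write(chunk, 0, chunk.length);
        }
        // 1 MiB is now buffered entirely on the server-side heap before it
        // can be handed to the transport.
        System.out.println(out.size()); // prints 1048576
    }
}
```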
Sanjeev,
One possibility is that you are running into [1], which affects Hive 0.13.
Is it possible for you to apply the patch on [1] and see if it fixes your
problem?
[1] https://issues.apache.org/jira/browse/HIVE-10410
On Thu, Aug 20, 2015 at 6:12 PM, Sanjeev Verma
wrote:
We are using hive-0.13 with hadoop1.
On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> Sanjeev,
>
> Can you tell me more details about your hive version/hadoop version etc.
>
> On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma
> wrote:
Sanjeev,
Can you tell me more details about your hive version/hadoop version etc.
On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma
wrote:
> Can somebody give me some pointers to look into?
We had a case of retrieving a record which is bigger than the GC limit, for
example a column with Array or Map type that has 1M cells.
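One way to confirm whether a single oversized record is the culprit (a sketch, assuming a HotSpot JVM; the dump path is illustrative): have the JVM write a heap dump when HS2 hits the OutOfMemoryError, then look at the biggest objects in the dump.

```shell
# Add to the HiveServer2 JVM options (e.g. in hive-env.sh; the dump path
# is illustrative):
export HADOOP_OPTS="$HADOOP_OPTS -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/tmp/hs2.hprof"

# After the next OOM, open /tmp/hs2.hprof in a heap analyzer (e.g. jhat or
# Eclipse MAT) and check whether one row's Array/Map column dominates.
```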
On Wed, Aug 19, 2015 at 9:35 PM, Sanjeev Verma
wrote:
> Can somebody give me some pointers to look into?
Can somebody give me some pointers to look into?
On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma
wrote:
Hi
We are experiencing a strange problem with HiveServer2: in one of the jobs
it hits the GC overhead limit in a mapred task and hangs even though enough
heap is available. We are not able to identify what is causing this issue.
Could anybody help me identify the issue and let me know what pointers I
need.