> 1) confirm your beeline java process is indeed running with expanded memory
The OOM error is clearly coming from the HiveServer2 CBO codepath, not from beeline itself:
at org.apache.calcite.rel.AbstractRelNode$1.explain_(AbstractRelNode.java:409)
at org.apache.calcite.rel.externalize.Re
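Since the stack trace points into Calcite, one thing that might be worth trying (my own suggestion here, not something anyone tested in this thread) is to take the cost-based optimizer out of the picture for the session and see whether the OOM goes away:

    -- hedged sketch: hive.cbo.enable toggles the Calcite-based planner,
    -- which is the codepath shown in the stack trace above
    SET hive.cbo.enable=false;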
Subject: RE: Beeline throws OOM on large input query
Reply to Stephen Sprague
> 1) confirm your beeline java process is indeed running with expanded memory
I used -XX:+PrintCommandLineFlags, which showed:
-XX:MaxHeapSize=17179869184
confirming the 16g setting (17179869184 bytes = 16 GiB).
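For anyone repeating that check, a minimal shell sketch (the JDBC URL is a placeholder):

    # print the JVM's effective flags at startup and pick out the heap size
    export HADOOP_CLIENT_OPTS="-Xmx16g -XX:+PrintCommandLineFlags"
    beeline -u "jdbc:hive2://host:10000" 2>&1 | grep MaxHeapSize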
> 2) try the hive-cli (or the python one even.) or "beeline -u jdbc:hive2
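(On suggestion 2, assuming the statement lives in a file: the classic cli takes a script with -f, which would at least isolate whether the blowup is beeline-specific. The file name below is made up.)

    # run the same statement through the old hive cli instead of beeline
    hive -f /tmp/big_query.sql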
Hi Adam
I’m not clear about what you are trying to achieve in your query.
Can you please give a small example?
Thanks
Dudu
From: Adam [mailto:work@gmail.com]
Sent: Friday, September 02, 2016 4:13 PM
To: user@hive.apache.org
Subject: Re: Beeline throws OOM on large input query
hmmm. so beeline blew up *before* the query was even submitted to the
execution engine? one would think 16G would be plenty for an 8M row sql
statement.
some suggestions if you feel like going further down the rabbit hole.
1. confirm your beeline java process is indeed running with expanded
memory (
I set the heap size using HADOOP_CLIENT_OPTS all the way to 16g and still
no luck.
I tried to go down the table join route, but the problem is that the
relation is not an equality, so it would be a theta join, which is not
supported in Hive.
Basically what I am doing is a geographic intersection agai
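For what it's worth, the usual way to emulate a theta join in Hive is to move the non-equality condition into the WHERE clause of a cross join, since the ON clause only accepts equality predicates. A sketch with made-up tables and columns, guessing at the bounding-box shape of a geographic intersection:

    -- hedged sketch over a hypothetical schema: cross join + filter
    -- instead of a non-equi ON clause
    SELECT p.id, r.region_id
    FROM points p
    CROSS JOIN regions r
    WHERE p.lat BETWEEN r.min_lat AND r.max_lat
      AND p.lon BETWEEN r.min_lon AND r.max_lon;

The caveat is that the cross join enumerates every pair before filtering, so this only stays tractable when one side is small.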
lemme guess. your query contains an 'in' clause with 1 million static
values? :)
* brute force solution is to set:
HADOOP_CLIENT_OPTS=-Xmx8G (or whatever)
before you run beeline to force a larger memory size
(i'm pretty sure beeline uses that env var though i didn't actually check
the script)
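Concretely, that brute-force route looks like this (connection URL is a placeholder):

    # give the beeline client JVM a bigger heap before launching it
    export HADOOP_CLIENT_OPTS="-Xmx8G"
    beeline -u "jdbc:hive2://host:10000"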