ok. thanks.
so given everything we know the choices i see are:
1. increase your heapsize some more. (And of course confirm that the process
you reported with -Xmx8192M is the HiveServer2 process.)
2. modify your query such that it doesn't use "select *" (see the sketch below)
3. modify your query such that it does i
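(re #2, just to illustrate with made-up column names -- pick the handful you
actually need rather than all 15k; url/port assumed to be the defaults,
adjust for your install:

  $ beeline -u jdbc:hive2://localhost:10000 \
      -e "select col1, col2, col3 from bigtable limit 100"
)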
Sorry, i reported it badly. It's 8192M.
Thanks,
David.
On Feb 18, 2014, at 18:37, "Stephen Sprague" wrote:
oh. i just noticed the -Xmx value you reported.
there's no M or G after that number?? I'd like to see -Xmx8192M or
-Xmx8G. That *is* very important.
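just to spell out why the suffix matters -- a bare number is taken as *bytes*:

  # -Xmx8192   -> 8192 bytes  (the JVM won't even start)
  # -Xmx8192M  -> 8192 megabytes
  # -Xmx8G     -> 8 gigabytes
  $ java -Xmx8192 -version    # errors out, heap far too small
  $ java -Xmx8192M -version   # fine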
thanks,
Stephen.
On Tue, Feb 18, 2014 at 9:22 AM, Stephen Sprague wrote:
thanks.
re #1. we need to find that Hiveserver2 process. For all i know the one
you reported is hiveserver1 (which works.) chances are they use the same
-Xmx value but we really shouldn't make any assumptions.
try wide format on the ps command (eg. ps -efw | grep -i Hiveserver2)
re.#2. okay.
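and to pull just the heap flag out of a long command line, something like
this one-liner should do (the [h] keeps grep from matching itself):

  $ ps -efww | grep -i '[h]iveserver2' | grep -o -- '-Xmx[0-9]*[MmGgKk]*'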
1. I have no process with hiveserver2 ...
"ps -ef | grep -i hive" returns a pretty long command with a -Xmx8192,
and that's the value set in hive-env.sh.
2. "select * from table limit 1", or even limit 100, works correctly.
David.
On Tue, Feb 18, 2014 at 4:16 PM, Stephen Sprague wrote:
He lives on after all! and thanks for the continued feedback.
We need the answers to these questions using HS2:
1. what is the output of "ps -ef | grep -i hiveserver2" on your system?
in particular what is the value of -Xmx ?
2. does "select * from table limit 1" work?
Thanks,
Stephen.
I'm so sorry, i wrote an answer and forgot to send it.
And i haven't been able to work on this for a few days.
So far:
I have a 15k-column table with 50k rows.
I do not see any change if i change the storage.
*Hive 0.12.0*
My test query is "select * from bigtable"
If i use the hive c
With HIVE-3746, which will be included in hive-0.13, HiveServer2 takes less
memory than before.
Could you try it with the version in trunk?
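(for the record, building trunk is roughly the below, if memory serves --
check the wiki for the exact maven profile for your hadoop version:

  $ svn co http://svn.apache.org/repos/asf/hive/trunk hive-trunk
  $ cd hive-trunk
  $ mvn clean package -DskipTests -Phadoop-1
)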
2014-02-13 10:49 GMT+09:00 Stephen Sprague :
question to the original poster. closure appreciated!
On Fri, Jan 31, 2014 at 12:22 PM, Stephen Sprague wrote:
thanks Ed. And on a separate tack, let's look at Hiveserver2.
@OP>
*I've tried to look around on how i can change the thrift heap size but
haven't found anything.*
looking at my hiveserver2 i find this:
$ ps -ef | grep -i hiveserver2
dwr   9824 20479  0 12:11 pts/1    00:00:00 grep -i
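and fwiw the usual knob, i believe, is the heap exported in hive-env.sh
before the service is started:

  # $HIVE_HOME/conf/hive-env.sh
  export HADOOP_HEAPSIZE=8192      # in MB
  # then restart the service so it takes effect:
  $ hive --service hiveserver2 &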
Final table compression should not affect the deserialized size of the
data over the wire.
On Fri, Jan 31, 2014 at 2:49 PM, Stephen Sprague wrote:
Excellent progress David. So. The most important thing we learned here is
that it works (!) running hive in local mode, and that this error is a
limitation in HiveServer2. That's important.
so textfile storage handler and having issues converting it to ORC. hmmm.
follow-ups.
1. w
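(for reference, the local-mode sanity check that worked was presumably
something along these lines, bypassing HiveServer2 entirely:

  $ hive -e "select * from bigtable" > /tmp/bigtable.out
)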
Ok, so here is some news:
I tried boosting HADOOP_HEAPSIZE to 8192,
I also set mapred.child.java.opts to 512M,
and it doesn't seem to have any effect.
--
I tried it using an ODBC driver => fails after a few minutes.
Using a local JDBC (beeline) => running forever without any error
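Concretely, what i changed (hoping these are the right knobs):

  # conf/hive-env.sh
  export HADOOP_HEAPSIZE=8192
  # and per-session:
  set mapred.child.java.opts=-Xmx512M;

(note: if i understand right, mapred.child.java.opts only sizes the MR child
JVMs, not HiveServer2 itself.)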
Ok, here are the problem(s). Thrift has frame size limits; thrift has to
buffer rows into memory.
Hive thrift has a heap size; it needs to be big in this case.
Your client needs a big heap size as well.
The way to do this query, if it is possible, may be turning the rows lateral,
potentially by treating it
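a sketch of that lateral idea, with a hypothetical key/value layout (attrs
as a map column, names made up):

  $ hive -e "
    select id, kv.key, kv.value
    from bigtable_kv
    lateral view explode(attrs) kv as key, value"

so each of the 15k columns becomes its own skinny row instead of one very
wide one.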
oh. thinking some more about this i forgot to ask some other basic
questions.
a) what storage format are you using for the table (text, sequence, rcfile,
orc or custom)? "show create table " would yield that.
b) what command is causing the stack trace?
my thinking here is rcfile and orc are co
thanks for the information. Up-to-date hive. Cluster on the smallish side.
And, well, it sure looks like a memory issue :) rather than an inherent hive
limitation, that is.
So. I can only speak as a user (ie. not a hive developer) but what i'd be
interested in knowing next is: is this via running hi
We are using Hive 0.12.0, but it doesn't work any better on hive 0.11.0 or
hive 0.10.0.
Our hadoop version is 1.1.2.
Our cluster is 1 master + 4 slaves, each with 1 dual-core xeon CPU (with
hyperthreading, so 4 cores per machine) + 16Gb RAM.
The error message i get is:
2014-01-29 12:41:09,086 ERROR
there's always a use case out there that stretches the imagination, isn't
there? gotta love it.
first things first: can you share the error message? the hive version? and
the number of nodes in your cluster?
then a couple of things come to mind. Might you consider pivoting the
data such th