Could you try removing ATSHooks from hive-site.xml? It looks strange.
Thanks,
Navis
2014-07-08 18:51 GMT+09:00 jonas.partner :
> Hi Navis,
>
> after a run to the point where we are seeing exceptions we see the below
>
> num     #instances         #bytes  class name
> ----------------------------------------------
Hello again,
Some progress has been made on this issue. From initial testing, this patch
has fixed my problem. I had my cluster running all night, and memory usage is
floating around 700 MB; before, it would be > 1 GB and climbing.
https://issues.apache.org/jira/browse/HIVE-7353
-Benjamin
Could you try "jmap -histo:live <pid>" and check for Hive objects that seem
too numerous?
Thanks,
Navis
2014-07-07 22:22 GMT+09:00 jonas.partner :
> Hi Benjamin,
> Unfortunately this was a really critical issue for us and I didn’t think
> we would find a fix in time so we switched to generating a hive s
Hi Benjamin,
Unfortunately this was a really critical issue for us and I didn’t think we
would find a fix in time, so we switched to generating Hive scripts
programmatically and then running them via an Oozie action which uses the Hive CLI.
This seems to give a stable solution, although it is a lot
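For readers unfamiliar with that workaround, a minimal Oozie workflow using the Hive action might look like the sketch below. The script name, property placeholders, and workflow layout are assumptions for illustration, not details taken from this thread.

```xml
<workflow-app name="hive-script-wf" xmlns="uri:oozie:workflow:0.4">
  <start to="hive-node"/>
  <action name="hive-node">
    <hive xmlns="uri:oozie:hive-action:0.2">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <!-- generated script uploaded alongside the workflow -->
      <script>generated.q</script>
    </hive>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Hive script failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <end name="end"/>
</workflow-app>
```

Because each action launches the Hive CLI in its own process, any leaked memory is reclaimed when the action finishes, which sidesteps the long-running HiveServer2 growth described above.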
I believe I am having the same issue. Hive 0.13 and Hadoop 2.4. We had to
increase the Hive heap to 4 GB which allows Hive to function for about 2-3
days. After that point it has consumed the entire heap and becomes
unresponsive and/or throws OOM exceptions. We are using Beeline and
HiveServer
Hi Edward,
Thanks for the response. Sorry, I posted the wrong version. I also added
close() on the two result sets in the code taken from the wiki, as below, but
still see the same problem.
Will try to run it through your kit at the weekend. For the moment I switched
to running the statements as a s
Not saying there isn't a leak elsewhere, but Statement and ResultSet objects
both have .close().
Java 7 now allows you to auto-close:
try (Connection conn = ...; Statement st = conn.createStatement()) {
    // something
}
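As a self-contained sketch of why that pattern is safe (no Hive connection needed), the stand-in Resource class below demonstrates that try-with-resources closes resources in reverse declaration order, so declaring Connection, then Statement, then ResultSet closes the ResultSet first:

```java
import java.util.ArrayList;
import java.util.List;

public class AutoCloseDemo {
    // Records close order so we can observe it after the try block.
    static final List<String> closed = new ArrayList<>();

    // Stand-in for Connection/Statement/ResultSet: any AutoCloseable works.
    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    public static void main(String[] args) {
        try (Resource conn = new Resource("connection");
             Resource st = new Resource("statement");
             Resource rs = new Resource("resultset")) {
            // work with rs here; all three close automatically on exit
        }
        // Resources close in reverse declaration order
        System.out.println(closed); // → [resultset, statement, connection]
    }
}
```

The same ordering holds for real JDBC objects, which is exactly the manual close() sequence the wiki example needs, minus the boilerplate and the risk of skipping a close on an exception.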
On Thu, Jul 3, 2014 at 6:35 AM, jonas.partner wrote:
> We have been struggling to g
We have been struggling to get a reliable system working where we interact with
Hive over JDBC a lot. The pattern we see is that everything starts ok but the
memory used by the Hive server process grows over time and after some hundreds
of operations we start to see exceptions.
To ensure the