Excellent explanation, Rajesh! Thanks for making it clear.
Ranjan
-Original Message-
From: Rajesh Balamohan [mailto:rajesh.balamo...@gmail.com]
Sent: Saturday, November 26, 2016 12:16 AM
To: dev@hive.apache.org
Cc: dev-h...@hive.apache.org
Subject: Re: Oversized container estimation
If
> will not get the vcore at the same time to process the task.
>
> Thanks for the help!!
>
> Ranjan
>
> -Original Message-
> From: Rajesh Balamohan [mailto:rajesh.balamo...@gmail.com]
> Sent: Friday, November 25, 2016 5:40 PM
> To: dev@hive.apache.org
>
From: Rajesh Balamohan [mailto:rajesh.balamo...@gmail.com]
Sent: Friday, November 25, 2016 5:40 PM
To: dev@hive.apache.org
Cc: dev-h...@hive.apache.org
Subject: Re: Oversized container estimation
Those are cumulative figures at the DAG level. You may want to check the GC
logs emitted at the task level to see whether the complete memory
is actually used or not. Not sure what the yarn-min container size is in your
cluster, but based on that you may run into the risk of running too
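
As a rough sketch (the numbers are only illustrative, and sizing -Xmx to roughly
80% of the container is just a common rule of thumb, not a hard rule), the
task-level check could look like this in the Hive session:

  -- request a 1.5GB Tez container and turn on GC logging for the task JVMs
  set hive.tez.container.size=1536;
  set hive.tez.java.opts=-Xmx1228m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps;

YARN rounds the container request up based on yarn.scheduler.minimum-allocation-mb,
and the GC output lands in the per-task container logs, so you can see the actual
heap usage per task rather than the DAG-level counters.
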
Hi everyone,
I have a cluster where each container is configured at 4GB, and some of my
queries finish in 30 to 40 seconds. This leads me to believe that I have too
much memory allocated to my containers, so I am thinking of reducing the
container size to 1.5GB (hive.tez.container.size), but I am