Ah I see, I had missed the fact that each MR job has an ApplicationMaster that takes a container of its own, so there were none left free to run the mappers (my jobs usually have only one mapper due to small input data). I understood that thanks to your explanations; using more nodes with greater concurrency should avoid the problem.
You mentioned you only have one NodeManager.
So, is Hive generating 3 MapReduce jobs? And how many map and reduce tasks does each job have?
What is your yarn.nodemanager.resource.memory-mb? That, divided by the per-container memory request, determines the maximum number of containers you can run.
You are running into an issue where all the available containers are taken by the ApplicationMasters of the parallel jobs, so none are left for their map tasks and every job hangs waiting for a mapper container that can never be allocated.
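
For illustration, here is how those settings interact; the numbers below are made-up examples, not values taken from your cluster:

  <!-- yarn-site.xml: total memory the NodeManager offers for containers -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>6144</value>
  </property>

  <!-- mapred-site.xml: per-container memory requests -->
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>2048</value>  <!-- each MR ApplicationMaster container -->
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>  <!-- each map task container -->
  </property>

With these example numbers the single NodeManager can host 6144 / 2048 = 3 containers at a time. Three parallel jobs start three ApplicationMasters, which fill all 3 slots and leave nothing for their map tasks, so every job waits forever for a mapper container.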
Is there a known deadlock issue or bug when using Hive parallel execution with more parallel Hive threads than there are computing NodeManagers?
On my test cluster, I have set Hive parallel execution to 2 or 3 threads, and have only 1 computing NodeManager with 5 CPU cores.
When I run a Hive request, the jobs hang and never finish.
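
For reference, I mean settings of this form (in hive-site.xml or via SET in the session; the value 3 below is just one of the counts I tried):

  <property>
    <name>hive.exec.parallel</name>
    <value>true</value>  <!-- run independent stages of a query concurrently -->
  </property>
  <property>
    <name>hive.exec.parallel.thread.number</name>
    <value>3</value>  <!-- up to 3 MapReduce jobs submitted at once -->
  </property>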