On Tue, Dec 7, 2010 at 3:18 PM, Tali K <ncherr...@hotmail.com> wrote:
> 1) When I cancel a hive job with Ctrl-C, I notice that java/hive processes
> still run on some of my nodes.
> I shut down hadoop and restarted it, but noticed that 2 or 3 java/hadoop
> processes were still running on each node.
> So we went to each node and did a 'killall java' - in some cases I had to do
> 'killall -9 java'.
> My question: why is this happening, and what would be the recommended way
> to make sure that there are no hadoop/hive processes running after I
> stop hadoop with stop-all.sh?
>
> PS: The reason I needed to Ctrl-C the hive process in the first place was:
> if I ran hive -e 'select ...', the job would finish, the result file would
> be created, and I would see 'OK' on the screen for 7-10 minutes before it
> actually gave me a prompt back.
> Why is this happening?

When you run a Hive query, the CLI launches one or more map-reduce
jobs, sometimes in parallel, sometimes in series. If you exit the
CLI, the query will usually fail eventually, but parts of it may
keep running on the cluster.

When you launch a Hive job, the CLI clearly prints the "job kill URL"
for each stage. Each stage may have a different kill URL; if you visit
that URL you will kill the job.
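
For example, the output for each stage looks roughly like this (the job
id, host, and install path below are placeholders, and the exact format
varies between Hive/Hadoop versions):

    Starting Job = job_201012071530_0042, Tracking URL =
      http://jobtracker-host:50030/jobdetails.jsp?jobid=job_201012071530_0042
    Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_201012071530_0042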

If you want to stop jobs, use 'hadoop job -kill <job id>', or use the
JobTracker web UI. Only in extreme cases should you ever have to kill
a task attempt locally with kill.
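
A minimal command-line sketch (the job id below is only a placeholder;
take the real id from the output of 'hadoop job -list' or from the Hive
CLI output):

    # list jobs currently known to the JobTracker
    hadoop job -list

    # kill a specific job by id
    hadoop job -kill job_201012071530_0042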

In the near future the behavior of Ctrl-C will change; see
https://issues.apache.org/jira/browse/HIVE-1784

Killing jobs is described in the Hadoop documentation.
