Thank you, Anthony. I am clearer on yarn-cluster and yarn-client now.
On Fri, May 6, 2016 at 1:05 PM, Anthony May wrote:
> Making the master yarn-cluster means that the driver is then running on
> YARN not just the executor nodes. It's then independent of your application
> and can only be killed via YARN commands, or if it's batch and completes.
Making the master yarn-cluster means that the driver is then running on
YARN not just the executor nodes. It's then independent of your application
and can only be killed via YARN commands, or if it's batch and completes.
The simplest way to tie the driver to your app is to pass in yarn-client as
master.
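
For example, a yarn-cluster driver can be stopped with the YARN CLI (the
application id below is just a placeholder):

    # Find the application's id among the running YARN applications
    yarn application -list

    # Kill the application; the driver running on YARN dies with it
    yarn application -kill application_1462000000000_0001

With yarn-client the driver stays inside your own process, so it goes away
when your application does.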
Hi Anthony,
I am passing
--master
yarn-cluster
--name
pysparkexample
--executor-memory
1G
--driver-memory
1G
--conf
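
Put together, that is roughly the following spark-submit invocation (the
--conf value and the application itself were cut off above, so placeholders
stand in for them):

    spark-submit \
      --master yarn-cluster \
      --name pysparkexample \
      --executor-memory 1G \
      --driver-memory 1G \
      --conf <key>=<value> \
      <your-application>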
Greetings Satish,
What are the arguments you're passing in?
On Fri, 6 May 2016 at 12:50 satish saley wrote:
> Hello,
>
> I am submitting a spark job using SparkSubmit. When I kill my application,
> it does not kill the corresponding spark job. How would I kill the
> corresponding spark job? I know one way is to use SparkSubmit again with
> appropriate options.
Hello,
I am submitting a spark job using SparkSubmit. When I kill my application,
it does not kill the corresponding spark job. How would I kill the
corresponding spark job? I know one way is to use SparkSubmit again with
appropriate options. Is there any way through which I can tell SparkSubmit
to kill the spark job along with my application?
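
For standalone or Mesos cluster deployments, that "SparkSubmit again"
route looks roughly like this (it does not apply to YARN, and the
submission id and master URL are placeholders):

    # Kill a driver that spark-submit previously launched in cluster mode
    spark-submit --master spark://master-host:6066 \
      --kill driver-20160506131415-0001

    # Check the driver's state
    spark-submit --master spark://master-host:6066 \
      --status driver-20160506131415-0001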