Thanks, I missed that one.
From: Marcelo Vanzin [mailto:van...@cloudera.com]
Sent: Tuesday, October 13, 2015 2:36 PM
To: Ellafi, Saif A.
Cc: user@spark.apache.org
Subject: Re: Spark shuffle service does not work in stand alone
You have to manually start the shuffle service if you're not running under YARN.
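For a standalone cluster, a minimal sketch of what that configuration looks like (the property names are standard Spark configuration keys; the comments and layout are mine, not from this thread) — set this on every worker machine and restart the workers, or launch the bundled standalone shuffle service with sbin/start-shuffle-service.sh if your distribution ships it:

```
# conf/spark-defaults.conf on each worker (sketch, not verbatim from
# this thread; restart the workers after changing it)
spark.shuffle.service.enabled   true
# 7337 is the documented default port; every executor host must be
# able to reach this port on the worker machines
spark.shuffle.service.port      7337
```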
From: saif.a.ell...@wellsfargo.com [mailto:saif.a.ell...@wellsfargo.com]
Sent: Tuesday, October 13, 2015 2:25 PM
To: van...@cloudera.com
Cc: user@spark.apache.org
Subject: RE: Spark shuffle service does not work in stand alone
Hi, thanks
Executors are simply failing to connect to a shuffle server:
15/10/13 08:29:34 INFO
From: Marcelo Vanzin [mailto:van...@cloudera.com]
Sent: Tuesday, October 13, 2015 1:13 PM
To: Ellafi, Saif A.
Cc: user@spark.apache.org
Subject: Re: Spark shuffle service does not work in stand alone
It would probably be more helpful if you looked for the executor error and
posted it. The screenshot you posted is the driver exception caused by the
task failure, which is not terribly useful.
On Tue, Oct 13, 2015 at 7:23 AM, saif.a.ell...@wellsfargo.com wrote:
> Has anyone tried shuffle service in Stand Alone cluster mode?
Hi,
AFAIK, the shuffle service makes sense only to delegate the shuffle to
mapreduce (as the mapreduce shuffle is most of the time faster than the
spark shuffle).
As you run in standalone mode, the shuffle service will use the spark shuffle.
Not 100% sure, though.
Regards
JB
On 10/13/2015 04:23 PM, saif.a.ell...@wellsfargo.com wrote:
Has anyone tried shuffle service in Stand Alone cluster mode? I want to enable
it for dynamic allocation, but my jobs never start when I submit them.
This happens with all my jobs.
15/10/13 08:29:45 INFO DAGScheduler: Job 0 failed: json at DataLoader.scala:86,
took 16.318615 s
Exception in thread "main" org.
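For reference, a hedged sketch of the setup being attempted in this thread, assuming the goal is dynamic allocation (which requires the external shuffle service so that executors can be removed without losing shuffle files); the executor bounds below are illustrative values, not taken from the thread:

```
# conf/spark-defaults.conf (sketch; property names are standard Spark
# configuration keys, values are illustrative)
spark.dynamicAllocation.enabled       true
# dynamic allocation requires the external shuffle service
spark.shuffle.service.enabled         true
# illustrative bounds, not from this thread
spark.dynamicAllocation.minExecutors  1
spark.dynamicAllocation.maxExecutors  10
```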