Have you looked at the slave machine to see if the process has
actually launched? If it has, have you tried peeking into its log
file?

(That error is printed whenever the executors fail to report back to
the driver. Insufficient resources to launch the executor is the most
common cause of that, but not the only one.)
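For what it's worth, here's roughly how I'd check that on a standalone
cluster (the master URL, SPARK_HOME, and jar name below are placeholders,
not your actual values):

```shell
# On the slave machine: confirm an executor JVM actually started.
# In standalone mode the executor process is CoarseGrainedExecutorBackend.
jps -l | grep -i executor

# Peek at the most recent executor stderr under the worker's work
# directory (app id and paths will differ on your machine).
tail -n 50 $SPARK_HOME/work/app-*/*/stderr

# When submitting, request less than the worker advertises, e.g.:
$SPARK_HOME/bin/spark-submit \
  --master spark://master-host:7077 \
  --executor-memory 5g \
  --total-executor-cores 2 \
  your-app.jar
```

If jps shows no executor process at all, the worker log (under
$SPARK_HOME/logs on the slave) is the next place to look.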

On Tue, Jul 15, 2014 at 2:43 PM, Matt Work Coarr
<mattcoarr.w...@gmail.com> wrote:
> Hello spark folks,
>
> I have a simple spark cluster setup but I can't get jobs to run on it.  I am
> using the standalone mode.
>
> One master, one slave.  Both machines have 32GB ram and 8 cores.
>
> The slave is setup with one worker that has 8 cores and 24GB memory
> allocated.
>
> My application requires 2 cores and 5GB of memory.
>
> However, I'm getting the following error:
>
> WARN TaskSchedulerImpl: Initial job has not accepted any resources; check
> your cluster UI to ensure that workers are registered and have sufficient
> memory
>
>
> What else should I check for?
>
> This is a simplified setup (the real cluster has 20 nodes).  In this
> simplified setup I am running the master and the slave manually.  The
> master's web page shows the worker and it shows the application and the
> memory/core requirements match what I mentioned above.
>
> I also tried running the SparkPi example via bin/run-example and get the
> same result.  It requires 8 cores and 512MB of memory, which is also clearly
> within the limits of the available worker.
>
> Any ideas would be greatly appreciated!!
>
> Matt



-- 
Marcelo
