Which pool is spark-shell being put into? (You can see this in the YARN UI,
under the Scheduler page.)

Are you certain you're starting spark-shell on YARN? By default it uses a
local Spark executor, so if it "just works", that's because it isn't using
dynamic allocation at all.
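
A quick way to verify is to launch the shell with an explicit master and
check what the SparkContext reports (a sketch, assuming Spark 1.5's
yarn-client master syntax):

    $ spark-shell --master yarn-client
    scala> sc.master  // should print "yarn-client"; "local[*]" means local mode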

On Wed, Sep 23, 2015 at 18:04 Jonathan Kelly <jonathaka...@gmail.com> wrote:

> I'm running into a problem with YARN dynamicAllocation on Spark 1.5.0
> after using it successfully on an identically configured cluster with Spark
> 1.4.1.
>
> I'm getting the dreaded warning "YarnClusterScheduler: Initial job has not
> accepted any resources; check your cluster UI to ensure that workers are
> registered and have sufficient resources", though there's nothing else
> running on my cluster, and the nodes should have plenty of resources to run
> my application.
>
> Here are the applicable properties in spark-defaults.conf:
> spark.dynamicAllocation.enabled  true
> spark.dynamicAllocation.minExecutors 1
> spark.shuffle.service.enabled true
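>
> (In case it matters: the shuffle service is also registered as a
> NodeManager aux-service in yarn-site.xml. Sketching the usual entries
> here; the exact values are assumed from the standard spark_shuffle setup:
>
>   <property>
>     <name>yarn.nodemanager.aux-services</name>
>     <value>mapreduce_shuffle,spark_shuffle</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
>     <value>org.apache.spark.network.yarn.YarnShuffleService</value>
>   </property>
> )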
>
> When trying out my example application (just the JavaWordCount example
> that ships with Spark), I had not actually set spark.executor.memory or
> any CPU core-related properties, but setting spark.executor.memory to a
> low value like 64m doesn't help either.
>
> I've tried both a 5-node cluster and a 1-node cluster of m3.xlarges, so
> each node has 15.0 GB of memory and 4 cores.
>
> I've also tried both yarn-cluster and yarn-client mode and get the same
> behavior in both, except that in yarn-client mode the application never
> even shows up in the YARN ResourceManager. However, spark-shell seems to
> work just fine (when I run commands, it starts up executors dynamically
> as expected), which makes no sense to me.
>
> What settings/logs should I look at to debug this, and what more
> information can I provide? Your help would be very much appreciated!
>
> Thanks,
> Jonathan
>
