, "cluster")
.set("spark.yarn.stagingDir", "hdfs://localhost:9000/user/hadoop/")
.set("spark.shuffle.service.enabled", "false")
.set("spark.executor.memory", "500m")
.set("spark.executor.cores", "1")
.set("spark.yarn.nodemanager.resource.cpu-vcores", "4")
.set("spark.yarn.submit.file.replication", "1")
.set("spark.yarn.jars", "hdfs://localhost:9000/user/hadoop/davben/jars/*.jar")
When I check http://localhost:8088/cluster/apps/RUNNING I can see that
my job is submitted, but my terminal logs:
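When this warning appears, the first thing to rule out is an executor request that no node can satisfy. A minimal self-check of that arithmetic in plain Scala (no Spark needed; the 500m/1-core request matches the config above, while the node capacities here are hypothetical examples):

```scala
object ResourceFit {
  // True if a single executor of the requested size fits on a node
  // advertising the given capacity. Memory values are in MiB.
  def fits(execMemMiB: Int, execCores: Int,
           nodeMemMiB: Int, nodeCores: Int): Boolean =
    execMemMiB <= nodeMemMiB && execCores <= nodeCores

  def main(args: Array[String]): Unit = {
    // 500 MiB / 1 core as in the config above; node sizes are assumed.
    println(fits(500, 1, nodeMemMiB = 8192, nodeCores = 4)) // true: request fits
    println(fits(500, 8, nodeMemMiB = 8192, nodeCores = 4)) // false: too many cores
  }
}
```

If the request fits on paper but the warning persists, the cause is usually elsewhere (wrong master URL, blocked ports, or a saturated YARN queue, as later replies suggest).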
I would check the queue you are submitting the job to, assuming it is YARN...
On Tue, Sep 26, 2017 at 11:40 PM, JG Perrin wrote:
Hi,
I get the infamous:
Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient resources
I run the app via Eclipse, connecting:
SparkSession spark = SparkSession.builder()
    .appName("Converter - Benchmark")
Hi,
I am trying to connect to a new cluster I just set up.
And I get...
[Timer-0:WARN] Logging$class: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have sufficient
resources
I must have forgotten something really super obvious.
[Timer-0] WARN org.apache.spark.scheduler.TaskSchedulerImpl -
Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient resources.
object SparkPi {
  val sparkConf = new SparkConf()
    .setAppName("Spark Pi")
    .setMaster("spark://10.100.103.25:7077")
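A runnable skeleton around the fragment above could look like the following. This is a sketch only: it assumes Spark is on the classpath, reuses the master URL from the post, and fills in the body with the stock SparkPi Monte Carlo estimate, which may differ from the poster's actual code:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SparkPi {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf()
      .setAppName("Spark Pi")
      .setMaster("spark://10.100.103.25:7077") // standalone master from the post
    val sc = new SparkContext(sparkConf)

    // Classic Monte Carlo estimate: fraction of random points in the unit circle.
    val n = 100000
    val inCircle = sc.parallelize(1 to n).filter { _ =>
      val x = java.lang.Math.random() * 2 - 1
      val y = java.lang.Math.random() * 2 - 1
      x * x + y * y <= 1
    }.count()

    println(s"Pi is roughly ${4.0 * inCircle / n}")
    sc.stop()
  }
}
```

If the master URL or port here does not match exactly what the master's web UI shows under "URL", executors never register with the driver and this same warning appears.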
> When initial jobs have not accepted any resources, what can be wrong?
> Going through Stack Overflow and various blogs does not help. Maybe we
> need better logging for this? Adding dev
>
Did you take a look at the Spark UI to see your resource availability?
Thanks and Regards
Noorul
Hi All,
need your advice:
we see in some very rare cases the following error in the log:
Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient resources
and in the Spark UI there are idle workers and the application is in the
WAITING state
in json
16/08/17 01:04:34 WARN DomainSocketFactory: The short-circuit local reads
feature cannot be used because UNIX Domain sockets are not available on
Windows.
16/08/17 01:04:52 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources
running, but each task
sends the warning: "Initial job has not accepted any resources; check your
cluster UI to ensure that workers are registered and have sufficient
resources". At this time, I see in Mesos that all CPUs are used on node1:5050
and running forever until I kill a task.
My quest
master URL through the spark-submit command.
Thnx
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/spark-submit-hive-connection-through-spark-Initial-job-has-not-accepted-any-resources-tp24993p27074.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>> 15/12/16 10:22:01 WARN cluster.YarnScheduler: Initial job has not
>>> accepted any resources; check your cluster UI to ensure that workers are
>>> registered and have sufficient resources
That means you don't have resources for your application; please check your
Hadoop web UI.
below
15/12/16 10:22:01 WARN cluster.YarnScheduler: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and have
sufficient resources
15/12/16 10:22:04 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint:
ApplicationMaster has disassociated
> "count is " + df.count());
> }
> }
>
> command to submit the job: ./spark-submit --master spark://masterIp:7077
> --deploy-mode client --class com.ceg.spark.hive.sparkhive.SparkHiveInsertor
> --executor-cores 2 --executor-memory 1gb
> /home/someuser/Desktop/30sep2015/hivespark.j
Hi,
I am able to fetch data, create tables, and put data from the spark shell
(Scala command line) from Spark to Hive, but when I write Java code to do
the same and submit it through spark-submit I get "Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources"
What pool is the spark shell being put into? (You can see this through
the YARN UI under scheduler.)

Are you certain you're starting spark-shell up on YARN? By default it
uses a local Spark executor, so if it "just works" then it's because it's
not using dynamic allocation.

On Wed, Sep 23, 2015 at 18:04 Jonathan Kelly wrote:

> I'm running into a problem with YARN dynamicAllocation on Spark 1.5.0
> after using it successfully on an identically configured cluster with
> Spark 1.4.1.
>
> I'm getting the dreaded warning "YarnClusterScheduler: Initial job has not
> accepted any resources; check your cluster UI to ensure that workers are
> registered and have sufficient resources."
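One thing worth checking in the dynamic-allocation scenario above: Spark's dynamic allocation (in the 1.x line discussed here) also requires the external shuffle service on every NodeManager. A sketch of the usual pairing; property names are from the Spark configuration docs, values are illustrative, and this is not the poster's actual config:

```scala
import org.apache.spark.SparkConf

// Dynamic allocation needs the external shuffle service; without it
// executors may never be granted and the "Initial job has not accepted
// any resources" warning repeats. Values are illustrative.
val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true") // service itself runs on each NodeManager
  .set("spark.dynamicAllocation.minExecutors", "1")
  .set("spark.dynamicAllocation.maxExecutors", "10")
```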
x. If not set, applications always get all available cores
unless they configure
spark.cores.max themselves. Set this lower on a shared cluster to prevent
users from grabbing
the whole cluster by default.
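Following the quoted docs, a shared standalone cluster usually wants `spark.cores.max` set per application (or a default on the master) so the first submission does not grab every core. A sketch with illustrative values:

```scala
import org.apache.spark.SparkConf

// Cap this application at 8 cores so a second application submitted to the
// same standalone master still receives resource offers. Value is illustrative.
val conf = new SparkConf()
  .setAppName("app-one")
  .set("spark.cores.max", "8")
```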
Hi,
I'm running Spark Standalone on a single node with 16 cores. The master and 4
workers are running.
I'm trying to submit two applications via spark-submit and am getting the
following error when submitting the second one: "Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources"
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have sufficient
memory
Ultimately the job runs successfully in most cases, but I feel like this
error has a significant effect on the overall execution time of the job,
which I try
Hi mehrdad,
I seem to have the same issue as you wrote about here. Did you manage to
resolve it?
, vdiwakar.malladi <vdiwakar.mall...@gmail.com> wrote:
Hi,
When I try to execute the program from my laptop by connecting to the HDP
environment (on which Spark is also configured), I get the warning
("Initial job has not accepted any resources; check your cluster UI to
ensure that workers are registered and have sufficient memory")
-SNAPSHOT-hadoop2.0.0-mr1-cdh4.2.1.jar The queue `dt_spark` was free, and the
program was submitted successfully and running on the cluster. But the console
showed repeatedly:
14/11/18 15:11:48 WARN YarnClientClusterScheduler: Initial job has not accepted
any resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
Besides the host1 question, what can also happen is that you give the worker
more memory than is available (to be sure, try a value 1G below the available
memory, for example).
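The memory headroom advice above can be made concrete. On YARN, each executor container asks for the executor memory plus an overhead, by default the larger of 384 MiB and 10% of the executor memory (per the Spark configuration docs), so a heap that fits on paper can still overflow the node or container limit. Plain Scala, no Spark required:

```scala
object ContainerSize {
  // YARN container request for one executor: heap plus overhead,
  // where overhead defaults to max(384 MiB, 10% of executor memory).
  def containerMiB(executorMemMiB: Int): Int =
    executorMemMiB + math.max(384, executorMemMiB / 10)

  def main(args: Array[String]): Unit = {
    println(containerMiB(500))  // 884: a "500m" executor really requests 884 MiB
    println(containerMiB(8192)) // 9011: an 8 GiB heap needs a ~8.8 GiB container
  }
}
```

If `yarn.scheduler.maximum-allocation-mb` is below the computed container size, the request can never be granted and the warning repeats indefinitely.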
host2, but I get this:
14/10/14 21:54:23 WARN TaskSchedulerImpl: Initial job has not accepted
any resources; check your cluster UI to ensure that workers are
registered and have sufficient memory
And it repeats again and again.
How can I fix this?
Best Regards
Theo
plenty of free RAM. The driver/master node will run and do its data loading
and processing, but the executors never start up, attach, or connect to do
the real work.
machines, and calling
> rdd.count(); but Spark never managed to complete the job, giving messages
> like the following: WARN TaskSchedulerImpl: Initial job has not accepted any
> resources; check your cluster UI to ensure that workers are registered and
> have sufficient memory
>
> I
14/08/07 17:15:18 INFO TaskSchedulerImpl: Adding task set 0.0 with 38 tasks
14/08/07 17:15:33 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
14/08/07 17:15:48 WARN TaskSchedulerImpl: Initial
Solution: opened all ports on the EC2 machine that the driver was running on.
Still need to narrow down which ports Akka wants... but the issue is solved.
1 tasks to pool default
2014-07-25 01:25:09,616 [Thread-2] DEBUG
falkonry.commons.service.ServiceHandler - Listening...
61847 [Timer-0] WARN org.apache.spark.scheduler.TaskSchedulerImpl -
Initial job has not accepted any resources; check your cluster UI to
ensure that workers are registered and have sufficient memory
It seems like the "Initial job has not accepted any resources" warning shows
up for a wide variety of different errors: for example the obvious one
where you've requested more memory than is available, but also, for
example, in the case where the worker nodes do not have the
appropriate
The driver shows repeatedly:
14/06/25 04:46:29 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
Looks like it's either a bug or misinformation. Can someone confirm this so I
can submit a JIRA?
--
1.0 branch. Maybe this makes my problem(s) worse,
but I am going to give it a try. I am rapidly running out of time to get our
code fully working on EC2.
ration. Any help would be greatly appreciated.
nt example, the execution hangs at the step "reduceByKey" and prints the
warning:
14/04/11 21:29:47 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
14/04/11 21:30:02 WARN TaskSchedulerImpl: Initial job has not accepted any
re
63 matches