Thanks for your response.
I gave the correct master URL. Moreover, as I mentioned in my post, I was
able to run the sample program using spark-submit, but it does not work when
I run it from my machine. Any clue on this?
Thanks in advance.
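For reference, a minimal sketch of what such a driver run directly from a
laptop might look like against a standalone master (Spark 1.x Scala API; the
master URL and resource sizes are placeholders, not taken from this thread).
The point is that spark.executor.memory and spark.cores.max must fit within
what the workers actually offer, or the scheduler keeps printing this warning:

    import org.apache.spark.{SparkConf, SparkContext}

    object RemoteDriverSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("RemoteDriverSketch")
          // placeholder; must match the URL shown at the top of the master's web UI
          .setMaster("spark://master-host:7077")
          // keep these within what each worker reports on the cluster UI
          .set("spark.executor.memory", "1g")
          .set("spark.cores.max", "2")
        val sc = new SparkContext(conf)
        // trivial job: if this hangs with the warning, resources were never granted
        println(sc.parallelize(1 to 100).sum())
        sc.stop()
      }
    }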
>> Hi,
>>
>> When I try to execute the program from my laptop by connecting to the HDP
>> environment (on which Spark is also configured), I get the warning
>> ("Initial job has not accepted any resources; check your cluster UI to
>> ensure that workers are registered and have sufficient memory").
>
> ... Executor updated: app-20141124023636-0004/2 is now RUNNING
> 14/11/24 16:07:10 INFO client.AppClient$ClientActor: Executor updated:
> app-20141124023636-0004/3 is now RUNNING
> 14/11/24 16:07:24 WARN scheduler.TaskSchedulerImpl: Initial job has not
> accepted any resources; check your cluster UI to ensure that workers are
> registered and have sufficient memory
Hi,

When I try to execute the program from my laptop by connecting to the HDP
environment (on which Spark is also configured), I get the warning
("Initial job has not accepted any resources; check your cluster UI to
ensure that workers are registered and have sufficient memory").
I ran into this issue with Spark on YARN, version 1.0.2. Are there any
hints?
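For Spark 1.0.x on YARN the resource requests go on spark-submit; a sketch
(class name, jar path, and sizes are illustrative only):

    ./bin/spark-submit \
      --master yarn-client \
      --num-executors 4 \
      --executor-memory 2g \
      --executor-cores 2 \
      --class example.App \
      /path/to/app.jar

If YARN cannot allocate containers of the requested size, the executors never
register and the same warning appears.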
Try using --executor-memory 12g with spark-submit. Or you can set it
in conf/spark-defaults.conf and rsync it to all workers, then
restart. -Xiangrui
On Fri, Jun 27, 2014 at 1:05 PM, Peng Cheng wrote:
> I give up; communication must be blocked by the complex EC2 network topology
> (though the error message indeed needs some improvement). ...
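For reference, the two forms of that suggestion as a sketch (the 12g figure
is from the advice above; the class name and jar path are placeholders):

    # one-off, on the command line
    ./bin/spark-submit --executor-memory 12g --class example.App /path/to/app.jar

    # or persistently, in conf/spark-defaults.conf
    spark.executor.memory   12g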
I give up; communication must be blocked by the complex EC2 network topology
(though the error message indeed needs some improvement). It doesn't make
sense to run a client thousands of miles away that communicates frequently
with the workers. I have moved everything to EC2 now.
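A plausible mechanism here: executors open connections back to the driver, so
a driver behind NAT or a firewall is never able to hand out tasks. For anyone
retrying a remote driver, Spark lets you pin the driver's address and port so
they can be reached and firewalled; a sketch with placeholder values:

    # conf/spark-defaults.conf on the submitting machine (values are placeholders)
    # an address the workers can actually resolve and reach:
    spark.driver.host    laptop.example.com
    # fixed port, so it can be opened in the firewall:
    spark.driver.port    51000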
Expanded to 4 nodes and changed the workers to listen on their public DNS
names, but it still shows the same error (which is obviously wrong). I can't
believe I'm the first to encounter this issue.
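For reference, "listen on public DNS" in standalone mode is normally set per
worker in conf/spark-env.sh; a sketch (the hostname is a placeholder):

    # conf/spark-env.sh on each worker
    export SPARK_PUBLIC_DNS=ec2-203-0-113-10.compute-1.amazonaws.com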
The driver shows repeatedly:
14/06/25 04:46:29 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
Looks like it's either a bug or misinformation. Can someone confirm this so I
can submit a JIRA?
DAGScheduler: Submitting 4 missing tasks from Stage 0 (MappedRDD[1] at
textFile at <console>:12)
YarnClientClusterScheduler: Adding task set 0.0 with 4 tasks
WARN YarnClientClusterScheduler: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
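One way to check whether any executors actually registered before filing a
JIRA: count them from the driver (a sketch against the Spark 1.x Scala API,
e.g. in spark-shell where sc is the live SparkContext; note that
getExecutorStorageStatus includes the driver itself, hence the minus one):

    // 0 registered executors means the cluster never delivered any containers/workers
    val executorCount = sc.getExecutorStorageStatus.length - 1
    println(s"registered executors: $executorCount")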