> When the initial job has not accepted any resources, what all can be
> wrong? Going through Stack Overflow and various blogs does not help. Maybe
> we need better logging for this? Adding dev
>
Did you take a look at the Spark UI to see your resource availability?
Thanks and Regards
Noorul
Besides the host1 question, what can also happen is that you give the worker
more memory than is available (to be safe, try a value 1 GB below the
available memory, for example).
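For example, on a machine with 8 GB of physical RAM, that advice would mean
capping the worker at 7 GB in conf/spark-env.sh on the worker node (the value
here is purely illustrative):

$ echo 'export SPARK_WORKER_MEMORY=7g' >> conf/spark-env.sh

(then restart the worker so it picks the new value up)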
Is this the Spark URI (spark://host1:7077) that you are seeing in your
cluster's web UI (http://master-host:8080), top left side of the page?
Thanks
Best Regards
On Wed, Oct 15, 2014 at 12:18 PM, Theodore Si wrote:
Can anyone help me, please?
On 10/14/2014 9:58 PM, Theodore Si wrote:
Hi all,
I have two nodes, one as the master (*host1*) and the other as a worker
(*host2*). I am using standalone mode.
After starting the master on host1, I run
$ export MASTER=spark://host1:7077
$ bin/run-example SparkPi 10
on hos
Just as Marcelo Vanzin said, there are two possible reasons for this problem.
I solved reason 2 several days ago.
My process is: ssh to one of the worker nodes, read its log output, and find
a line that says
"Remoting started"
After that line there should be some lines like "connecting to x".
MAKE SURE
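A quick way to do that check, assuming a default standalone install where the
worker writes its logs under $SPARK_HOME/logs (paths may differ on your
setup):

$ ssh host2
$ grep -n -A3 "Remoting started" $SPARK_HOME/logs/spark-*-org.apache.spark.deploy.worker.Worker-*.out

The lines following the match show which address the worker is trying to
connect to; a wrong or unreachable address there points at the connectivity
case.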
I see this error too. I have never found a fix and I've been working on this
for a few months.
For me, I have 4 nodes with 46GB and 8 cores each. If I change the executor
to use 8GB, it fails. If I use 6GB, it works. I request 2 cores only. On
another cluster, I have different limits. My workloa
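For reference, the kind of submission being described would look something
like this in standalone mode (the class and jar names are placeholders, not
from the original message):

$ spark-submit --master spark://host1:7077 \
    --executor-memory 6g \
    --total-executor-cores 2 \
    --class MyApp myapp.jar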
There are two problems that might be happening:
- You're requesting more resources than the master has available, so
your executors are not starting. Given your explanation this doesn't
seem to be the case.
- The executors are starting, but are having problems connecting back
to the driver. In th
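If it's the second case, one thing that sometimes helps (a sketch, assuming
Spark 1.0+ where spark-submit reads conf/spark-defaults.conf) is pinning the
address and port the driver advertises, so executors have a reachable target
to connect back to:

# conf/spark-defaults.conf on the machine running the driver
spark.driver.host   <an-ip-the-workers-can-reach>
spark.driver.port   51000

Both spark.driver.host and spark.driver.port are standard Spark properties;
the port value here is arbitrary, and the host placeholder has to be filled
in for your network.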
Solution: I opened all ports on the EC2 machine that the driver was running
on. I still need to narrow down which ports Akka wants... but the issue is
solved.
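A less drastic alternative (a sketch; the port numbers are arbitrary, and the
exact set of port properties depends on the Spark version) is to pin the
randomly chosen ports to fixed values and open only those in the EC2 security
group:

# conf/spark-defaults.conf
spark.driver.port        51000
spark.blockManager.port  51100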
Thanks for the reply.
The UI says my application has cores and memory:

ID:               app-20140725164107-0001
Name:             SectionsAndSeamsPipeline
Cores:            6
Memory per Node:  512.0 MB
Submitted Time:   2014/07/25 16:41:07
User:             tercel
State:            RUNNING
Duration:         21 s
Since it appears Breeze is going to be included by default in Spark 1.0, and
I ran into the issue here:
http://apache-spark-user-list.1001560.n3.nabble.com/ClassNotFoundException-td5182.html
and since it seems like the issues I had were recently introduced, I am
cloning Spark and checking out the 1.0
Hi Jeremy,
I am running the most recent release, 0.9. I just fixed the problem, and it
was indeed a matter of correctly setting variables in deployment.
Once I had the cluster I wanted running, I began to suspect that the master
was not responding. So I killed a worker, then recreated it, and found it cou
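For anyone hitting the same thing: the deployment variables in question
usually live in conf/spark-env.sh on each node. A sketch for the 0.9-era
standalone scripts (the values are examples, not recommendations):

$ cat conf/spark-env.sh
export SPARK_MASTER_IP=host1      # address the master binds to and advertises
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_MEMORY=4g     # keep this below the machine's physical RAM
export SPARK_WORKER_CORES=8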
Hey Pedro,
From which version of Spark were you running the spark-ec2.py script? You
might have run into the problem described here
(http://apache-spark-user-list.1001560.n3.nabble.com/spark-ec2-error-td5323.html),
which Patrick just fixed up to ensure backwards compatibility.
With the bug, it w