hi,
thanks for your answer!
I have a few more questions:
1) The file /root/spark/conf/slaves has the full DNS names of the servers
(e.g. ec2-52-26-7-137.us-west-2.compute.amazonaws.com). Did you put the
internal IPs there instead? (See the sketch after these questions.)
2) You call start-all. Isn't that too aggressive? Let's say I have 20
slaves up, and I want
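
To make question 1 concrete: as far as I understand, conf/slaves is just one
worker host per line, so today mine looks roughly like the first form below,
and I think you're suggesting the second (the internal IP is only an example,
taken from a log, not necessarily one of my machines):

    # current: public DNS names
    ec2-52-26-7-137.us-west-2.compute.amazonaws.com

    # suggested (?): internal IPs
    172.31.0.186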
hi
An update regarding that; hope it will get me some answers...
When I look at one of the worker logs (for one of its tasks), I can see the
following exception:
Exception in thread "main" akka.actor.ActorNotFound: Actor not found
for: ActorSelection[Anchor(akka.tcp://sparkDriver@172.31.0.186:38560/),
Path(/
And a last update on that -
The job itself seems to be working and generates output on S3, but it
reports itself as KILLED, and the history server can't find the logs.
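
For reference, as far as I understand the history server only picks up jobs
whose event logs were written to the directory it reads, so I assume these
are the settings that matter; a minimal sketch of the relevant
spark-defaults.conf entries (the log directory below is a placeholder, not my
actual path):

    # /root/spark/conf/spark-defaults.conf
    # write event logs from the application...
    spark.eventLog.enabled           true
    spark.eventLog.dir               hdfs:///spark-event-logs
    # ...and point the history server at the same directory
    spark.history.fs.logDirectory    hdfs:///spark-event-logs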
On Sun, Jun 14, 2015 at 3:55 PM, Nizan Grauer wrote:
> hi
>
> update regarding that, hope it will get me some answers
I have 30G per machine.
This is the first (and only) job I'm trying to submit, so it's weird that it
works with --total-executor-cores=20 but doesn't with
--total-executor-cores=4.
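
For completeness, the submit command looks roughly like this; the master URL,
class and jar below are placeholders, the cores flag is the only thing I'm
actually varying:

    /root/spark/bin/spark-submit \
      --master spark://<master-internal-ip>:7077 \
      --total-executor-cores=4 \
      --class <my.main.Class> \
      <path/to/my-job.jar>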
On Tue, Jun 23, 2015 at 10:46 PM, Igor Berman wrote:
> probably there are already running jobs there
> in addi