hi,
A simple way to recreate the problem -
I have two server installations, one with Spark 1.3.1 and one with Spark 1.4.0.
I ran the following on both servers:
[root@ip-172-31-6-108 ~]$ spark/bin/spark-shell --total-executor-cores 1
scala> val text = sc.textFile("hdfs:///some-file.txt");
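To be concrete, any action after that is enough to trigger a task on the executor; something like the following (count() here is just a stand-in, not the exact command I ran):

scala> text.count()   // any action works; it just has to launch a job on the single executor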
hi
An update regarding this, hoping it will get me some answers...
When I look at one of the workers' logs (for one of its tasks), I can see the following exception:
Exception in thread "main" akka.actor.ActorNotFound: Actor not found
for: ActorSelection[Anchor(akka.tcp://sparkDriver@172.31.0.186:38560/),
Path(/
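As far as I understand, the address in that error is the driver's actor endpoint, which comes from spark.driver.host and spark.driver.port. A minimal sketch of pinning them in conf/spark-defaults.conf, in case the random default port is part of the problem (the values below are illustrative, copied from the error above):

spark.driver.host  172.31.0.186
spark.driver.port  38560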
And a last update on this -
The job itself seems to be working and generates output on S3; it just reports itself as KILLED, and the history server can't find the logs.
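In case it matters: as far as I know, the history server can only find a job's logs when event logging is enabled and both sides point at the same directory, e.g. in conf/spark-defaults.conf (the HDFS path below is illustrative, not our actual one):

spark.eventLog.enabled  true
spark.eventLog.dir      hdfs:///spark-events

with the history server reading the same location via spark.history.fs.logDirectory.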
hi,
I have a running, working cluster with Spark 1.3.1, and I tried to install a new cluster running Spark 1.4.0.
I ran a job on the new 1.4.0 cluster and the same job on the old 1.3.1 cluster.
After the job finished (on both clusters), I opened the job in the UI, and in the new