From: na.kadiy...@gmail.com
Sent: Wednesday, October 29, 2014 9:45 AM
To: Pagliari, Roberto
Cc: user@spark.apache.org
Subject: Re: problem with start-slaves.sh

I see this when I start a worker and then try to start it again, forgetting it's already running (I don't use start-slaves; I start the slaves individually with start-slave.sh). All this is telling you is that there is already a running process on that machine. You can see it if you do a ps -aef|grep.
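
For reference, a minimal version of that check (plain shell; the kill-by-PID step is just one way to clear it, the bundled sbin/stop-slaves.sh script should also work):

    # look for a Worker already running on this machine
    ps -aef | grep org.apache.spark.deploy.worker.Worker

    # stop it before starting another one, either via the bundled script...
    ./sbin/stop-slaves.sh
    # ...or by killing the PID that ps reported
    kill <pid>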

I ran sbin/start-master.sh followed by sbin/start-slaves.sh (I built with the -Phive option to be able to interface with Hive).
I'm getting this:
ip_address: org.apache.spark.deploy.worker.Worker running as process . Stop it first.
Am I doing something wrong? In my specific case, shark+hive is running...
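
For concreteness, the sequence I'm running looks roughly like this (the Maven line is a sketch of the documented Hive-enabled build, not necessarily the exact command I used):

    # build Spark with Hive support
    mvn -Phive -DskipTests clean package

    # bring up the standalone cluster
    ./sbin/start-master.sh
    ./sbin/start-slaves.sh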