[
akka.remote.EndpointAssociationException: Association failed with
[akka.tcp://sparkExecutor@machine2:60949]
Caused by:
akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2:
Connection refused: machine2/130.49.226.148:60949
]
Port 48019 on machine2 is indeed open, connected, and listening. Any ideas?
Thanks!
Shannon
On 6/27/14, 1:54 AM, sujeetv wrote:
Try to explicitly set the "spark.driver.host" property to the master's IP.
Sujeet
Sorry, the master Spark URL in the web UI is *spark://192.168.1.101:5060*,
exactly as configured.
On 6/27/14, 9:07 AM, Shannon Quinn wrote:
I put the settings as you specified in spark-env.sh for the master.
When I run start-all.sh, the web UI shows both the worker on the
master (machine1) and the
I put the settings as you specified in spark-env.sh for the master. When
I run start-all.sh, the web UI shows both the worker on the master
(machine1) and the slave worker (machine2) as ALIVE and ready, with the
master URL at spark://192.168.1.101. However, when I run spark-submit,
it immediately…
Try to explicitly set the "spark.driver.host" property to the master's IP.
Sujeet
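For reference, a minimal sketch of how that property might be set, assuming the
192.168.1.101 master address mentioned elsewhere in this thread (adjust to your
actual master IP). One option is a line in conf/spark-defaults.conf on the
machine that runs the driver, which spark-submit reads by default (or point it
at another file with --properties-file):

spark.driver.host    192.168.1.101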
Hi Shannon,
How about a setting like the following? (just removed the quotes)
export SPARK_MASTER_IP=192.168.1.101
export SPARK_MASTER_PORT=5060
#export SPARK_LOCAL_IP=127.0.0.1
Not sure what's happening in your case; it could be that your system is not
able to bind to the 192.168.1.101 address. What…
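A quick sanity check here, assuming the iproute2 "ip" tool is available
(ifconfig shows the same information):

ip addr | grep 192.168.1.101    # on machine1: is the address on an interface?
ping -c 1 192.168.1.101         # from machine2: is it reachable over the LAN?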
In the interest of completeness, this is how I invoke spark:
[on master]
> sbin/start-all.sh
> spark-submit --py-files extra.py main.py
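For comparison, a variant of that invocation that pins the master URL
explicitly; this is just a sketch using the spark://192.168.1.101:5060 URL
reported by the web UI, not a command actually run in this thread:

spark-submit --master spark://192.168.1.101:5060 --py-files extra.py main.py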
iPhone'd
> On Jun 26, 2014, at 17:29, Shannon Quinn wrote:
>
> My *best guess* (please correct me if I'm wrong) is that the master
> (machine1) is sending t
My *best guess* (please correct me if I'm wrong) is that the master
(machine1) is sending the command to the worker (machine2) with the
localhost argument as-is; that is, machine2 isn't doing any weird
address conversion on its end.
Consequently, I've been focusing on the settings of the master…
export SPARK_MASTER_IP="192.168.1.101"
export SPARK_MASTER_PORT="5060"
export SPARK_LOCAL_IP="127.0.0.1"
That's it. If I comment out the SPARK_LOCAL_IP or set it to be the same
as SPARK_MASTER_IP, that's when it throws the "address already in use"
error. If I leave it as the localhost IP, that's…
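As an aside on the "address already in use" part: a generic check (not
specific to this thread) is to make sure no stale daemon is still holding the
master port before start-all.sh runs:

sbin/stop-all.sh    # stop any previously started master/workers
jps                 # look for lingering Master or Worker JVMs
lsof -i:5060        # anything else still bound to the master port?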
Can you paste your spark-env.sh file?
Thanks
Best Regards
On Thu, Jun 26, 2014 at 7:01 PM, Shannon Quinn wrote:
> Both /etc/hosts have each other's IP addresses in them. Telneting from
> machine2 to machine1 on port 5060 works just fine.
>
> Here's the output of lsof:
>
> user@machine1:~/spar
Both /etc/hosts have each other's IP addresses in them. Telneting from
machine2 to machine1 on port 5060 works just fine.
Here's the output of lsof:
user@machine1:~/spark/spark-1.0.0-bin-hadoop2$ lsof -i:5060
COMMAND   PID  USER   FD   TYPE  DEVICE  SIZE/OFF  NODE  NAME
java    23985  user   30u  I…
Do you have machine1 in your worker's /etc/hosts also? If so, try telnetting
from machine2 to machine1 on port 5060. Also make sure nothing else is running
on port 5060 other than Spark (*lsof -i:5060*).
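Concretely, those checks might look like this when run on machine2 (the
expected 192.168.1.101 address for machine1 is an assumption based on earlier
messages in the thread):

grep machine1 /etc/hosts    # expect something like: 192.168.1.101  machine1
telnet machine1 5060        # should connect if the master is listening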
Thanks
Best Regards
On Thu, Jun 26, 2014 at 6:35 PM, Shannon Quinn wrote:
>
Still running into the same problem. /etc/hosts on the master says

127.0.0.1    localhost

and the entry for machine1 is the same address set in spark-env.sh for
SPARK_MASTER_IP. Any other ideas?
On 6/26/14, 3:11 AM, Akhil Das wrote:
Hi Shannon,
It should be a configuration issue, check in your /
Hi Shannon,
It should be a configuration issue; check in your /etc/hosts and make sure
localhost is not associated with the SPARK_MASTER_IP you provided.
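In other words, a layout along these lines (using the 192.168.1.101 master
address from earlier in the thread) is what's being suggested:

127.0.0.1        localhost
192.168.1.101    machine1

rather than having machine1 (or 192.168.1.101) appear on the 127.0.0.1 line.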
Thanks
Best Regards
On Thu, Jun 26, 2014 at 6:37 AM, Shannon Quinn wrote:
> Hi all,
>
> I have a 2-machine Spark network I've set up: a ma
Hi all,
I have a 2-machine Spark network I've set up: a master and a worker on
machine1, and a worker on machine2. When I run 'sbin/start-all.sh',
everything starts up as it should. I see both workers listed on the UI
page. The logs of both workers indicate successful registration with the
Spark…
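For context, a standalone layout like the one described is normally driven by
conf/slaves on the master; a hedged sketch, assuming the two hostnames used
throughout this thread:

# conf/slaves on machine1 -- one worker host per line
machine1
machine2

sbin/start-all.sh then starts the master locally and launches a worker over
SSH on every host listed there.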