Dear,

I have run into a strange issue and I am not sure whether it is a Spark
usage limitation or a configuration issue.

I am running Spark 1.5.1 in standalone mode, with only one node in the
cluster. All services are running fine. When I visit the Spark web UI, the
Spark URL is spark://c910f04x12.pok.test.com:7077, and with that URL I can
successfully submit jobs and start a remote Spark shell. However, when I
change the long hostname to the IP address (spark://10.4.12.1:7077), the
submission fails. Checking the master log, the failure is caused by a
message dropped by Akka:

16/02/22 10:55:06 ERROR ErrorMonitor: dropping message [class
akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://
sparkMaster@10.4.12.1:7077/]] arriving at [akka.tcp://
sparkMaster@10.4.12.1:7077] inbound addresses are [akka.tcp://
sparkmas...@c910f04x12.pok.test.com:7077]
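For context, these are the kinds of commands I am running (the application
jar and class names here are just placeholders):

```shell
# Works: master URL uses the exact hostname shown in the web UI
spark-submit --master spark://c910f04x12.pok.test.com:7077 --class MyApp myapp.jar
spark-shell --master spark://c910f04x12.pok.test.com:7077

# Fails with the Akka "dropping message" error: same master, addressed by IP
spark-submit --master spark://10.4.12.1:7077 --class MyApp myapp.jar
```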

I then stopped the service, set SPARK_MASTER_IP to 10.4.12.1 in
spark-env.sh, and started the service again. The Spark URL changed to
spark://10.4.12.1:7077, and I can now successfully submit jobs with the
IP-format URL, but if I switch back to the hostname, the submission fails
again, with a similar log (just with the hostname and IP swapped).
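Concretely, the only change I made was this line in spark-env.sh (assumed
to live under $SPARK_HOME/conf):

```shell
# $SPARK_HOME/conf/spark-env.sh
# Bind the master to the IP, so the advertised URL becomes spark://10.4.12.1:7077
export SPARK_MASTER_IP=10.4.12.1
```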

So, is this a usage error on my part, or a limitation in Spark that the
master URL must be exactly the same string, with the IP and hostname not
interchangeable?

BTW, I used nslookup to test my hostname and IP, and the results are
correct. I also tried adding the long-name-to-IP mapping to /etc/hosts, but
it did not help.
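The /etc/hosts entry I added looks like this (the short alias is an
assumption on my part):

```shell
# /etc/hosts
10.4.12.1   c910f04x12.pok.test.com   c910f04x12
```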
