It seems that this is due to the SPARK_LOCAL_IP setting.
export SPARK_LOCAL_IP=localhost
will not work.
Then, how should it be set?
Thank you all~~
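Should it instead be set to the gateway machine's own routable address? For example (a sketch only; 192.168.0.10 is just a placeholder for that address):

    # conf/spark-env.sh on the gateway machine
    # bind Spark to an address the cluster nodes can route back to,
    # rather than localhost
    export SPARK_LOCAL_IP=192.168.0.10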
On Friday, September 25, 2015 5:57 PM, Zhiliang Zhu wrote:
Hi Steve,
Thanks a lot for your reply.
That is, some commands could work on the remote server with the gateway installed, but
some other commands will not work. As expected, the remote machine is not in the
same local area network as the cluster, and the cluster's ports are forbidden.
While I make the remote machi
On 25 Sep 2015, at 05:25, Zhiliang Zhu <zchl.j...@yahoo.com.INVALID> wrote:
However, I could only use "hadoop fs -ls/-mkdir/-rm XXX" commands to operate on
the remote machine with the gateway,
which means the namenode is reachable; all those commands only need to interact
with it,
but commands "hadoop fs -cat/-put XXX YYY" would not work.
And the remote machine is not in the same local area network as the cluster.
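In other words, -ls/-mkdir/-rm only talk to the namenode, while -cat/-put must also open connections to the datanodes. A quick reachability check from the gateway might look like this (hostnames are placeholders; 8020 and 50010 are only the common Hadoop 2.x defaults for the namenode RPC port and the datanode data-transfer port):

    # run on the remote/gateway machine
    nc -zv namenode.example.com 8020      # namenode RPC: enough for -ls/-mkdir/-rm
    nc -zv datanode01.example.com 50010   # datanode transfer port: needed for -cat/-put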
On Friday, September 25, 2015 12:28 PM, Zhiliang Zhu wrote:
Hi Zhan,
I have done that as your kind help.
However, I could only use "hadoop fs -ls/-mkdir/-rm XXX" commands to operate on
the remote machine with the gateway,
but commands "hadoop fs -cat/-put XXX YYY" would not work, with an error message
as below:
put: File /user/zhuzl/wordcount/input/1._COPYING
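Could it be that the datanodes advertise addresses the gateway cannot reach? If so, one thing worth trying (only a guess on my side) is asking the HDFS client to connect to datanodes by hostname, e.g.:

    # run on the gateway; relies on the datanode hostnames resolving from there
    # (1.txt stands in for the local file being uploaded)
    hadoop fs -D dfs.client.use.datanode.hostname=true -put 1.txt /user/zhuzl/wordcount/input/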
Hi Zhan,
I really appreciate your help; I will do that next. And on the local machine,
no hadoop/spark needs to be installed, only the /etc/hadoop/conf directory
copied over... Would the information (for example, IP, hostname, etc.) of the
local machine
need to be set in the conf files...
Moreover, do yo
Hi Zhiliang,
I cannot find a specific doc. But as far as I remember, you can log in to one of
your cluster machines, find the hadoop configuration location, for example
/etc/hadoop/conf, and copy that directory to your local machine.
Typically it has hdfs-site.xml, yarn-site.xml etc. In spark, the f
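For example, roughly (node01 is just a placeholder for one of the cluster machines):

    # run on your local machine
    scp -r user@node01:/etc/hadoop/conf ~/hadoop-conf
    # it should contain core-site.xml, hdfs-site.xml, yarn-site.xml, ...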
Hi Zhan,
Yes, I get it now.
I have never deployed the hadoop configuration locally, and cannot find the
specific doc; would you help provide the doc for doing that...
Thank you,
Zhiliang
On Wednesday, September 23, 2015 11:08 AM, Zhan Zhang wrote:
There is no difference between running the client in or out of the cluster
(assuming there is no firewall or network connectivity issue), as long as you
have the hadoop configuration locally. Here is the doc for running on yarn.
http://spark.apache.org/docs/latest/running-on-yarn.html
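A submit from the client machine would look roughly like the example on that page (the jar path is the stock SparkPi example shipped with Spark; adjust it to your install):

    # run on the client machine, with HADOOP_CONF_DIR pointing at the copied configuration
    ./bin/spark-submit \
      --class org.apache.spark.examples.SparkPi \
      --master yarn-cluster \
      lib/spark-examples*.jar \
      10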
Thanks.
Zhan
Hi Zhan,
Thanks very much for your helpful comment. I also thought it would be similar to
a hadoop job submit; however, I was not sure whether it is like that when it
comes to spark.
Have you ever tried that for spark... Would you give me the deployment doc for
the hadoop and spark gateway, since this i
It should be similar to other hadoop jobs. You need the hadoop configuration on
your client machine, and point the HADOOP_CONF_DIR in spark to that
configuration.
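For example (assuming the cluster's configuration was copied to /etc/hadoop/conf on the client; any path works as long as the variable points at it):

    # conf/spark-env.sh on the client machine, or exported in the shell
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    export YARN_CONF_DIR=/etc/hadoop/conf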
Thanks
Zhan Zhang
On Sep 22, 2015, at 6:37 PM, Zhiliang Zhu <zchl.j...@yahoo.com.INVALID> wrote:
Dear Experts,
The Spark job is running on the cluster via yarn. The job can be submitted on a
machine from the cluster; however, I would like to submit the
job from another machine which does not belong to the cluster. I know that for this,
a hadoop job could be done by way of another ma