nt.java:619)
> at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
> ----------
> From: Piotr Nowojski
> Sent: Friday, 20 October 2017, 21:39
> To: 邓俊华
> Cc: user
> Subject: Re: flink can't read hdfs namenode logical url
Hi,
Please double check the content of config files in YARN_CONF_DIR and
HADOOP_CONF_DIR (the first one has a priority over the latter one) and that
they are pointing to correct files.
Also check logs (WARN and INFO) for any relevant entries.
Piotrek
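As a concrete way to follow the advice above, the sketch below writes a minimal core-site.xml and greps it for fs.defaultFS, the property that determines which NameNode URI the client connects to. The /tmp path is purely illustrative; on a real cluster you would inspect $HADOOP_CONF_DIR/core-site.xml (and $YARN_CONF_DIR, which takes priority) instead of creating a file.

```shell
# Illustrative only: create a minimal core-site.xml to show what to look for.
# On a real cluster, grep the existing file in $HADOOP_CONF_DIR instead.
mkdir -p /tmp/hadoop-conf-demo
cat > /tmp/hadoop-conf-demo/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
EOF

# Confirm the filesystem URI (and port) that clients will actually use.
grep -o 'hdfs://[^<]*' /tmp/hadoop-conf-demo/core-site.xml
```

If the grep prints a URI with no port, or a different host than expected, that mismatch is the first thing to fix before suspecting Flink itself.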
> On 20 Oct 2017, at 06:07, 邓俊华 wrote:
>
hi,
I start yarn-session.sh on YARN, but it can't read the HDFS logical URL. It
always connects to hdfs://master:8020, but it should be 9000; my HDFS
fs.defaultFS is hdfs://master. I have configured YARN_CONF_DIR and
HADOOP_CONF_DIR, but it didn't work. Is it a bug? I use
flink-1.3.0-bin-hadoop27-scala_2.10.
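One likely explanation for the 8020 behavior: when fs.defaultFS is a plain hdfs://host URI with no port, Hadoop clients fall back to the default NameNode RPC port, 8020. A logical (HA) nameservice URI only resolves if the matching nameservice definitions are also visible in hdfs-site.xml on the classpath. The fragment below is a hedged sketch; the names mycluster, nn1, nn2, namenode1, and the port 9000 are illustrative assumptions, not values from this thread.

```xml
<!-- core-site.xml: an explicit port avoids the 8020 default
     (all host names and ports below are illustrative). -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>

<!-- hdfs-site.xml: only needed when fs.defaultFS points at a
     logical HA nameservice such as hdfs://mycluster -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1:9000</value>
</property>
```

If these hdfs-site.xml entries are missing from the directory Flink actually reads (YARN_CONF_DIR or HADOOP_CONF_DIR), the logical URL cannot resolve and the client behaves as described above.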