Re: flink can't read hdfs namenode logical url

2017-10-23 Thread Piotr Nowojski
nt.java:619) > at > org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149) > ---------- > From: Piotr Nowojski > Sent: Friday, 20 Oct 2017, 21:39 > To: 邓俊华 > Cc: user > Subject: Re: flink can't read hdfs namenode logica

Re: flink can't read hdfs namenode logical url

2017-10-23 Thread 邓俊华
(Friday) 21:39 To: 邓俊华 Cc: user Subject: Re: flink can't read hdfs namenode logical url Hi, Please double check the content of the config files in YARN_CONF_DIR and HADOOP_CONF_DIR (the first one has priority over the latter) and that they are pointing to the correct files. Also check the logs (WARN and

Re: flink can't read hdfs namenode logical url

2017-10-20 Thread Piotr Nowojski
Hi, Please double check the content of the config files in YARN_CONF_DIR and HADOOP_CONF_DIR (the first one has priority over the latter) and that they are pointing to the correct files. Also check the logs (WARN and INFO) for any relevant entries. Piotrek > On 20 Oct 2017, at 06:07, 邓俊华 wrote: >
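The priority rule Piotrek mentions (YARN_CONF_DIR is consulted before HADOOP_CONF_DIR) can be sketched as a tiny resolver; this is an illustrative sketch of the lookup order only, not Flink's actual implementation:

```python
import os

def resolve_conf_dir():
    """Mirror the lookup order from the advice above:
    YARN_CONF_DIR takes priority over HADOOP_CONF_DIR."""
    return os.environ.get("YARN_CONF_DIR") or os.environ.get("HADOOP_CONF_DIR")
```

If both variables are set but point to different directories, the files under YARN_CONF_DIR win, so a stale copy of core-site.xml there can silently override the intended configuration.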

flink can't read hdfs namenode logical url

2017-10-19 Thread 邓俊华
hi, I start yarn-session.sh on YARN, but it can't read the HDFS logical URL. It always connects to hdfs://master:8020, but it should be 9000; my HDFS defaultFS is hdfs://master. I have configured YARN_CONF_DIR and HADOOP_CONF_DIR, but it didn't work. Is it a bug? I use flink-1.3.0-bin-hadoop27-scala_2.10 2017
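The symptom (connecting to hdfs://master:8020 instead of the logical nameservice) suggests the client never saw the fs.defaultFS entry in core-site.xml. A minimal sketch of the check worth doing, assuming the standard Hadoop config file layout (the directory path passed in is whatever HADOOP_CONF_DIR or YARN_CONF_DIR points at):

```python
import os
import xml.etree.ElementTree as ET

def read_default_fs(conf_dir):
    """Return the fs.defaultFS value from core-site.xml in conf_dir,
    or None if the property is absent."""
    path = os.path.join(conf_dir, "core-site.xml")
    for prop in ET.parse(path).getroot().iter("property"):
        if prop.findtext("name") == "fs.defaultFS":
            return prop.findtext("value")
    return None
```

If this returns the logical URL (hdfs://master) but Flink still dials a hardcoded host:port, the process launching yarn-session.sh is most likely not reading this directory at all.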