Flink can't read HDFS NameNode logical URL

2017-10-19 Thread
Hi, I start yarn-session.sh on YARN, but it can't read the HDFS logical URL. It always connects to hdfs://master:8020, but it should be port 9000; my HDFS defaultFS is hdfs://master. I have configured YARN_CONF_DIR and HADOOP_CONF_DIR, but it didn't work. Is this a bug? I use flink-1.3.0-bin-hadoop27-scala_2.10.

Re: Flink can't read HDFS NameNode logical URL

2017-10-23 Thread
(Friday) 21:39 To: 邓俊华 Cc: user Subject: Re: Flink can't read HDFS NameNode logical URL
Hi, please double-check the content of the config files in YARN_CONF_DIR and HADOOP_CONF_DIR (the first one takes priority over the latter) and that they are pointing to the correct files. Also check the logs (WARN and
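Not from the thread, just an illustration of the kind of check suggested above: a minimal Java sketch (the class name CheckDefaultFs is my own) that prints which fs.defaultFS the Hadoop configuration on the classpath actually resolves to.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    // Hypothetical diagnostic, not part of the thread: shows which filesystem
    // the Hadoop config visible to the JVM resolves to.
    public class CheckDefaultFs {
        public static void main(String[] args) throws Exception {
            // new Configuration() loads core-site.xml (and friends) from the
            // classpath, i.e. whatever HADOOP_CONF_DIR / YARN_CONF_DIR provide.
            Configuration conf = new Configuration();
            System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
            System.out.println("resolved URI = " + FileSystem.get(conf).getUri());
        }
    }

Run it with the Hadoop conf directory on the classpath (for example via the hadoop launcher); if it prints hdfs://master:8020 instead of the address you expect, the process is picking up a different core-site.xml than the one you edited.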

How can I count the elements in a DataStream

2018-01-11 Thread
Hi, how can I count the elements in a DataStream? I don't want to use keyBy().

Re: How can I count the elements in a DataStream

2018-01-14 Thread
Yes, I want to count all the elements, but I can't keep a cumulative count. E.g.:

    distinctOrder
        .map(new MapFunction<Order, Object>() {
            @Override
            public Object map(Order value) throws Exception {
                return null;
            }
        })
        .setParallelism(1)
        .print();
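As an illustration (not from the thread): one way to get a cumulative count without keyBy() is a RichMapFunction that keeps a running total, run with parallelism 1 so there is a single global counter. A minimal sketch against the 1.3-era DataStream API; the class name and the String placeholder source are my own, and the plain field counter is not checkpointed, so it resets if the job restarts.

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CountAllElements {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            env.fromElements("a", "b", "c", "d")          // placeholder source standing in for distinctOrder
               .map(new RichMapFunction<String, Long>() { // keeps a running total across elements
                   private long count = 0;

                   @Override
                   public Long map(String value) {
                       return ++count;                    // emit the cumulative count so far
                   }
               })
               .setParallelism(1)                         // single instance, so the count is global
               .print();

            env.execute("count-all-elements");
        }
    }

For a fault-tolerant or windowed count you would instead sum over a non-keyed window (windowAll also forces parallelism 1), at the cost of getting per-window counts rather than a running total.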

Re: Flink Yarn session

2018-01-28 Thread
Hi, please check the logs of your Flink streaming app to see if there are issues connecting to the YARN ResourceManager. I also encountered this issue when running a Flink stream job on YARN with HA; we discussed it in the preceding months, please see the mail thread at the URL below. http://mail-