hi,
I start yarn-session.sh on YARN, but it can't read the HDFS logical URL. It always
connects to hdfs://master:8020, but it should be 9000; my HDFS defaultFS is
hdfs://master. I have configured YARN_CONF_DIR and HADOOP_CONF_DIR, but it didn't
work. Is it a bug? I use flink-1.3.0-bin-hadoop27-scala_2.10.
2017 (Friday) 21:39
To: 邓俊华
Cc: user
Subject: Re: flink can't read hdfs namenode logical url
Hi,
Please double-check the contents of the config files in YARN_CONF_DIR and
HADOOP_CONF_DIR (the first one takes priority over the latter) and that
they point to the correct files.
Also check the logs (WARN and
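For reference, here is the kind of entry worth double-checking. One likely explanation of the observed behavior: if fs.defaultFS names a host without a port (as in hdfs://master), HDFS clients fall back to the NameNode's default RPC port, 8020. This is a sketch of core-site.xml only; the host name "master" is taken from the report above, everything else is an assumption about a typical single-NameNode setup:

```xml
<!-- core-site.xml (sketch, not the poster's actual config):
     spell out the port explicitly if the NameNode listens on 9000 -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
```

If you are using an HA logical nameservice instead, the nameservice and its dfs.namenode.rpc-address entries in hdfs-site.xml must also be visible from the directory Flink actually reads.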
hi,
How can I count the elements in a DataStream? I don't want to use keyBy().
Yes, I want to count all the elements, but I can't get a cumulative count. E.g.:
distinctOrder.map(new MapFunction<Order, Object>() {
    @Override
    public Object map(Order value) throws Exception {
        return null;
    }
}).setParallelism(1).print();
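One way to get a cumulative count without keyBy() is a map function that keeps a running counter: with setParallelism(1), a single instance of the function sees every element, so a plain field is enough. The sketch below mirrors the Flink MapFunction contract with a local stub so it is self-contained; in a real job you would instead import org.apache.flink.api.common.functions.MapFunction and pass CountingMapper to distinctOrder.map(...). The Order class here is a hypothetical stand-in for the poster's type.

```java
import java.util.Arrays;

// Stub of Flink's MapFunction interface so this sketch compiles standalone;
// in a real job, import org.apache.flink.api.common.functions.MapFunction.
interface MapFunction<T, O> {
    O map(T value) throws Exception;
}

// Hypothetical stand-in for the poster's Order class.
class Order {
    final String id;
    Order(String id) { this.id = id; }
}

// Cumulative counter: with setParallelism(1) one instance of this function
// processes every element, so an ordinary long field holds the running count.
class CountingMapper implements MapFunction<Order, Long> {
    private long count = 0L;

    @Override
    public Long map(Order value) {
        return ++count;  // emit the count so far for each incoming element
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        CountingMapper counter = new CountingMapper();
        for (Order o : Arrays.asList(new Order("a"), new Order("b"), new Order("c"))) {
            System.out.println(counter.map(o));  // prints 1, 2, 3
        }
    }
}
```

Note this plain-field approach is not fault-tolerant; for a count that survives restarts, the state would need to live in Flink managed state (e.g. a RichMapFunction with checkpointed state).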
-
hi,
Please check the logs of your Flink streaming app to see if there are issues
connecting to the YARN ResourceManager. I also encountered this issue when
running a Flink stream on YARN with HA, and we discussed it on the list in the
preceding months; please check the issue mail URL
below.
http://mail-
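A minimal way to scan for the suggested symptoms: grep the Flink log directory for WARN/ERROR lines that mention the ResourceManager. The log path below is an assumption; point FLINK_LOG_DIR at your own installation's log/ directory.

```shell
# Sketch: search Flink logs for signs of trouble reaching the YARN
# ResourceManager. FLINK_LOG_DIR is an assumed location; adjust it.
FLINK_LOG_DIR="${FLINK_LOG_DIR:-./log}"
grep -riE "(WARN|ERROR).*resourcemanager" "$FLINK_LOG_DIR" 2>/dev/null
```

Repeated connection-retry warnings here usually point at the ResourceManager address or HA configuration rather than at Flink itself.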