Re: Flink on Yarn, restart job will not destroy original task manager

2018-09-03 Thread James (Jian Wu) [FDS Data Platform]
be true to enable or false (default) to disable local recovery. By default, local recovery is deactivated. In 1.5.0 I have not enabled local recovery, so do I need to manually disable it via flink.conf? Regards James From: "James (Jian Wu) [FDS Data Platform]" Date: Monday, S
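For reference, local recovery is controlled by a single key in flink-conf.yaml, and since its default is false nothing has to be set to keep it disabled. A minimal sketch of the relevant entry (key name as documented for Flink 1.5):

    # flink-conf.yaml -- local recovery is off by default; set to true only if you want to enable it
    state.backend.local-recovery: false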

Re: Flink on Yarn, restart job will not destroy original task manager

2018-09-03 Thread James (Jian Wu) [FDS Data Platform]
My Flink version is 1.5; I will rebuild with the new Flink version. Regards James From: Gary Yao Date: Monday, September 3, 2018 at 3:57 PM To: "James (Jian Wu) [FDS Data Platform]" Cc: "user@flink.apache.org" Subject: Re: Flink on Yarn, restart job will not destroy original ta

Flink on Yarn, restart job will not destroy original task manager

2018-09-03 Thread James (Jian Wu) [FDS Data Platform]
Hi: I launch a Flink application on YARN with 5 task managers, each with 5 slots, using the following script: #!/bin/sh CLASSNAME=$1 JARNAME=$2 ARUGMENTS=$3 export JVM_ARGS="${JVM_ARGS} -Dmill.env.active=aws" /usr/bin/flink run -m yarn-cluster --parallelism 15 -yn 5 -ys 3 -yjm 8192 -ytm 8192
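Reflowed for readability, the submission script quoted above would look roughly like the sketch below. The excerpt is cut off after -ytm 8192, so the way the class, jar, and arguments are passed at the end is an assumption; the paths, property, and memory sizes are those quoted in the post.

    #!/bin/sh
    CLASSNAME=$1
    JARNAME=$2
    ARGUMENTS=$3

    export JVM_ARGS="${JVM_ARGS} -Dmill.env.active=aws"

    # -yn: number of YARN task managers, -ys: slots per task manager,
    # -yjm/-ytm: job manager / task manager memory in MB
    /usr/bin/flink run -m yarn-cluster \
      --parallelism 15 \
      -yn 5 -ys 3 -yjm 8192 -ytm 8192 \
      -c "${CLASSNAME}" "${JARNAME}" ${ARGUMENTS}    # assumed invocation tail (truncated in the post)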

Re: Restore state from savepoint after adding new Flink SQL

2018-06-25 Thread James (Jian Wu) [FDS Data Platform]
Jun 15, 2018 at 9:59 AM James (Jian Wu) [FDS Data Platform] <mailto:james...@coupang.com> wrote: Hi: My application uses Flink SQL and I want to add a new SQL query to the application. For example, the first version is DataStream paymentCompleteStream = getKafkaStream(env, bootStr

Restore state from savepoint after adding new Flink SQL

2018-06-15 Thread James (Jian Wu) [FDS Data Platform]
Hi: My application uses Flink SQL and I want to add a new SQL query to the application. For example, the first version is DataStream paymentCompleteStream = getKafkaStream(env, bootStrapServers, kafkaGroup, orderPaymentCompleteTopic) .flatMap(new PaymentComplete2AggregatedOrderItemFlatMap()).assignT
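Since the question is about keeping the first version's state usable after new SQL is added, a common pattern is to give every stateful operator of the original pipeline an explicit uid, so its savepoint state can still be matched once extra operators appear, and to resume with --allowNonRestoredState so state that no longer has an owner is skipped. A minimal self-contained sketch; the topic name, broker address, and the placeholder aggregation are illustrative stand-ins, not the poster's actual getKafkaStream helper or flat-map:

    import java.util.Properties;

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

    public class SavepointFriendlyJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "broker:9092"); // illustrative address
            props.setProperty("group.id", "payment-group");        // illustrative group id

            // Version-1 pipeline: every stateful operator gets a stable uid so its state
            // in a savepoint can still be mapped after new operators/SQL are added later.
            DataStream<String> payments = env
                    .addSource(new FlinkKafkaConsumer011<>("payment-complete", new SimpleStringSchema(), props))
                    .uid("payment-complete-source");

            payments.keyBy(value -> value)
                    .reduce((a, b) -> b)            // placeholder for the real aggregation
                    .uid("payment-aggregation")     // stable id for state restore
                    .print();

            // After adding the new SQL in version 2, resume with:
            //   flink run -s <savepointPath> --allowNonRestoredState ...
            env.execute("savepoint-friendly-job");
        }
    }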

Flink checkpoint failure issue

2018-06-05 Thread James (Jian Wu) [FDS Data Platform]
Hi: I am using a Flink streaming continuous query. Scenario: a Kafka connector consumes a topic, and the stream incrementally calculates a 24-hour window of data, using processing time as the TimeCharacteristic. I am using RocksDB as the StateBackend, the file system is HDFS, and the checkpoint interval is 5 minu
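The setup described (processing time, RocksDB state backend on HDFS, 5-minute checkpoint interval) boils down to a few environment calls. A minimal sketch, with a placeholder pipeline and an illustrative HDFS path standing in for the real ones:

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class CheckpointedWindowJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Processing time, as described in the post.
            env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);

            // Checkpoint every 5 minutes; RocksDB keeps the state, HDFS stores the
            // checkpoints (incremental checkpoints enabled). The path is illustrative.
            env.enableCheckpointing(5 * 60 * 1000L);
            env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));

            // Placeholder standing in for the real 24-hour incremental window over the Kafka topic.
            env.fromElements(1, 2, 3)
               .keyBy(v -> v % 2)
               .timeWindow(Time.hours(24))
               .reduce((a, b) -> a + b)
               .print();

            env.execute("checkpointed-window-sketch");
        }
    }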