Checkpoint problem in 1.12.0

2021-02-03 Thread simpleusr
our jobs and noticed that checkpoint offsets are not committed to Kafka for source connectors. To simplify the issue I created simple reproducer projects: https://github.com/simpleusr/flink_problem_1.5.5 https://github.com/simpleusr/flink_problem_1.12.0 It seems that there are major changes in
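
As general background (not taken from the reproducer projects): offsets are only committed back to Kafka on completed checkpoints, and the consumer must be configured to do so. A minimal sketch with the universal FlinkKafkaConsumer, assuming a placeholder topic, broker address and group id:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class OffsetCommitSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Offsets are only committed back to Kafka on completed checkpoints,
        // so checkpointing must be enabled (the interval is an arbitrary example).
        env.enableCheckpointing(60_000L);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.setProperty("group.id", "my-group");                // placeholder group id

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);
        // Ask the consumer to commit the checkpointed offsets back to Kafka.
        consumer.setCommitOffsetsOnCheckpoints(true);

        env.addSource(consumer)
                .print();

        env.execute("offset-commit-sketch");
    }
}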

Re: Window elements for certain period for delayed processing

2019-02-15 Thread simpleusr
Many thanks Fabian, I will start to investigate ProcessFunction. Regards

Window elements for certain period for delayed processing

2019-02-14 Thread simpleusr
Hi, My ultimate requirement is to stop processing of certain events between 00:00:00 and 01:00:00 each day (time is in HH:mm:ss format). I am a Flink newbie and I thought the only option to delay elements is to collect them in a window between 00:00:00 and 01:00:00 each day. TumblingEventTime
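
Following the ProcessFunction suggestion in the reply above, one way to hold events back is a KeyedProcessFunction that buffers anything arriving in the blackout window in keyed state and releases it with a timer at 01:00. The sketch below is only an illustration under simplifying assumptions (processing time, UTC, a placeholder Event type); it is not code from the original thread.

import java.time.Instant;
import java.time.LocalTime;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Minimal placeholder event type for the sketch.
class Event {
    public String key;
    public String payload;
}

// Buffers events whose processing time falls into the 00:00-01:00 blackout
// window and releases them at 01:00; other events pass through immediately.
public class BlackoutBufferFunction extends KeyedProcessFunction<String, Event, Event> {

    private transient ListState<Event> buffered;

    @Override
    public void open(Configuration parameters) {
        buffered = getRuntimeContext().getListState(
                new ListStateDescriptor<>("buffered-events", Event.class));
    }

    @Override
    public void processElement(Event event, Context ctx, Collector<Event> out) throws Exception {
        long now = ctx.timerService().currentProcessingTime();
        ZonedDateTime current = Instant.ofEpochMilli(now).atZone(ZoneOffset.UTC);
        if (current.toLocalTime().isBefore(LocalTime.of(1, 0))) {
            // Inside the blackout window: keep the event and schedule release at 01:00.
            buffered.add(event);
            long releaseAt = current.toLocalDate().atTime(1, 0).atZone(ZoneOffset.UTC)
                    .toInstant().toEpochMilli();
            ctx.timerService().registerProcessingTimeTimer(releaseAt);
        } else {
            out.collect(event);
        }
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Event> out) throws Exception {
        // 01:00 reached: emit everything that was held back and clear the buffer.
        for (Event e : buffered.get()) {
            out.collect(e);
        }
        buffered.clear();
    }
}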

Re: Flink Standalone cluster - logging problem

2019-02-11 Thread simpleusr
Hi Selveraj, This did not help either. Thanks

Re: Flink Standalone cluster - logging problem

2019-02-11 Thread simpleusr
Hi Gary, By "job logs" I mean all the loggers under a subpackage of com.mycompany.xyz. We are using the ./bin/flink run command for job execution, that's why I modified log4j-cli.properties. Modification of log4j.properties did not help either... Regards

Re: Flink Standalone cluster - logging problem

2019-02-10 Thread simpleusr
Hi Chesnay, below is the content of my log4j-cli.properties file. I expect my job logs (packaged under com.mycompany.xyz) to be written to the file2 appender. However, no file is generated with the prefix XYZ. I restarted the cluster and canceled and resubmitted several times, but none of this helped. / log4j.root
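
The quoted configuration is truncated in the archive. For reference, a logger/appender pair of the kind described would typically look like the following in log4j 1.x properties syntax; only the appender name file2, the XYZ file prefix and the package come from the message, the rest (paths, levels, pattern) is an illustrative assumption.

# Route loggers under com.mycompany.xyz to a dedicated file appender.
log4j.logger.com.mycompany.xyz=INFO, file2
log4j.additivity.com.mycompany.xyz=false

# File appender producing a file prefixed with XYZ (path is a placeholder).
log4j.appender.file2=org.apache.log4j.FileAppender
log4j.appender.file2.file=/path/to/logs/XYZ.log
log4j.appender.file2.append=false
log4j.appender.file2.layout=org.apache.log4j.PatternLayout
log4j.appender.file2.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n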

Re: Flink Standalone cluster - dumps

2019-02-10 Thread simpleusr
Hi Chesnay, Many thanks..

Flink Standalone cluster - production settings

2019-02-10 Thread simpleusr
I know this seems a silly question but I am trying to figure out the optimal setup for our Flink jobs. We are using a standalone cluster with 5 jobs. Each job has 3 async operators with Executors with thread counts of 20, 20 and 100. The source is Kafka, and Cassandra and REST sinks exist. Currently we are usin
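
As background on what such a setup commonly looks like, the sketch below shows one async operator whose blocking work is handed to a dedicated ExecutorService. The pool size is taken loosely from the figures in the post; the class name and the lookup body are placeholders, not the poster's code.

import java.util.Collections;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

// One async operator backed by its own thread pool; pool size 20 mirrors the
// figures mentioned in the post, the lookup itself is a placeholder.
public class RestLookupFunction extends RichAsyncFunction<String, String> {

    private transient ExecutorService executor;

    @Override
    public void open(Configuration parameters) {
        executor = Executors.newFixedThreadPool(20);
    }

    @Override
    public void asyncInvoke(String input, ResultFuture<String> resultFuture) {
        executor.submit(() -> {
            // Placeholder for the actual blocking REST / Cassandra call.
            String response = "enriched-" + input;
            resultFuture.complete(Collections.singletonList(response));
        });
    }

    @Override
    public void close() {
        executor.shutdown();
    }
}

It would typically be wired into the pipeline with something like AsyncDataStream.unorderedWait(stream, new RestLookupFunction(), 5, TimeUnit.SECONDS, 100), where the timeout and capacity values are again assumptions rather than recommendations.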

Flink Standalone cluster - logging problem

2019-02-08 Thread simpleusr
We are using a standalone cluster and submitting jobs through the command line client. As stated in https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/logging.html , we are editing log4j-cli.properties but this does not have any effect. Has anybody seen that before? Regards

Flink Standalone cluster - production settings

2019-02-08 Thread simpleusr
I know this seems a silly question but I am trying to figure out the optimal setup for our Flink jobs. We are using a standalone cluster with 5 jobs. Each job has 3 async operators with Executors with thread counts of 20, 20 and 100. The source is Kafka, and Cassandra and REST sinks exist. Currently we are usin

Flink Standalone cluster - dumps

2019-02-08 Thread simpleusr
We are using a standalone cluster and submitting jobs through the command line client. As far as I understand, the job is executed in the task manager. Does a single task manager represent a single JVM? So the dump shows threads from all jobs bound to that task manager. Two questions

Apache flink - event time based watermarks generators - optimal strategy

2019-01-30 Thread simpleusr
I am a Flink newbie and trying to apply windowing. My source is Kafka and my model does not contain event time info, so I am trying to use Kafka timestamps with the assignTimestampsAndWatermarks() method. I implemented two timestamp assigners as below. public class TimestampAssigner1 implements Assign
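
The quoted assigner is truncated in the archive. An assigner of the kind described, leaning on the Kafka record timestamp instead of a field in the model, typically looks like the sketch below using the pre-1.12 AssignerWithPeriodicWatermarks interface that was current at the time; the class name, MyEvent and the 10-second bound are placeholders, not the code from the original post.

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
import org.apache.flink.streaming.api.watermark.Watermark;

// Minimal placeholder for the user's model type (no event-time field).
class MyEvent { }

// Uses the timestamp already attached to the record by the Kafka consumer
// (previousElementTimestamp) and emits watermarks that trail it by a fixed
// out-of-orderness bound.
public class KafkaTimestampAssigner implements AssignerWithPeriodicWatermarks<MyEvent> {

    private static final long MAX_OUT_OF_ORDERNESS_MS = 10_000L;

    private long currentMaxTimestamp = Long.MIN_VALUE + MAX_OUT_OF_ORDERNESS_MS;

    @Override
    public long extractTimestamp(MyEvent element, long previousElementTimestamp) {
        // previousElementTimestamp carries the Kafka record timestamp when the
        // source is a FlinkKafkaConsumer, so the event itself needs no time field.
        currentMaxTimestamp = Math.max(currentMaxTimestamp, previousElementTimestamp);
        return previousElementTimestamp;
    }

    @Override
    public Watermark getCurrentWatermark() {
        return new Watermark(currentMaxTimestamp - MAX_OUT_OF_ORDERNESS_MS);
    }
}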