Flink Kafka Consumer stops fetching records

2018-01-01 Thread Teena K
Hi, I am using Flink 1.4 along with Kafka 0.11. My stream job has 4 Kafka consumers, each subscribing to one of 4 different topics. The stream from each consumer gets processed in 3 to 4 different ways, thereby writing to a total of 12 sinks (Cassandra tables). When the job runs, up to 8 or 10 records get…
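A minimal sketch of the topology being described, assuming Flink 1.4's FlinkKafkaConsumer011 and the bundled Cassandra connector (topic names, hosts, and the CQL query are hypothetical):

    import java.util.Properties;
    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class MultiTopicJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "kafka:9092");
            props.setProperty("group.id", "multi-topic-job");

            // one consumer per topic (repeated analogously for the other three topics)
            DataStream<String> orders = env.addSource(
                new FlinkKafkaConsumer011<>("orders", new SimpleStringSchema(), props));

            // each stream is processed in several ways; every branch ends in a sink
            DataStream<Tuple2<String, Long>> stamped = orders.map(
                new MapFunction<String, Tuple2<String, Long>>() {
                    @Override
                    public Tuple2<String, Long> map(String v) {
                        return Tuple2.of(v, System.currentTimeMillis());
                    }
                });

            CassandraSink.addSink(stamped)
                .setHost("cassandra-host")
                .setQuery("INSERT INTO ks.orders (payload, ts) VALUES (?, ?);")
                .build();

            env.execute("multi-topic-job");
        }
    }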

about the checkpoint and state backend

2018-01-01 Thread Jinhua Luo
Hi All, I have two questions: a) are the records/elements themselves checkpointed, or just the record offsets? That is, what data is included in the checkpoint besides state? b) where does Flink store the state globally, so that the job manager can restore it on each task manager…
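For what it's worth: a checkpoint stores operator and keyed state, including the source offsets (the Kafka consumer keeps its partition offsets as operator state), not the in-flight records themselves; records since the last checkpoint are simply replayed from the source on recovery. Snapshots are written to the configured state backend's durable storage, which must be reachable by all nodes so state can be restored onto any task manager. A minimal sketch of that configuration, with a hypothetical HDFS path:

    import org.apache.flink.runtime.state.filesystem.FsStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // snapshot all operator/keyed state every 60 seconds
    env.enableCheckpointing(60_000);

    // write snapshots to storage every TaskManager and the JobManager can reach
    env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"));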

About Kafka08Fetcher and Kafka010Fetcher

2018-01-01 Thread Jaxon Hu
In Kafka08Fetcher, a Map is used to manage multiple threads, but I notice it's gone in Kafka09Fetcher and Kafka010Fetcher. So how does Kafka09Fetcher implement multi-threaded reading of partitions from Kafka?
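Some context, hedged: Kafka 0.8's SimpleConsumer needed one connection per broker, hence the Map of per-broker threads. The 0.9+ client multiplexes all assigned partitions over a single KafkaConsumer, so the newer fetchers can run one dedicated consumer thread and hand batches over to Flink's emitter. A conceptual sketch of that loop (not Flink's actual source; props, partitions, and emit are placeholders):

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class PollLoopSketch {
        private volatile boolean running = true;

        void run(Properties props, List<TopicPartition> partitions) {
            // one thread serves all assigned partitions; the client multiplexes
            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                consumer.assign(partitions);
                while (running) {
                    ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
                    for (ConsumerRecord<byte[], byte[]> record : records) {
                        emit(record); // hypothetical hand-off with offset bookkeeping
                    }
                }
            }
        }

        private void emit(ConsumerRecord<byte[], byte[]> record) {
            // in Flink this would go through the fetcher's emitter, updating
            // the offsets that are included in checkpoints
        }
    }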

Re: does the flink sink only support bio?

2018-01-01 Thread Jinhua Luo
2017-12-08 18:25 GMT+08:00 Stefan Richter: > You need to be a bit careful if your sink needs exactly-once semantics. In this case things should either be idempotent or the DB must support rolling back changes between checkpoints, e.g. via transactions. Commits should be triggered for conf…
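A minimal sketch of the "commit only on confirmed checkpoints" pattern Stefan describes, against a hypothetical transactional DB client (Flink 1.4 also ships TwoPhaseCommitSinkFunction, which generalizes this):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.flink.runtime.state.CheckpointListener;
    import org.apache.flink.runtime.state.FunctionInitializationContext;
    import org.apache.flink.runtime.state.FunctionSnapshotContext;
    import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

    public class TransactionalSink<T> extends RichSinkFunction<T>
            implements CheckpointedFunction, CheckpointListener {

        // hypothetical DB client API
        interface DbTransaction { void add(Object record); void commit(); }

        private transient DbTransaction current;
        private transient Map<Long, DbTransaction> pending;

        @Override
        public void initializeState(FunctionInitializationContext ctx) {
            // on restore, records since the last checkpoint are replayed,
            // so they land in a fresh, uncommitted transaction
            pending = new HashMap<>();
            current = beginTransaction();
        }

        @Override
        public void invoke(T value) throws Exception {
            current.add(value); // buffered, not yet visible to readers
        }

        @Override
        public void snapshotState(FunctionSnapshotContext ctx) {
            // seal the transaction belonging to this checkpoint, start the next
            pending.put(ctx.getCheckpointId(), current);
            current = beginTransaction();
        }

        @Override
        public void notifyCheckpointComplete(long checkpointId) {
            DbTransaction txn = pending.remove(checkpointId);
            if (txn != null) {
                txn.commit(); // commit only once the checkpoint is confirmed
            }
        }

        private DbTransaction beginTransaction() {
            throw new UnsupportedOperationException("wire up the real DB client here");
        }
    }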

Two operators consuming from same stream

2018-01-01 Thread Sofer, Tovi
Hi group, We have the following graph below, to which we added metrics for latency calculation. We have two streams, ordersStream and pricesStream, which are both consumed by two operators, CoMapperA and CoMapperB, each using connect. Initially we…
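For reference, a minimal sketch of that shape, assuming String streams and trivial CoMapFunction implementations (operator names taken from the post, bodies hypothetical):

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.co.CoMapFunction;

    public class CoMapperA implements CoMapFunction<String, String, String> {
        @Override public String map1(String order) { return "A:" + order; }
        @Override public String map2(String price) { return "A:" + price; }
    }
    // CoMapperB is analogous

    // wiring: both operators connect the same two streams, so each stream
    // fans out to two downstream consumers
    DataStream<String> outA = ordersStream.connect(pricesStream).map(new CoMapperA());
    DataStream<String> outB = ordersStream.connect(pricesStream).map(new CoMapperB());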