Hi,
I am using Flink 1.4 along with Kafka 0.11. My streaming job has 4 Kafka
consumers subscribing to 4 different topics. The stream from each
consumer gets processed in 3 to 4 different ways, thereby writing to a
total of 12 sinks (Cassandra tables). When the job runs, up to 8 or 10
records get
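For reference, a minimal sketch of the kind of fan-out topology described above. Topic names, the Cassandra keyspace/query, and the record shape are hypothetical stand-ins, and only one of the four consumers and one of its branches is shown; the others follow the same pattern.

import java.util.Properties;

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class FanOutJobSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "fan-out-job");

        // One of the four consumers; the other three follow the same pattern.
        DataStream<String> topicA = env.addSource(
                new FlinkKafkaConsumer011<>("topic-a", new SimpleStringSchema(), props));

        // One of the 3-4 branches per topic; each branch ends in its own
        // Cassandra table, giving 12 sinks in total.
        DataStream<Tuple2<String, Long>> branch1 = topicA.map(
                new MapFunction<String, Tuple2<String, Long>>() {
                    @Override
                    public Tuple2<String, Long> map(String value) {
                        return Tuple2.of(value, System.currentTimeMillis());
                    }
                });

        CassandraSink.addSink(branch1)
                .setQuery("INSERT INTO ks.table_a1 (payload, ts) VALUES (?, ?);")
                .setHost("127.0.0.1")
                .build();

        env.execute("fan-out-job-sketch");
    }
}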
Hi All,
I have two questions:
a) Are the records/elements themselves checkpointed, or just the record
offsets? That is, what data is included in a checkpoint besides state?
b) Where does Flink store the state globally, so that the JobManager
can restore it on each task manager?
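As far as I understand, a checkpoint covers operator/keyed state plus the sources' read positions (e.g. Kafka offsets), not the in-flight records themselves; and the snapshots go to whatever durable location the configured state backend points at, which all task managers can reach, so state can be restored on any node. A minimal sketch of that configuration, where the HDFS path is a placeholder:

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointConfigSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a checkpoint every 60s; the checkpoint contains state and
        // source positions (e.g. Kafka offsets), not the records in flight.
        env.enableCheckpointing(60_000);

        // Snapshots are written to a location every TaskManager and the
        // JobManager can reach, so any node can restore after a failure.
        env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"));
    }
}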
In Kafka08Fetcher, a Map is used to manage
multiple consumer threads. But I notice that in Kafka09Fetcher and
Kafka010Fetcher it is gone. So how does Kafka09Fetcher implement
multi-threaded reads of partitions from Kafka?
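For comparison, a sketch of the plain 0.9+ consumer API outside Flink: a single poll() loop serves all assigned partitions, which is presumably why the per-thread Map is no longer needed. Broker address, group id, topic, and partition numbers are placeholders.

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SingleThreadPollSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "sketch");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // One consumer instance handles many partitions.
            consumer.assign(Arrays.asList(
                    new TopicPartition("topic-a", 0),
                    new TopicPartition("topic-a", 1)));

            while (true) {
                // A single poll() returns records from all assigned partitions,
                // so one thread suffices where the 0.8 fetcher needed a map of
                // dedicated consumer threads.
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s-%d @%d: %s%n",
                            record.topic(), record.partition(),
                            record.offset(), record.value());
                }
            }
        }
    }
}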
2017-12-08 18:25 GMT+08:00 Stefan Richter:
> You need to be a bit careful if your sink needs exactly-once semantics. In
> this case things should either be idempotent or the db must support rolling
> back changes between checkpoints, e.g. via transactions. Commits should be
> triggered for confirmed checkpoints.
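To illustrate the pattern Stefan describes, here is a hedged sketch of a sink that buffers rows per checkpoint and only commits once the checkpoint is confirmed. The commitTransactionally call is a hypothetical stand-in for a transactional or idempotent db write; Flink 1.4 also ships TwoPhaseCommitSinkFunction, which generalizes this idea.

import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.runtime.state.CheckpointListener;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class CommitOnCheckpointSink extends RichSinkFunction<String>
        implements CheckpointedFunction, CheckpointListener {

    // Rows seen since the last checkpoint barrier.
    private transient List<String> pending;
    // Frozen buffers per checkpoint, waiting for confirmation.
    private transient TreeMap<Long, List<String>> pendingByCheckpoint;
    // Operator state holding every not-yet-committed row across restarts.
    private transient ListState<String> uncommittedState;

    @Override
    public void invoke(String value) {
        pending.add(value); // buffer instead of writing immediately
    }

    @Override
    public void snapshotState(FunctionSnapshotContext ctx) throws Exception {
        // Freeze the current buffer under this checkpoint's id.
        pendingByCheckpoint.put(ctx.getCheckpointId(), pending);
        pending = new ArrayList<>();

        // Persist all uncommitted rows so they can be retried after a failure.
        uncommittedState.clear();
        for (List<String> batch : pendingByCheckpoint.values()) {
            for (String row : batch) {
                uncommittedState.add(row);
            }
        }
    }

    @Override
    public void initializeState(FunctionInitializationContext ctx) throws Exception {
        pending = new ArrayList<>();
        pendingByCheckpoint = new TreeMap<>();
        uncommittedState = ctx.getOperatorStateStore().getListState(
                new ListStateDescriptor<>("uncommitted", String.class));
        if (ctx.isRestored()) {
            // Rows restored here may already be in the db if the failure hit
            // between the commit and the confirmation, hence Stefan's point
            // about idempotent writes or db-side transactions.
            for (String row : uncommittedState.get()) {
                pending.add(row);
            }
        }
    }

    @Override
    public void notifyCheckpointComplete(long checkpointId) {
        // Commit only the batches belonging to confirmed checkpoints.
        NavigableMap<Long, List<String>> confirmed =
                pendingByCheckpoint.headMap(checkpointId, true);
        for (List<String> batch : confirmed.values()) {
            commitTransactionally(batch); // hypothetical db call
        }
        confirmed.clear();
    }

    private void commitTransactionally(List<String> batch) {
        // Assumption: the target db can apply this batch atomically or
        // idempotently, per the advice quoted above.
    }
}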
Hi group,
We have the graph below, to which we added metrics for latency
calculation.
We have two streams consumed by two operators:
* ordersStream and pricesStream - they are both consumed by two
operators, CoMapperA and CoMapperB, each using connect (see the sketch
after this message).
Initially we
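A minimal sketch of the connect part of that graph, with String elements standing in for the real order and price types:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoMapFunction;

public class ConnectedGraphSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-ins for the real ordersStream and pricesStream.
        DataStream<String> ordersStream = env.fromElements("order-1", "order-2");
        DataStream<String> pricesStream = env.fromElements("price-1", "price-2");

        // CoMapperA: both streams connected into one two-input operator.
        DataStream<String> coMapperA = ordersStream.connect(pricesStream)
                .map(new CoMapFunction<String, String, String>() {
                    @Override public String map1(String order) { return "A:" + order; }
                    @Override public String map2(String price) { return "A:" + price; }
                });

        // CoMapperB: the same two streams connected into a second operator.
        DataStream<String> coMapperB = ordersStream.connect(pricesStream)
                .map(new CoMapFunction<String, String, String>() {
                    @Override public String map1(String order) { return "B:" + order; }
                    @Override public String map2(String price) { return "B:" + price; }
                });

        coMapperA.print();
        coMapperB.print();
        env.execute("connected-graph-sketch");
    }
}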