Thanks, Hequn, for the pointer.
From what I read, I may also need to emit a watermark regularly for all idle
partitions to keep the overall watermark progressing.
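Something along these lines is what I am thinking of trying (untested sketch; MyEvent,
getEventTime() and the two intervals are placeholders I made up): a periodic assigner,
registered directly on the Kafka consumer so each partition gets its own instance, that
falls back to processing time once a partition has seen no data for a while.

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
import org.apache.flink.streaming.api.watermark.Watermark;

// Untested sketch: keeps the watermark moving on idle partitions by falling
// back to processing time minus an out-of-orderness bound.
public class IdleAwareWatermarkAssigner implements AssignerWithPeriodicWatermarks<MyEvent> {

    private static final long MAX_OUT_OF_ORDERNESS_MS = 5_000L;  // assumed bound
    private static final long IDLE_TIMEOUT_MS = 60_000L;         // assumed idle timeout

    private long maxTimestamp = Long.MIN_VALUE;
    private long lastEventProcessingTime = Long.MIN_VALUE;

    @Override
    public long extractTimestamp(MyEvent event, long previousElementTimestamp) {
        maxTimestamp = Math.max(maxTimestamp, event.getEventTime());  // getEventTime() is a placeholder
        lastEventProcessingTime = System.currentTimeMillis();
        return event.getEventTime();
    }

    @Override
    public Watermark getCurrentWatermark() {
        if (maxTimestamp == Long.MIN_VALUE) {
            return new Watermark(Long.MIN_VALUE);
        }
        long now = System.currentTimeMillis();
        // No events for a while: advance based on processing time so this
        // partition does not hold back downstream operators.
        if (now - lastEventProcessingTime > IDLE_TIMEOUT_MS) {
            return new Watermark(now - MAX_OUT_OF_ORDERNESS_MS);
        }
        return new Watermark(maxTimestamp - MAX_OUT_OF_ORDERNESS_MS);
    }
}

The assigner would be polled at the auto watermark interval, e.g.
env.getConfig().setAutoWatermarkInterval(1000); and attached with
kafkaConsumer.assignTimestampsAndWatermarks(new IdleAwareWatermarkAssigner());
The processing-time fallback can let the watermark overtake genuinely late events,
so the bound needs some care.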
—
Fritz
> On Nov 8, 2018, at 6:02 PM, Hequn Cheng wrote:
>
> Hi Fritz,
>
> Watermarks are merged on stream shuffles. If one of the input's
How can I configure one Flink job (a single stream execution environment, parallelism set
to 10) to have multiple Kafka sources where each has its own sink to S3?
For example, let's say the sources are:
- Kafka Topic A - Consumer (10 partitions)
- Kafka Topic B - Consumer (10 partitions)
- Kafka Topic C -
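One way to wire this up in a single job is to build an independent source -> sink chain
per topic inside the same StreamExecutionEnvironment. A minimal, untested sketch (topic
names, bucket paths, and bootstrap servers are placeholders; on older connector versions
the consumer class would be FlinkKafkaConsumer011 instead of FlinkKafkaConsumer):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class MultiTopicToS3Job {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(10);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092");  // placeholder
        props.setProperty("group.id", "multi-topic-to-s3");    // placeholder

        // Each topic gets its own source and its own S3 sink in the same job graph.
        for (String topic : new String[] {"topicA", "topicB", "topicC"}) {
            DataStream<String> stream = env.addSource(
                    new FlinkKafkaConsumer<>(topic, new SimpleStringSchema(), props));

            StreamingFileSink<String> sink = StreamingFileSink
                    .forRowFormat(new Path("s3://my-bucket/" + topic),
                            new SimpleStringEncoder<String>("UTF-8"))
                    .build();

            stream.addSink(sink);
        }

        env.execute("multi-topic-to-s3");
    }
}

Note that, depending on the Flink version, StreamingFileSink's S3 support may be limited;
the older BucketingSink was the common alternative at the time.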
Hi Ufuk, thanks for checking. I am using OpenJDK 1.8_171, and I still have the
same issue with Presto.
- Why is the checkpoint numbering not starting from 1? Old checkpoints stored in ZooKeeper
caused it; I cleaned them up, but that did not help much.
- I switched to the Flink + Hadoop 2.8 build and used the Hadoop S3 filesystem, with no other
changes; checkpoint
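For reference, this is roughly how I point checkpoints at S3 from the job itself (untested
sketch; the bucket path and interval are placeholders; with flink-s3-fs-hadoop the scheme
is s3:// or s3a://, with flink-s3-fs-presto it is s3:// or s3p://):

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointToS3 {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Trigger a checkpoint every 60s; data goes to the configured state backend.
        env.enableCheckpointing(60_000L);

        // s3a:// resolves through the Hadoop S3 filesystem (flink-s3-fs-hadoop);
        // with flink-s3-fs-presto you would use s3p:// (or plain s3://) instead.
        env.setStateBackend(new FsStateBackend("s3a://my-bucket/checkpoints"));  // placeholder path

        // ... sources, transformations, sinks ...

        env.execute("checkpoint-to-s3");
    }
}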