Thanks for your helpful response,
Setting the consumer's 'isolation.level' property to 'read_committed' solved
the problem!
In fact, there are still some duplicated messages in the sink topic, but they
are uncommitted; if a Kafka consumer reads from this sink with
'read_committed', the duplicated messages are not delivered.
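
For reference, a minimal sketch of a downstream consumer configured this way
(the broker address, group id, and topic name are illustrative, not from this
thread):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ReadCommittedConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "sink-readers");            // assumed group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            // Only read messages from committed transactions; records written by
            // aborted or still-open transactions are skipped.
            props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("sink-topic")); // hypothetical topic
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }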
Thanks for your attention,
I have streaming jobs and use the RocksDB state backend. Do you mean that I
don't need to worry about memory management even if the allocated memory is
not released after cancellation?
Kind regards,
Nastaran Motavalli
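
For context, a minimal sketch of the kind of setup being discussed, assuming
the Flink 1.x API of the period and an illustrative checkpoint path:

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class RocksDBJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // RocksDB keeps state in native (off-heap) memory and on local disk,
            // outside the JVM heap; checkpoints are written to the given path
            // ("file:///tmp/checkpoints" is illustrative).
            env.setStateBackend(new RocksDBStateBackend("file:///tmp/checkpoints"));
            env.fromElements(1, 2, 3).print();
            env.execute("rocksdb-example");
        }
    }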
From: Kosta
Hi everyone,
We have great news for you!
We are *extending* the deadline for the Call for Presentations to *December
7, 11:59 PT*.
This will give you extra time to prepare, showcase your work, give back to
the Flink community, and receive valuable feedback!
Previous events have f
Good to hear that it's working, thanks for the update!
On Sat, Dec 1, 2018, 4:29 AM Vijay Balakrishnan wrote:
> Hi Gordon,
> Finally figured out my issue. There is no need to add http:// to the
> proxyHost name.
> String proxyHost = "proxy-chaincom"; // not http://proxy-chain...com
> kinesisConsumerConfig.setPropert
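
For anyone landing on this thread later, a sketch of the fix under stated
assumptions: the "aws.clientconfig.proxyHost" and "aws.clientconfig.proxyPort"
keys are my guess at the connector's pass-through for the AWS SDK's
ClientConfiguration, and the stream name, region, and proxy values are
hypothetical; verify the exact keys against your connector version.

    import java.util.Properties;

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
    import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;

    public class KinesisProxyExample {
        public static void main(String[] args) throws Exception {
            Properties kinesisConsumerConfig = new Properties();
            kinesisConsumerConfig.setProperty(AWSConfigConstants.AWS_REGION, "us-west-2"); // illustrative
            // The proxy host must be a bare hostname, WITHOUT the http:// scheme.
            // "aws.clientconfig.*" keys are an assumed pass-through to the AWS SDK
            // ClientConfiguration; check your connector version's documentation.
            kinesisConsumerConfig.setProperty("aws.clientconfig.proxyHost", "proxy-chain.example.com");
            kinesisConsumerConfig.setProperty("aws.clientconfig.proxyPort", "8080"); // hypothetical port

            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.addSource(new FlinkKinesisConsumer<>(
                    "my-stream", new SimpleStringSchema(), kinesisConsumerConfig)) // hypothetical stream
               .print();
            env.execute("kinesis-proxy-example");
        }
    }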