Hi Niclas,
Glad that you got it working!
Thanks for sharing the problem and solution.
Best, Fabian
2018-02-19 9:29 GMT+01:00 Niclas Hedhman:
(Sorry for the incoherent order and ramblings. I am writing this as I am
trying to sort out what is going on...)
1. Only the first message in the Kafka topic is processed. If I set the
offset manually, it will pick up the message at that point, process it, and
ignore all following messages.
2
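Setting a specific start offset, as described in point 1, can be sketched roughly as follows. This is only an illustration, assuming a Flink 1.4-era FlinkKafkaConsumer011; the topic name, broker address, group id, partition, and offset are all placeholders:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;

public class ManualOffsetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");  // placeholder
        props.setProperty("group.id", "my-consumer-group");        // placeholder

        FlinkKafkaConsumer011<String> consumer =
            new FlinkKafkaConsumer011<>("my-topic", new SimpleStringSchema(), props);

        // Start reading partition 0 of "my-topic" from offset 23,
        // instead of the offsets committed for the consumer group.
        Map<KafkaTopicPartition, Long> offsets = new HashMap<>();
        offsets.put(new KafkaTopicPartition("my-topic", 0), 23L);
        consumer.setStartFromSpecificOffsets(offsets);
    }
}
```

This only affects where the source begins reading on a fresh start; with checkpointing or savepoints, restored offsets take precedence.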
Hi Niclas,
About the second point you mentioned, was the processed message a random one or
always the same one?
The default startup mode for FlinkKafkaConsumer is StartupMode.GROUP_OFFSETS;
maybe you could try StartupMode.EARLIEST while debugging. Also, before that, you
may try fetching the messages w
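For what it's worth, on the consumer instance the startup mode is typically set through a helper method rather than by passing the StartupMode enum directly. A minimal sketch, with a placeholder topic and broker address:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

public class EarliestStartSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");  // placeholder
        props.setProperty("group.id", "debug-group");              // placeholder

        FlinkKafkaConsumer011<String> consumer =
            new FlinkKafkaConsumer011<>("my-topic", new SimpleStringSchema(), props);

        // Equivalent to StartupMode.EARLIEST: read the topic from the beginning,
        // ignoring any offsets committed for this consumer group.
        consumer.setStartFromEarliest();
    }
}
```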
So, the producer is run (at the moment) manually (command-line) one message
at a time.
Kafka's tooling (different consumer group) shows that a message is added
each time.
Since my last post, I have also added a UUID as the key, and that didn't
make a difference, so you are likely correct about de-
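For reference, producing single messages from the command line and inspecting the topic with a separate consumer group might look like this; the broker address, topic, and group name are placeholders:

```shell
# Produce one message at a time (type a line, press Enter; Ctrl-D to exit).
kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic

# Verify with a *different* consumer group, so the Flink job's
# committed offsets are left untouched.
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic \
    --group inspection-group --from-beginning
```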
Hi Niclas,
Flink's Kafka consumer should not apply any deduplication. AFAIK, such a
"feature" is not implemented.
Do you produce into the topic that you want to read, or is the data in the
topic static?
If you do not produce into the topic while the consuming application is
running, this might be an
Hi,
I am pretty new to Flink, and I like what I see and have started to build
my first application using it.
I must be missing something very fundamental. I have a
FlinkKafkaConsumer011, followed by a handful of filter, map and flatMap
functions and terminated with the standard CassandraSink. I hav
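A pipeline of the shape described here might be sketched as follows. All names and the filter/map logic are placeholders, and the CassandraSink configuration is elided:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

public class PipelineSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");  // placeholder
        props.setProperty("group.id", "my-group");                 // placeholder

        DataStream<String> stream = env
            .addSource(new FlinkKafkaConsumer011<>("my-topic",
                new SimpleStringSchema(), props))
            .filter(line -> !line.isEmpty())   // placeholder filter
            .map(String::toUpperCase);         // placeholder map

        // A CassandraSink would normally terminate the pipeline here, e.g.
        // CassandraSink.addSink(stream)...  (cluster configuration omitted).

        env.execute("pipeline-sketch");
    }
}
```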