Great to hear it works!
`setStartFromGroupOffsets` [1] will start reading partitions from the
consumer group's committed offsets in the Kafka brokers (the group is
identified by the group.id setting in the consumer properties). If no
committed offset can be found for a partition, the 'auto.offset.reset'
setting in the properties will be used.
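For illustration, a minimal sketch of that startup mode against the
Flink 1.11-era connector API; the broker address, topic, group name, and
String schema are all placeholders:

import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer

val props = new Properties()
props.setProperty("bootstrap.servers", "localhost:9092")
props.setProperty("group.id", "order-consumer")    // committed offsets are looked up for this group
props.setProperty("auto.offset.reset", "earliest") // fallback when no committed offset exists

val consumer = new FlinkKafkaConsumer[String]("orders", new SimpleStringSchema(), props)
consumer.setStartFromGroupOffsets()                // the behaviour described above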
Hi Kevin,
Could you share the code showing how you register the FlinkKafkaConsumer as
a table?
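For reference, a typical way to register such a stream as a table in
Flink 1.11 looks like the sketch below; the Order schema and all names are
assumptions, not Kevin's actual code:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
import org.apache.flink.table.api.bridge.scala._

case class Order(id: String, timestamp: Long) // hypothetical record type

val env  = StreamExecutionEnvironment.getExecutionEnvironment
val tEnv = StreamTableEnvironment.create(env)

val kafkaConsumer: FlinkKafkaConsumer[Order] = ??? // your consumer instance

val orders: DataStream[Order] = env.addSource(kafkaConsumer)
tEnv.createTemporaryView("Orders", orders)

val result = tEnv.sqlQuery("SELECT * FROM Orders")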
Regarding your initialization of the FlinkKafkaConsumer, I would recommend
using setStartFromEarliest() to guarantee that it consumes all the records
in the partitions.
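A one-line sketch, reusing the consumer variable from the earlier snippet:

// Ignore any committed offsets and read every partition from the beginning,
// so a test run always replays all records:
consumer.setStartFromEarliest()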
Regarding the flush(), it seems it is in the foreach
Hi Kevin,
thanks a lot for posting this problem.
I'm adding Jark to the thread; he or another committer working on Flink SQL
may be able to provide some insights.
On Tue, Nov 3, 2020 at 4:58 PM Kevin Kwon wrote:
> Looks like the event time that I've specified in the consumer is not being
> respected.
Looks like the event time that I've specified in the consumer is not being
respected. Does the timestamp assigner actually work in Kafka consumers?
.withTimestampAssigner(new SerializableTimestampAssigner[Order] {
  override def extractTimestamp(order: Order, recordTimestamp: Long): Long =
    order.timestamp // the original snippet was cut off here; field name assumed
})
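For context, a sketch of how such an assigner is typically wired into the
Kafka source in Flink 1.11; the 5-second out-of-orderness bound and the
Order.timestamp field are assumptions:

import java.time.Duration
import org.apache.flink.api.common.eventtime.{SerializableTimestampAssigner, WatermarkStrategy}

val strategy = WatermarkStrategy
  .forBoundedOutOfOrderness[Order](Duration.ofSeconds(5))
  .withTimestampAssigner(new SerializableTimestampAssigner[Order] {
    override def extractTimestamp(order: Order, recordTimestamp: Long): Long =
      order.timestamp
  })

// Attaching the strategy to the FlinkKafkaConsumer itself (rather than to the
// resulting DataStream) yields per-partition watermarks inside the source:
consumer.assignTimestampsAndWatermarks(strategy)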
Hi guys, I've recently been experimenting with an end-to-end testing
environment for Kafka and Flink (1.11).
I've set up the infrastructure with Docker Compose: a single Kafka broker,
Flink (1.11), and MinIO for checkpoint storage.
Here's the test scenario:
1. Send 1000 messages with manual timestamps (a sketch of this step follows
below)
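A hedged sketch of step 1 using the plain Kafka producer API; the topic name,
key/value layout, and timestamp base are all assumptions:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)
val baseTs   = 1600000000000L // assumed epoch-millis base for the manual timestamps

for (i <- 0 until 1000) {
  // The 5-argument ProducerRecord constructor (topic, partition, timestamp, key, value)
  // sets the record timestamp explicitly; partition is left to the default partitioner.
  producer.send(new ProducerRecord[String, String]("orders", null, baseTs + i, s"key-$i", s"order-$i"))
}

producer.flush()
producer.close()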