Re: I have some interesting result with my test code

2020-11-05 Thread Jark Wu
Great to hear it works! `setStartFromGroupOffsets()` [1] will start reading partitions from the consumer group’s (the group.id setting in the consumer properties) committed offsets in the Kafka brokers. If offsets cannot be found for a partition, the 'auto.offset.reset' setting in the properties will be used…
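The behaviour Jark describes can be sketched as below. This is a minimal illustration, not code from the thread: the topic name, group id, and bootstrap address are hypothetical, and the alternative start positions are shown as comments.

```scala
import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer

object StartPositionSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.setProperty("bootstrap.servers", "localhost:9092") // assumed address
    props.setProperty("group.id", "my-test-group")           // hypothetical group id
    // Fallback used only when no committed offset exists for a partition:
    props.setProperty("auto.offset.reset", "earliest")

    val consumer =
      new FlinkKafkaConsumer[String]("orders", new SimpleStringSchema(), props)

    // Start from the group's committed offsets (the default behaviour);
    // falls back to 'auto.offset.reset' where none are found.
    consumer.setStartFromGroupOffsets()

    // Alternatives:
    // consumer.setStartFromEarliest() // ignore committed offsets, read everything
    // consumer.setStartFromLatest()   // ignore committed offsets, new records only
  }
}
```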

Re: I have some interesting result with my test code

2020-11-04 Thread Jark Wu
Hi Kevin, could you share the code of how you register the FlinkKafkaConsumer as a table? Regarding your initialization of the FlinkKafkaConsumer, I would recommend setStartFromEarliest() to guarantee it consumes all the records in the partitions. Regarding the flush(), it seems it is in the foreach…
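Registering a Kafka-backed stream as a table, as asked about above, could look roughly like this in Flink 1.11's Scala API. A sketch only: it assumes a `consumer` built as in the earlier messages and a hypothetical `Order` case class with its own deserialization.

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

// Hypothetical record type; the real Order schema is not shown in the thread.
case class Order(id: String, amount: Double, timestamp: Long)

val env = StreamExecutionEnvironment.getExecutionEnvironment

// For a test, read every record regardless of committed offsets,
// as recommended in the reply above:
// consumer.setStartFromEarliest()
// val orders: DataStream[Order] = env.addSource(consumer)

// Expose the stream to Flink SQL under a table name:
// val tEnv = StreamTableEnvironment.create(env)
// tEnv.createTemporaryView("Orders", orders)
// tEnv.sqlQuery("SELECT id, amount FROM Orders")
```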

Re: I have some interesting result with my test code

2020-11-03 Thread Robert Metzger
Hi Kevin, thanks a lot for posting this problem. I'm adding Jark to the thread, he or another committer working on Flink SQL can maybe provide some insights. On Tue, Nov 3, 2020 at 4:58 PM Kevin Kwon wrote: > Looks like the event time that I've specified in the consumer is not being > respected.

Re: I have some interesting result with my test code

2020-11-03 Thread Kevin Kwon
Looks like the event time that I've specified in the consumer is not being respected. Does the timestamp assigner actually work in Kafka consumers? .withTimestampAssigner(new SerializableTimestampAssigner[Order] { override def extractTimestamp(order: Order, recordTimestamp: Long): Long…
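The fragment above is part of a Flink 1.11 WatermarkStrategy. A self-contained sketch of the pattern, with an assumed `Order` shape and an assumed 5-second out-of-orderness bound, might look like:

```scala
import java.time.Duration
import org.apache.flink.api.common.eventtime.{SerializableTimestampAssigner, WatermarkStrategy}

// Hypothetical record shape; the thread does not show the real Order class.
case class Order(id: String, timestamp: Long)

val strategy: WatermarkStrategy[Order] = WatermarkStrategy
  .forBoundedOutOfOrderness[Order](Duration.ofSeconds(5)) // assumed bound
  .withTimestampAssigner(new SerializableTimestampAssigner[Order] {
    // Pull event time from the payload instead of the Kafka record timestamp.
    override def extractTimestamp(order: Order, recordTimestamp: Long): Long =
      order.timestamp
  })

// In Flink 1.11 the strategy can be attached directly to the Kafka source,
// so watermarks are generated per partition inside the consumer:
// consumer.assignTimestampsAndWatermarks(strategy)
```

Attaching the strategy to the consumer rather than to the downstream stream is usually preferable with Kafka, because it gives per-partition watermarking.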

I have some interesting result with my test code

2020-11-02 Thread Kevin Kwon
Hi guys, I've recently been experimenting with an end-to-end testing environment with Kafka and Flink (1.11). I've set up an infrastructure with Docker Compose composed of a single Kafka broker, Flink (1.11), and MinIO for checkpoint saves. Here's the test scenario: 1. Send 1000 messages with manual timestamps…
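The infrastructure described above might be sketched as a Docker Compose file like the following. This is a hypothetical reconstruction, not the author's actual file: the image tags, ports, and credentials are all assumptions.

```yaml
# Sketch of the described setup: single Kafka broker, Flink 1.11, MinIO.
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.0   # assumed image/tag
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:5.5.0       # assumed image/tag
    depends_on: [zookeeper]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  jobmanager:
    image: flink:1.11-scala_2.11             # assumed tag
    command: jobmanager
  taskmanager:
    image: flink:1.11-scala_2.11
    command: taskmanager
    depends_on: [jobmanager]
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
  minio:
    image: minio/minio                       # checkpoint storage (s3a://...)
    command: server /data
    environment:
      MINIO_ACCESS_KEY: minio                # placeholder credentials
      MINIO_SECRET_KEY: minio123
```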