Hi Hayden,
As far as I know, an end offset is not supported by Flink's Kafka consumer.
You could extend Flink's consumer. As you said, there is already code to
set the starting offset (per partition), so you might be able to just
piggyback on that.
Gordon (in CC), who has worked a lot on the Kafka connector, might have more ideas on how to approach this.
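
If extending the consumer turns out to be more work than you want to invest, a possible workaround is to dump the offset range with the plain Kafka consumer API in a small standalone program before the batch job starts, and then read the resulting file as a DataSet. A rough sketch (topic name, partition, offsets and file path are made up, and poll(Duration) assumes a Kafka client of version 2.0 or newer):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.io.PrintWriter;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class DrainOffsetRange {

    public static void main(String[] args) throws Exception {
        // Placeholder topic, partition and offset range -- adjust to your setup.
        TopicPartition tp = new TopicPartition("my-topic", 0);
        long startOffset = 1_000L;
        long endOffset = 5_000L; // exclusive upper bound

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "offset-range-dump");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             PrintWriter out = new PrintWriter("/tmp/topic-range.csv", "UTF-8")) {

            // Assign the partition explicitly and seek to the start offset.
            consumer.assign(Collections.singletonList(tp));
            consumer.seek(tp, startOffset);

            long nextOffset = startOffset;
            while (nextOffset < endOffset) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    if (record.offset() >= endOffset) {
                        nextOffset = endOffset; // end of the requested range reached
                        break;
                    }
                    // Write "key,value" per line; the layout is only an example.
                    out.println(record.key() + "," + record.value());
                    nextOffset = record.offset() + 1;
                }
            }
        }
    }
}

For multiple partitions you would assign all of them, seek each one to its start offset, and track one end offset per partition.
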
I have two datasets that I need to join together in a Flink batch job. One of the
datasets needs to be created dynamically by completely 'draining' a Kafka topic
over an offset range (start and end) and writing all messages in that range to a
file. I know that in Flink streaming I can specify the starting offsets (per
partition) for the Kafka consumer, but I haven't found a way to specify an end
offset at which consumption should stop.
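
Roughly, the batch side I have in mind looks like this (file paths and the two-column CSV layout are just placeholders):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class JoinWithDrainedTopic {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // File produced by draining the Kafka offset range ("key,value" per line).
        DataSet<Tuple2<String, String>> drainedTopic = env
                .readCsvFile("file:///tmp/topic-range.csv")
                .types(String.class, String.class);

        // The second, static dataset.
        DataSet<Tuple2<String, String>> other = env
                .readCsvFile("file:///tmp/other-dataset.csv")
                .types(String.class, String.class);

        // Join both datasets on their first field and write the joined pairs out.
        drainedTopic
                .join(other)
                .where(0)
                .equalTo(0)
                .writeAsText("file:///tmp/joined");

        env.execute("Join drained Kafka range with second dataset");
    }
}
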