Thanks, Gordan!  Will keep an eye on that!

Connie

From: "Tzu-Li (Gordon) Tai" <tzuli...@apache.org>
Date: Monday, December 11, 2017 at 5:29 PM
To: Connie Yang <cy...@ebay.com>
Cc: "user@flink.apache.org" <user@flink.apache.org>
Subject: Re: Flink-Kafka connector - partition offsets for a given timestamp?

Hi Connie,

We do have a pull request for the feature, that should almost be ready after 
rebasing: 
https://github.com/apache/flink/pull/3915<https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fapache%2Fflink%2Fpull%2F3915&data=02%7C01%7Ccyang%40ebay.com%7C6326cf882d31466c2cc008d540ffdd30%7C46326bff992841a0baca17c16c94ea99%7C0%7C0%7C636486390025975513&sdata=l5ttzcZlDNAJj7%2Bh%2FWXDtiQZeIIUeipnEVXx%2F5EOInE%3D&reserved=0>,
 JIRA: 
https://issues.apache.org/jira/browse/FLINK-6352<https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fissues.apache.org%2Fjira%2Fbrowse%2FFLINK-6352&data=02%7C01%7Ccyang%40ebay.com%7C6326cf882d31466c2cc008d540ffdd30%7C46326bff992841a0baca17c16c94ea99%7C0%7C0%7C636486390025975513&sdata=Ay6YN6q7BVPVHdOyLvAJEq3TX69gXzuiuy%2BXQeE709c%3D&reserved=0>.
This means, of course, that the feature isn't part of any release yet. We can 
try to make sure this happens for Flink 1.5, for which the proposed release 
date is around February 2018.

Cheers,
Gordon

On Tue, Dec 12, 2017 at 3:53 AM, Yang, Connie 
<cy...@ebay.com<mailto:cy...@ebay.com>> wrote:
Hi,

Does Flink-Kafka connector allow job graph to consume topoics/partitions from a 
specific timestamp?

https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java#L469<https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fapache%2Fflink%2Fblob%2Fmaster%2Fflink-connectors%2Fflink-connector-kafka-base%2Fsrc%2Fmain%2Fjava%2Forg%2Fapache%2Fflink%2Fstreaming%2Fconnectors%2Fkafka%2FFlinkKafkaConsumerBase.java%23L469&data=02%7C01%7Ccyang%40ebay.com%7C6326cf882d31466c2cc008d540ffdd30%7C46326bff992841a0baca17c16c94ea99%7C0%7C0%7C636486390025975513&sdata=Vr9f1Qlo1v%2Fy0C9Tg3OL5rCqPopTWiB2irDTJALqD%2BE%3D&reserved=0>
 seems to suggest that a job graph can only start from an earliest, latest or a 
set of offsets.

KafkaConsumer API, 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L1598<https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fapache%2Fkafka%2Fblob%2Ftrunk%2Fclients%2Fsrc%2Fmain%2Fjava%2Forg%2Fapache%2Fkafka%2Fclients%2Fconsumer%2FKafkaConsumer.java%23L1598&data=02%7C01%7Ccyang%40ebay.com%7C6326cf882d31466c2cc008d540ffdd30%7C46326bff992841a0baca17c16c94ea99%7C0%7C0%7C636486390025975513&sdata=xHa2TeAbeuoRf0yX9XniFi6XgP8tSF811iarYHE%2BR5Q%3D&reserved=0>,
 gives us a way to find partition offsets based on a timestamp.

Thanks
Connie

Reply via email to