Hi Vladimir,

Did you pass the properties to the FlinkKafkaConsumer?
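If not, something along these lines should work (a minimal sketch, assuming
you are on the Kafka 0.8 connector, FlinkKafkaConsumer082; the broker/ZooKeeper
addresses, group id, and topic name are placeholders):

import java.util.Properties;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("zookeeper.connect", "localhost:2181");
props.setProperty("group.id", "my-consumer-group");
// only takes effect when no valid committed offset exists for the group
props.setProperty("auto.offset.reset", "smallest");

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// the Properties are handed to the consumer's constructor
DataStream<String> stream = env.addSource(
        new FlinkKafkaConsumer082<>("my-topic", new SimpleStringSchema(), props));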

Cheers,
Max

On Thu, Dec 3, 2015 at 7:06 PM, Vladimir Stoyak <vsto...@yahoo.com> wrote:
> Gave it a try, but it does not seem to help. Is it working for you?
>
> Thanks
>
> Sent from my iPhone
>
>> On Dec 3, 2015, at 6:11 PM, Vladimir Stoyak <vsto...@yahoo.com> wrote:
>>
>> As far as I know, "auto.offset.reset" only determines what to do if the
>> offset is not available or out of bounds?
>>
>> Vladimir
>>
>>
>> On Thursday, December 3, 2015 5:58 PM, Maximilian Michels <m...@apache.org> 
>> wrote:
>> Hi Vladimir,
>>
>> You may supply Kafka consumer properties when you create the 
>> FlinkKafkaConsumer.
>>
>> Properties props = new Properties();
>>
>> // start from largest offset - DEFAULT
>> props.setProperty("auto.offset.reset", "largest");
>> // start from smallest offset
>> props.setProperty("auto.offset.reset", "smallest");
>>
>> I don't think it is possible to start from a specific offset. The
>> offset is only unique per partition. You could modify the offsets in
>> the ZooKeeper state, but you really have to know what you're doing.
>>
>> Best regards,
>> Max
>>
>>
>>
>>> On Thu, Dec 3, 2015 at 4:01 PM, Vladimir Stoyak <vsto...@yahoo.com> wrote:
>>> I see that Flink 0.10.1 now supports Keyed Schemas, which allow us to rely
>>> on Kafka topics set to "compact" retention for data persistence.
>>>
>>> In our topology we want to enable Log Compaction on some topics and read
>>> each topic from the beginning when the topology starts or a component
>>> recovers. Does the current Kafka Consumer implementation allow reading all
>>> messages in a topic from the beginning, or from a specific offset?
>>>
>>> Thanks,
>>> Vladimir
