Would changing the direct stream API to support committing the offsets to
Kafka's ZooKeeper (like a regular consumer), as a fallback mechanism in case
recovering from the checkpoint fails, be an acceptable solution?
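For concreteness, here is a minimal sketch of the "store the offsets yourself" alternative discussed below. Everything here is illustrative: `OffsetStore` is a made-up helper, not part of Spark or Kafka, and a local file stands in for whatever durable store (ZooKeeper, a database, etc.) a real deployment would use. The idea is that the driver persists topic/partition offsets after each batch, and a redeployed application reads them back at startup instead of depending on Spark's version-sensitive checkpoint data.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.stream.Collectors;

// Illustrative only: persist "topicPartition,offset" lines after each batch
// so a redeployed app can resume from offsets it stored itself, rather than
// from Spark's checkpoint (which breaks across application code changes).
public class OffsetStore {

    // Write offsets to a temp file, then atomically rename it into place,
    // so a crash mid-write never leaves a half-written offsets file behind.
    public static void save(Path path, Map<String, Long> offsets) throws IOException {
        Path tmp = path.resolveSibling(path.getFileName() + ".tmp");
        List<String> lines = offsets.entrySet().stream()
            .map(e -> e.getKey() + "," + e.getValue())
            .collect(Collectors.toList());
        Files.write(tmp, lines);
        Files.move(tmp, path, StandardCopyOption.REPLACE_EXISTING,
                   StandardCopyOption.ATOMIC_MOVE);
    }

    // Read offsets back; an absent file (first run) yields an empty map,
    // which the driver would treat as "start from the configured default".
    public static Map<String, Long> load(Path path) throws IOException {
        Map<String, Long> out = new LinkedHashMap<>();
        if (!Files.exists(path)) return out;
        for (String line : Files.readAllLines(path)) {
            String[] parts = line.split(",");
            out.put(parts[0], Long.parseLong(parts[1]));
        }
        return out;
    }
}
```

The driver would call `save` at the end of each successfully processed batch and `load` once at startup; because the storage format is owned by the application and contains only topic/partition/offset data, it survives recompiling or changing the streaming code, which serialized checkpoint classes do not.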

On Thursday, September 24, 2015, Cody Koeninger <c...@koeninger.org> wrote:

> This has been discussed numerous times; TD's response has consistently
> been that it's unlikely to be possible.
>
> On Thu, Sep 24, 2015 at 12:26 PM, Radu Brumariu <bru...@gmail.com> wrote:
>
>> It seems to me that this scenario I'm facing is quite common for
>> Spark jobs using Kafka.
>> Is there a ticket to add this sort of semantics to checkpointing? Does
>> it even make sense to add it there?
>>
>> Thanks,
>> Radu
>>
>>
>> On Thursday, September 24, 2015, Cody Koeninger <c...@koeninger.org> wrote:
>>
>>> No, you can't use checkpointing across code changes.  Either store the
>>> offsets yourself, or start up your new app code and let it catch up before
>>> killing the old one.
>>>
>>> On Thu, Sep 24, 2015 at 8:40 AM, Radu Brumariu <bru...@gmail.com> wrote:
>>>
>>>> Hi,
>>>> in my application I use Kafka direct streaming and I have also enabled
>>>> checkpointing.
>>>> This seems to work fine if the application is restarted. However, if I
>>>> change the code and resubmit the application, it cannot start because
>>>> the checkpointed data belongs to a different class version.
>>>> Is there any way I can use checkpointing that can survive across
>>>> application version changes?
>>>>
>>>> Thanks,
>>>> Radu
>>>>
>>>>
>>>
>
