[ 
https://issues.apache.org/jira/browse/KAFKA-18168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17922174#comment-17922174
 ] 

Matthias J. Sax commented on KAFKA-18168:
-----------------------------------------

{quote}But then the question that I have is, if checkpointing to a time 
interval without any activity is efficient or not? 
{quote}
I guess it could lead to some inefficiency, but as long as we ship with 
efficient defaults, and/or allow users to configure it to their needs, it 
should be ok? In the end, we cannot guard against "misconfigurations", right? We 
can only ship with good defaults.
{quote}whether I should implement periodic checkpointing as well or if 
checkpointing only during restoring and closing is enough
{quote}
Guess that's up to you. We can implement "checkpointing when restoring finished 
and on close" w/o any user-facing changes, so we don't need a KIP. If we want 
to add periodic checkpointing and introduce some configs for it, it's a 
public / user-facing change, and we need a KIP... (We can also do 
"checkpointing when restoring finished and on close" right away, and do a KIP 
for the second part in parallel – and by "we", I guess I mean you :)).

The second part would boil down to the following questions:
 * Do we want to keep the 10K threshold (ie, count-based) at all? If yes, 
should it be configurable? If no, do we want to replace it with a time-based 
approach (we could maybe re-use `commit.interval.ms`)?
 * If we keep a count-based threshold, do we want to complement it with a 
time-based one? Would we re-use `commit.interval.ms` or introduce some new 
config?
 * If we have two thresholds (count- and time-based), do we allow combining 
them? Or would we want them to be mutually exclusive? If we combine them, would 
we use "and" or "or" (or again, make it configurable)?

I don't have an opinion myself yet on what the "best" design could be, but that's 
why we would want to do a KIP: to discuss the design and tradeoffs. If you want 
to pick up doing the second part about periodic checkpoints, you should first 
think about a design that you would prefer yourself, and write it down as a 
KIP. Plus add a "rejected alternatives" section explaining why you 
prefer what you propose, and discuss pros/cons of the things you don't want to 
do. After you have the KIP, we can take it from there.

> GlobalKTable does not checkpoint restored offsets until next 10K events
> -----------------------------------------------------------------------
>
>                 Key: KAFKA-18168
>                 URL: https://issues.apache.org/jira/browse/KAFKA-18168
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>    Affects Versions: 3.4.1, 3.8.1
>            Reporter: Sergey Zyrianov
>            Assignee: Janindu Pathirana
>            Priority: Minor
>
> As in https://issues.apache.org/jira/browse/KAFKA-5241, there is state of 
> considerable size kept on the topic that backs a GlobalKTable. Restoring the 
> GlobalKTable takes minutes before it is operational. After a successful restore, 
> the checkpoint file is not created until a further 10K events happen on the 
> topic. 
> The following scenario illustrates the issue:
>  # {*}Scaling Out{*}: When a new instance (e.g., pod X) is added to an 
> already running set of instances (pods 0...X-1), the new instance will 
> restore the state successfully. However, it will not create a checkpoint file 
> until 10K events are processed on the {{GlobalKTable}} topic.
>  # {*}Lack of Traffic{*}: If there is no new traffic on the {{GlobalKTable}} 
> topic, there is no mechanism to force the creation of the checkpoint file. 
> The state remains uncheckpointed. Ref 
> [https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/StateManagerUtil.java#L78C35-L78C72]
>  # {*}Instance Restart{*}: If the new instance (pod X) is restarted (due to 
> an update, for example) before 10K events have been processed, it will have to 
> restore the entire state from the topic again, leading to the same 
> time-consuming restoration process. This issue persists across restarts.
> IMO, checkpointing during the restore process and upon completion/close is 
> missing in the current implementation.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
