vvcephei commented on a change in pull request #11581:
URL: https://github.com/apache/kafka/pull/11581#discussion_r767030936



##########
File path: streams/src/main/java/org/apache/kafka/streams/state/internals/CachingKeyValueStore.java
##########
@@ -107,10 +107,10 @@ private void putAndMaybeForward(final ThreadCache.DirtyEntry entry,
             // we can skip flushing to downstream as well as writing to underlying store
             if (rawNewValue != null || rawOldValue != null) {
                 // we need to get the old values if needed, and then put to store, and then flush
-                wrapped().put(entry.key(), entry.newValue());
-
                 final ProcessorRecordContext current = context.recordContext();
                 context.setRecordContext(entry.entry().context());
+                wrapped().put(entry.key(), entry.newValue());

Review comment:
       I'm not sure I follow; we always did set the context, we just did it (incorrectly) between the `wrapped().put` and forwarding downstream. Why should we do the `wrapped().put` with the wrong context and then forward with the right context?
   
   Taking your example sequence, the correct offset for record `A` is 0.
   The old behavior was that we would do `wrapped().put(A)` with offset 2 and then forward `A` with offset 0.
   The new behavior is that we do `wrapped().put(A)` with offset 0 and then forward `A` with offset 0.
   
   There's no scenario in which we would forward `A` with offset 2.
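   
   To make the offset bookkeeping concrete, here is a minimal, self-contained sketch of the ordering above. The types and names (`ContextOrderingSketch`, `RecordContext`, `DirtyEntry`, `putToUnderlyingStore`, `forwardDownstream`) are hypothetical stand-ins, not the actual `CachingKeyValueStore`/`ProcessorContextImpl` code; the only point is that the cached entry's captured context is installed before both the underlying put and the downstream forward, and then restored.

```java
// Minimal, self-contained sketch (hypothetical simplified types, not the real
// Kafka Streams API) of the ordering being discussed: a dirty cache entry
// carries the record context captured when it was cached, and that context is
// installed before BOTH the write to the underlying store and the downstream
// forward, then the processing record's context is restored.
final class ContextOrderingSketch {

    // stand-in for ProcessorRecordContext: only the offset matters here
    record RecordContext(long offset) {}

    // stand-in for ThreadCache.DirtyEntry: cached value plus the context captured at put time
    record DirtyEntry(String key, String newValue, RecordContext context) {}

    // stand-in for the processor's mutable "current" record context
    private RecordContext currentContext = new RecordContext(-1L);

    void putAndMaybeForward(final DirtyEntry entry) {
        final RecordContext current = currentContext;            // e.g. offset 2: record C being processed
        try {
            currentContext = entry.context();                     // switch to offset 0: record A, cached earlier
            putToUnderlyingStore(entry.key(), entry.newValue());  // new ordering: put happens under A's context
            forwardDownstream(entry.key(), entry.newValue());     // forward also happens under A's context
        } finally {
            currentContext = current;                             // restore record C's context
        }
    }

    private void putToUnderlyingStore(final String key, final String value) {
        System.out.printf("put     %s=%s at offset %d%n", key, value, currentContext.offset());
    }

    private void forwardDownstream(final String key, final String value) {
        System.out.printf("forward %s=%s at offset %d%n", key, value, currentContext.offset());
    }

    public static void main(final String[] args) {
        final ContextOrderingSketch sketch = new ContextOrderingSketch();
        sketch.currentContext = new RecordContext(2L); // currently processing the record at offset 2
        sketch.putAndMaybeForward(new DirtyEntry("A", "a-value", new RecordContext(0L)));
        // both lines print offset 0; the old ordering would have printed offset 2 for the put
        // and offset 0 for the forward
    }
}
```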



