vvcephei commented on a change in pull request #11581:
URL: https://github.com/apache/kafka/pull/11581#discussion_r767028987



##########
File path: streams/src/main/java/org/apache/kafka/streams/state/internals/StoreQueryUtils.java
##########
@@ -62,4 +71,28 @@ public static void updatePosition(
             position.withComponent(meta.topic(), meta.partition(), meta.offset());
         }
     }
+
+    public static boolean isPermitted(

Review comment:
      I was hesitant to add anything extra to the Position class, but there's no particular reason for that hesitation.
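For context, the idea behind a check like `isPermitted` could look something like the sketch below. This is hypothetical, not the actual `StoreQueryUtils` implementation: it assumes a position is modeled as topic → partition → offset, and treats a query as permitted only when the store's position has caught up to every component of the bound.

```java
import java.util.HashMap;
import java.util.Map;

public class PositionBoundCheck {
    // Hypothetical sketch: a position maps topic -> partition -> offset.
    // The check passes only if the store's position has reached (or passed)
    // every topic-partition offset named in the bound.
    public static boolean isPermitted(
        final Map<String, Map<Integer, Long>> position,
        final Map<String, Map<Integer, Long>> bound
    ) {
        for (final Map.Entry<String, Map<Integer, Long>> topic : bound.entrySet()) {
            final Map<Integer, Long> partitions = position.get(topic.getKey());
            if (partitions == null) {
                return false; // the store has seen nothing from this topic yet
            }
            for (final Map.Entry<Integer, Long> partition : topic.getValue().entrySet()) {
                final Long offset = partitions.get(partition.getKey());
                if (offset == null || offset < partition.getValue()) {
                    return false; // this partition is still behind the bound
                }
            }
        }
        return true;
    }

    public static void main(final String[] args) {
        final Map<String, Map<Integer, Long>> position = new HashMap<>();
        position.put("topic", new HashMap<>());
        position.get("topic").put(0, 5L);

        final Map<String, Map<Integer, Long>> bound = new HashMap<>();
        bound.put("topic", new HashMap<>());
        bound.get("topic").put(0, 3L);

        System.out.println(isPermitted(position, bound)); // true: 5 >= 3
        bound.get("topic").put(0, 9L);
        System.out.println(isPermitted(position, bound)); // false: 5 < 9
    }
}
```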

##########
File path: streams/src/test/java/org/apache/kafka/streams/integration/utils/IntegrationTestUtils.java
##########
@@ -125,22 +128,81 @@
         final long deadline = start + DEFAULT_TIMEOUT;
 
         do {
+            if (Thread.currentThread().isInterrupted()) {
+                fail("Test was interrupted.");
+            }
             final StateQueryResult<R> result = kafkaStreams.query(request);
-            if (result.getPartitionResults().keySet().containsAll(partitions)
-                || result.getGlobalResult() != null) {
+            if (result.getPartitionResults().keySet().containsAll(partitions)) {
                 return result;
             } else {
-                try {
-                    Thread.sleep(100L);
-                } catch (final InterruptedException e) {
-                    throw new RuntimeException(e);
-                }
+                sleep(100L);
             }
         } while (System.currentTimeMillis() < deadline);
 
        throw new TimeoutException("The query never returned the desired partitions");
     }
 
+    /**
+     * Repeatedly runs the query until the response is valid, and then returns the response.
+     * <p>
+     * Validity in this case means that the response position is up to the specified bound.
+     * <p>
+     * Once position bounding is generally supported, we should migrate tests to wait on the expected response position.
+     */
+    public static <R> StateQueryResult<R> iqv2WaitForResult(

Review comment:
       Thanks!
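The retry pattern in the diff above generalizes nicely. A minimal sketch (hypothetical names, not the `IntegrationTestUtils` code itself): poll until the result passes a validity check, fail fast on thread interruption, and give up at a deadline.

```java
import java.time.Duration;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class RetryUntilValid {
    // Hypothetical sketch of the retry loop in the diff above: run the
    // query repeatedly until its result passes the validity check, checking
    // for interruption before each attempt and timing out at the deadline.
    public static <R> R retryUntilValid(
        final Supplier<R> query,
        final Predicate<R> isValid,
        final Duration timeout
    ) throws InterruptedException {
        final long deadline = System.currentTimeMillis() + timeout.toMillis();
        do {
            if (Thread.currentThread().isInterrupted()) {
                throw new InterruptedException("Test was interrupted.");
            }
            final R result = query.get();
            if (isValid.test(result)) {
                return result;
            }
            Thread.sleep(100L); // back off briefly before the next attempt
        } while (System.currentTimeMillis() < deadline);
        throw new RuntimeException("The query never returned a valid result");
    }

    public static void main(final String[] args) throws InterruptedException {
        final int[] calls = {0};
        // The third poll is the first one that satisfies the predicate.
        final Integer result =
            retryUntilValid(() -> ++calls[0], v -> v >= 3, Duration.ofSeconds(5));
        System.out.println(result); // 3
    }
}
```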

##########
File path: streams/src/main/java/org/apache/kafka/streams/state/internals/CachingKeyValueStore.java
##########
@@ -107,10 +107,10 @@ private void putAndMaybeForward(final ThreadCache.DirtyEntry entry,
             // we can skip flushing to downstream as well as writing to underlying store
             if (rawNewValue != null || rawOldValue != null) {
                 // we need to get the old values if needed, and then put to store, and then flush
-                wrapped().put(entry.key(), entry.newValue());
-
                 final ProcessorRecordContext current = context.recordContext();
                 context.setRecordContext(entry.entry().context());
+                wrapped().put(entry.key(), entry.newValue());

Review comment:
      I'm not sure I follow; we always did set the context, we just did it (incorrectly) between the `wrapped().put` and forwarding downstream. Why should we do the `wrapped().put` with the wrong context and then forward with the right context?
   
   Taking your example sequence, the correct offset for record `A` is 0.
   The old behavior was that we would do `wrapped().put(A)` with offset 2 and then forward `A` with offset 0.
   The new behavior is that we do `wrapped().put(A)` with offset 0 and then forward `A` with offset 0.
   
   There's no scenario in which we would forward `A` with offset 2.
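The ordering point can be illustrated with a toy model (hypothetical names, not the `CachingKeyValueStore` code: the "context" is reduced to a single offset, and `put`/`forward` just record what offset they observed). The fix is that the cached entry's context is swapped in before the write to the underlying store, so both the put and the downstream forward see the entry's original offset:

```java
import java.util.ArrayList;
import java.util.List;

public class ContextOrdering {
    // Hypothetical toy model of the fix discussed above: the record context
    // (reduced here to a single offset) must be swapped in BEFORE writing to
    // the underlying store, so the put and the downstream forward both see
    // the offset the entry was cached with, not the task's current offset.
    static long currentOffset;
    static final List<String> log = new ArrayList<>();

    static void put(final String key) {
        log.add("put " + key + " @ " + currentOffset);
    }

    static void forward(final String key) {
        log.add("forward " + key + " @ " + currentOffset);
    }

    static void flushEntry(final String key, final long entryOffset) {
        final long saved = currentOffset; // remember the task's current context
        currentOffset = entryOffset;      // restore the cached entry's context
        put(key);                         // put with the entry's offset...
        forward(key);                     // ...and forward with the same offset
        currentOffset = saved;            // reinstate the task's context
    }

    public static void main(final String[] args) {
        currentOffset = 2L;      // the stream task is currently at offset 2
        flushEntry("A", 0L);     // entry A was cached at offset 0
        System.out.println(log); // [put A @ 0, forward A @ 0]
    }
}
```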

##########
File path: streams/src/main/java/org/apache/kafka/streams/query/StateQueryResult.java
##########
@@ -97,11 +97,15 @@ public void addResult(final int partition, final QueryResult<R> r) {
      * prior observations.
      */
     public Position getPosition() {
-        Position position = Position.emptyPosition();
-        for (final QueryResult<R> r : partitionResults.values()) {
-            position = position.merge(r.getPosition());
+        if (globalResult != null) {

Review comment:
       That's correct.
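For readers following along: the per-partition branch of `getPosition` folds the partition results together with `Position#merge`. A merge of this shape (a hypothetical sketch, assuming a position is a topic → partition → offset map and the merge keeps the larger offset per component) could look like:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PositionMerge {
    // Hypothetical sketch of merging per-partition positions into one
    // overall position: for each topic-partition present in either input,
    // keep the maximum offset observed.
    public static Map<String, Map<Integer, Long>> merge(
        final Map<String, Map<Integer, Long>> left,
        final Map<String, Map<Integer, Long>> right
    ) {
        final Map<String, Map<Integer, Long>> merged = new HashMap<>();
        for (final Map<String, Map<Integer, Long>> source : List.of(left, right)) {
            for (final Map.Entry<String, Map<Integer, Long>> topic : source.entrySet()) {
                final Map<Integer, Long> partitions =
                    merged.computeIfAbsent(topic.getKey(), k -> new HashMap<>());
                for (final Map.Entry<Integer, Long> partition : topic.getValue().entrySet()) {
                    // Keep the larger of the two offsets for this component.
                    partitions.merge(partition.getKey(), partition.getValue(), Math::max);
                }
            }
        }
        return merged;
    }

    public static void main(final String[] args) {
        final Map<String, Map<Integer, Long>> a = Map.of("t", Map.of(0, 5L, 1, 2L));
        final Map<String, Map<Integer, Long>> b = Map.of("t", Map.of(0, 3L, 1, 7L));
        System.out.println(merge(a, b).get("t").get(0)); // 5
        System.out.println(merge(a, b).get("t").get(1)); // 7
    }
}
```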




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
