zhijiangW commented on a change in pull request #11814:
URL: https://github.com/apache/flink/pull/11814#discussion_r415202630



##########
File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/serialization/SpillingAdaptiveSpanningRecordDeserializer.java
##########
@@ -597,12 +597,12 @@ private void addNextChunkFromMemorySegment(MemorySegment segment, int offset, in
 				throw new UnsupportedOperationException("Unaligned checkpoint currently do not support spilled " +
 					"records.");
 			} else if (recordLength != -1) {
-				int leftOverSize = leftOverLimit - leftOverStart;
+				int leftOverSize = leftOverData != null ? leftOverLimit - leftOverStart : 0;

Review comment:
       Thanks for finding this bug!
   
   I think the root cause is the state inconsistency between
   `{leftOverLimit, leftOverStart}` and `leftOverData`. During `#clear()`
   we only reset `leftOverData` to null, but we do not reset the derived
   `{leftOverLimit, leftOverStart}`, so the condition can only be checked
   reliably via `leftOverData`. Maybe we could also reset `{leftOverLimit,
   leftOverStart}` during `#clear()` to keep all of this state consistent.
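   
   As a minimal sketch of that alternative (the class name and surrounding
   structure here are assumed for illustration; only the field and method
   names come from the diff), resetting all three fields together in
   `#clear()` would let the use site drop the null guard:
   
   ```java
   import org.apache.flink.core.memory.MemorySegment;
   
   // Hypothetical sketch: keep the derived offsets consistent with
   // leftOverData by resetting all three fields together in clear().
   class LeftOverStateSketch {
   	private MemorySegment leftOverData;
   	private int leftOverStart;
   	private int leftOverLimit;
   
   	void clear() {
   		// Once leftOverData is null the derived offsets are meaningless,
   		// so zero them here instead of guarding at every use site.
   		leftOverData = null;
   		leftOverStart = 0;
   		leftOverLimit = 0;
   	}
   
   	void addNextChunkFromMemorySegment(MemorySegment segment, int offset, int numBytes) {
   		// With clear() keeping the state consistent, the size computation
   		// no longer needs the null check added in the diff:
   		int leftOverSize = leftOverLimit - leftOverStart;
   		// ... copy leftOverSize bytes, then consume the new chunk ...
   	}
   }
   ```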



