ableegoldman commented on code in PR #12235:
URL: https://github.com/apache/kafka/pull/12235#discussion_r888670820
##########
streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordCollectorImpl.java:
##########
@@ -199,6 +201,7 @@ public <K, V> void send(final String topic,
                    log.trace("Failed record: (key {} value {} timestamp {}) topic=[{}] partition=[{}]", key, value, timestamp, topic, partition);
}
});
+        return recordSizeInBytes(keyBytes == null ? 0 : keyBytes.length, valBytes == null ? 0 : valBytes.length, topic, headers);
Review Comment:
I did it like this to avoid an extra, unnecessary null check for consumer
records specifically, since they already track their serialized size in bytes,
unlike producer records. Unfortunately the two don't inherit from a common
interface/class -- but I added separate middle-man methods to handle each of
them and moved the null check for the producer case there. Should be addressed
now.
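
A minimal sketch of the middle-man approach described above (method and class
names here are illustrative, not the actual Kafka Streams API, and the shared
computation is simplified to key + value size, omitting the topic/headers
contribution from the real `recordSizeInBytes`):

```java
public class RecordSizeSketch {

    // Shared computation used by both record types (simplified).
    static long recordSizeInBytes(final long keyBytes, final long valueBytes) {
        return keyBytes + valueBytes;
    }

    // Producer path: serialized key/value byte arrays may be null,
    // so the null check lives in this one middle-man method.
    static long producerRecordSizeInBytes(final byte[] keyBytes, final byte[] valBytes) {
        return recordSizeInBytes(
            keyBytes == null ? 0 : keyBytes.length,
            valBytes == null ? 0 : valBytes.length);
    }

    // Consumer path: consumer records already track their serialized
    // sizes as plain ints, so no null check is needed here.
    static long consumerRecordSizeInBytes(final int serializedKeySize,
                                          final int serializedValueSize) {
        return recordSizeInBytes(serializedKeySize, serializedValueSize);
    }

    public static void main(String[] args) {
        // Null producer key contributes 0 bytes.
        System.out.println(producerRecordSizeInBytes(null, new byte[]{1, 2, 3})); // 3
        System.out.println(consumerRecordSizeInBytes(4, 5)); // 9
    }
}
```

This keeps the null handling out of the hot consumer path while both record
types funnel into the same size computation.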
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]