That sounds expected. Note that if you use exactly-once, on commit/abort
of a transaction the broker writes a commit/abort marker into the
partitions. Each marker occupies one message/record slot.
Thus you have "gaps" in the message offsets, and in your case I assume
that you have 5 commit markers.
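A toy sketch of that arithmetic (plain Python, not a Kafka API; the figure of 5 markers is the guess from above):

```python
# Toy illustration of the offset "gaps" left by transaction markers.
# Not a Kafka API call: just the arithmetic described above.

def visible_records(log_end_offset: int, marker_count: int) -> int:
    """Each commit/abort marker occupies one offset slot, so the number
    of actual data records is the end offset minus the markers."""
    return log_end_offset - marker_count

# With ending offset 17 and 5 markers, only 12 data records are readable.
print(visible_records(17, 5))  # -> 12
```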
Hi Srikanth,
I looked at the source code once again, and after discussing with other committers I
now remember why we designed it that way: when you set the
HandlerResponse to FAIL, it means that once a "poison record" is received,
we stop the world by throwing the exception all the way up. And hence at tha
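That fail-fast behaviour could be sketched as a plain loop; the names FAIL/CONTINUE mirror the handler responses, but the code is illustrative, not the Kafka Streams API:

```python
# Sketch of FAIL vs CONTINUE semantics as a plain Python loop.
# A None record stands in for a record that fails deserialization.

FAIL, CONTINUE = "FAIL", "CONTINUE"

def process(records, handler_response):
    """FAIL stops the world by raising; CONTINUE skips and counts."""
    skipped, processed = 0, []
    for record in records:
        if record is None:  # "poison record": deserialization error
            if handler_response == FAIL:
                raise ValueError("poison record: stopping the world")
            skipped += 1
            continue
        processed.append(record)
    return processed, skipped

# CONTINUE skips the bad record; FAIL would raise immediately instead.
print(process([1, None, 2], CONTINUE))  # -> ([1, 2], 1)
```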
Hello, I have the following problem with a kafka-streams Scala app and exactly-once
delivery guarantee:
A topic filled by a kafka-streams app (exactly-once enabled) has the wrong ending
offset. Broker and Streams API version is 0.11. When I run
*kafka.tools.GetOffsetShell*, it gives ending offset 17, but in top
Hello Srikanth,
Thanks for reporting this. As I checked the code,
skippedDueToDeserializationError is effectively only recorded when the
DeserializationHandlerResponse is not set to FAIL. I agree it does not
exactly match the documentation's guidance, and I will try to file a JIRA
and fix it.
As for
Hi all,
Does somebody know if it's possible to retrieve Kafka message headers
(since 0.11) using the Confluent REST proxy?
Thanks in advance,
Thanks a lot. I think that's the only way that ensures GDPR compliance.
In a second iteration, my thought is to anonymize instead of removing,
perhaps identifying PII fields using Avro custom types.
Thanks again,
2017-11-28 15:54 GMT+01:00 Ben Stopford :
> You should also be able to manage this
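The anonymize-instead-of-remove idea above could look roughly like this sketch. The field names and the `PII_FIELDS` set are illustrative assumptions; in practice the PII fields would be identified via the Avro custom types mentioned. (Note: hashing is pseudonymization rather than full anonymization, which matters for GDPR.)

```python
import hashlib

# Illustrative: which fields count as PII would really come from the
# Avro schema, not a hard-coded set.
PII_FIELDS = {"name", "email"}

def anonymize(record: dict) -> dict:
    """Replace PII values with a stable hash so joins/keys still line up,
    while the original value is no longer directly readable."""
    return {
        k: hashlib.sha256(v.encode()).hexdigest() if k in PII_FIELDS else v
        for k, v in record.items()
    }

rec = {"name": "Alice", "email": "a@example.com", "country": "ES"}
out = anonymize(rec)
print(out["country"])           # unchanged non-PII field
print(out["name"] != "Alice")   # PII field is replaced
```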
Hello,
As per the docs, when LogAndContinueExceptionHandler is used it should set
the skippedDueToDeserializationError-rate metric to indicate deserialization
errors.
I notice that it is never set; instead, skipped-records-rate is set. My
understanding was that skipped-records-rate is set due to timestamp
extraction
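The documented behaviour (which, per this thread, is not what actually happens) could be modelled roughly as below; the metric names come from the thread, while the mechanics are an illustrative assumption:

```python
# Toy model of the two skip counters: one for deserialization errors
# (LogAndContinueExceptionHandler path), one for records dropped because
# the timestamp extractor returned a negative timestamp.

metrics = {
    "skipped-records-rate": 0,
    "skippedDueToDeserializationError-rate": 0,
}

def handle_record(timestamp, deserialization_ok):
    """Return True if the record is processed, False if skipped."""
    if not deserialization_ok:
        metrics["skippedDueToDeserializationError-rate"] += 1
        return False
    if timestamp < 0:
        metrics["skipped-records-rate"] += 1
        return False
    return True

handle_record(timestamp=-1, deserialization_ok=True)   # timestamp skip
handle_record(timestamp=5, deserialization_ok=False)   # deserialization skip
print(metrics)  # each counter incremented once
```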
I am really not sure about max.poll.interval.ms; do we really need it?
Consumer liveness is already ensured by
session.timeout.ms/heartbeat.interval.ms.
max.poll.interval.ms is 5 minutes by default;
session.timeout.ms is 10 seconds.
If max.poll.interval.ms is exceeded, then we kill the thread. So, let's sa
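A toy model of why the two timeouts are separate, assuming the background heartbeat thread keeps running while `poll()` is stuck (values mirror the defaults quoted above; this is an illustration, not the client's real logic):

```python
# session.timeout.ms is serviced by the background heartbeat thread, so
# it only detects a dead *process*. max.poll.interval.ms detects a live
# process whose *processing* thread has stopped calling poll().

SESSION_TIMEOUT_MS = 10_000       # session.timeout.ms (10 s)
MAX_POLL_INTERVAL_MS = 300_000    # max.poll.interval.ms (5 min)

def consumer_evicted(ms_since_heartbeat, ms_since_poll):
    session_expired = ms_since_heartbeat > SESSION_TIMEOUT_MS
    poll_expired = ms_since_poll > MAX_POLL_INTERVAL_MS
    return session_expired or poll_expired

# Processing stuck for 6 minutes but heartbeats still flowing:
# only max.poll.interval.ms catches this case.
print(consumer_evicted(ms_since_heartbeat=3_000, ms_since_poll=360_000))
# Whole process dead (no heartbeats either): session timeout catches it.
print(consumer_evicted(ms_since_heartbeat=20_000, ms_since_poll=20_000))
```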
Hi,
I am talking w.r.t. Kafka 1.0. Reducing the poll interval and reducing the
number of records polled are always options.
I wanted to explore whether there are some other options apart from these; in
the case of a GC pause, neither of the above-mentioned options will help.
-Sameer.
On Fri, Jan 26,
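For reference, the two knobs discussed above as consumer properties; the values here are illustrative, not recommendations:

```properties
# Fewer records per poll() so each batch completes sooner
max.poll.records=100
# Maximum delay between polls before the consumer is considered failed
max.poll.interval.ms=300000
```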