I have a producer that fails to get metadata when it first attempts to send
a record to a certain topic. It fails on
ClusterAndWaitTime clusterAndWaitTime = waitOnMetadata(record.topic(),
record.partition(), maxBlockTimeMs); [0]
and yields:
org.apache.kafka.common.errors.TimeoutException
It appears, at least according to the debug logs, that the metadata request
is only sent after the metadata update has already timed out:
[... stack trace omitted ...]
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to
update metadata after 5000 ms.
[2017-04-29 18:47:24,264] DEBUG (org.apache.kaf
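For reference, here is a minimal sketch of the client-side setup that hits
this path; the broker address and topic name are placeholders I made up, and
the 5000 ms matches the max.block.ms implied by the timeout message above:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.TimeoutException;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class MetadataTimeoutDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            // max.block.ms bounds how long send() may block in waitOnMetadata();
            // 5000 matches the "after 5000 ms" in the stack trace above.
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // The first send() to a topic blocks until metadata for that topic
                // is available; if no update arrives within max.block.ms, send()
                // throws TimeoutException synchronously.
                producer.send(new ProducerRecord<>("some-topic", "key", "value"));
            } catch (TimeoutException e) {
                System.err.println("Metadata fetch timed out: " + e.getMessage());
            }
        }
    }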
Just a follow-up: we identified a bug in the "skipped records" metric, so
the reported value is not correct.
On 4/28/17 9:12 PM, Matthias J. Sax wrote:
> Ok. That makes sense.
>
> Question: why do you use .aggregate() instead of .count()?
>
> Also, can you share the code of your AggregatorFunction?
Thanks for the update, Matthias! And sorry for the delayed response.
We use .aggregate() rather than .count() because we want to count the number
of unique values of a particular field in the message: we add that field's
value to a HashSet and then take the size of the HashSet.
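For concreteness, a minimal sketch of that approach in the current Streams
DSL; the topic name and field extraction are placeholders, and a custom
Serde for the HashSet is assumed since Kafka ships no built-in one:

    import java.util.HashSet;
    import org.apache.kafka.common.serialization.Serde;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;

    public class UniqueFieldCount {
        // Placeholder: extract the field whose distinct values we count.
        static String extractField(String value) {
            return value.split(",")[0];
        }

        public static KTable<String, Integer> build(StreamsBuilder builder,
                                                    Serde<HashSet<String>> setSerde) {
            KStream<String, String> stream = builder.stream("input-topic"); // placeholder
            return stream
                .groupByKey()
                .aggregate(
                    HashSet::new,                   // initializer: empty set per key
                    (key, value, set) -> {          // aggregator: collect distinct values
                        set.add(extractField(value));
                        return set;
                    },
                    Materialized.with(Serdes.String(), setSerde))
                .mapValues(HashSet::size);          // the unique count is the set size
        }
    }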