Hi All,
I am looking for help to resolve a MirrorMaker2 issue. Here is the exception
I am receiving at the Kafka Connect end during launch:
2020-06-19 05:21:45,006 INFO [Worker clientId=connect-1, groupId=mirrormaker2-cluster] Cluster ID: UYTj0E0PQOmG9PAb6axowg (org.apache.kafka.clients.Metadata)
We are using AWS MSK with Kafka 2.4.1 (and the same client version) and 3
brokers. We are seeing fairly frequent consumer offset-commit failures, as
shown in the example logs below. Things continue to work since the failures
are all retriable; however, I would like to improve this situation.
The issue occurs most often...
Pushkar,
You are not wrong. Indeed, any deserialization error that happens during
the poll() method will cause your code to be interrupted without much
information about which offset failed. A workaround would be to try to
parse the message contained in the SerializationException.
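A minimal sketch of that workaround, assuming the failing partition and
offset can be parsed out of the exception text (the message format, the
regex, and the seek-past-the-record recovery are assumptions, not a stable
contract; process() is a hypothetical handler):

    import java.time.Duration;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.errors.SerializationException;

    public class SkipPoisonRecords {
        // The exception message typically looks like:
        //   "Error deserializing key/value for partition my-topic-0 at offset 42 ..."
        private static final Pattern OFFSET_IN_MESSAGE =
                Pattern.compile("partition (.+)-(\\d+) at offset (\\d+)");

        static void pollSkippingPoisonRecords(KafkaConsumer<String, byte[]> consumer) {
            while (true) {
                try {
                    ConsumerRecords<String, byte[]> records =
                            consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, byte[]> record : records) {
                        process(record); // hypothetical handler
                    }
                } catch (SerializationException e) {
                    Matcher m = OFFSET_IN_MESSAGE.matcher(e.getMessage());
                    if (!m.find()) {
                        throw e; // message didn't match; nothing safe to skip
                    }
                    TopicPartition tp = new TopicPartition(
                            m.group(1), Integer.parseInt(m.group(2)));
                    // Step over the bad record and resume consumption.
                    consumer.seek(tp, Long.parseLong(m.group(3)) + 1);
                }
            }
        }

        static void process(ConsumerRecord<String, byte[]> record) {
            // hypothetical processing logic
        }
    }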
Hello,
I'm playing around with Kafka to build a chat application. Here is what I
have done so far:
1. I have set up CDC (change data capture) on my PostgreSQL database.
2. The changes on my table get published to Kafka.
3. I have a Node socket.io server which listens to these messages on Kafka...
Hi Ricardo,
Probably this is more complicated than that, since the exception occurred
during Consumer.poll itself, so there is no ConsumerRecord for the
application to process, and hence the application doesn't know the offset
of the record where the poll failed.
Pushkar,
Kafka uses the concept of offsets to identify the order of each record
within the log. But this concept is more powerful than it looks.
Committed offsets are also used to keep track of which records have been
successfully read and which ones have not. When you commit an offset...
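As a minimal sketch, with auto-commit disabled (enable.auto.commit=false)
the committed position only advances after the records have really been
handled; the topic name and process() are placeholders:

    import java.time.Duration;
    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ManualCommitExample {
        static void run(KafkaConsumer<String, String> consumer) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // hypothetical handler
                }
                // Only now is the position marked as "successfully read"; a
                // crash before this line re-delivers the batch after restart.
                consumer.commitSync();
            }
        }

        static void process(ConsumerRecord<String, String> record) {
            // hypothetical processing logic
        }
    }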
Hemant,
This behavior might be the result of the version of AK (Apache Kafka)
that you are using. Before AK 2.4, the default behavior of the
DefaultPartitioner was to load-balance data production across the
partitions, as you described. But it was found that this behavior could
cause performance problems (smaller batches and higher latency), which is
why AK 2.4 switched keyless records to sticky partitioning (KIP-480).
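If the old spread-across-partitions behavior is what you want on AK 2.4+,
here is a sketch of configuring it explicitly with the RoundRobinPartitioner
that ships with the client (broker address and topic name are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RoundRobinPartitioner;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class RoundRobinProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                      StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                      StringSerializer.class.getName());
            // Restores even spreading of keyless records across partitions.
            props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG,
                      RoundRobinPartitioner.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("my-topic", "hello")); // placeholder
            }
        }
    }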
Pushkar,
"1. Would setting the cleanup policy to compact (and No delete) would always
retain the latest value for a key?" -- Yes. This is the purpose of this
setting.
"2. Does parameters like segment.bytes, retention.ms also play any role in
compaction?" -- They don't play any role in compacti
I am seeing the following exception in one of the broker log files.
The setup contains 3 brokers.
Environment: Windows.
I am OK with removing the files in the c:\tmp directory. However, I'm a
little curious to know why this broker got into this state and whether
there is a way to rectify the issue without deleting them.
Hi Gerbrand,
Thanks for the update. However, when I dig deeper into it, the issue is
caused by the Schema Registry not being accessible, so the error is coming
from the poll operation itself. This is not really a bad event; the event
can't be deserialized at all because the Schema Registry is unreachable.
Hello Pushkar,
I'd split records/events into categories based on the error:
- Events that can be parsed or otherwise handled correctly, i.e. good events
- Fatal errors, like parsing errors or empty or incorrect values, i.e. bad
events
- Non-fatal errors, like a database-connection failure, an I/O failure, etc.
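A minimal sketch of that split, assuming bad events are parked on a
dead-letter topic and non-fatal failures trigger a rewind-and-retry (the
topic name, the exception types, and the parse()/process() helpers are all
hypothetical):

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.TopicPartition;

    public class ErrorRouting {
        // Hypothetical domain types for the sketch.
        static class Event {}
        static class BadEventException extends RuntimeException {}
        static class TransientException extends RuntimeException {}

        static void handleBatch(ConsumerRecords<String, byte[]> records,
                                KafkaConsumer<String, byte[]> consumer,
                                KafkaProducer<String, byte[]> dlqProducer) {
            for (ConsumerRecord<String, byte[]> record : records) {
                try {
                    Event event = parse(record.value()); // good event if this succeeds
                    process(event);
                } catch (BadEventException bad) {
                    // Fatal for this record only: park it on a dead-letter
                    // topic and move on to the next record.
                    dlqProducer.send(new ProducerRecord<>(
                            "events.dlq", record.key(), record.value()));
                } catch (TransientException nonFatal) {
                    // Non-fatal (database/IO failure): rewind to this record
                    // so the next poll retries it.
                    consumer.seek(new TopicPartition(record.topic(), record.partition()),
                                  record.offset());
                    return;
                }
            }
        }

        static Event parse(byte[] value) { return new Event(); } // hypothetical
        static void process(Event event) {}                      // hypothetical
    }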
Hi All,
This is what I am observing: we have a consumer which polls data from a
topic, does the processing, and then polls again, continuously.
At one point, there was some bad data on the topic which could not be
consumed by the consumer, probably because it couldn't deserialize the
event...