Filed https://issues.apache.org/jira/browse/KAFKA-9141
On Thu, Oct 31, 2019 at 7:30 PM Chris Toomey wrote:
> I'm getting an OffsetOutOfRangeException accompanied by the log message
> "Updating global state failed. You can restart KafkaStreams to recover from
> this error." But I've restarted th
Definitely it's a good start, but we're currently building our
disaster/recovery processes and some topic contents are really important.
So, in paranoid mode, we want some guarantees that messages are not
corrupted. Are there some internals in the storage mechanisms during the
send and the replicati
How about a high watermark check?
Since consumers consume based on the HWM, the presence of the same HWM
should be a good checkpoint, no?
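For what it's worth, a minimal sketch of that idea, assuming you can reach
both clusters with a consumer and just compare per-partition end offsets
(topic name and bootstrap servers below are made up):

// Hedged sketch: compare per-partition end offsets (high watermarks)
// between the source and mirror clusters. Names are illustrative only.
// Note: with MM2 the mirror topic may be prefixed with the source alias.
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import java.util.*;

public class HwmCompare {
    static Map<TopicPartition, Long> endOffsets(String bootstrap, String topic) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        props.put("group.id", "hwm-check");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> tps = new ArrayList<>();
            consumer.partitionsFor(topic)
                    .forEach(pi -> tps.add(new TopicPartition(pi.topic(), pi.partition())));
            return consumer.endOffsets(tps);
        }
    }

    public static void main(String[] args) {
        Map<TopicPartition, Long> source = endOffsets("source:9092", "my-topic");
        Map<TopicPartition, Long> mirror = endOffsets("mirror:9092", "my-topic");
        source.forEach((tp, offset) ->
            System.out.println(tp + " source=" + offset + " mirror=" + mirror.get(tp)));
    }
}

Keep in mind the offsets only line up directly if mirroring started from
offset 0, so this is a checkpoint rather than a proof of identical content.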
Regards,
On Mon, 4 Nov 2019 at 22:53, Guillaume Arnaud wrote:
> Hi,
>
> I would like to compare the messages of an original topic with a mirrored
> topic in another
Hi,
I would like to compare the messages of an original topic with a mirrored topic
in another cluster to be sure that the content is the same.
I see that the checksum method in KafkaConsumer is deprecated:
https://kafka.apache.org/10/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsum
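Since that record checksum API is deprecated, one option is to compute your
own checksum over the raw key/value bytes while consuming both topics (with
the ByteArrayDeserializer) and compare those. A minimal sketch, not a
verified recipe:

// Hedged sketch: compute a CRC32 per record over the raw key/value bytes,
// so the same payload yields the same checksum on both clusters.
import org.apache.kafka.clients.consumer.ConsumerRecord;
import java.util.zip.CRC32;

static long recordChecksum(ConsumerRecord<byte[], byte[]> record) {
    CRC32 crc = new CRC32();
    if (record.key() != null) crc.update(record.key());
    if (record.value() != null) crc.update(record.value());
    return crc.getValue();
}

Comparing these per key (or per payload hash) is usually safer than
comparing per offset, since the mirrored topic's offsets won't necessarily
match the source's.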
I can verify that the above did take effect (kicking myself). It should be
the same for these too?
b.producer.batch.size = 1048576
b.producer.linger.ms = 30
b.producer.acks = 1
etc etc...
I also see that the properties can be overridden, so this routine:
* kill 1 MM2
* change the mm2.prope
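For reference, a hedged sketch of how those per-target producer overrides
might look in mm2.properties ("b" being the target cluster alias; the values
are just the ones from this thread, not recommendations):

# Hedged example mm2.properties fragment; "b" is the target cluster alias.
# Producer overrides applied to the connectors writing into cluster "b".
b.producer.batch.size = 1048576
b.producer.linger.ms = 30
b.producer.acks = all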
> BTW any ideas when 2.4 is being released
Looks like there are a few blockers still.
On Mon, Nov 4, 2019 at 2:06 PM Vishal Santoshi
wrote:
> I bet I have tested the "b.producer.acks" route. I will test again and let
> you know. Note that I resorted to hardcoding that value in the Sender and
>
I bet I have tested the "b.producer.acks" route. I will test again and let
you know. Note that I resorted to hardcoding that value in the Sender and
that alleviated the throttle I was seeing on consumption. BTW, any idea
when 2.4 is being released (I thought it was Oct 30th, 2019)...
On Mon, Nov
On 2019/10/10 16:06:39, Uma Maheswari wrote:
> I have created a topic with cleanup.policy set to compact. segment.ms and
> delete.retention.ms are also configured for the topic. Compaction is
> happening but records with null value are not removed. But when segment.bytes
> is configured, re
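For context, a minimal sketch of setting those compaction-related topic
configs programmatically (topic name and values are placeholders, not a
verified fix). The active segment is never compacted, so tombstones only
become removable after the segment holding them has rolled and
delete.retention.ms has elapsed:

// Hedged sketch: set compaction-related topic configs via the admin client.
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;
import java.util.*;

public class CompactionConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic =
                new ConfigResource(ConfigResource.Type.TOPIC, "compacted-topic");
            Map<ConfigResource, Collection<AlterConfigOp>> updates =
                Collections.singletonMap(topic, Arrays.asList(
                    new AlterConfigOp(new ConfigEntry("cleanup.policy", "compact"),
                        AlterConfigOp.OpType.SET),
                    new AlterConfigOp(new ConfigEntry("segment.ms", "600000"),
                        AlterConfigOp.OpType.SET),
                    new AlterConfigOp(new ConfigEntry("delete.retention.ms", "60000"),
                        AlterConfigOp.OpType.SET)));
            admin.incrementalAlterConfigs(updates).all().get();
        }
    }
}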
Hi Jamie,
It is enabled, because that is the default, but that is just coincidental.
My use case which reproduced this error was the following:
1. Launch a single Kafka broker & ZK node with docker (testcontainers-java)
2. Create a topic with 2 partitions by using the admin client. I block
durin
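A hedged sketch of step 2 above, assuming the plain Java admin client
(topic name, replication factor, and bootstrap address are made up):

// Hedged sketch: create the 2-partition topic and block until creation completes.
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import java.util.Collections;
import java.util.Properties;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            admin.createTopics(
                    Collections.singletonList(new NewTopic("test-topic", 2, (short) 1)))
                 .all().get();  // block until the broker has created the topic
        }
    }
}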
Vishal, b.producer.acks should work, as can be seen in the following unit
test with the similar producer property "client.id":
https://github.com/apache/kafka/blob/6b905ade0cdc7a5f6f746727ecfe4e7a7463a200/connect/mirror/src/test/java/org/apache/kafka/connect/mirror/MirrorMakerConfigTest.java#L182
Kee
Hi All,
Is there any formula for calculating the number of required fetch sessions?
I notice we have a lot of "fetch session doesn't exist" messages in the
consumer logs; my understanding is that this is caused by fetch sessions being
removed by the broker because we have more consumers than fetc
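For what it's worth, the broker's incremental fetch session cache is bounded
by max.incremental.fetch.session.cache.slots (default 1000), so a rough rule
of thumb is at least one slot per fetching client (consumers plus follower
fetchers) per broker. A hedged server.properties sketch, value made up:

# allow more concurrent incremental fetch sessions than the 1000 default
max.incremental.fetch.session.cache.slots=10000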
Hi Sean,
Out of interest, is auto topic creation enabled on the brokers?
Thanks,
Jamie
-Original Message-
From: Sean Glover
To: users
Sent: Mon, Nov 4, 2019 04:21 PM
Subject: Producer send blocking when destination partition does not exist
Hi,
I accidentally created a scenario where
Hi,
I accidentally created a scenario where I was attempting to produce a
record to a partition that did not exist, because I was manually overriding
the destination partition, and I noticed that the producer.send blocked for
60s (producer property max.block.ms). During this time the producer was
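For anyone hitting the same thing, a minimal sketch of working around it:
either lower max.block.ms so send() fails faster, or check the topic's
partition count before overriding the destination partition (topic name and
partition number below are made up):

// Hedged sketch: avoid blocking 60s when sending to a non-existent partition.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class SafePartitionSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("max.block.ms", "5000");  // fail faster than the 60s default

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            int wantedPartition = 5;
            int partitions = producer.partitionsFor("my-topic").size();
            if (wantedPartition < partitions) {
                producer.send(new ProducerRecord<>("my-topic", wantedPartition, "key", "value"));
            } else {
                System.err.println("Partition " + wantedPartition
                    + " does not exist (topic has " + partitions + ")");
            }
        }
    }
}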
Hi Matthias,
Could you help with the above issue? Or any suggestions?
Thanks a lot!
On Thu, Oct 31, 2019 at 4:00 PM Xiyuan Hu wrote:
>
> Hi Matthias,
>
> Some additional information: after I restart the app, it went into
> endless rebalancing. The join rate looks like the below attachment. It's
> basically re