Hi Luke,
Unfortunately we are not able to see any log information.
We already increased the log level, but even then no logs are written.
Best regards,
Daniel
***
From: Luke Chen
Date: Tue, 15 Mar 2022 14:43:56 +0800
Subject: Re:
Hi,
We are looking forward to the Kafka upgrade from log4j to log4j 2 in the
upcoming release scheduled for April. The upcoming release plan can be seen
via the links below.
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+3.2.0
https://issues.apache.org/jira/browse/KAFKA-9
***
Hi Dan,
Okay, so if you're looking for low latency, I'm guessing that you're using
a very low linger.ms in the producers? Also, what format are the records?
If they're already in a binary format like Protobuf or Avro, unless they're
composed largely of strings, compression may offer little benefit.
***
We're using protos but there are still a bunch of custom fields where
clients specify redundant strings.
My local test shows a 75% reduction in size if I use zstd or gzip. I
care most about Kafka storage costs right now.
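For context, this is roughly what my test producer looks like. A minimal sketch only: the broker address, topic name, payload, and linger.ms value below are placeholders, and compression.type can be any of gzip/snappy/lz4/zstd.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class CompressionTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        // Compression is applied per batch on the producer side.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "zstd");
        // A low linger.ms keeps latency down but produces smaller batches,
        // which generally reduces the compression ratio somewhat.
        props.put(ProducerConfig.LINGER_MS_CONFIG, "5");

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", serializedProto()));
            producer.flush();
        }
    }

    // Placeholder for the serialized protobuf payload used in the test.
    private static byte[] serializedProto() {
        return new byte[]{1, 2, 3};
    }
}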
On Tue, Mar 15, 2022 at 2:25 PM Liam Clarke-Hutchinson
wrote:
> Hi
***
Sounds like a goer then :) Those strings in the protobuf always get ya,
can't use clever encodings for them like you can with numbers.
On Wed, 16 Mar 2022 at 11:29, Dan Hill wrote:
> We're using protos but there are still a bunch of custom fields where
> clients specify redundant strings.
>
> My
***
Oh, and meant to say, zstd is a good compromise between CPU and compression
ratio; IIRC it was far less costly on CPU than gzip.
So yeah, I generally recommend setting your topic's compression to
"producer", and then going from there.
On Wed, 16 Mar 2022 at 11:49, Liam Clarke-Hutchinson
wrote:
***
Thanks, Liam! You've convinced me to go with zstd. I'm using an older version of
Flink that uses an older Kafka Producer (so zstd isn't available in it).
I'll switch to zstd when I upgrade.
On Tue, Mar 15, 2022 at 3:52 PM Liam Clarke-Hutchinson
wrote:
> Oh, and meant to say, zstd is a good compromise b
***
Trying to find a good sample of what consumer settings, besides setting
ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG to
org.apache.kafka.clients.consumer.CooperativeStickyAssignor,
are needed to make the rebalance happen cleanly. Unable to find any decent
documentation or code samples. I have
***
Hi Richard,
To use `CooperativeStickyAssignor`, no other special configuration is
required.
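For reference, a minimal consumer sketch (the group id, topic name, and bootstrap address below are only placeholders); the assignor is the only setting that changes:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CooperativeConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // With the cooperative protocol, a rebalance only revokes the partitions
        // that actually move, instead of revoking all of them first.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                CooperativeStickyAssignor.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.println(r.key() + " -> " + r.value()));
            }
        }
    }
}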
I'm not sure what `make the rebalance happen cleanly` means.
Did you find any problem during group rebalance?
Thank you.
Luke
On Wed, Mar 16, 2022 at 1:00 PM Richard Ney
wrote:
> Trying to find a g