Hey Nitin,
I have already done that. I used the dump-log-segments option, and I can see
the codec used is snappy/gzip/lz4. My question is: only gzip is actually giving
me compression. The rest are equivalent to uncompressed storage.
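For context, the dump I ran looks roughly like this (the segment path is illustrative, not my actual path):

```shell
# Dump a log segment and print per-batch metadata, which includes the
# compression codec used for each batch (path below is an example only):
./bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /var/kafka-logs/my-topic-0/00000000000000000000.log \
  --print-data-log
```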
On Wed, May 12, 2021 at 11:16 AM nitin agarwal wrote:
You can read the data from the disk and see the compression type.
https://thehoard.blog/how-kafkas-storage-internals-work-3a29b02e026
Thanks,
Nitin
On Wed, May 12, 2021 at 11:10 AM Shantanu Deshmukh wrote:
I am trying snappy compression on my producer. Here's my setup
Kafka - 2.0.0
Spring-Kafka - 2.1.2
Here's my producer config for the compressed producer:
configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
configProps.put(ProducerConfig.KEY_
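The config above is cut off by the digest. For illustration, a self-contained sketch of producer properties with snappy compression enabled looks like this (the serializer classes and the batching values are assumptions about the setup, not the original config):

```java
import java.util.Properties;

public class CompressedProducerConfig {
    // Build producer properties with snappy compression enabled.
    // Keys are standard Kafka producer config names; the StringSerializer
    // classes and tuning values below are assumptions for illustration.
    public static Properties build(String bootstrapServer) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServer);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Compression is a producer-side setting, applied per batch:
        props.put("compression.type", "snappy");
        // Snappy and LZ4 compress per batch and gain little on tiny batches,
        // so give the producer room to batch (values are illustrative):
        props.put("linger.ms", "20");
        props.put("batch.size", "65536");
        return props;
    }
}
```

Since the codec is applied to whole record batches, how much batching actually happens can change the observed compression ratio considerably.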
Hi All,
Please let me know if anyone is working on this.
If you are not the correct contact, please point me to the right one.
Thanks & Regards,
Laxmikant Shete
Sr. Network Systems Engineer
Direct: +91-20-40175476
From: Shete, Laxmikant
Sent: Sunday, May 9, 2021 5:05 PM
To: webmas...@apache.or
Hello Pietro,
1) If you are using the Streams DSL with an aggregation, it would
repartition the input streams by the aggregation field for data
parallelism, and hence multiple instances would be able to do the
aggregation in parallel and independently with correct results.
2) Short answer is "prob
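The reasoning in point (1) can be sketched without Kafka itself: after repartitioning on the aggregation field, every record with the same field value hashes to the same partition, so each instance sees all records for its groups and can aggregate independently. The hash scheme below mirrors hash-based partitioning; the "region" values are an assumed aggregation field, not from the original thread:

```java
import java.util.*;

public class RepartitionSketch {
    // Non-negative hash modulo partition count, as in hash partitioners.
    static int partitionFor(String aggregationKey, int numPartitions) {
        return (aggregationKey.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 4;
        // Records: {original key, aggregation field value} (illustrative).
        String[][] records = {
            {"k1", "eu"}, {"k2", "us"}, {"k3", "eu"}, {"k4", "us"}
        };
        // Group records by their partition after repartitioning on the field.
        Map<Integer, List<String>> byPartition = new HashMap<>();
        for (String[] r : records) {
            int p = partitionFor(r[1], partitions);
            byPartition.computeIfAbsent(p, x -> new ArrayList<>()).add(r[1]);
        }
        // All "eu" records land together, all "us" records land together,
        // so each partition's instance can aggregate its groups on its own.
        for (Map.Entry<Integer, List<String>> e : byPartition.entrySet()) {
            System.out.println("partition " + e.getKey() + ": " + e.getValue());
        }
    }
}
```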
Hi!
I've been using the ./bin/kafka-producer-perf-test.sh script to
experiment with the quota settings to try to understand them better.
With the following command, I'm producing one single-byte record per
second, and forcing each produce request to only have one record:
./bin/kafka-producer-per
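The command is cut off above; I won't guess the exact flags used, but an invocation of this general shape produces one single-byte record per second, with batch.size=1 forcing each produce request down to a single record (topic name and bootstrap server are assumptions):

```shell
# One 1-byte record per second; batch.size=1 (bytes) means a batch fills
# after a single record, so each produce request carries exactly one record.
./bin/kafka-producer-perf-test.sh \
  --topic quota-test \
  --num-records 60 \
  --record-size 1 \
  --throughput 1 \
  --producer-props bootstrap.servers=localhost:9092 batch.size=1 linger.ms=0
```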