You can add the following configuration to enable compression:
If the cluster names are c1 and c2, then:
c1->c2.producer.override.compression.type=lz4
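For context, a minimal sketch of how this per-flow override might sit in an MM2 properties file (the cluster aliases c1/c2 come from the example above; the bootstrap addresses and topic pattern are placeholders):

```properties
clusters = c1, c2
c1.bootstrap.servers = c1-broker:9092
c2.bootstrap.servers = c2-broker:9092

c1->c2.enabled = true
c1->c2.topics = .*

# Per-flow producer override: compress records written to the target cluster
c1->c2.producer.override.compression.type = lz4
```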
Thanks,
Nitin
On Thu, Jul 9, 2020 at 11:30 AM Iftach Ben-Yosef
wrote:
> Hello Nitin, I have been unable to successfully set up producer
Hi Nitin,
Following your suggestion, I tried to set this up in our test environment.
We have 'test' as source and 'dev' as destination:
test->dev.producer.override.compression.type=gzip
I output the logs from MM2 to a file. I still see the compression set to
none.
grep compre /tmp/log
compression.typ
Hi Liam,
Thanks for the response.
As we are using Spark Structured Streaming, the commit won't happen on the
Kafka side. For checkpointing, we are using HDFS.
We expected the kafka-consumer-groups.sh CLI to return LOG-END-OFFSET
with partition details. However, it didn't display anyth
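For reference, a typical invocation to describe a consumer group (the bootstrap address and group name here are placeholders):

```shell
# Describe a group: prints CURRENT-OFFSET, LOG-END-OFFSET and LAG per partition
bin/kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --group my-spark-app
```

Note that because Spark Structured Streaming tracks offsets in its own checkpoint (HDFS here) rather than committing them to Kafka, the group may have no committed offsets for the tool to display.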
If you load data into a KTable or GlobalKTable, it's expected that the
data is partitioned by key, and that records with the same key have
non-descending timestamps.
If a record with, say, key A and timestamp 5 is put into the table,
and later a record with key A and timestamp 4 is put into th
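The upsert semantics described above can be sketched outside of Kafka Streams as a dictionary keyed by record key. This is an illustrative model only, not Kafka Streams code; the policy modeled here (drop an update whose timestamp is older than the stored one for the same key) is one possible handling of out-of-order records, and the function name is an invention for the sketch.

```python
def apply_record(table, key, value, ts):
    """Upsert (value, ts) for key, dropping out-of-order updates.

    Illustrative model of timestamp-aware table semantics: if the table
    already holds a record for this key with a newer timestamp, the
    incoming record is ignored.
    """
    current = table.get(key)
    if current is not None and ts < current[1]:
        return False  # out-of-order: older timestamp, dropped
    table[key] = (value, ts)
    return True

table = {}
apply_record(table, "A", "v1", 5)  # accepted
apply_record(table, "A", "v2", 4)  # dropped: timestamp 4 < 5
# table still holds ("v1", 5) for key A
```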