Kafka Producer NetworkException and Timeout Exceptions

2017-11-08 Thread Shantanu Deshmukh
We are getting random NetworkExceptions and TimeoutExceptions in our production environment: Brokers: 3, Zookeepers: 3, Servers: 3, Kafka: 0.10.0.1, Zookeeper: 3.4.3. We are occasionally getting this exception in our producer logs: Expiring 10 record(s) for TOPIC:XX: 5608 ms has passed since bat
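
This "Expiring N record(s)" error means batches sat in the producer's buffer longer than request.timeout.ms before a send could complete, usually because a broker was slow to respond or a connection dropped. A minimal sketch of the producer settings commonly tuned for this, in Java; the broker addresses, topic name and values are illustrative assumptions, not recommendations from the thread:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ResilientProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Give slow brokers more time before a batch is expired client-side.
            props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "30000");
            // Retry transient NetworkExceptions instead of failing the send immediately.
            props.put(ProducerConfig.RETRIES_CONFIG, "5");
            props.put(ProducerConfig.ACKS_CONFIG, "all");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("your-topic", "key", "value"),
                        (metadata, exception) -> {
                            // Surface failures instead of silently dropping expired batches.
                            if (exception != null) {
                                exception.printStackTrace();
                            }
                        });
            }
        }
    }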

Frequent consumer rebalances, auto commit failures

2018-05-22 Thread Shantanu Deshmukh
ecified different CGs (consumer groups) for different topics' consumers. Even this is not helping. I have searched the web, checked my code, and tried many combinations of configuration, but still no luck. Please help me. Thanks & Regards, Shantanu Deshmukh

Fwd: Frequent consumer rebalances, auto commit failures

2018-05-23 Thread Shantanu Deshmukh
hen I specified different CGs for different topics' consumers. Even this is not helping. I have searched the web, checked my code, and tried many combinations of configuration, but still no luck. Please help me. Thanks & Regards, Shantanu Deshmukh

Frequent consumer rebalance, auto commit failures

2018-05-23 Thread Shantanu Deshmukh
ecified different CGs for different topics' consumers. Even this is not helping. I have searched the web, checked my code, and tried many combinations of configuration, but still no luck. Please help me. Thanks & Regards, Shantanu Deshmukh

Re: Frequent consumer rebalance, auto commit failures

2018-05-24 Thread Shantanu Deshmukh
Someone please help me. I have been suffering from this issue for a long time and have not found any solution. On Wed, May 23, 2018 at 3:48 PM Shantanu Deshmukh wrote: > We have a 3-broker Kafka 0.10.0.1 cluster. There we have 3 topics with 10 > partitions each. We have an application which
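
For the 0.10.0.x consumer discussed in this thread, heartbeats are only sent from poll(), so a poll loop that takes longer than the session timeout triggers exactly this kind of rebalance and "auto commit failed" message. A minimal sketch of the consumer settings usually adjusted for it; the group id and values are illustrative assumptions, not the thread's recommendation:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class RebalanceTuning {
        static Properties consumerProps() {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "notification-consumers"); // hypothetical group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // Fewer records per poll() keeps each iteration short, so the heartbeat
            // piggybacked on poll() arrives within the session timeout.
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "50");
            // Must stay below the broker's group.max.session.timeout.ms.
            props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
            props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "10000");
            return props;
        }
    }

Clients from 0.10.1 onwards send heartbeats from a background thread and add max.poll.interval.ms, which removes much of this tuning pressure.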

Re: Frequent consumer rebalance, auto commit failures

2018-05-24 Thread Shantanu Deshmukh
che.org/0101/documentation.html#newconsumerconfigs > > > On Thu, May 24, 2018 at 2:39 PM, Shantanu Deshmukh > wrote: > > > Someone please help me. I am suffering due to this issue since a long > time > > and not finding any solution. > > > > On Wed,

kafka manual commit vs auto commit

2018-05-24 Thread Shantanu Deshmukh
machine or network, etc.? Is there a better-optimized method of manual commit? Or, better yet, how can we avoid the "auto commit failed" error? Thanks & Regards, Shantanu Deshmukh
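
As a rough illustration of the manual alternative being asked about here: disable auto commit and call commitSync() only after a batch has been fully processed, so offsets never run ahead of the work. A minimal sketch in Java; the topic name and processing step are placeholders:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ManualCommitLoop {
        public static void run(Properties props) {
            // The consumer, not the auto-commit timer, decides when offsets move.
            props.put("enable.auto.commit", "false");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("your-topic"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(500);
                    for (ConsumerRecord<String, String> record : records) {
                        process(record); // placeholder for the real work, e.g. sending a mail
                    }
                    // Commit only after every record in the batch succeeded: a crash means
                    // re-delivery and duplicate processing, never silent loss.
                    consumer.commitSync();
                }
            }
        }

        private static void process(ConsumerRecord<String, String> record) { /* ... */ }
    }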

Re: Frequent consumer rebalance, auto commit failures

2018-05-24 Thread Shantanu Deshmukh
Hi M. Manna, Thanks I will try these settings. On Thu, May 24, 2018 at 5:15 PM M. Manna wrote: > Set your rebalance.backoff.ms=1 and zookeeper.session.timeout.ms=3 > in addition to what Manikumar said. > > > > On 24 May 2018 at 12:41, Shantanu Deshmukh

Re: Frequent consumer rebalance, auto commit failures

2018-05-24 Thread Shantanu Deshmukh
s > =3 > > in addition to what Manikumar said. > > > > > > > > On 24 May 2018 at 12:41, Shantanu Deshmukh > wrote: > > > > > Hello, > > > > > > There was a type in my first mail. session.timeout.ms is actually > 6 > >

Re: kafka manual commit vs auto commit

2018-05-24 Thread Shantanu Deshmukh
; 1) https://www.confluent.io/blog/transactions-apache-kafka/ > 2) > > https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client/ > > ALso, please don't forget to read Javadoc on KafkaConsumer.java > > Regards, > > On 24 May 201

Re: Frequent consumer rebalance, auto commit failures

2018-05-24 Thread Shantanu Deshmukh
tions-consumer Then nothing. After 5-6 minutes activities start. On Thu, May 24, 2018 at 6:49 PM Shantanu Deshmukh wrote: > Hi Vincent, > > Yes I reduced max.poll.records to get that same effect. I reduced it all > the way down to 5 records still I am seeing same error. What else can b

Re: Frequent consumer rebalance, auto commit failures

2018-05-24 Thread Shantanu Deshmukh
rg/0100/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#poll(long) > > > for a period of time longer than session.timeout.ms then it will be > considered dead and its partitions will be assigned to another process." > > Best > > On Thu, May 24, 2018 at 4:07

Reliable way to purge data from Kafka topics

2018-05-24 Thread Shantanu Deshmukh
Hello, We have cross data center replication. Using Kafka mirror maker we are replicating data from our primary cluster to backup cluster. Problem arises when we start operating from backup cluster, in case of drill or actual outage. Data gathered at backup cluster needs to be reverse-replicated t
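
On clusters newer than the 0.10.x one described here, the closest thing to an API-level purge is AdminClient.deleteRecords (Kafka 1.1+), which truncates a partition up to a given offset. A minimal sketch under that assumption; the broker address, topic, partition and offset are placeholders:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.RecordsToDelete;
    import org.apache.kafka.common.TopicPartition;

    public class TopicPurge {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "backup-broker:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                TopicPartition tp = new TopicPartition("replicated-topic", 0);
                // Everything below this offset becomes unreadable and eligible for deletion.
                admin.deleteRecords(Collections.singletonMap(tp, RecordsToDelete.beforeOffset(123456L)))
                     .all().get();
            }
        }
    }

On 0.10.x itself, the usual workaround is temporarily lowering the topic's retention.ms (together with segment.ms) and restoring it afterwards, with the caveat that deletion is asynchronous.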

Re: Reliable way to purge data from Kafka topics

2018-05-25 Thread Shantanu Deshmukh
replicated. > > You may reduce the probability but it will never be impossible. > > > > Your application should be able to handle duplicated messages. > > > > > On 25. May 2018, at 08:54, Shantanu Deshmukh > > wrote: > > > > > > Hello, >

Re: Reliable way to purge data from Kafka topics

2018-05-25 Thread Shantanu Deshmukh
fra at our current stage. Thanks & Regards, Shantanu Deshmukh On Fri 25 May, 2018, 1:30 PM Vincent Maurin, wrote: > What is the end results done by your consumers ? > From what I understand, having the need for no duplicates means that these > duplicates can show up somewhere ? > &

Re: Facing Duplication Issue in kakfa

2018-05-28 Thread Shantanu Deshmukh
Duplication can happen if your producer or consumer exits uncleanly. For example, if the producer crashes before it receives an ack from the broker, your logic will fail to register that the message got produced, and when it comes back up it will try to send that batch again. Same with the consumer: if it crashes b
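
One way to reduce (not eliminate) such duplicates is to shut both sides down cleanly: flush and close the producer, and wake up the polling thread so the consumer can commit its final offsets and close. A minimal sketch of such a shutdown hook; class and variable names are assumed for illustration:

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;

    public class CleanShutdown {
        public static void install(KafkaProducer<String, String> producer,
                                   KafkaConsumer<String, String> consumer,
                                   Thread pollingThread) {
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                // Flush pending batches so acknowledged sends are not lost on exit.
                producer.flush();
                producer.close();
                // wakeup() is the one thread-safe consumer call: it makes a blocked poll()
                // throw WakeupException, letting the polling thread commit and close there.
                consumer.wakeup();
                try {
                    pollingThread.join(10_000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }));
        }
    }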

Effect of settings segment.ms and retention.ms not accurate

2018-05-28 Thread Shantanu Deshmukh
I have a topic otp-sms. I want that retention of this topic should be 5 minutes as OTPs are invalid post that amount of time. So I set retention.ms=30. However, this was not working. So reading more in depth in Kafka configuration document I found another topic level setting that can be tuned

Re: Effect of settings segment.ms and retention.ms not accurate

2018-05-28 Thread Shantanu Deshmukh
Please help. On Mon, May 28, 2018 at 5:18 PM Shantanu Deshmukh wrote: > I have a topic otp-sms. I want that retention of this topic should be 5 > minutes as OTPs are invalid post that amount of time. So I set > retention.ms=30. However, this was not working. So reading more in &

Re: Facing Duplication in consumer

2018-05-28 Thread Shantanu Deshmukh
Which Kafka version? On Mon, May 28, 2018 at 9:09 PM Dinesh Subramanian < dsubraman...@apptivo.co.in> wrote: > Hi, > > Whenever we bounce the consumer in tomcat node, I am facing duplication. > It is consumed from the beginning. I have this property in consumer > "auto.offset.reset" = "earliest

Re: Frequent consumer rebalances, auto commit failures

2018-05-28 Thread Shantanu Deshmukh
12:42 PM Shantanu Deshmukh wrote: > > Hello, > > We have a 3-broker Kafka 0.10.0.1 cluster. There we have 3 topics with 10 > partitions each. We have an application which spawns threads as consumers. > We spawn 5 consumers for each topic. I am observing that consumer group >

Re: Effect of settings segment.ms and retention.ms not accurate

2018-05-28 Thread Shantanu Deshmukh
ted after the bound passed. > > However, client side, you can always check the record timestamp and just > drop older data that is still in the topic. > > Hope this helps. > > > -Matthias > > > On 5/28/18 9:18 PM, Shantanu Deshmukh wrote: > > Please help.

Correct usage of consumer groups

2018-05-29 Thread Shantanu Deshmukh
Hello, Is it wise to use a single consumer group for multiple consumers who consume from many different topics? Can this lead to frequent rebalance issues?

Re: Correct usage of consumer groups

2018-05-29 Thread Shantanu Deshmukh
> > On 29 May 2018 at 08:26, Shantanu Deshmukh wrote: > > > Hello, > > > > Is it wise to use a single consumer group for multiple consumers who > > consume from many different topics? Can this lead to frequent rebalance > > issues? > > >

Long start time for consumer

2018-05-29 Thread Shantanu Deshmukh
Hello, We have a 3-broker Kafka 0.10.0.1 cluster. We have 5 topics, each with 10 partitions. I have an application which consumes from all these topics by creating multiple consumer processes. All of these consumers are under the same consumer group. I am noticing that every time we restart this appli

Re: Long start time for consumer

2018-05-29 Thread Shantanu Deshmukh
gt; > On 29 May 2018 at 12:51, Shantanu Deshmukh wrote: > > > Hello, > > > > We have 3 broker Kafka 0.10.0.1 cluster. We have 5 topics, each with 10 > > partitions. I have an application which consumes from all these topics by > > creating multiple consumer processes.

Re: Long start time for consumer

2018-05-29 Thread Shantanu Deshmukh
y.threads.per.data.dir=1 log.retention.hours=168 log.segment.bytes=1073741824 log.retention.check.interval.ms=30 ssl.keystore.location=/opt/kafka/certificates/kafka.keystore.jks ssl.keystore.password= ssl.key.password= ssl.truststore.location=/opt/kafka/certificates/kafka.truststore.jks ssl.truststore.password

Re: Long start time for consumer

2018-05-29 Thread Shantanu Deshmukh
ied increase the poll time higher, e.g. 4000 and see if that > helps matters? > > On 29 May 2018 at 13:44, Shantanu Deshmukh wrote: > > > Here is the code which consuming messages > > > > >>>>>>>> > > while(true && startShutdown == false)

Re: Long start time for consumer

2018-05-29 Thread Shantanu Deshmukh
No, no dynamic topic creation. On Tue, May 29, 2018 at 6:38 PM Jaikiran Pai wrote: > Are your topics dynamically created? If so, see this > threadhttps://www.mail-archive.com/dev@kafka.apache.org/msg67224.html > > -Jaikiran > > > On 29/05/18 5:21 PM, Shantanu Desh

Re: Long start time for consumer

2018-05-29 Thread Shantanu Deshmukh
t; threadhttps://www.mail-archive.com/dev@kafka.apache.org/msg67224.html > > > > > > -Jaikiran > > > > > > > > > On 29/05/18 5:21 PM, Shantanu Deshmukh wrote: > > > > Hello, > > > > > > > > We have 3 broker Kafka 0.10.0.1

Re: Correct usage of consumer groups

2018-05-29 Thread Shantanu Deshmukh
ause a segment can only be dropped if > _all_ messages in a segment passed the retention time. > > Does this make sense? > > Of course, we are always happy to improve the docs. Feel free to do a PR :) > > > -Matthias > > > On 5/29/18 3:01 AM, Shantanu Deshmukh wrot

Re: Effect of settings segment.ms and retention.ms not accurate

2018-05-29 Thread Shantanu Deshmukh
This is helpful. Thanks a lot :-) On Tue, May 29, 2018 at 11:47 PM Matthias J. Sax wrote: > ConsumerRecord#timestamp() > > similar to ConsumerRecord#key() and ConsumerRecord#value() > > > -Matthias > > On 5/28/18 11:22 PM, Shantanu Deshmukh wrote: > > But then I w

Re: retention.ms not honored for topic

2018-05-29 Thread Shantanu Deshmukh
Hey, You should try setting a topic-level config by doing kafka-topics.sh --alter --topic <topic> --config <config>=<value> --zookeeper <zookeeper-host:port>. Make sure you also set segment.ms for topics which are not that populous. This setting specifies the amount of time after which a new segment is rolled. So Kafka deletes only those message
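
On clients and brokers newer than the 0.10.x tooling referenced above (Kafka 2.3+), the same topic-level override can be applied from Java through the AdminClient. A minimal sketch under that assumption; the broker address and the 5-minute values are illustrative:

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class TopicRetentionOverride {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "otp-sms");
                // Old data is only deleted once its segment is closed, so a short
                // retention.ms needs a comparably short segment.ms to take effect.
                admin.incrementalAlterConfigs(Collections.singletonMap(topic, Arrays.asList(
                        new AlterConfigOp(new ConfigEntry("retention.ms", "300000"), AlterConfigOp.OpType.SET),
                        new AlterConfigOp(new ConfigEntry("segment.ms", "300000"), AlterConfigOp.OpType.SET)
                ))).all().get();
            }
        }
    }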

Re: Best Practice for Consumer Liveliness and avoid frequent rebalancing

2018-05-31 Thread Shantanu Deshmukh
Do you want to avoid rebalancing in such a way that if a consumer exits, its previously owned partition is left disowned? But then who will consume from the partition that was deserted by an exiting consumer? In such a case you can go for manual partition assignment. Then there is no question of
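
A rough sketch of the manual assignment mentioned above: assign() pins specific partitions to a consumer, so the group coordinator never rebalances them, at the cost that partitions of a crashed consumer stay unread until something else takes them over. Topic and partition numbers are placeholders:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class ManualAssignmentConsumer {
        public static void run(Properties props) {
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // assign() bypasses group management entirely, so there is nothing
                // to rebalance when another consumer joins or dies.
                consumer.assign(Arrays.asList(
                        new TopicPartition("your-topic", 0),
                        new TopicPartition("your-topic", 1)));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(500);
                    for (ConsumerRecord<String, String> record : records) {
                        // process(record) -- placeholder for the real work
                    }
                }
            }
        }
    }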

Re: Frequent consumer rebalances, auto commit failures

2018-06-03 Thread Shantanu Deshmukh
y round of poll ? > > Thanks ! > > -- > Sent from my iPhone > > On May 28, 2018, at 10:44 PM, Shantanu Deshmukh > wrote: > > Can anyone here help me please? I am at my wit's end. I now have > max.poll.records set to just 2. Still I am getting Auto offset

Frequent "offset out of range" messages, partitions deserted by consumer

2018-06-14 Thread Shantanu Deshmukh
.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer The topic which went orphan has 10 partitions, retention.ms=180, segment.ms=180. Please help. Thanks & Regards, Shantanu Deshmukh

Re: Frequent "offset out of range" messages, partitions deserted by consumer

2018-06-14 Thread Shantanu Deshmukh
Any help please. On Thu, Jun 14, 2018 at 2:39 PM Shantanu Deshmukh wrote: > We have a consumer application which has a single consumer group > connecting to multiple topics. We are seeing strange behaviour in consumer > logs. With these lines > > Fetch offset 1109143 is o

Re: Frequent "offset out of range" messages, partitions deserted by consumer

2018-06-19 Thread Shantanu Deshmukh
I desperately need help. I have been facing this issue in production for a while now. Someone please help me out. On Fri, Jun 15, 2018 at 2:02 AM Lawrence Weikum wrote: > unsubscribe > >

Re: Frequent "offset out of range" messages, partitions deserted by consumer

2018-06-19 Thread Shantanu Deshmukh
It is happening via auto-commit. The frequency is 3000 ms. On Wed, Jun 20, 2018 at 10:31 AM Liam Clarke wrote: > How frequently are your consumers committing offsets? > > On Wed, 20 Jun. 2018, 4:52 pm Shantanu Deshmukh, > wrote: > > > I desperately need help. Facing this issue

Re: Frequent "offset out of range" messages, partitions deserted by consumer

2018-06-21 Thread Shantanu Deshmukh
re old committed offsets expire after a period of time. > > On Wed, 20 Jun. 2018, 5:46 pm Shantanu Deshmukh, > wrote: > > > It is happening via auto-commit. Frequence is 3000 ms > > > > On Wed, Jun 20, 2018 at 10:31 AM Liam Clarke > > wrote: > >

Very long consumer rebalances

2018-07-06 Thread Shantanu Deshmukh
type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer Please help. Thanks & Regards, Shantanu Deshmukh

Re: Very long consumer rebalances

2018-07-09 Thread Shantanu Deshmukh
Kind people on this group, please help me! On Fri, Jul 6, 2018 at 3:24 PM Shantanu Deshmukh wrote: > Hello everyone, > > We are running a 3 broker Kafka 0.10.0.1 cluster. We have a java app which > spawns many consumer threads consuming from different topics. For every > topic we

Re: Very long consumer rebalances

2018-07-12 Thread Shantanu Deshmukh
; > > Try reducing below timer > > metadata.max.age.ms = 30 > > > > > > On Fri, Jul 6, 2018 at 5:55 AM Shantanu Deshmukh > > wrote: > > > > > Hello everyone, > > > > > > We are running a 3 broker Kafka 0.10.0.1 cluster. W

Zookeeper logging “exception causing close of session 0x0” infinitely in logs

2018-08-06 Thread Shantanu Deshmukh
We have a cluster of 3 Kafka + ZooKeeper nodes. On only one of our ZooKeeper servers we are seeing these log lines written endlessly to the zookeeper.out log file: WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCxn@1033] - Exception causing close of session 0x0 due to java.io.Exception INFO [NIO

Re: Very long consumer rebalances

2018-08-09 Thread Shantanu Deshmukh
I am facing too many problems these days. Now one of our consumer groups is rebalancing every now and then, and each rebalance takes very long, more than 5-10 minutes. Even after re-balancing I see that only half of the consumers are active/receive an assignment. It's all going haywire. I am seeing these l

Re: Very long consumer rebalances

2018-08-09 Thread Shantanu Deshmukh
tdown? > 2) Have you configured the session timeouts for client and zookeeper > accordingly? > > Regards, > > On 9 August 2018 at 08:00, Shantanu Deshmukh > wrote: > > > I am facing too many problems these days. Now one of our consumer groups > > is rebalancing every

Re: Looking for help with a question on the consumer API

2018-08-09 Thread Shantanu Deshmukh
A consumer gets kicked out if it fails to send a heartbeat within the designated time period. Every call to poll sends one heartbeat to the consumer group coordinator. You need to look at how much time it is taking to process a single record. Maybe it is exceeding session.timeout.ms
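
As a rough way to act on that advice, timing the work done between two poll() calls shows whether processing alone can exceed the session timeout. A minimal sketch; the threshold and logging are illustrative assumptions:

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PollTimer {
        public static void loop(KafkaConsumer<String, String> consumer, long sessionTimeoutMs) {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(500);
                long start = System.nanoTime();
                for (ConsumerRecord<String, String> record : records) {
                    // process(record) -- placeholder for the real per-record work
                }
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                if (elapsedMs > sessionTimeoutMs / 2) {
                    // Getting close to the session timeout: lower max.poll.records
                    // or move the heavy work off the polling thread.
                    System.err.println("Slow poll iteration: " + elapsedMs + " ms for "
                            + records.count() + " records");
                }
            }
        }
    }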

Re: Very long consumer rebalances

2018-08-10 Thread Shantanu Deshmukh
uired before all changes take place. > > On 9 August 2018 at 13:48, Shantanu Deshmukh > wrote: > > > Hi, > > > > Yes my consumer application works like below > > > >1. Reads how many workers are required to process each topics from > >properties f

Re: How to reduce kafka's rebalance time ?

2018-08-15 Thread Shantanu Deshmukh
I am also facing the same issue. Whenever I restart my consumers it takes up to 10 minutes to start consumption. Also, some of the consumers randomly rebalance and it again takes the same amount of time to complete the rebalance. I haven't been able to figure out any solution for this issue, nor ha

Re: How to reduce kafka's rebalance time ?

2018-08-16 Thread Shantanu Deshmukh
gt; Regards, > > Regards, > On Thu, 16 Aug 2018 at 06:55, Shantanu Deshmukh > wrote: > > > I am also facing the same issue. Whenever I am restarting my consumers it > > is taking upto 10 minutes to start consumption. Also some of the > consumers > > randomly rebal

Re: Very long consumer rebalances

2018-08-16 Thread Shantanu Deshmukh
try. Some of these > configs may or may not be applicable at runtime. so a rolling restart may > be required before all changes take place. > > On 9 August 2018 at 13:48, Shantanu Deshmukh > wrote: > > > Hi, > > > > Yes my consumer application works like below &g

Re: Kafka issue

2018-08-19 Thread Shantanu Deshmukh
How many brokers are there in your cluster? This error usually occurs when the broker that is the leader for a partition dies and you are trying to access it. On Fri, Aug 17, 2018 at 9:23 PM Harish K wrote: > Hi, > I have installed Kafka and created a topic, but during data ingestion I get > so

Re: NetworkException exception while send/publishing records(Producer)

2018-08-19 Thread Shantanu Deshmukh
Firstly, a record size of 150 MB is too big. I am quite sure your timeout exceptions are due to such a large record. There is a setting in the producer and broker config which allows you to specify the max message size in bytes. But still, records of 150 MB each might lead to problems with increasing volum
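
For reference, the size limits sit on several layers at once; a minimal sketch of the producer-side properties involved, with illustrative values (the matching broker and topic settings, message.max.bytes / max.message.bytes and replica.fetch.max.bytes, plus the consumer's max.partition.fetch.bytes, have to be raised in step):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class LargeMessageProducerConfig {
        static Properties props() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
            // Client-side cap on a single request; must cover the largest record.
            props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, String.valueOf(10 * 1024 * 1024));
            // buffer.memory must be at least as large as the biggest record,
            // otherwise send() blocks and eventually times out.
            props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, String.valueOf(64 * 1024 * 1024));
            // Large payloads benefit disproportionately from compression.
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");
            return props;
        }
    }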

Re: Very long consumer rebalances

2018-08-22 Thread Shantanu Deshmukh
Can anyone help me understand how to debug this issue? I tried setting the log level to trace in the consumer's logback configuration, but at such times nothing appears in the log, even at trace level. It is as if the entire code is frozen. On Thu, Aug 16, 2018 at 6:32 PM Shantanu Deshmukh wrote: > I saw a

Frequent appearance of "Marking the coordinator dead" message in consumer log

2018-08-22 Thread Shantanu Deshmukh
Hello, We have Kafka 0.10.0.1 running on a 3-broker cluster. We have an application which consumes from a topic having 10 partitions. 10 consumers are spawned from this process; they belong to one consumer group. What we have observed is that very frequently such messages appear in cons

Re: Frequent appearance of "Marking the coordinator dead" message in consumer log

2018-08-22 Thread Shantanu Deshmukh
g did it take to process 50 `ConsumerRecord`s? > > On Wed, Aug 22, 2018, 5:16 PM Shantanu Deshmukh > wrote: > > > Hello, > > > > We have Kafka 0.10.0.1 running on a 3 broker cluster. We have an > > application which consumes from a topic having 10 partitions. 10 &

Re: Frequent appearance of "Marking the coordinator dead" message in consumer log

2018-08-22 Thread Shantanu Deshmukh
How do I check for GC pausing? On Wed, Aug 22, 2018 at 4:12 PM Steve Tian wrote: > Did you observed any GC-pausing? > > On Wed, Aug 22, 2018, 6:38 PM Shantanu Deshmukh > wrote: > > > Hi Steve, > > > > Application is just sending mails. Every record is just a
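
One way to answer that question from inside the consumer process is to sample the JVM's own garbage-collector counters; comparing two samples taken around a "coordinator dead" message shows whether a long stop-the-world pause fell in between. A minimal sketch (GC log flags such as -verbose:gc are the other common approach):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcPauseProbe {
        public static void logGcTotals() {
            // Cumulative collection count and time per collector since JVM start.
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName()
                        + " collections=" + gc.getCollectionCount()
                        + " totalTimeMs=" + gc.getCollectionTime());
            }
        }
    }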

Re: Frequent appearance of "Marking the coordinator dead" message in consumer log

2018-08-22 Thread Shantanu Deshmukh
in the email thread. > > On Wed, Aug 22, 2018, 6:51 PM Shantanu Deshmukh > wrote: > > > How do I check for GC pausing? > > > > On Wed, Aug 22, 2018 at 4:12 PM Steve Tian > > wrote: > > > > > Did you observed any GC-pausing? > > > > &g

Re: Frequent appearance of "Marking the coordinator dead" message in consumer log

2018-08-22 Thread Shantanu Deshmukh
and the size > of returned `ConsumerRecords`? > > On Wed, Aug 22, 2018, 7:00 PM Shantanu Deshmukh > wrote: > > > Ohh sorry, my bad. Kafka version is 0.10.1.0 indeed and so is the client. > > > > On Wed, Aug 22, 2018 at 4:26 PM Steve Tian > > wrote: > &

Re: Frequent appearance of "Marking the coordinator dead" message in consumer log

2018-08-28 Thread Shantanu Deshmukh
, Aug 22, 2018 at 5:47 PM Shantanu Deshmukh wrote: > I know the average time to process one record; it is about 70-80 ms. I have > set session.timeout.ms so high that the total processing time for one poll > invocation should be well within it. > > On Wed, Aug 22, 2018 at 5:04 PM Stev

Re: Frequent appearance of "Marking the coordinator dead" message in consumer log

2018-08-28 Thread Shantanu Deshmukh
s easy: reduce max.poll.records. > > Ryanne > > On Tue, Aug 28, 2018 at 6:34 AM Shantanu Deshmukh > wrote: > > > Someone, please help me. Only 1 or 2 out of 7 consumer groups keep > > rebalancing every 5-10mins. One topic is constantly receiving 10-20 > > msg/se

Re: Frequent appearance of "Marking the coordinator dead" message in consumer log

2018-08-30 Thread Shantanu Deshmukh
session.timeout.ms to any value above the default, consumers start very slowly. Has anyone seen such behaviour, or can someone explain to me why this is happening? On Wed, Aug 29, 2018 at 12:04 PM Shantanu Deshmukh wrote: > Hi Ryanne, > > Thanks for your response. I had even tried with 5 records and session > timeo

Kafka producer huge memory usage (leak?)

2018-09-18 Thread Shantanu Deshmukh
Hello, We have a 3-broker Kafka 0.10.1.0 deployment in production. There are some applications which have Kafka producers embedded in them that send application logs to a topic. This topic has 10 partitions with a replication factor of 3. We are observing that memory usage on some of these applica

Re: Kafka producer huge memory usage (leak?)

2018-09-18 Thread Shantanu Deshmukh
n Tue, Sep 18, 2018 at 5:36 PM Shantanu Deshmukh wrote: > Hello, > > We have a 3 broker Kafka 0.10.1.0 deployment in production. There are some > applications which have Kafka Producers embedded in them which send > application logs to a topic. This topic has 10 partitions with repli

Re: Kafka producer huge memory usage (leak?)

2018-09-18 Thread Shantanu Deshmukh
Any thoughts on this matter? Someone, please help. On Tue, Sep 18, 2018 at 6:05 PM Shantanu Deshmukh wrote: > Additionally, here's the producer config > > kafka.bootstrap.servers=x.x.x.x:9092,x.x.x.x:9092,x.x.x.x:9092 > kafka.acks=0 >
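
For context on the memory question: the producer's steady-state footprint is largely bounded by buffer.memory plus per-connection buffers, so a heap that keeps growing usually points at producer instances being created and never closed rather than at the buffer itself. A minimal sketch of one bounded, shared producer per process; the values and names are illustrative assumptions:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class SharedLogProducer {
        // One long-lived, thread-safe producer per process instead of one per log event.
        private static final KafkaProducer<String, String> PRODUCER = create();

        private static KafkaProducer<String, String> create() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            // Hard cap on record-accumulator memory (32 MB here).
            props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, String.valueOf(32 * 1024 * 1024));
            // Fail fast instead of blocking indefinitely when the buffer is full.
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "5000");
            return new KafkaProducer<>(props);
        }

        public static void shutdown() {
            PRODUCER.close(); // releases buffers and network threads
        }
    }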

Re: Kafka producer huge memory usage (leak?)

2018-09-21 Thread Shantanu Deshmukh
n you guide me here? On Wed, Sep 19, 2018 at 1:02 PM Manikumar wrote: > Similar issue reported here:KAFKA-7304, but on broker side. maybe you can > create a JIRA and upload the heap dump for analysis. > > On Wed, Sep 19, 2018 at 11:59 AM Shantanu Deshmukh > wrote: > > >

Re: Kafka producer huge memory usage (leak?)

2018-09-21 Thread Shantanu Deshmukh
p 21, 2018 at 2:36 PM Manikumar wrote: > Hi, > Instead trying the PR, make sure you are setting valid security protocol > and connecting to valid broker port. > also looks for any errors in producer logs. > > Thanks, > > > > > > On Fri, Sep 21, 2018 at 12:35 PM

Kafka SASL auth setup error: Connection to node 0 (localhost/127.0.0.1:9092) terminated during authentication

2019-04-03 Thread Shantanu Deshmukh
19-04-03 16:32:31,268] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) terminated during authentication. This may indicate that authentication failed due to invalid credentials. (org.apache.kafka.clients.NetworkClient) Please help. Unable to understand this problem. Thanks & Regards, Shantanu Deshmukh

Re: Kafka SASL auth setup error: Connection to node 0 (localhost/127.0.0.1:9092) terminated during authentication

2019-04-09 Thread Shantanu Deshmukh
.PlainLoginModule required username="admin" password="admin-secret" user_admin="admin-secret"; }; On Mon, Apr 8, 2019 at 2:11 PM 1095193...@qq.com <1095193...@qq.com> wrote: > > > On 2019/04/03 13:08:45, Shantanu Deshmukh wrote: > > He
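
For comparison, the same PLAIN credentials can be supplied on the client side without an external JAAS file by setting sasl.jaas.config directly (supported since client 0.10.2). A minimal sketch, reusing the admin/admin-secret values from the broker JAAS entry above purely as placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.common.config.SaslConfigs;

    public class SaslPlainClientProps {
        static Properties saslProps() {
            Properties props = new Properties();
            props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            // Must match a user_<name> entry in the broker's JAAS configuration.
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                            + "username=\"admin\" password=\"admin-secret\";");
            return props;
        }
    }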

Re: How to handle kafka large messages

2019-04-09 Thread Shantanu Deshmukh
Well, from your own synopsis it is clear that the message you want to send is much larger than the max.message.bytes setting on the broker. You can modify it. However, do keep in mind that if you find yourself constantly increasing this limit, then you have to look at your message itself. Does it really need to be

Re: Kafka SASL auth setup error: Connection to node 0 (localhost/127.0.0.1:9092) terminated during authentication

2019-04-10 Thread Shantanu Deshmukh
q.com <1095193...@qq.com> wrote: > > > On 2019/04/09 11:21:10, Shantanu Deshmukh wrote: > > That was a blooper. But even after correcting, it still isn't working. > > Still getting the same error. > > Here are the configs again: &

Kafka 2.0.0 - How to verify if Kafka compression is working

2021-05-11 Thread Shantanu Deshmukh
I am trying snappy compression on my producer. Here's my setup Kafka - 2.0.0 Spring-Kafka - 2.1.2 Here's my producer config compressed producer == configProps.put( ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer); configProps.put( ProducerConfig.KEY_
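
The preview above is cut off; as a rough sketch of what a snappy-enabled producer config looks like in the same configProps style, with the remaining property names taken from the Kafka client and the batching values purely illustrative:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class CompressedProducerConfig {
        static Map<String, Object> configProps(String bootstrapServer) {
            Map<String, Object> configProps = new HashMap<>();
            configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
            configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            // Compression is applied per producer batch, so larger batches
            // (batch.size / linger.ms) compress better than single records.
            configProps.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
            configProps.put(ProducerConfig.LINGER_MS_CONFIG, 20);
            configProps.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);
            return configProps;
        }
    }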

Re: Kafka 2.0.0 - How to verify if Kafka compression is working

2021-05-11 Thread Shantanu Deshmukh
ata from the disk and see compression type. > https://thehoard.blog/how-kafkas-storage-internals-work-3a29b02e026 > > Thanks, > Nitin > > On Wed, May 12, 2021 at 11:10 AM Shantanu Deshmukh > wrote: > > > I am trying snappy compression on my producer. Here's my

Re: Kafka 2.0.0 - How to verify if Kafka compression is working

2021-05-12 Thread Shantanu Deshmukh
0 200.760078 records/sec (19.61 MB/sec) 0.635. In short, snappy = uncompressed! Why is this happening? On Wed, May 12, 2021 at 11:40 AM Shantanu Deshmukh wrote: > Hey Nitin, > > I have already done that. I used the dump-log-segments option. And I can see > the codec used is s

Re: Kafka 2.0.0 - How to verify if Kafka compression is working

2021-05-12 Thread Shantanu Deshmukh
_text_, it will compress somewhat since text doesn't > use all 256 possible byte values and so it can use less than 8 bits per > character in the encoding. > > > > On Wed, May 12, 2021, 22:35 Shantanu Deshmukh > wrote: > > > I have some updates on this. > &
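
A tiny illustration of that point using the snappy-java library that the Kafka client already bundles; the payloads are made up, and the exact ratios will vary:

    import java.nio.charset.StandardCharsets;
    import java.util.Random;
    import org.xerial.snappy.Snappy;

    public class SnappyRatioDemo {
        public static void main(String[] args) throws Exception {
            // Repetitive text: low entropy, compresses well.
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 100; i++) {
                sb.append("2021-05-12 INFO some repeated application log line\n");
            }
            byte[] text = sb.toString().getBytes(StandardCharsets.UTF_8);

            // Random bytes: maximum entropy, barely shrink at all.
            byte[] random = new byte[text.length];
            new Random(42).nextBytes(random);

            System.out.println("text:   " + text.length + " -> " + Snappy.compress(text).length);
            System.out.println("random: " + random.length + " -> " + Snappy.compress(random).length);
        }
    }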