Let me bump this one again. Still looking for comments about the
kafka-consumer-groups.sh tool.
Thank you.
On Fri, Jul 7, 2017 at 3:14 PM, Dmitriy Vsekhvalnov
wrote:
> I've tried 3 brokers on the command line, like this:
>
> /usr/local/kafka/bin/kafka-consumer-groups.sh --bootstrap-server
> broker:
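For reference, a typical invocation looks like the sketch below; the broker address and group name are placeholders, not values from the original (truncated) message:

```shell
# Hypothetical example: describe a consumer group's offsets and lag.
# broker1:9092 and my-group are placeholder values.
/usr/local/kafka/bin/kafka-consumer-groups.sh \
  --bootstrap-server broker1:9092 \
  --describe \
  --group my-group
```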
FTR, the problem with the 4096 character limit is a known issue:
https://issues.apache.org/jira/browse/KAFKA-4931
Cheers,
Tom
On 6 July 2017 at 13:55, Kamal C wrote:
> Don't use `kill -9 PID`. Use `kill -s TERM PID` instead: it signals the
> process to shut down gracefully and triggers any cleanup routines.
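The difference is easy to demonstrate with a small shell sketch, where a trap handler stands in for Kafka's shutdown hooks (SIGKILL, i.e. `kill -9`, can never be caught, so the cleanup would be skipped):

```shell
#!/usr/bin/env bash
# Demo: a process that traps SIGTERM gets to run its cleanup code.
worker() {
  trap 'echo "cleanup ran"; exit 0' TERM
  while true; do sleep 1; done
}

worker &              # start the "service" in the background
pid=$!
sleep 1               # give it time to install the trap
kill -s TERM "$pid"   # polite shutdown: the TERM trap fires
wait "$pid"           # returns 0 after "cleanup ran" is printed
```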
Thanks for that explanation.
I use JSON instead of Avro. Should I use the JSON serialization that
serializes both schema and data, so that the schema travels with the data
from source to sink? That is, set key.converter.schemas.enable=true and
value.converter.schemas.enable=true?
Is that a correct assumption?
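For context, the Connect worker settings involved look like this (these are the stock Kafka Connect property names; the setup itself is illustrative):

```properties
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true
```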
Ah, sorry, I have never used the JsonConverter, so I didn't know that was
actually a thing. Looking at the code, it looks like the converter can
handle JSON with or without the schema [1]. Take a look at the JSON
envelope code to get an idea of how the schema is passed along with the
message (also in
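As a rough illustration (not taken from the thread): with schemas.enable=true, the JsonConverter wraps each record in a schema/payload envelope along these lines:

```json
{
  "schema": { "type": "string", "optional": false },
  "payload": "hello"
}
```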
Total shot in the dark but could it be related, this talks about CPU but
could have an impact on memory as well:
http://kafka.apache.org/0102/documentation.html#upgrade_10_performance_impact
Hope this helps.
On Sun, 9 Jul 2017 at 10:45 John Yost wrote:
> Hey Ismael,
>
> Thanks a bunch for resp
Hi John,
Yes, down conversion when consuming messages does increase JVM heap usage
as we have to load the data into the JVM heap to convert it. If down
conversion is not needed, we are able to send the data without copying it
to the JVM heap.
Ismael
On Sun, Jul 9, 2017 at 4:23 PM, John Yost wro
Hey Matt,
Indeed! Ismael mentioned this same link yesterday; I tried it this AM, and
this change totally fixed the problem! The manifestation we observed was
not increased CPU usage, but rather a MUCH larger memory heap requirement.
Once I changed log.message.format.version to the version of our client
Hi Ismael,
Thanks again for sending that link yesterday! I tried it this AM and this
change totally fixed the problem! The manifestation we observed was not
increased CPU usage, but rather a MUCH larger memory heap requirement. Once
I changed log.message.format.version to the version of our client
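For anyone hitting the same thing, the broker setting in question looks like this; the version value below is a placeholder, use the version your clients actually speak:

```properties
# server.properties: keep the on-disk message format at the clients'
# version so the broker does not down-convert on the consume path
# (placeholder version shown)
log.message.format.version=0.10.2
```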
Hi Jozef, all,
I seem to be running into another issue. Here is what I did:
I'm running the Spark Streaming - Kafka integration using Spark 2.x and Kafka 0.10.
I compiled the program using sbt, and the compilation went through fine.
I was able to import this into Eclipse and run the program from
I'm bumping this up again to get some feedback, especially from some of
the committers, on the KIP and on the note below.
Thanks.
--Vahid
From: "Vahid S Hashemian"
To: d...@kafka.apache.org
Cc: "Kafka User"
Date: 06/21/2017 12:49 PM
Subject: Re: [DISCUSS] KIP-163: Lower t
Hi,
Looking at the docs, I see that Kafka seems to support throttling of
consumer/replication traffic, but I can't find anything that would suggest
you can prioritize one traffic type over another.
The problem: if at some point consumers start to lag, they will
start consuming messages as fast
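Quotas are indeed the closest mechanism available: you can cap a client's byte rate, but there is no priority ordering between traffic types. A sketch using kafka-configs.sh (the ZooKeeper address, client name, and rate below are placeholders):

```shell
# Hypothetical example: cap a consumer client at ~1 MB/s.
# zk:2181 and clientA are placeholder values.
/usr/local/kafka/bin/kafka-configs.sh --zookeeper zk:2181 --alter \
  --add-config 'consumer_byte_rate=1048576' \
  --entity-type clients --entity-name clientA
```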