Guozhang,
Here is the snippet.
private Properties getProperties() {
    Properties p = new Properties();
    ...
    p.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, kafkaConfig.getString("streamThreads"));
    p.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
          LogAndContinueExceptionHandler.class);
    ...
    return p;
}
Hi Sunil,
Burrow might be of interest to you:
https://github.com/linkedin/Burrow
Hope that helps.
Thanks,
Subhash
Sent from my iPhone
> On Jan 29, 2018, at 7:40 PM, Sunil Parmar wrote:
>
> We're using 0.9 (CDH) and consumer offsets are stored within Kafka. What
> is the preferred way to g
Hi,
I have three servers:
blade1 (192.168.112.31),
blade2 (192.168.112.32) and
blade3 (192.168.112.33).
On each of the servers kafka_2.11-1.0.0 is installed.
On blade3 (192.168.112.33:2181) zookeeper is installed as well.
I have created a topic repl3part5 with the following line:
bin/kafka-topics
Sorry, I attached the wrong server.properties file. The right one is now
in the attachment.
Regards.
On 01/30/2018 02:59 PM, Zoran wrote:
> Hi,
> I have three servers:
> blade1 (192.168.112.31),
> blade2 (192.168.112.32) and
> blade3 (192.168.112.33).
> On each of servers kafka_2.11-1.0.0 is installe
Alternatively, you can dump out the consumer offsets using a command like
this:
kafka-console-consumer --topic __consumer_offsets --bootstrap-server localhost:9092 \
  --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter"
On Tue, Jan 30, 2018 at 8:38 AM, Subhash Sriram wrote:
Your code for setting the handler looks right to me.
One more thing to double-check: have you turned on DEBUG-level metrics
recording so that this metric is captured? Note that skippedDueToDeserializationError
is recorded at DEBUG level, so you need to set metrics.recording.level accordingly
(the default is INFO). Lower
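A minimal sketch of what this looks like in the streams configuration. To keep it self-contained, the config key is written as the literal string "metrics.recording.level" (the value behind StreamsConfig.METRICS_RECORDING_LEVEL_CONFIG); the class name and setup are illustrative, not from the original thread:

```java
import java.util.Properties;

public class RecordingLevelConfig {

    // Build streams properties with DEBUG-level metrics recording enabled,
    // so that DEBUG-level metrics such as skippedDueToDeserializationError
    // are actually recorded (the default recording level is INFO).
    public static Properties streamsProps() {
        Properties p = new Properties();
        p.put("metrics.recording.level", "DEBUG");
        return p;
    }

    public static void main(String[] args) {
        System.out.println(streamsProps().getProperty("metrics.recording.level"));
    }
}
```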
I'm running Kafka 10.
I have a basic question: what determines when a Kafka topic marked for
deletion actually gets deleted?
Today, I marked a topic for deletion, and it got deleted immediately
(possibly because the topic had not been used for the last few months?).
In earlier instances, I had to wait for some ti
Hi Guozhang,
Thanks very much for your reply. I am inclined to consider this a bug, since
Kafka Streams in the default configuration is likely to run into this problem
while reprocessing old messages, and in most cases the problem wouldn't be
noticed (since there is no error -- the job just pro
Hi,
I'm trying to write an external tool to monitor consumer lag on Apache
Kafka.
For this purpose, I'm using the kafka-consumer-groups tool to fetch the
consumer offsets.
When using this tool, partition assignments seem to be unavailable
temporarily during the creation of a new topic even if th
Hi Andrey,
My topics are replicated with a replication factor equal to the number of
nodes, 3 in this test.
I didn't know about KIP-227.
The problems I see at 70k topics come from ZK and relate to any
operation where ZK has to retrieve topic metadata. Just listing topics at
50k or 60k you wil
You need to write some custom code using Interactive Queries and
implement a scatter-gather pattern.
Basically, you need to run the range query on each instance and then merge
all the partial results.
https://kafka.apache.org/10/documentation/streams/developer-guide/interactive-queries.html
You can also fi
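The merge step of that scatter-gather could be sketched as below. This assumes each instance's partial range-query results have already been fetched (e.g. over a hypothetical RPC layer using the instance metadata that Kafka Streams exposes) as sorted lists of key-value entries; the class and method names are illustrative:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ScatterGatherMerge {

    // Gather step: merge the partial range-query results collected from
    // each Streams instance into one globally sorted result list.
    public static <K extends Comparable<? super K>, V> List<Map.Entry<K, V>> merge(
            List<List<Map.Entry<K, V>>> partials) {
        return partials.stream()
                .flatMap(List::stream)
                .sorted(Map.Entry.comparingByKey())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Two hypothetical partial results, one per instance.
        List<Map.Entry<String, Long>> instanceA =
                List.of(Map.entry("a", 1L), Map.entry("c", 3L));
        List<Map.Entry<String, Long>> instanceB =
                List.of(Map.entry("b", 2L));
        System.out.println(merge(List.of(instanceA, instanceB)));
    }
}
```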
On Tue, Jan 30, 2018 at 1:38 PM, David Espinosa wrote:
> Hi Andrey,
> My topics are replicated with a replication factor equal to the number of
> nodes, 3 in this test.
> I didn't know about KIP-227.
> The problems I see at 70k topics come from ZK and relate to any
> operation where ZK has to
Hi Martin,
That is a good point. In fact, in the coming release we have made such
repartition topics truly "transient" by periodically purging them with the
embedded admin client, so we can actually set their retention to -1:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-220%3A+Add+AdminClien