Hi Marius,
I'm curious what you've found.
One way to go about it is to have alerts (those that look for health status
- in SPM we call them Heartbeat alerts) hooked up to a Webhook, where a
Webhook is basically your custom HTTP endpoint (or something like Nagios).
This should let you integrate health alerts with your own tooling.
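To sketch what the receiving side can look like (everything here is illustrative: the port, the /alerts path, and how the payload is handled all depend on your monitoring tool), a webhook can be as small as one HTTP handler using the JDK's built-in server (Java 9+ for readAllBytes):

    import com.sun.net.httpserver.HttpServer;
    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class AlertWebhook {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/alerts", exchange -> {
                // Read whatever payload the monitoring tool POSTs on alert.
                try (InputStream in = exchange.getRequestBody()) {
                    String payload = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                    System.out.println("Received alert: " + payload);
                }
                exchange.sendResponseHeaders(200, -1); // 200 OK, empty body
                exchange.close();
            });
            server.start();
        }
    }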
Hmm, that doesn't sound right to me. Could you create a ticket with the
benchmark code attached and the steps to reproduce the results, so someone
can take a look?
On Tue, Sep 6, 2016 at 10:03 PM, Yifan Ying wrote:
> Hi Guozhang,
>
> batch-size-avg and request-size-avg looked similar
I have a question with respect to the KafkaStreams API.
I noticed during my prototyping work that my KafkaStreams application was
not able to keep up with the input on the stream so I dug into it a bit and
found that it was spending an inordinate amount of time in
org.apache.kafka.common.network.S
Looks like re-partitioning is probably the way to go. I've seen references
to this pattern a couple of times but wanted to make sure I wasn't missing
something obvious. Looks like Kafka Streams makes this kind of thing a bit
easier than Samza.
Thanks for sharing your wisdom folks :-)
On Wed, Sep
Obviously for the keys you don’t have, you would have to look them up…sorry, I
kinda missed that part. That is indeed a pain. The job that looks those keys
up would probably have to batch queries to the external system. Maybe you
could use kafka-connect-jdbc to stream in updates to that syste
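As a rough sketch of what the batched lookup could look like (the table and column names are made up, and lookupBatch is a hypothetical helper), using a single IN-clause query instead of one round trip per key:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical helper: fetch many keys from the external system in one query.
    static Map<String, String> lookupBatch(Connection conn, List<String> keys) throws Exception {
        String placeholders = String.join(",", Collections.nCopies(keys.size(), "?"));
        String sql = "SELECT k, v FROM lookup_table WHERE k IN (" + placeholders + ")";
        Map<String, String> result = new HashMap<>();
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < keys.size(); i++) {
                ps.setString(i + 1, keys.get(i)); // JDBC parameters are 1-indexed
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    result.put(rs.getString("k"), rs.getString("v"));
                }
            }
        }
        return result;
    }

Note that most databases cap the number of IN-clause parameters, so chunk the key list if it can get large.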
The “simplest” way to solve this is to “repartition” your data (i.e. the
streams you wish to join) with the partition key you wish to join on. This
obviously introduces redundancy, but it will solve your problem. For example:
suppose you want to join topic T1 and topic T2…but they aren’t part
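To make the repartition-then-join pattern concrete, here is a minimal sketch using the current Kafka Streams API (the 0.10-era class names differ slightly; extractJoinKey is a stand-in for however you derive the join key from T2's values, and the two inputs to the join must be co-partitioned, i.e. have the same partition count):

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.JoinWindows;
    import org.apache.kafka.streams.kstream.KStream;
    import java.time.Duration;

    StreamsBuilder builder = new StreamsBuilder();

    // T1 is already keyed by the join key; T2 is not, so re-key it.
    // Re-keying makes Streams insert an internal repartition topic
    // before the join -- that is the redundancy mentioned above.
    KStream<String, String> left = builder.stream("T1");
    KStream<String, String> right = builder.<String, String>stream("T2")
            .selectKey((k, v) -> extractJoinKey(v)); // extractJoinKey is hypothetical

    left.join(right,
              (lv, rv) -> lv + "|" + rv,             // combine matching values
              JoinWindows.of(Duration.ofMinutes(5))) // match records within 5 minutes
        .to("T1-T2-joined");
    // build() the topology and start a KafkaStreams instance as usual.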
Thanks for the response; we found the issue. We run in AWS, and inexplicably,
the new instance we launched to replace the dead one had exceedingly low
network bandwidth to exactly one of the remaining brokers, resulting in
timeouts. After rolling the dice again things are replicating normally.
One possibility is that broker 0 has exhausted its available file
descriptors. If this is the case, it will be able to maintain its existing
connections, giving the appearance that it is operating normally while
refusing new ones.
I don't recall the exact exception message but something along
Hi,
We are working on a project where we wish to use Kafka. Based on our
learning, we have a few queries:
- https://github.com/agaoglu/udp-kafka-bridge : used to push any message
received on a UDP port into Kafka.
Now we wish to receive messages over UDP as well. Is it possible to consume
from Kafka using UDP?
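Kafka itself only speaks its own TCP protocol, so consuming over UDP means another small bridge, mirroring what udp-kafka-bridge does on the producing side. A sketch of the idea with the recent Java consumer (the topic name, addresses, and port are placeholders):

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    public class KafkaUdpForwarder {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "udp-forwarder");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
                 DatagramSocket socket = new DatagramSocket()) {
                consumer.subscribe(Collections.singletonList("udp-out"));
                InetAddress target = InetAddress.getByName("127.0.0.1");
                while (true) {
                    ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<byte[], byte[]> rec : records) {
                        // Each Kafka record value becomes one UDP datagram.
                        socket.send(new DatagramPacket(rec.value(), rec.value().length, target, 9999));
                    }
                }
            }
        }
    }

Keep in mind UDP is lossy and datagrams are limited to roughly 64 KB, so this only makes sense for small messages where occasional loss is acceptable.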
Thanks so much to both of you.
Vadim
On Wed, Sep 7, 2016 at 2:07 AM, Ismael Juma wrote:
> Hi Vadim,
>
> You have to upgrade the brokers first and then the clients. So, you can use
> 0.8 clients with 0.10 brokers, but _not_ 0.10 clients with 0.8 brokers.
>
> On Wed, Sep 7, 2016 at 9:19 AM, Vadim
Cachestat
https://www.datadoghq.com/blog/collecting-kafka-performance-metrics/
On 9/7/16, 8:31 AM, "Peter Sinoros Szabo" wrote:
Hi,
As I read more and more about Kafka monitoring, it seems that monitoring
the Linux page cache hit ratio is important, but I do not really find a
The leader for each partition is on a different broker.
Example:
Three brokers.
Topic has three partitions and a replication factor of three.
In this case each broker will be the leader for one partition and a follower
for two. Three consumers in the same group would each be reading a different
partition, and therefore from a different broker.
Dave
> On Sep 7
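To illustrate, three copies of a consumer like the sketch below, all sharing a group.id, would each be assigned one of the three partitions, so reads are spread across the three brokers (names and addresses are placeholders):

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import java.util.Collections;
    import java.util.Properties;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "my-group"); // same group.id in all three instances
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Collections.singletonList("my-topic"));
    // With three partitions and three instances, the group coordinator
    // assigns exactly one partition per instance; each partition's leader
    // can live on a different broker.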
Hi All,
I am new to Kafka and have a query around "Replication".
Reference URL: https://www.youtube.com/watch?v=BGhlHsFBhLE#t=40m53s
The above YouTube URL mentions that it is possible for the same consumer to
read the same topic from multiple brokers, which increases throughput.
Query:
Hi,
As I read more and more about Kafka monitoring, it seems that monitoring
the Linux page cache hit ratio is important, but I do not really find a
good solution to get that value.
Do you have a good practice for getting that value?
Regards,
Peter
Hello,
I have the following scenario:
I have 1 consumer that consumes from multiple topics. One of these topics
is special - it holds some boot data, so it needs to be fully read and
processed from the beginning to its current end before messages from the
other topics are processed. After I go to t
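One common way to do this with the consumer API, sketched below ("boot-data", "other-topic", and processBootRecord are all placeholders): pause the other topics' partitions as soon as they are assigned, drain the boot topic up to the end offsets snapshotted at startup, then resume.

    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import java.time.Duration;
    import java.util.*;

    static void consumeBootTopicFirst(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(Arrays.asList("boot-data", "other-topic"),
                new ConsumerRebalanceListener() {
                    @Override
                    public void onPartitionsRevoked(Collection<TopicPartition> parts) { }

                    @Override
                    public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        // Pause the other topics the moment they are assigned, so
                        // nothing from them is delivered before boot data is drained.
                        List<TopicPartition> others = new ArrayList<>();
                        for (TopicPartition tp : parts) {
                            if (!tp.topic().equals("boot-data")) others.add(tp);
                        }
                        consumer.pause(others);
                    }
                });

        // Poll until we know our boot partitions; only boot records arrive here.
        Set<TopicPartition> boot = new HashSet<>();
        while (boot.isEmpty()) {
            for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(100))) {
                processBootRecord(rec); // hypothetical application callback
            }
            for (TopicPartition tp : consumer.assignment()) {
                if (tp.topic().equals("boot-data")) boot.add(tp);
            }
        }

        // Drain the boot partitions up to their end offsets as of now.
        Map<TopicPartition, Long> end = consumer.endOffsets(boot);
        while (!boot.stream().allMatch(tp -> consumer.position(tp) >= end.get(tp))) {
            for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(200))) {
                processBootRecord(rec);
            }
        }

        // Boot data fully processed; let the other topics flow.
        Set<TopicPartition> others = new HashSet<>(consumer.assignment());
        others.removeAll(boot);
        consumer.resume(others);
    }

Doing the pause inside the ConsumerRebalanceListener matters: it guarantees no record from the other topics slips through before pause() takes effect, even across rebalances.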
Hi Vadim,
You have to upgrade the brokers first and then the clients. So, you can use
0.8 clients with 0.10 brokers, but _not_ 0.10 clients with 0.8 brokers.
On Wed, Sep 7, 2016 at 9:19 AM, Vadim Keylis wrote:
> Hello. We are running kafka server 0.8. We are considering upgrading to the
> lates
I believe there were some incompatibilities between 0.8.x and 0.9.x for
the clients.
If you are upgrading from 0.8.x to 0.10.x, you should check out the
upgrade guide available here:
http://kafka.apache.org/documentation.html#upgrade_10
Cheers,
Francis
On 7/09/2016 6:19 PM, Vadim Keylis wrote:
Hello. We are running Kafka server 0.8. We are considering upgrading to the
latest version of Kafka. Is the latest 0.10 consumer backward compatible with
older versions of Kafka? What is the right approach to upgrading Kafka
servers and consumers?
Thanks in advance.