Apparently it's documented in the FAQ - but I ignored it since it said
"0.8.0" and I was using 0.8.2.1. After reading the whole lengthy forum post
dating back to 2013: the problematic code there is in DefaultEventHandler.scala,
but if I'm only using KafkaProducer.java - the Java flavor - I won't be
exposed to it.
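For readers hitting the same skew: the 2013 thread is about the old Scala producer's DefaultEventHandler, which for messages *without* a key picks one partition at random and sticks to it until the next metadata refresh, while the Java KafkaProducer rotates keyless messages per message. A simplified sketch of the two strategies (not the actual Kafka code, just the shape of the behavior):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch of the two null-key partitioning strategies.
class PartitionStrategies {
    // Old Scala producer (DefaultEventHandler): pick one partition at random
    // and stick to it until the next metadata refresh -- so for minutes at a
    // time, all keyless traffic lands on a single partition.
    static int stickyPartition = -1;

    static int scalaStylePartition(int numPartitions) {
        if (stickyPartition < 0) {
            stickyPartition = ThreadLocalRandom.current().nextInt(numPartitions);
        }
        return stickyPartition;
    }

    // Java KafkaProducer: advance a counter per message, so keyless messages
    // round-robin across all partitions.
    static final AtomicInteger counter = new AtomicInteger(0);

    static int javaStylePartition(int numPartitions) {
        return Math.abs(counter.getAndIncrement() % numPartitions);
    }
}
```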
Since the image is not shown, here's a direct link to it:
https://s32.postimg.org/xoet3vu2t/image.png
On Tue, Jul 5, 2016 at 7:01 AM Asaf Mesika wrote:
> As we continue to track down the cause, I'm pinging back here in case
> someone new might have an answer to the question below.
As we continue to track down the cause, I'm pinging back here in case
someone new might have an answer to the question below.
On Thu, Jun 16, 2016 at 12:39 PM Asaf Mesika wrote:
> Hi,
>
> We've noticed that we have some partitions receiving more messages than
> others. What I've done to le
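When partitions with *keyed* messages fill unevenly, a quick first check is to replay your actual keys through a hash-mod scheme and count how many land on each partition. A sketch of that check (note: the real Java producer hashes keys with murmur2, so for exact numbers run the keys through the real partitioner; plain hashCode here is only a stand-in):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Count how many keys land on each partition under hash(key) % numPartitions.
// Stand-in hash: the real Java producer uses murmur2, not String.hashCode.
class SkewCheck {
    static Map<Integer, Integer> distribution(List<String> keys, int numPartitions) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (String key : keys) {
            int partition = (key.hashCode() & 0x7fffffff) % numPartitions;
            counts.merge(partition, 1, Integer::sum);
        }
        return counts;
    }
}
```

If a few hot keys dominate, the skew is in the data, not the partitioner.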
And this error message comes from the Metadata.awaitUpdate() method.
From: tong...@csbucn.com
Sent: 2016-07-05 10:39
To: users
Subject: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 3000 ms
Hi,
I have 2 Kafka nodes and 1 zookeeper node.
When I use kill -9 to shutdown th
Hi,
I have 2 Kafka nodes and 1 ZooKeeper node.
When I use kill -9 to shut down kafka-node1, I get this error message from the
producer when sending messages:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata
after 3000 ms
The Kafka client version is 0.9.0.1, producer c
Hi Charity
There will be a KIP for this coming out shortly.
All the best
B
> On 4 Jul 2016, at 13:14, Alexis Midon wrote:
>
> Same here at Airbnb. Moving data is the biggest operational challenge
> because of the network bandwidth cannibalization.
> I was hoping that rate limiting would appl
Same here at Airbnb. Moving data is the biggest operational challenge
because of the network bandwidth cannibalization.
I was hoping that rate limiting would apply to replica fetchers too.
On Sun, Jul 3, 2016 at 15:38 Tom Crayford wrote:
> Hi Charity,
>
> I'm not sure about the roadmap. The way
Hi there,
my question is about Kafka Streams. I'm writing an application using
Streams. I read JSON from a Kafka topic and I make some transformations.
I'm using
Serde jsonNodeSerde = Serdes.serdeFrom(new JsonSerializer(), new
JsonDeserializer());
KStream kStream = builder.stream(Serdes.String()
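For reference, Serdes.serdeFrom pairs a Serializer and a Deserializer into a single Serde. A stdlib-only sketch of the contract those two halves must satisfy (the real JsonSerializer/JsonDeserializer would map JsonNode to bytes via Jackson's ObjectMapper; a plain String stands in for JsonNode here so the round trip is visible):

```java
import java.nio.charset.StandardCharsets;

// Stdlib-only sketch of the serializer/deserializer pair that
// Serdes.serdeFrom(...) bundles into a Serde. A String stands in
// for the JsonNode value type.
class JsonSerdeSketch {
    static byte[] serialize(String json) {
        // The serializer turns the value into bytes for the wire:
        return json.getBytes(StandardCharsets.UTF_8);
    }

    static String deserialize(byte[] bytes) {
        // ...and the deserializer turns those bytes back into the value:
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```

Whatever the value type, deserialize(serialize(v)) must give back an equal value, or downstream transformations will see corrupted records.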