We are looking at using Kafka 0.8-beta1 and the high-level consumer.
The Kafka 0.7 consumer supported backoff.increment.ms to avoid repeatedly
polling a broker node that has no new data. It appears that this property
is no longer supported in 0.8. What is the reason?
Instead there is fetch.wait.max.ms w
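For reference, the 0.8 long-poll settings that replace 0.7's client-side backoff look roughly like this in a high-level consumer properties file (property names per the 0.8 configuration docs; the values are illustrative):

```properties
# 0.8 high-level consumer: broker-side long polling instead of client backoff.
zookeeper.connect=localhost:2181
group.id=example-group
# The broker parks a fetch request up to this long when no data is available,
# which removes the need for a backoff.increment.ms-style polling delay.
fetch.wait.max.ms=100
# The broker answers early once at least this many bytes are ready.
fetch.min.bytes=1
```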
Hi,
I am wondering whether Kafka's metadata stores producer/consumer information.
When I start any producer, I provide a broker-list to it. But the producer
must be connecting to one of the brokers to send data (as far as I understand
things). Similarly, I can start more producers by giving a broker-list in t
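On the broker-list point, the 0.8 producer uses its configured broker list only to bootstrap a metadata request; partition leaders come from the response, and the cluster does not persist per-producer information. A rough sketch of that bootstrap logic, with hypothetical stand-in functions (not the actual client code):

```python
# Illustrative bootstrap: try each listed broker until one answers a
# metadata request; the response maps partitions to their current leaders.
def fetch_metadata_from(broker):
    # Hypothetical stand-in for a TopicMetadataRequest to one broker.
    if broker == ("broker2", 9092):
        return {("events", 0): ("broker3", 9092)}  # (topic, partition) -> leader
    raise ConnectionError("broker unavailable")

def bootstrap_metadata(broker_list):
    for broker in broker_list:
        try:
            return fetch_metadata_from(broker)
        except ConnectionError:
            continue  # this bootstrap broker is down; try the next one
    raise RuntimeError("no bootstrap broker reachable")

leaders = bootstrap_metadata([("broker1", 9092), ("broker2", 9092)])
```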
Hello Neha, does it mean that even if not all replicas acknowledged, the
timeout kicked in, and the producer got an exception, the message will still
be written?
Thanks.
On Thu, Oct 24, 2013 at 8:08 PM, Neha Narkhede wrote:
> The message will be written to the leader as well as the replicas.
>
> Thanks,
> Neh
The message will be written to the leader as well as the replicas.
Thanks,
Neha
On Thu, Oct 24, 2013 at 7:08 PM, Guozhang Wang wrote:
> Hi,
>
> In this case the request would be treated as timed out and hence failed, if
> the producer is async then after the number of retries it still failed,
Hi,
In this case the request would be treated as timed out and hence failed. If
the producer is async and it still fails after the configured number of
retries, the messages will be dropped.
Guozhang
On Thu, Oct 24, 2013 at 6:50 PM, Kane Kane wrote:
> If i set request.required.acks to -1, and set rela
If I set request.required.acks to -1 and a relatively short
request.timeout.ms, and the timeout happens before all replicas acknowledge
the write, would the message be written to the leader or dropped?
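For context, a sketch of the producer settings under discussion (0.8 property names; the values are illustrative, and whether the leader keeps the write when the timeout fires is exactly the question above):

```properties
# 0.8 sync producer: require acks from all in-sync replicas, short timeout.
metadata.broker.list=broker1:9092,broker2:9092
producer.type=sync
# -1 = wait for acknowledgement from all replicas in the ISR
request.required.acks=-1
# deliberately short, so the timeout can fire before every replica acks
request.timeout.ms=500
```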
I am planning to use the Kafka 0.8 spout and, after studying the source code,
found that it doesn't handle errors. There is a fork that adds a try/catch
around the use of fetchResponse, but my guess is this will lead to the spout
attempting the same partition infinitely until the leader is elected/comes
back online. I w
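The failure mode described above can be sketched as follows: wrapping the fetch in a bare try/catch retries the same partition in a tight loop while the leader is absent, whereas bounded retries with backoff (and a leader refresh) avoid hammering the broker. `fetch` and `refresh_leader` here are hypothetical stand-ins, not Kafka spout APIs:

```python
import time

def fetch_with_backoff(fetch, refresh_leader, max_attempts=5, base_delay=0.1):
    """Retry a failing fetch with exponential backoff instead of spinning."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except IOError:
            refresh_leader()  # pick up a newly elected leader, if any
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    raise RuntimeError("partition unavailable after retries")
```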
Thank you for posting these guidelines. I'm wondering if anyone out there
who is using the Kafka spout (for Storm) knows whether or not the Kafka
spout takes care of these types of details?
regards
-chris
On Thu, Oct 24, 2013 at 2:05 PM, Neha Narkhede wrote:
> Yes, when a leader dies, th
Yes, when a leader dies, the preference is to pick a leader from the ISR.
If not, the leader is picked from any other available replica. But if no
replicas are alive, the partition goes offline and all production and
consumption halts, until at least one replica is brought online.
Thanks,
Neha
O
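The preference Neha describes can be sketched as a small selection function (illustrative only, not Kafka's actual controller code):

```python
# Leader choice on failure: prefer a live ISR member, fall back to any live
# replica, otherwise the partition goes offline and all traffic halts.
def pick_leader(isr, replicas, live_brokers):
    for broker in isr:
        if broker in live_brokers:
            return broker
    for broker in replicas:
        if broker in live_brokers:
            return broker  # "unclean" choice from outside the ISR
    return None  # offline: production and consumption halt

new_leader = pick_leader(isr=[1, 2], replicas=[1, 2, 3], live_brokers={2, 3})
```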
>> publishing to and consumption from the partition will halt
>> and will not resume until the faulty leader node recovers
Can you confirm that's the case? I think they won't wait until the leader
has recovered and will instead try to elect a new leader from the existing
non-ISR replicas? And in case they do wait, and faul
Sounds good, yup!
/***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop
/
On Oct 24, 2013, at 1:12 PM, Jun Rao wrote:
> At this mom
At this moment, we have resolved all jiras that we intend to fix in 0.8.0
final.
Joe,
Would you like to drive the 0.8.0 final release again?
Thanks,
Jun
On Mon, Oct 21, 2013 at 8:53 PM, Jun Rao wrote:
> Hi, Everyone,
>
> At this moment, we have only one remaining jira (KAFKA-1097) that we p
Hi Folks/Roger,
Unfortunately I don't have legal clearance yet to contribute patches back
to Kafka for code done at work, so Roger, it would be great if you could
provide this patch.
Thanks!
Tim
On Mon, Oct 21, 2013 at 11:17 AM, Roger Hoover wrote:
> Agreed. Tim, it would be very helpful if yo
Got it. Thanks.
Regards,
Libo
-Original Message-
From: Neha Narkhede [mailto:neha.narkh...@gmail.com]
Sent: Thursday, October 24, 2013 10:09 AM
To: users@kafka.apache.org
Subject: Re: question about default key
The default key is null.
Thanks,
Neha
On Oct 24, 2013 6:47 AM, "Yu, Libo"
The default key is null.
Thanks,
Neha
On Oct 24, 2013 6:47 AM, "Yu, Libo" wrote:
> Hi team,
>
> If I don't specify a key when publishing a message, a default key will be
> generated.
> In this case, how long is the default key and will the consumer get this
> default key?
>
> Thanks.
>
> Libo
>
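A minimal sketch of Neha's answer (illustrative, not the Kafka wire format): when the producer supplies no key, the message simply carries a null key; nothing is generated on the producer's behalf, and null is what the consumer sees.

```python
# Hypothetical message builder: an unspecified key stays None (null),
# it has no length, and it is delivered to the consumer as null.
def build_message(value, key=None):
    return {"key": key, "value": value}

received = build_message(b"payload")  # no key supplied
```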
Hi team,
If I don't specify a key when publishing a message, a default key will be
generated.
In this case, how long is the default key and will the consumer get this
default key?
Thanks.
Libo
Thanks Neha
On 24 October 2013 18:11, Neha Narkhede wrote:
> Yes. And during retries, the producer and consumer refetch metadata.
>
> Thanks,
> Neha
> On Oct 24, 2013 3:09 AM, "Aniket Bhatnagar"
> wrote:
>
> > I am trying to understand and document how producers & consumers
> > will/should beh
Yes. And during retries, the producer and consumer refetch metadata.
Thanks,
Neha
On Oct 24, 2013 3:09 AM, "Aniket Bhatnagar"
wrote:
> I am trying to understand and document how producers & consumers
> will/should behave in case of node failures in 0.8. I know there are
> various other threads t
Thanks Neha. That was the issue. Configuring the right access policies in AWS
solved the problem.
Thanks again.
It is per broker.
It gives the count of messages that a broker has.
As per my understanding, to get it at the cluster level, the message counts
at each broker need to be summed up.
On Thu, Oct 24, 2013 at 2:19 PM, Kane Kane wrote:
> I see this MBean:
> "kafka.server":name="AllTopicsMessagesInPerSec",
I am trying to understand and document how producers & consumers
will/should behave in case of node failures in 0.8. I know there are
various other threads that discuss this but I wanted to bring all the
information together in one post. This should help people building
producers & consumers in oth
This is per broker.
What we do is use JMXtrans (https://github.com/jmxtrans/jmxtrans) to pull
this data into statsd. To get all messages into the cluster at once, we sum
over all the brokers in Graphite.
On Thu, Oct 24, 2013 at 4:49 AM, Kane Kane wrote:
> I see this MBean:
> "kafka.server":nam
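Cluster-level aggregation of this per-broker MBean then reduces to a sum over brokers, e.g. after polling each broker's AllTopicsMessagesInPerSec count via jmxtrans as suggested above (the numbers here are made up):

```python
# Per-broker counts polled from the MBean (illustrative values).
per_broker_counts = {
    "broker1:9999": 120_000,
    "broker2:9999": 98_500,
    "broker3:9999": 101_500,
}

# The cluster-wide figure is simply the sum over all brokers.
cluster_messages_in = sum(per_broker_counts.values())
```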
I see this MBean:
"kafka.server":name="AllTopicsMessagesInPerSec",type="BrokerTopicMetrics"
Does it return the number per broker or per cluster? If it's per broker,
how do I get the global value per cluster, and vice versa?
Thanks.