I think you are right. I was too quick in saying that pre-0.7 we also had
"at-least-once".
Guozhang
On Sun, Oct 27, 2013 at 9:51 AM, Jason Rosenberg wrote:
Perhaps it's a matter of semantics.
But I think I'm not talking only about failure, but about normal operation.
It's normal to take a cluster down for maintenance or a code update, and
this should be done in a rolling-restart manner (one server at a time).
The reason for replication is to increase reliability...
Hello Jason,
You are right. I think we just have different definitions of "at least
once". What you have described to me is more related to "availability",
which says that your message will not be lost when there are failures. And
we achieve this through replication (which is related to
request.required.acks)...
Guozhang,
It turns out this is not entirely true: you do need request.required.acks =
-1 (and yes, you need to retry on failure) in order to have guaranteed
delivery.
I discovered this when doing tests with rolling restarts (and hard
restarts) of the Kafka servers. If the server goes down, e.g. if...
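A minimal sketch of the setup being described here, assuming the 0.8-era
sync producer API (the broker addresses are hypothetical; the topic name
"benchmark" is taken from the logs elsewhere in this thread):

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class AckAllProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // hypothetical broker list
        props.put("metadata.broker.list", "broker1:9092,broker2:9092,broker3:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // -1: wait until all in-sync replicas have acknowledged the write
        props.put("request.required.acks", "-1");
        // retry failed sends a few times before send() finally throws
        props.put("message.send.max.retries", "3");

        Producer<String, String> producer =
            new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("benchmark", "hello"));
        producer.close();
    }
}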
Jason,
Setting request.required.acks=-1 is orthogonal to the 'at least once'
guarantee; it only relates to the latency/replication trade-off. For
example, even if you set request.required.acks to 1, as long as you
retry on all non-fatal exceptions you have the "at least once" guarantee;
and ev...
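A sketch of the retry logic Guozhang is pointing at (0.8 producer API
assumed): the at-least-once property comes from re-sending after a
non-fatal failure, whatever the ack level is.

import kafka.common.FailedToSendMessageException;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;

public class AtLeastOnceSender {
    // Re-send until the broker acks. A message that was in fact committed
    // before a timeout gets sent again, so duplicates are possible:
    // "at least once", not "exactly once".
    static void sendAtLeastOnce(Producer<String, String> producer,
                                KeyedMessage<String, String> msg) {
        while (true) {
            try {
                producer.send(msg); // acked at the configured request.required.acks level
                return;
            } catch (FailedToSendMessageException e) {
                // Non-fatal and ambiguous: the write may already be committed.
                // Retry anyway; dedup, if needed, happens downstream.
            }
        }
    }
}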
Just to clarify, I think in order to get 'at least once' guarantees, you
must produce messages with 'request.required.acks=-1'. Otherwise, you
can't be 100% sure the message was received by all ISR replicas.
On Fri, Oct 25, 2013 at 9:56 PM, Kane Kane wrote:
Thanks Guozhang, it makes sense if it's by design. Just wanted to ensure
I'm not doing something wrong.
On Fri, Oct 25, 2013 at 5:57 PM, Guozhang Wang wrote:
As we have said, the timeout exception does not actually mean the message
is not committed to the broker. When message.send.max.retries is 0, Kafka
does guarantee "at-most-once", which means that you will not have
duplicates, but it does not mean that all your exceptions can be treated as
"message not delivered"...
There are a lot of exceptions; I will try to pick an example of each:
ERROR async.DefaultEventHandler - Failed to send requests for topics
benchmark with correlation ids in [879,881]
WARN async.DefaultEventHandler - Produce request with correlation id 874
failed due to [benchmark,43]: kafka.common
Kane,
If you set message.send.max.retries to 0, it should be at-most-once, and I
saw that your props have the right config. What are the exceptions you got
the send() call?
Guozhang
On Fri, Oct 25, 2013 at 12:54 PM, Steve Morin wrote:
Kane and Aniket,
I am interested in knowing what pattern/solution people usually
use to implement exactly-once as well.
-Steve
On Fri, Oct 25, 2013 at 11:39 AM, Kane Kane wrote:
Guozhang, but I've posted a piece from the Kafka documentation above:
So effectively Kafka guarantees at-least-once delivery by default and
allows the user to implement at most once delivery by disabling retries on
the producer.
What I want is at-most-once, and the docs claim it's possible with certain
settings...
I.e. from the documentation:
So effectively Kafka guarantees at-least-once delivery by default and
allows the user to implement at most once delivery by disabling retries on
the producer
I've disabled retries, but it's not at-most-once, as my test proves. It's
still at-least-once.
Aniket is exactly right. In general, Kafka provides an "at least once"
guarantee rather than "exactly once".
Guozhang
On Fri, Oct 25, 2013 at 11:13 AM, Aniket Bhatnagar <
aniket.bhatna...@gmail.com> wrote:
Hello Aniket,
Thanks for the answer; this totally makes sense, and implementing that
layer on the consumer side to check for dups sounds like a good solution to
this issue.
Can we get a confirmation from the Kafka devs that this is how Kafka is
supposed to work (by design), and how we should implement the solution...
As per my understanding, if the broker says the msg is committed, it's
guaranteed to have been committed as per your ack config. If it says it did
not get committed, then it's very hard to figure out if this was just a
false error. Since there is no concept of unique ids for messages, a replay
of the same...
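A sketch of the dedup layer being discussed: the producer attaches its own
unique id to each message, and the consumer skips ids it has already
processed. The id scheme and the unbounded in-memory set are simplifying
assumptions; a real implementation would want a bounded or persistent store.

import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class DedupLayer {
    // Producer side: prefix every payload with an application-level unique id.
    static String wrap(String payload) {
        return UUID.randomUUID().toString() + "|" + payload;
    }

    // Consumer side: remember processed ids and drop repeats, turning
    // at-least-once delivery into effectively-once processing.
    private final Set<String> seen = new HashSet<String>();

    boolean isDuplicate(String message) {
        String id = message.substring(0, message.indexOf('|'));
        return !seen.add(id); // add() returns false if the id was already seen
    }
}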
Or, to rephrase it more generally, is there a way to know exactly whether a
message was committed or not?
On Fri, Oct 25, 2013 at 10:43 AM, Kane Kane wrote:
Hello Guozhang,
My partitions are split almost evenly between brokers, so, yes - the broker
that I shut down is the leader for some of them. Does it mean I can get an
exception even though the data is still written? Is there any setting on the
broker where I can control this? I.e., can I make broker replication...
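For reference, the 0.8-era broker settings that bear on this control ISR
membership rather than the ack semantics themselves; the values below are
illustrative, not recommendations:

# server.properties (0.8): how far a follower may fall behind
# before it is dropped from the ISR
replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
# replication factor for auto-created topics
default.replication.factor=3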
Hello Kane,
As discussed in the other thread, even if a timeout response is sent back
to the producer, the message may still be committed.
Did you shut down the leader broker of the partition or a follower broker?
Guozhang
On Fri, Oct 25, 2013 at 8:45 AM, Kane Kane wrote:
I have a cluster of 3 Kafka brokers. With the following script I send some
data to Kafka and in the middle do a controlled shutdown of 1 broker. All
3 brokers are in the ISR before I start sending. When I shut down the broker
I get a couple of exceptions, and I expect data shouldn't be written. Say, I
send...
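A sketch of that kind of test, assuming the 0.8 sync producer (message
count and broker list are illustrative): send a known number of messages
while one broker is bounced, count the sends that threw, and compare with
what a consumer later sees on the topic.

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class RollingRestartTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092,broker2:9092,broker3:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");
        props.put("message.send.max.retries", "0"); // aiming for at-most-once

        Producer<String, String> producer =
            new Producer<String, String>(new ProducerConfig(props));
        int acked = 0, threw = 0;
        for (int i = 0; i < 10000; i++) { // bounce one broker while this runs
            try {
                producer.send(new KeyedMessage<String, String>("benchmark", "msg-" + i));
                acked++;
            } catch (Exception e) {
                threw++; // may or may not have been committed
            }
        }
        producer.close();
        System.out.println("acked=" + acked + " threw=" + threw);
        // If the topic ends up holding more than 'acked' messages, some sends
        // that threw were in fact committed - the ambiguity discussed in this
        // thread.
    }
}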