Hi all,

So, perhaps it's worth adding a couple specific examples of where this
feature is useful, to make this a bit more concrete:

- Suppose I'm using Kafka as a commit log for a partitioned KV store,
like Samza or Pistachio (?) do. We bootstrap the process state by
reading from that partition, and log all state updates to that
partition when we're running. Now imagine that one of my processes
locks up -- GC or similar -- and the system transitions that partition
over to another node. When the GC is finished, the old 'owner' of that
partition might still be trying to write to the commit log at the same
time as the new one is. A process might detect this by noticing that the
offset of the published message is bigger than it thought the upcoming
offset was, which implies someone else has been writing to the log...
but by then it's too late, and the commit log is already corrupt. With
a 'conditional produce', one of those processes will have its publish
request refused -- so we've avoided corrupting the state. (There's a
small sketch of this check just after the second example below.)

- Envision some copycat-like system, where we have some sharded
postgres setup and we're tailing each shard into its own partition.
Normally, it's fairly easy to avoid duplicates here: we can track
which offset in the WAL corresponds to which offset in Kafka, and we
know how many messages we've written to Kafka already, so the state is
very simple. However, it is possible that for a moment -- due to
rebalancing or operator error or some other thing -- two different
nodes are tailing the same postgres shard at once! Normally this would
introduce duplicate messages, but by specifying the expected offset,
we can avoid this.
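
To make the mechanism in both examples concrete, here's a tiny model of
the check the KIP proposes -- just illustrative Java, not Kafka
internals, and all the names here are made up for the example. The
broker accepts a produce request only when the expected offset matches
the partition's current log-end offset, so a stale writer gets a clean
rejection instead of silently interleaving:

    import java.util.ArrayList;
    import java.util.List;

    // Toy model of the proposed check -- illustrative only, not Kafka code.
    final class PartitionLog {
        private final List<byte[]> messages = new ArrayList<>();

        // Accept the append only if the caller's expected offset matches the
        // current log-end offset; otherwise reject and leave the log unchanged.
        synchronized long conditionalAppend(byte[] message, long expectedOffset) {
            long logEndOffset = messages.size();
            if (expectedOffset != logEndOffset) {
                return -1L;          // reject: someone else wrote since the caller last looked
            }
            messages.add(message);
            return logEndOffset;     // accept: the message lands exactly where expected
        }

        public static void main(String[] args) {
            PartitionLog log = new PartitionLog();

            // Both the old (GC-paused) owner and the new owner think the next
            // offset is 0. The new owner's write gets in first...
            System.out.println(log.conditionalAppend("from-new-owner".getBytes(), 0));  // prints 0

            // ...so the old owner's stale write is refused instead of silently
            // interleaving with the new owner's writes and corrupting the state.
            System.out.println(log.conditionalAppend("from-old-owner".getBytes(), 0));  // prints -1
        }
    }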

So perhaps it's better to say that this is useful when a single
producer is *expected*, but multiple producers are *possible*? (In the
same way that the high-level consumer normally has 1 consumer in a
group reading from a partition, but there are small windows where more
than one might be reading at the same time.) This is also the spirit
of the 'runtime cost' comment -- in the common case, where there is
little to no contention, there's no performance overhead either. I
mentioned this a little in the Motivation section -- maybe I should
flesh that out a little bit?
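
To expand on the 'runtime cost' point a bit: in the toy model above, the
whole broker-side check is a single comparison against the log-end
offset the leader already tracks, and on the producer side the common
single-writer path is just a local counter. Roughly (again, invented
names, reusing the PartitionLog sketch from above):

    // Uncontended single-writer loop, reusing the toy PartitionLog above.
    // The only extra work is one local increment here and one comparison there.
    static void writeAll(PartitionLog log, List<byte[]> updates, long logEndAtTakeover) {
        long nextOffset = logEndAtTakeover;   // learned once, when this writer takes over
        for (byte[] update : updates) {
            if (log.conditionalAppend(update, nextOffset) < 0) {
                // Contention: another writer got in first. Stop and re-check
                // ownership rather than blindly retrying at the next offset.
                return;
            }
            nextOffset++;                     // common case: just advance the local counter
        }
    }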

For me, the motivation to work this up was that I kept running into
cases, like the above, where the existing API was almost-but-not-quite
enough to give the guarantees I was looking for -- and the extension
needed to handle those cases too was pretty small and natural-feeling.

On Fri, Jul 17, 2015 at 4:49 PM, Ashish Singh <asi...@cloudera.com> wrote:
> Good concept. I have a question though.
>
> Say there are two producers A and B. Both producers are producing to the same
> partition.
> - A sends a message with expected offset, x1
> - Broker accepts it and sends an Ack
> - B sends a message with expected offset, x1
> - Broker rejects it, sends nack
> - B sends the message again with expected offset, x1+1
> - Broker accepts it and sends Ack
> I guess this is what this KIP suggests, right? If yes, then how does this
> ensure that the same message will not be written twice when two producers are
> producing to the same partition? The producer, on receiving a nack, will try
> again with the next offset and will keep doing so till the message is
> accepted. Am I missing something?
>
> Also, you have mentioned in the KIP, "it imposes little to no runtime cost in
> memory or time"; I think that is not true for time. With this approach,
> producers' performance will degrade in proportion to the number of producers
> writing to the same partition. Please correct me if I am missing something.
>
>
> On Fri, Jul 17, 2015 at 11:32 AM, Mayuresh Gharat <
> gharatmayures...@gmail.com> wrote:
>
>> If we have 2 producers producing to a partition, they can be out of order,
>> then how does one producer know what offset to expect as it does not
>> interact with other producer?
>>
>> Can you give an example flow that explains how it works with single
>> producer and with multiple producers?
>>
>>
>> Thanks,
>>
>> Mayuresh
>>
>> On Fri, Jul 17, 2015 at 10:28 AM, Flavio Junqueira <
>> fpjunque...@yahoo.com.invalid> wrote:
>>
>> > I like this feature, it reminds me of conditional updates in zookeeper.
>> > I'm not sure if it'd be best to have some mechanism for fencing rather than
>> > a conditional write like you're proposing. The reason I'm saying this is
>> > that the conditional write applies to requests individually, while it
>> > sounds like you want to make sure that there is a single client writing
>> > over multiple requests.
>> >
>> > -Flavio
>> >
>> > > On 17 Jul 2015, at 07:30, Ben Kirwin <b...@kirw.in> wrote:
>> > >
>> > > Hi there,
>> > >
>> > > I just added a KIP for a 'conditional publish' operation: a simple
>> > > CAS-like mechanism for the Kafka producer. The wiki page is here:
>> > >
>> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-27+-+Conditional+Publish
>> > >
>> > > And there's some previous discussion on the ticket and the users list:
>> > >
>> > > https://issues.apache.org/jira/browse/KAFKA-2260
>> > >
>> > > https://mail-archives.apache.org/mod_mbox/kafka-users/201506.mbox/%3CCAAeOB6ccyAA13YNPqVQv2o-mT5r=c9v7a+55sf2wp93qg7+...@mail.gmail.com%3E
>> > >
>> > > As always, comments and suggestions are very welcome.
>> > >
>> > > Thanks,
>> > > Ben
>> >
>> >
>>
>>
>> --
>> -Regards,
>> Mayuresh R. Gharat
>> (862) 250-7125
>>
>
>
>
> --
>
> Regards,
> Ashish
