Thanks, Neha, for the continued insight.... What you describe as a possible solution is what I was thinking (although I wasn't as concerned as maybe I should be about the added delay while the new leader holds off processing new requests until it finishes consuming from the old leader and completes the back-and-forth of the leader hand-off).
E.g., isn't the definition of a broker being in the ISR that it is keeping itself up to date with the leader (within the allowed replication lag)? So it should be possible to elect a new leader, have it buffer incoming requests while it finishes replicating everything from the old leader (which it should complete within the replication lag timeout), and then start acking any buffered requests. I guess this buffering period would be akin to the leader 'unavailability' window, but in reality it is just a delay (and shouldn't be much more than the replication lag timeout). The producing client can decide to time out the request if it's taking too long and retry it (that's normal anyway if a producer fails to get an ack, etc.).

So, as long as the old leader atomically starts rejecting incoming requests at the moment it relinquishes leadership, producer requests will fail fast, trigger a new metadata request to find the new leader, and carry on with the new leader (possibly after a bit of a replication catch-up delay). The old leader can then proceed with its shutdown once the new leader has caught up (which the new leader signals with an RPC). I realize there are all sorts of edge cases here, but it seems there should be a way to make it work.

I guess I'm willing to allow a bit of an 'unavailability' delay rather than have messages silently acked and then lost during a controlled shutdown/new leader election. My hunch is that in the steady state, when leadership is stable and brokers aren't being shut down, the performance benefit of being able to use request.required.acks=1 (instead of -1) far outweighs any momentary blip during the leader availability delay of a leadership change (which should be fully recoverable and retryable by a concerned producer client). Now, of course, if I want to guard against a hard shutdown, that's a whole different ball of wax!
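Just to make the hand-off I'm imagining a bit more concrete, here is a rough sketch (purely illustrative Java-ish pseudocode, not actual Kafka broker code; Replica, oldLeaderLogEndOffset, signalCaughtUp, etc. are all made-up names):

import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative only: the proposed controlled hand-off for one partition,
// as seen from the broker that is taking over leadership.
class ProposedLeaderHandoff {

    // Hypothetical view of the local replica's log.
    interface Replica {
        long logEndOffset();               // highest offset written locally
        void fetchFrom(String oldLeader);  // keep replicating from the old leader
    }

    static class ProduceRequest {}

    private final Queue<ProduceRequest> buffered = new ArrayDeque<ProduceRequest>();

    // Called while we are still catching up: buffer instead of acking.
    void onProduceRequest(ProduceRequest r) {
        buffered.add(r);
    }

    void becomeLeader(Replica localReplica, String oldLeader, long oldLeaderLogEndOffset) {
        // The old leader has already (atomically) started rejecting produce requests,
        // so producers fail fast, refresh metadata, and start sending to us (buffered above).

        // Keep fetching until we have everything the old leader ever acked; this should
        // take no longer than roughly the replication lag timeout.
        while (localReplica.logEndOffset() < oldLeaderLogEndOffset) {
            localReplica.fetchFrom(oldLeader);
        }

        // Tell the old leader it can finish shutting down (the extra RPC).
        signalCaughtUp(oldLeader);

        // Now act as a normal leader: drain the buffer, ack, and serve new requests.
        for (ProduceRequest r : buffered) {
            appendAndAck(r);
        }
        buffered.clear();
    }

    void signalCaughtUp(String oldLeader) { /* RPC back to the old leader */ }
    void appendAndAck(ProduceRequest r)   { /* append to the local log, send the ack */ }
}

Obviously the real thing would have to deal with the ISR bookkeeping and all the edge cases above, but that's the general shape of what I mean.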
Jason

On Sun, Oct 6, 2013 at 4:30 PM, Neha Narkhede <neha.narkh...@gmail.com> wrote:

> Ok, so if I initiate a controlled shutdown, in which all partitions that a shutting-down broker is the leader of get transferred to another broker, why can't part of that controlled transfer of leadership include ISR synchronization, such that no data is lost? Is there a fundamental reason why that is not possible? Is it worth filing a JIRA for a feature request? Or is it technically not possible?
>
> It is not as straightforward as it seems, and it would slow down the shutdown operation even further (currently several zookeeper writes already slow it down) and also increase the leader unavailability window. But keeping the performance degradation aside, it is tricky: in order to stop "new" data from coming in, we need to move the leaders off of the current leader (the broker being shut down) onto some other follower. Moving the leader means some other follower becomes the leader and, as part of that, stops copying existing data from the old leader and starts receiving new data. What you are asking for is to insert some kind of "wait" before the new follower becomes the leader, so that the consumption of messages is "done". What is the definition of "done"? It is dictated only by the log end offset of the old leader and would have to be included in the new-leader transition state change request sent by the controller. So this means an extra RPC between the controller and the other brokers as part of the leader transition.
>
> Also, there is no guarantee that the other followers are alive and consuming, so how long does the broker being shut down wait? Since it is no longer the leader, it technically cannot kick followers out of the ISR, so ISR handling is another thing that becomes tricky here.
>
> Also, this sort of catching up on the last few messages is not limited to controlled shutdown; you can extend it to any other leader transition. So special-casing it does not make much sense. This "wait" combined with the extra RPC would extend the leader unavailability window even further than what we have today.
>
> So this is a fair amount of work just to make sure the last few messages are replicated to all followers. Instead, what we do is simplify the leader transition and let the clients handle retries for requests that have not made it to the desired number of replicas, which is configurable.
>
> We can discuss that in a JIRA if that helps. Maybe other committers have more ideas.
>
> Thanks,
> Neha
>
>
> On Sun, Oct 6, 2013 at 10:08 AM, Jason Rosenberg <j...@squareup.com> wrote:
>
> > On Sun, Oct 6, 2013 at 4:08 AM, Neha Narkhede <neha.narkh...@gmail.com> wrote:
> >
> > > Does the leader just wait for the followers in the ISR to consume?
> > >
> > > That's right. Until that is done, the producer does not get an ack back. It has an option of retrying if the previous request times out or fails.
> >
> > Ok, so if I initiate a controlled shutdown, in which all partitions that a shutting-down broker is the leader of get transferred to another broker, why can't part of that controlled transfer of leadership include ISR synchronization, such that no data is lost? Is there a fundamental reason why that is not possible? Is it worth filing a JIRA for a feature request? Or is it technically not possible?
> >
> > I'm ok with losing data in this case during a hard shutdown, but for a controlled shutdown, I'm wondering if there's at least a best-effort attempt to be made to sync all writes to the ISR.
> >
> > Jason
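P.S. For reference, the client-side knobs being referred to above (acks and retries) are just producer config. With the 0.8 producer it looks roughly like the following; I'm going from memory on the exact property names, so double-check them against the docs:

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class AcksExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092,broker2:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        // -1 waits for the full ISR before acking; 1 only waits on the leader,
        // which is where the controlled-shutdown window comes in.
        props.put("request.required.acks", "-1");

        // Let the client retry requests that fail or time out during a leadership change.
        props.put("message.send.max.retries", "3");
        props.put("retry.backoff.ms", "100");

        Producer<String, String> producer =
            new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("my-topic", "key", "value"));
        producer.close();
    }
}

The trade-off I'm weighing is exactly that first setting: -1 is safe across a leadership change but slower on every request, while 1 is fast but exposed to the window we've been discussing.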