Hi, Guozhang,

In general, users may want to optimize affinity in different ways, e.g. latency, cost, etc. I am not sure that all of those cases can be captured by client IP addresses. So it seems that having a rack.id in the consumer is still potentially useful.
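To make that concrete, here is a minimal sketch of a consumer declaring its rack. It assumes the "rack.id" config name used in this discussion; the final KIP may choose a different key, and the broker addresses and group name below are placeholders.

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class RackAwareConsumerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");   // placeholder address
            props.put("group.id", "example-group");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            // The consumer states where it runs; a broker-side selector can then
            // optimize for latency, cost, or any other affinity criterion instead
            // of inferring location from the client IP alone.
            // "rack.id" is the name used in this thread, not a finalized config.
            props.put("rack.id", "us-east-1a");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            // ... subscribe and poll as usual ...
            consumer.close();
        }
    }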
Thanks,

Jun

On Wed, Mar 27, 2019 at 9:05 AM Guozhang Wang <wangg...@gmail.com> wrote:

> Hello Jun,
>
> Regarding 200: if we assume that most clients would not bother setting
> rack.id at all and affinity can be determined w/o rack.id via the TCP
> header, plus rack.id may not be "future-proof" if additional information
> is needed as well, then do we still need to change the protocol of the
> metadata request to add `rack.id`?
>
> Guozhang
>
> On Tue, Mar 26, 2019 at 6:23 PM Jun Rao <j...@confluent.io> wrote:
>
> > Hi, Jason,
> >
> > Thanks for the KIP. Just a couple more comments.
> >
> > 200. I am wondering if we really need the replica.selection.policy
> > config in the consumer. A slight variant is that we (1) let the consumer
> > always fetch from the PreferredReplica and (2) provide a default
> > implementation of ReplicaSelector that always returns the leader replica
> > in select() for backward compatibility. Then, we can get rid of
> > replica.selection.policy in the consumer. The benefits are (1) fewer
> > configs, and (2) affinity optimization can potentially be turned on with
> > just a broker-side change (assuming affinity can be determined w/o the
> > client rack.id).
> >
> > 201. I am wondering if PreferredReplica in the protocol should be named
> > PreferredReadReplica since it's intended for reads?
> >
> > Jun
> >
> > On Mon, Mar 25, 2019 at 9:07 AM Jason Gustafson <ja...@confluent.io>
> > wrote:
> >
> > > Hi All, discussion on the KIP seems to have died down, so I'd like to
> > > go ahead and start a vote. Here is a link to the KIP:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-392%3A+Allow+consumers+to+fetch+from+closest+replica
> > >
> > > +1 from me (duh)
> > >
> > > -Jason
>
> --
> -- Guozhang
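For reference, comment 200 above proposes shipping a default selector that preserves today's fetch-from-leader behavior. The following is a minimal, self-contained sketch of that idea; the interface shape below is a hypothetical stand-in, since the real ReplicaSelector in the KIP may carry more context (client rack.id, client address, partition metadata) and different names and signatures.

    import java.util.Optional;
    import java.util.Set;

    // Hypothetical stand-in for each replica's view exposed to the selector.
    interface Replica {
        int brokerId();
        boolean isLeader();
    }

    // Hypothetical stand-in for the plug-in interface discussed in the KIP.
    interface ReplicaSelector {
        // Pick the replica a given consumer should fetch from.
        Optional<Replica> select(Set<Replica> replicas);
    }

    // The backward-compatible default suggested in comment 200: always return
    // the leader, so enabling the plug-in mechanism changes nothing by itself.
    class LeaderOnlySelector implements ReplicaSelector {
        @Override
        public Optional<Replica> select(Set<Replica> replicas) {
            return replicas.stream().filter(Replica::isLeader).findFirst();
        }
    }

A rack- or cost-aware selector would only need to override select(), matching whatever client information the broker has (rack.id, IP address) against each replica's location, which is why the optimization could be turned on with a broker-side change alone.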