Thanks Ryanne. These were good questions.
-Jason
On Wed, Mar 20, 2019 at 5:19 PM Ryanne Dolan wrote:
> Thanks Jason, that helps. I agree my concerns are orthogonal to reducing
> cross-DC transfer costs -- I'm only interested in how this affects what
> happens when a rack is unavailable, since, as the name implies, the whole
> point of stretching a cluster across availability zones is for increased
> availability.
Thanks Jason, that helps. I agree my concerns are orthogonal to reducing
cross-DC transfer costs -- I'm only interested in how this affects what
happens when a rack is unavailable, since, as the name implies, the whole
point of stretching a cluster across availability zones is for increased
availability.
Hi Ryanne,
Thanks, responses below:
> Thanks Jason, I see that the proposed ReplicaSelector would be where that
> decision is made. But I'm not certain how a consumer triggers this process?
> If a consumer can't reach its preferred rack, how does it ask for a new
> assignment?
As documented in th
Thanks Jason, I see that the proposed ReplicaSelector would be where that
decision is made. But I'm not certain how a consumer triggers this process?
If a consumer can't reach its preferred rack, how does it ask for a new
assignment?
I suppose a consumer that can't reach its preferred rack would n
Hi Ryanne,
Thanks for the comment. If I understand your question correctly, I think
the answer is no. I would expect typical selection logic to consider
replica availability first before any other factor. In some cases, however,
a user may put a higher priority on saving cross-dc traffic costs. If
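To make that ordering concrete, here is a minimal Java sketch using simplified stand-in types rather than the KIP's actual interfaces (the Replica record and the class name are illustrative): in-sync replicas are always preferred, rack locality only decides among them, and the leader is the final fallback.

    import java.util.List;
    import java.util.Optional;

    // Sketch only: 'Replica' and this selector are illustrative names, not the proposed API.
    final class AvailabilityFirstSelector {
        record Replica(int brokerId, String rack, boolean isLeader, boolean inSync) {}

        static Optional<Replica> select(String clientRack, List<Replica> replicas) {
            // Availability first: only consider in-sync replicas.
            List<Replica> inSync = replicas.stream().filter(Replica::inSync).toList();

            // Then prefer a replica in the client's rack, if one is available.
            Optional<Replica> sameRack = inSync.stream()
                    .filter(r -> r.rack() != null && r.rack().equals(clientRack))
                    .findFirst();
            if (sameRack.isPresent())
                return sameRack;

            // Otherwise fall back to the leader.
            return inSync.stream().filter(Replica::isLeader).findFirst();
        }
    }

A selector that instead prioritizes cross-DC cost savings could simply reorder these steps, which is exactly the kind of policy decision the pluggable interface is meant to leave to the user.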
Jason, awesome KIP.
I'm wondering how this change would affect availability of the cluster when
a rack is unreachable. Is there a scenario where availability is improved
or impaired due to the proposed changes?
Ryanne
On Tue, Mar 19, 2019 at 4:32 PM Jason Gustafson wrote:
> Hi Jun,
>
> Yes, that makes sense to me. I have added a ClientMetadata class which
> encapsulates various metadata including the rackId and the client address
> information.
Hi Jun,
Yes, that makes sense to me. I have added a ClientMetadata class which
encapsulates various metadata including the rackId and the client address
information.
Thanks,
Jason
On Tue, Mar 19, 2019 at 2:17 PM Jun Rao wrote:
> Hi, Jason,
>
> Thanks for the updated KIP. Just one more comment below.
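For readers following along, a rough sketch of the kind of value object Jason describes above (a rack id plus the client's connection address); the shape and field names here are assumptions, not the KIP's final definition, which may carry additional fields.

    import java.net.InetAddress;

    // Illustrative only: not the KIP's actual ClientMetadata class.
    final class ClientMetadataSketch {
        private final String rackId;             // rack advertised by the consumer, may be empty
        private final InetAddress clientAddress; // connection address, per Jun's suggestion below

        ClientMetadataSketch(String rackId, InetAddress clientAddress) {
            this.rackId = rackId;
            this.clientAddress = clientAddress;
        }

        String rackId() { return rackId; }
        InetAddress clientAddress() { return clientAddress; }
    }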
Hi, Jason,
Thanks for the updated KIP. Just one more comment below.
100. The ReplicaSelector class has the following method. I am wondering if
we should additionally pass in the client connection info to the method.
For example, if rackId is not set, the plugin could potentially select the
replica based on the client connection information.
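One way to picture Jun's suggestion, with entirely hypothetical names (neither the resolver nor the subnet mapping is part of the KIP): when the client supplies no rack id, a plugin could fall back to the connection address, for example through an operator-provided subnet-to-rack table.

    import java.net.InetAddress;
    import java.util.Map;
    import java.util.Optional;

    // Hypothetical fallback from an explicit rackId to the client's connection address.
    final class RackResolver {
        private final Map<String, String> subnetPrefixToRack; // e.g. "10.12." -> "us-east-1a"

        RackResolver(Map<String, String> subnetPrefixToRack) {
            this.subnetPrefixToRack = subnetPrefixToRack;
        }

        Optional<String> resolve(Optional<String> rackId, InetAddress clientAddress) {
            if (rackId.isPresent() && !rackId.get().isEmpty())
                return rackId;                                // explicit rack wins
            String host = clientAddress.getHostAddress();     // otherwise use connection info
            return subnetPrefixToRack.entrySet().stream()
                    .filter(e -> host.startsWith(e.getKey()))
                    .map(Map.Entry::getValue)
                    .findFirst();
        }
    }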
Hey Everyone,
Apologies for the long delay. I am picking this work back up.
After giving this some further thought, I decided it makes the most sense
to move replica selection logic into the broker. It is much more difficult
to coordinate selection logic in a multi-tenant environment if operators
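With selection moved to the broker, the client-visible surface is mostly configuration. A minimal sketch of the wiring, assuming the property names the KIP proposes (replica.selector.class on the broker, client.rack on the consumer, alongside the existing broker.rack); the rack values and bootstrap address are illustrative.

    import java.util.Properties;

    public class RackAwareFetchConfig {
        public static void main(String[] args) {
            // Broker side: advertise the broker's rack and plug in a rack-aware selector.
            Properties broker = new Properties();
            broker.setProperty("broker.rack", "us-east-1a");
            broker.setProperty("replica.selector.class",
                    "org.apache.kafka.common.replica.RackAwareReplicaSelector");

            // Consumer side: the consumer only advertises where it runs; the
            // broker-side plugin decides which replica it should fetch from.
            Properties consumer = new Properties();
            consumer.setProperty("bootstrap.servers", "broker-1:9092");
            consumer.setProperty("group.id", "example-group");
            consumer.setProperty("client.rack", "us-east-1a");

            System.out.println(broker);
            System.out.println(consumer);
        }
    }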
Hi, Jason,
Thanks for the updated KIP. Looks good overall. Just a few minor comments.
20. For case 2, if the consumer receives an OFFSET_NOT_AVAILABLE, I am
wondering if the consumer should refresh the metadata before retrying. This
can allow the consumer to switch to an in-sync replica sooner.
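A tiny sketch of the behaviour Jun suggests for case 2, using stand-in types rather than the consumer's real internals: on OFFSET_NOT_AVAILABLE the fetcher requests a metadata refresh before retrying, so it can move to an in-sync replica (or back to the leader) instead of repeatedly fetching from the same lagging follower.

    // 'Metadata' here is a stand-in interface, not the real client metadata class.
    final class OffsetNotAvailableHandling {
        interface Metadata { void requestUpdate(); }

        private final Metadata metadata;

        OffsetNotAvailableHandling(Metadata metadata) {
            this.metadata = metadata;
        }

        /** Handle OFFSET_NOT_AVAILABLE from a follower fetch; returns true if the
         *  fetch should be retried once fresh metadata has been obtained. */
        boolean onOffsetNotAvailable() {
            metadata.requestUpdate(); // refresh first, per the comment above
            return true;              // then retry against the (possibly different) replica
        }
    }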
Hey Jun,
Sorry for the late reply. I have been giving your comments some thought.
Replies below:
> 1. The section on handling FETCH_OFFSET_TOO_LARGE error says "Use the
> OffsetForLeaderEpoch API to verify the current position with the leader".
> The OffsetForLeaderEpoch request returns log end offset if the request
> leader epoch is the latest.
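For readers less familiar with the epoch validation being discussed, a self-contained sketch of the check (simplified types, not the real protocol classes): the consumer sends its current epoch and offset, and the leader's end offset for that epoch tells it whether its position is still valid or the log has diverged.

    // Simplified stand-ins for the OffsetsForLeaderEpoch exchange.
    final class EpochValidation {
        record Position(int leaderEpoch, long offset) {}
        record EpochEndOffset(int leaderEpoch, long endOffset) {}

        /** True if the consumer's position lies beyond what the leader has for that
         *  epoch, i.e. the log diverged and the position must be truncated or reset. */
        static boolean diverged(Position current, EpochEndOffset fromLeader) {
            return fromLeader.leaderEpoch() < current.leaderEpoch()
                    || fromLeader.endOffset() < current.offset();
        }
    }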
Hi Eno,
Thanks for the clarification. From a high level, the main thing to keep in
mind is that this is an opt-in feature. It is a bit like using acks=1 in
the sense that a user is accepting slightly weaker guarantees in order to
optimize for some metric (in this case, read locality). The default
behavior, fetching only from the partition leader, does not change.
Hi Jason,
My question was on producer + consumer semantics, not just the producer
semantics. I'll rephrase it slightly and split into two questions:
- scenario 1: an application that both produces and consumes (e.g., Kafka
Streams) synchronously produces a single record to a topic and then
at
Hey Mickael,
Thanks for the comments. Responses below:
> - I'm guessing the selector will be invoked after each rebalance so
> every time the consumer is assigned a partition it will be able to
> select it. Is that true?
I'm not sure it is necessary to do it after every rebalance, but certainly
th
Hey Eno,
Thanks for the comments. However, I'm a bit confused. I'm not suggesting we
change Produce semantics in any way. All writes still go through the
partition leader and nothing changes with respect to committing to the ISR.
The main issue, as I've mentioned in the KIP, is the increased latency and
cost of consuming across datacenters.
Hi Jason,
This is an interesting KIP. This will have massive implications for
consistency and serialization, since currently the leader for a partition
serializes requests. A few questions for now:
- before we deal with the complexity, it'd be great to see a crisp example
in the motivation as to
Hi Jason,
Very cool KIP!
A couple of questions:
- I'm guessing the selector will be invoked after each rebalance so
every time the consumer is assigned a partition it will be able to
select it. Is that true?
- From the selector API, I'm not sure how the consumer will be able to
address some of the
Hey Jason,
This is certainly a very exciting KIP.
I assume that no changes will be made to the offset commits and they will
continue to be sent to the group coordinator?
I also wanted to address metrics - have we considered any changes there? I
imagine that it would be valuable for users to be ab
Hi, Jason,
Thanks for the KIP. Looks good overall. A few minor comments below.
1. The section on handling FETCH_OFFSET_TOO_LARGE error says "Use the
OffsetForLeaderEpoch API to verify the current position with the leader".
The OffsetForLeaderEpoch request returns log end offset if the request
leader epoch is the latest.
I didn't review yet, and I'm sure there are many details to iron out.
But my thoughts are:
OMG. YES. THANK YOU.
On Wed, Nov 21, 2018 at 12:54 PM Jason Gustafson wrote:
>
> Hi All,
>
> I've posted a KIP to add the often-requested support for fetching from
> followers:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-392%3A+Allow+consumers+to+fetch+from+closest+replica.
Hi All,
I've posted a KIP to add the often-requested support for fetching from
followers:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-392%3A+Allow+consumers+to+fetch+from+closest+replica.
Please take a look and let me know what you think.
Thanks,
Jason