If the purpose of the KIP is only to protect the cluster from being overwhelmed by crazy clients, and it is not intended to address the resource allocation problem among clients, I am wondering if using a request handling time quota (CPU time quota) is a better option. Here are the reasons:
1. A request handling time quota gives better protection. Say we have a request rate quota set to some value like 100 requests/sec; it is possible that some of those requests are very expensive and actually take a lot of time to handle. In that case a few clients may still occupy a lot of CPU time even though their request rate is low. Arguably we could carefully set a request rate quota for each request and client id combination, but it would still be tricky to get that right for everyone. If we use a request handling time quota, we can simply say that no client can take more than 30% of the total request handling capacity (measured by time), regardless of the differences among requests or what the client is doing. In that case maybe we can quota all the requests if we want to.

2. The main benefit of a request rate limit is that it seems more intuitive, and it is true that it is probably easier to explain to the user what it means. However, in practice the impact of a request rate quota is no more quantifiable than that of a request handling time quota. Unlike the byte rate quota, it is still difficult to give a number for the impact on throughput or latency when a request rate quota is hit, so it is not better than the request handling time quota in that respect. In fact I feel it is clearer to tell the user "you are limited because you have taken 30% of the CPU time on the broker" than something like "your request rate quota on metadata requests has been reached". (A rough sketch of the kind of per-client time accounting I have in mind is appended at the end of this message.)

Thanks,

Jiangjie (Becket) Qin

On Mon, Feb 20, 2017 at 2:23 PM, Jay Kreps <j...@confluent.io> wrote:

> I think this proposal makes a lot of sense (especially now that it is
> oriented around request rate) and fills the biggest remaining gap in the
> multi-tenancy story.
>
> I think for intra-cluster communication (StopReplica, etc.) we could avoid
> throttling entirely. You can secure or otherwise lock down the cluster
> communication to avoid any unauthorized external party from trying to
> initiate these requests. As a result we are as likely to cause problems as
> solve them by throttling these, right?
>
> I'm not so sure that we should exempt the consumer requests such as
> heartbeat. It's true that if we throttle an app's heartbeat requests it may
> cause it to fall out of its consumer group. However, if we don't throttle it,
> it may DDOS the cluster if the heartbeat interval is set incorrectly or if
> some client in some language has a bug. I think the policy with this kind
> of throttling is to protect the cluster above any individual app, right? I
> think in general this should be okay since for most deployments this
> setting is meant as more of a safety valve---that is, rather than set
> something very close to what you expect to need (say 2 req/sec or whatever),
> you would have something quite high (like 100 req/sec), with this meant to
> prevent a client gone crazy. I think when used this way, allowing those to
> be throttled would actually provide meaningful protection.
>
> -Jay
>
>
> On Fri, Feb 17, 2017 at 9:05 AM, Rajini Sivaram <rajinisiva...@gmail.com>
> wrote:
>
> > Hi all,
> >
> > I have just created KIP-124 to introduce request rate quotas to Kafka:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-124+-+Request+rate+quotas
> >
> > The proposal is for a simple percentage request handling time quota that
> > can be allocated to *<client-id>*, *<user>* or *<user, client-id>*. There
> > are a few other suggestions also under "Rejected alternatives". Feedback
> > and suggestions are welcome.
> >
> > Thank you...
> >
> > Regards,
> >
> > Rajini
> >
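
P.S. To make the "30% of total request handling capacity" idea above a bit more concrete, here is a rough sketch of one way the accounting could work. Everything in it (class and method names, the fixed one-window bookkeeping, the thread count) is made up for illustration and is not Kafka's actual quota implementation; the only point is that the quota is measured in handler time rather than request count.

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: track per-client request handling time as a
// percentage of total handler-thread capacity over a fixed window.
public class RequestTimeQuotaSketch {
    private final int numHandlerThreads;
    private final long windowMs;
    private final double quotaPercent;   // e.g. 30.0 means 30% of total handler time
    private final Map<String, Long> usedNanos = new HashMap<>();
    private long windowStartMs = System.currentTimeMillis();

    public RequestTimeQuotaSketch(int numHandlerThreads, long windowMs, double quotaPercent) {
        this.numHandlerThreads = numHandlerThreads;
        this.windowMs = windowMs;
        this.quotaPercent = quotaPercent;
    }

    // Record the time spent handling one request on behalf of clientId.
    public synchronized void record(String clientId, long handlingNanos) {
        maybeRollWindow();
        usedNanos.merge(clientId, handlingNanos, Long::sum);
    }

    // True if clientId has used more than quotaPercent of total handler time in this window.
    public synchronized boolean shouldThrottle(String clientId) {
        maybeRollWindow();
        // Total capacity in this window = window length * number of request handler threads.
        double capacityNanos = windowMs * 1_000_000.0 * numHandlerThreads;
        double usedPercent = 100.0 * usedNanos.getOrDefault(clientId, 0L) / capacityNanos;
        return usedPercent > quotaPercent;
    }

    // Reset the bookkeeping when the fixed window expires.
    private void maybeRollWindow() {
        long now = System.currentTimeMillis();
        if (now - windowStartMs >= windowMs) {
            usedNanos.clear();
            windowStartMs = now;
        }
    }

    public static void main(String[] args) {
        // 8 handler threads, a 1 second window, a 30% per-client quota.
        RequestTimeQuotaSketch quota = new RequestTimeQuotaSketch(8, 1_000, 30.0);
        quota.record("noisy-client", 3_000_000_000L);  // 3s of handler time = 37.5% of 8s capacity
        quota.record("quiet-client", 50_000_000L);     // 50ms, well under the quota
        System.out.println("throttle noisy-client? " + quota.shouldThrottle("noisy-client"));
        System.out.println("throttle quiet-client? " + quota.shouldThrottle("quiet-client"));
    }
}

A real implementation would presumably reuse the rolling-window samples that the existing byte rate quotas already use, and would throttle by delaying responses the same way; the sketch only shows that the metric being compared against the quota is accumulated handler time per client, not a request count.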