I agree that the request scheduler should probably be deprecated and removed 
unless someone wants to put in something that's usable from the non-Thrift 
request processor. We added it for prioritization and QoS, but I don't know of 
anyone ever using it. The project we had thought of using it for got shelved.
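
For reference, this is the knob in question in cassandra.yaml. A sketch from memory of the 2.x-era defaults; exact comments and option names may differ by version:

```yaml
# Scheduler for incoming client (Thrift) requests only; it does not
# affect inter-node communication.
#   org.apache.cassandra.scheduler.NoScheduler        - no scheduling
#   org.apache.cassandra.scheduler.RoundRobinScheduler - round-robin with a
#       separate queue per request_scheduler_id
request_scheduler: org.apache.cassandra.scheduler.NoScheduler

# Only meaningful with RoundRobinScheduler:
# request_scheduler_id: keyspace
# request_scheduler_options:
#     throttle_limit: 80
#     default_weight: 5
#     weights:
#       Keyspace1: 1
#       Keyspace2: 5
```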

Unless it's just multiple clients with the same general use case, I think 
multi-tenant is going to be quite difficult to tune and to diagnose problems 
for. I would steer clear and have a cluster per logical app if at all possible.
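
For context, the driver-level workaround mentioned further down the thread, prefixing the partition key with a tenant id, amounts to something like the sketch below. The function name and separator are illustrative choices, not part of Cassandra or any driver API:

```python
def tenant_key(tenant_id: str, partition_key: str, sep: str = ":") -> str:
    """Namespace a partition key by tenant, so tenants share tables
    but never collide on keys. Purely client-side; gives no resource
    isolation, only key-space separation."""
    if sep in tenant_id:
        # A separator inside the tenant id would make keys ambiguous
        # (e.g. "a:b" + "c" vs "a" + "b:c").
        raise ValueError("tenant id must not contain the separator")
    return f"{tenant_id}{sep}{partition_key}"
```

The resulting string is then used wherever the bare partition key would have gone, e.g. `tenant_key("acme", "user42")` yields `"acme:user42"`.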

> On Sep 9, 2016, at 6:43 PM, Mick Semb Wever <m...@thelastpickle.com> wrote:
> 
> On 15 July 2016 at 16:38, jason zhao yang <zhaoyangsingap...@gmail.com>
> wrote:
> 
>> 
>> May I ask is there any plan of extending functionalities related to
>> Multi-Tenant?
> 
> 
> 
> I had needs for this in the past, and my questions always seemed to
> eventuate in answers along the lines of: this should be done at the
> resource level. There are a variety of ways a bad data model or client can
> bring a cluster down, not just at request time.
> 
> There were some thoughts, IIRC, around a resource scheduler somewhere
> post-3.0, but I don't think that ever eventuated (someone more
> knowledgeable please correct me).
> 
> Otherwise you could look into using tiered storage so that you have at
> least disk isolation per keyspace. That solves some things, but won't help
> with the overhead and memtable impact of the number of keyspaces/tables, or
> with the lack of heap/throughput isolation/scheduling.
> 
> The approach of doing this at the driver level, prefixing the partition
> key, is as good as any approach for now.
> 
> Could be an idea to remove/deprecate the request_scheduler from code and
> yaml.
> 
> ~mck
