I think the resource-constraint aspects are among the most important things 
we are missing.  Actually enforcing resource constraints in SEDA is hard; in 
TPC it should be easier. That's why we put off some discussions we were 
having about it until we have TPC in place: once a given request is serviced 
by a single thread, tracking the resource use of a given query should be 
much easier.
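
To make that concrete, here is a minimal sketch of why the accounting gets
simpler once a request is pinned to one thread. This is purely hypothetical
(no such class exists in the codebase); the point is just that the tally can
live in a thread-local:

    // Hypothetical sketch: with a request running start-to-finish on one
    // thread (TPC), per-query accounting is a plain thread-local tally.
    public final class RequestAccounting {
        private static final ThreadLocal<long[]> BYTES_USED =
            ThreadLocal.withInitial(() -> new long[1]);

        public static void reset()         { BYTES_USED.get()[0] = 0; }
        public static void add(long bytes) { BYTES_USED.get()[0] += bytes; }
        public static long total()         { return BYTES_USED.get()[0]; }

        // A per-query or per-tenant quota check becomes a cheap local
        // comparison, with no cross-thread synchronization.
        public static void enforce(long limitBytes) {
            if (total() > limitBytes)
                throw new RuntimeException("query exceeded its resource quota");
        }
    }

Under SEDA the same tally would have to follow the request across every
stage's thread pool, which is exactly the bookkeeping that makes this hard
today.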

> On Sep 9, 2016, at 9:41 PM, Jason Brown <jasedbr...@gmail.com> wrote:
> 
> Heh, nice find, Jeremy. Thanks for digging it up
> 
> On Friday, September 9, 2016, Jeremy Hanna <jeremy.hanna1...@gmail.com>
> wrote:
> 
>> For posterity, our wiki page from many moons ago was
>> https://wiki.apache.org/cassandra/MultiTenant.  It was a different era of
>> the project, but there might be some useful bits in there for anyone
>> interested in multi-tenancy.
>> 
>> On Sep 9, 2016, at 9:28 PM, Jason Brown <jasedbr...@gmail.com> wrote:
>>> 
>>> The current implementation will probably be yanked when thrift as a whole
>>> is removed for 4.0. And I'm ok with that.
>>> 
>>> That being said, there has been an undercurrent of interest over time
>>> about multitenancy, and I'm willing to entertain a renewed discussion. It
>>> might be instructive to see whether any other systems currently offer
>>> multitenancy and if there's something to be learned there. If not, we
>>> could at least explore the topic more seriously and then document for
>>> posterity the well-informed pros/cons of why we as a community choose not
>>> to do it, postpone it for later, or actually do it. Of course, it would
>>> be great for a motivated individual to lead the effort if we really want
>>> to entertain it.
>>> 
>>> On Friday, September 9, 2016, Jeremy Hanna <jeremy.hanna1...@gmail.com>
>>> wrote:
>>> 
>>>> I agree that the request scheduler should probably be deprecated and
>>>> removed unless someone wants to put in something that's usable from the
>>>> non-thrift request processor. We added it for prioritization and QoS,
>>>> but I don't know of anyone ever using it. The project we thought of
>>>> using it for got shelved.
>>>> 
>>>> Unless it's just multiple clients with the same general use case, I
>>>> think multi-tenancy is going to be quite difficult to tune and to
>>>> diagnose problems for. I would steer clear and have a cluster per
>>>> logical app if at all possible.
>>>> 
>>>> On Sep 9, 2016, at 6:43 PM, Mick Semb Wever <m...@thelastpickle.com>
>>>> wrote:
>>>>> 
>>>>> On 15 July 2016 at 16:38, jason zhao yang <zhaoyangsingap...@gmail.com>
>>>>> wrote:
>>>>> 
>>>>>> 
>>>>>> May I ask whether there is any plan to extend the functionality
>>>>>> related to multi-tenancy?
>>>>> 
>>>>> I had needs for this in the past, and my questioning always seemed to
>>>>> eventuate in answers along the lines of: this should be done more at
>>>>> the resource level. There are a variety of ways a bad data model or
>>>>> client can bring a cluster down, not just at request time.
>>>>> 
>>>>> There were some thoughts, IIRC, around a resource scheduler somewhere
>>>>> post-3.0, but I don't think that ever eventuated (someone more
>>>>> knowledgeable please correct me).
>>>>> 
>>>>> Otherwise you could look into using tiered storage so that you have at
>>>>> least disk isolation per keyspace. That solves some things, but it
>>>>> won't help with the overhead and memtable impact of the number of
>>>>> keyspaces/tables, or with the lack of heap/throughput
>>>>> isolation/scheduling.
>>>>> 
>>>>> The approach of doing this at the driver level, prefixing the partition
>>>>> key with a tenant identifier, is as good as any for now.
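
A rough client-side sketch of that prefixing, for anyone trying it today (the
helper below is hypothetical and not part of any driver):

    // Hypothetical client-side helper: prefix every partition key with a
    // tenant id so tenants can share one table without key collisions.
    public final class TenantKey {
        private TenantKey() {}

        /** e.g. tenantKey("acme", "user-42") -> "acme:user-42" */
        public static String tenantKey(String tenantId, String naturalKey) {
            // Reserve ':' as the separator so a prefix is never ambiguous.
            if (tenantId.indexOf(':') >= 0)
                throw new IllegalArgumentException("tenant id must not contain ':'");
            return tenantId + ":" + naturalKey;
        }
    }

The catch is that every single read and write has to go through the helper;
miss it on one query path and tenant isolation silently breaks, which is part
of why this stays a client-side convention rather than a server feature.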
>>>>> 
>>>>> Could be an idea to remove/deprecate the request_scheduler from code
>>>>> and yaml.
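
For reference, the settings in question look roughly like this in the 2.x-era
cassandra.yaml (from memory; comments and defaults vary by version, and the
default scheduler is NoScheduler). They are only consulted by the Thrift
request processor:

    request_scheduler: org.apache.cassandra.scheduler.RoundRobinScheduler
    request_scheduler_id: keyspace
    request_scheduler_options:
        throttle_limit: 80
        default_weight: 5
        weights:
            Keyspace1: 1
            Keyspace2: 5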
>>>>> 
>>>>> ~mck