We have lots of dedicated Cassandra clusters for large use cases, but we
have a long tail (~100) of internal customers who want to store < 200GB of
non-critical data at < 5k QPS. It does not make sense to create a 3-node
dedicated cluster for each of these small use cases, so we have been
investigating moving them onto a single shared cluster instead.
Aren't you using the Mesos Cassandra framework to manage your multiple
clusters? (I saw a presentation on it at the Cassandra Summit.)
What's wrong with your current Mesos approach?
I also tend to think it's better to split a large cluster into smaller
ones, except if you also manage the client layer that queries Cassandra
and you can …
On 2017-02-20 22:47 (-0800), Benjamin Roth wrote:
> Thanks.
>
> Depending on the whole infrastructure and business requirements, isn't it
> easier to implement throttling on the client side?
> I did this once to throttle bulk inserts when migrating whole CFs from
> other DBs.
>
Sometimes it's better …
Thanks.

Depending on the whole infrastructure and business requirements, isn't it
easier to implement throttling on the client side?
I did this once to throttle bulk inserts when migrating whole CFs from
other DBs.

2017-02-21 7:43 GMT+01:00 Jeff Jirsa:
>
> On 2017-02-20 21:35 (-0800), Benjamin Roth wrote: …
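A minimal sketch of what that client-side throttling could look like,
assuming the DataStax Java driver 3.x and Guava's RateLimiter (the wrapper
class and the QPS figure are illustrative, not from the thread):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.Statement;
    import com.google.common.util.concurrent.RateLimiter;

    /** Wraps a Session so every execute() must first take a permit. */
    public class ThrottledSession {
        private final Session session;
        private final RateLimiter limiter;

        public ThrottledSession(Session session, double maxQps) {
            this.session = session;
            this.limiter = RateLimiter.create(maxQps);
        }

        public ResultSet execute(Statement statement) {
            limiter.acquire(); // blocks the caller until a permit is free
            return session.execute(statement);
        }
    }

    // Usage: cap this client at the thread's 5k QPS figure.
    // Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
    // ThrottledSession s = new ThrottledSession(cluster.connect(), 5000.0);

The blocking acquire() is deliberate: it surfaces back pressure inside the
client process, where the owning team can see and tune it, rather than
inside the shared cluster.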
On 2017-02-20 21:35 (-0800), Benjamin Roth wrote:
> Stupid question:
> Why do you rate limit a database, especially writes? Wouldn't that cause
> a lot of new issues, like back pressure on the rest of your system or
> timeouts in case of blocking requests?
> Also, rate limiting has to be based on …
On 2017-02-17 18:12 (-0800), Abhishek Verma wrote:
>
> Is there a way to throttle read and write queries in Cassandra currently?
> If not, what would be the right place in the code to implement a
> pluggable interface for doing it? I have briefly considered using
> triggers, but that is inv…
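For what it's worth, one pluggable seam that does exist is
org.apache.cassandra.cql3.QueryHandler, which can be swapped in with the
-Dcassandra.custom_query_handler_class system property; its method
signatures vary across versions, so the sketch below only shows the
per-keyspace token bucket such a handler could consult before processing
each statement. All class and method names here are illustrative:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    /** Per-keyspace token bucket, refilled lazily on each permit check. */
    public class KeyspaceRateLimiter {
        private static final class Bucket {
            final double ratePerSec; // sustained QPS allowed for this keyspace
            double tokens;           // current balance, capped at one second of burst
            long lastRefillNanos = System.nanoTime();
            Bucket(double ratePerSec) {
                this.ratePerSec = ratePerSec;
                this.tokens = ratePerSec;
            }
        }

        private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

        /** Returns true if the query may proceed, false if it should be rejected. */
        public boolean tryAcquire(String keyspace, double ratePerSec) {
            Bucket b = buckets.computeIfAbsent(keyspace, k -> new Bucket(ratePerSec));
            synchronized (b) {
                long now = System.nanoTime();
                double refill = (now - b.lastRefillNanos) / 1e9 * b.ratePerSec;
                b.tokens = Math.min(b.ratePerSec, b.tokens + refill);
                b.lastRefillNanos = now;
                if (b.tokens < 1.0)
                    return false;
                b.tokens -= 1.0;
                return true;
            }
        }
    }

A handler built on this would reject rather than block when tryAcquire()
returns false (e.g. with an OverloadedException), so coordinator threads
are not tied up. Note each coordinator keeps its own buckets, which is
exactly the per-coordinator point raised in the next message.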
Stupid question:
Why do you rate limit a database, especially writes? Wouldn't that cause a
lot of new issues, like back pressure on the rest of your system or
timeouts in case of blocking requests?
Also, rate limiting has to be based on per-coordinator calculations and not
cluster-wide: coordinators don't share counters, so a cluster-wide budget
has to be divided statically across them (e.g. a 10k QPS cap over 20
coordinators is 500 QPS each, and that only holds while load stays evenly
balanced). It reminds me of …
Older versions had a request scheduler API.
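As far as I recall, that was the request_scheduler setting in
cassandra.yaml; it applied only to Thrift requests and was removed along
with Thrift in 4.0. A pre-4.0 configuration looked roughly like this (the
throttle_limit value is illustrative):

    request_scheduler: org.apache.cassandra.scheduler.RoundRobinScheduler
    request_scheduler_id: keyspace
    request_scheduler_options:
        throttle_limit: 80  # in-flight requests allowed per keyspace; the rest queue

RoundRobinScheduler hands out slots across keyspaces in turn, so one noisy
tenant can't starve the others; it is fair scheduling rather than a hard
QPS cap.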
On Monday, February 20, 2017, Ben Slater wrote:
> We’ve actually had several customers where we’ve done the opposite: split
> large clusters apart to separate use cases. We found that this allowed us
> to better align hardware with use case requirements …
We’ve actually had several customers where we’ve done the opposite: split
large clusters apart to separate use cases. We found that this allowed us
to better align hardware with use case requirements (for example, using AWS
c3.2xlarge for very hot data at low latency, m4.xlarge for more
general-purpose …
On Sat, Feb 18, 2017 at 3:12 AM, Abhishek Verma wrote:
> Cassandra is being used on a large scale at Uber. We usually create
> dedicated clusters for each of our internal use cases; however, that is
> difficult to scale and manage.
>
> We are investigating the approach of using a single shared cluster …
Cassandra is being used on a large scale at Uber. We usually create
dedicated clusters for each of our internal use cases; however, that is
difficult to scale and manage.
We are investigating the approach of using a single shared cluster with
100s of nodes and handling 10s to 100s of different use cases …