Re: Cassandra limitations

2018-05-04 Thread Jeff Jirsa
Depends on heap and the size of the tables (how many columns). Have seen many hundreds work fine. Could think of scenarios where dozens would fail (especially weird schemas or especially small heaps). On Fri, May 4, 2018 at 11:39 AM, Abdul Patel wrote: > Thanks .. > So what's the ideal number where we should …
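
A minimal sketch of how to check where a cluster actually stands on those numbers, assuming Cassandra 3.x or later and the Python cassandra-driver (the contact point below is a placeholder): it counts tables and columns per keyspace from system_schema, the same figures the heap and schema-size concerns above hinge on.

    from collections import Counter
    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])   # placeholder contact point
    session = cluster.connect()

    tables, columns = Counter(), Counter()
    # system_schema holds one row per table and one row per column (Cassandra 3.0+)
    for row in session.execute("SELECT keyspace_name FROM system_schema.tables"):
        tables[row.keyspace_name] += 1
    for row in session.execute("SELECT keyspace_name FROM system_schema.columns"):
        columns[row.keyspace_name] += 1

    for ks in sorted(tables):
        print(f"{ks}: {tables[ks]} tables, {columns[ks]} columns")
    print(f"cluster total: {sum(tables.values())} tables, {sum(columns.values())} columns")

    cluster.shutdown()

System keyspaces (system, system_schema, and so on) show up in the counts as well; subtract them if only the application tables matter.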

Re: Cassandra limitations

2018-05-04 Thread Abdul Patel
Thanks .. So what's the ideal number where we should stop .. say 100? On Friday, May 4, 2018, Jeff Jirsa wrote: > Cluster. The overhead is per cluster. > > There are two places you'll run into scaling pain here. > > 1) Size of the schema (which we have to serialize to send around) - too > many tables, …

Re: Cassandra limitations

2018-05-04 Thread Jeff Jirsa
Cluster. The overhead is per cluster. There are two places you'll run into scaling pain here. 1) Size of the schema (which we have to serialize to send around) - too many tables, or too many columns in tables, can make serializing the schema really expensive and cause problems. 2) Too many memtables …
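
To put rough numbers on point 2, a back-of-the-envelope sketch: the ~1 MB of heap per table is an assumed rule-of-thumb figure, not something from this thread, and real overhead depends on memtable settings and write load.

    # Rough heap-budget estimate for per-table overhead.
    PER_TABLE_MB = 1.0   # assumed rule of thumb: ~1 MB of heap per table
    HEAP_GB = 8.0        # assumed heap size, for illustration only

    for n_tables in (100, 200, 500, 1000, 5000):
        overhead_gb = n_tables * PER_TABLE_MB / 1024
        print(f"{n_tables:>5} tables ~ {overhead_gb:5.2f} GB "
              f"({overhead_gb / HEAP_GB:5.1%} of an {HEAP_GB:.0f} GB heap)")

On those assumptions, a few hundred tables is noise on an 8 GB heap, while a few thousand starts to claim a large slice of it, which lines up with the "many hundreds work fine" experience above.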

Re: Cassandra limitations

2018-05-04 Thread Abdul Patel
I have 3 projects in the pipeline; adding 3 different clusters across all environments would be too costly an option :) So is it 200 tables per keyspace or per cluster? On Friday, May 4, 2018, Durity, Sean R wrote: > The issue is more with the number of tables, not the number of keyspaces. > Because each table …