Depends on the heap and the size of the tables (how many columns). I've seen many hundreds work fine. I could also think of scenarios where dozens would fail (especially weird schemas or especially small heaps).
On Fri, May 4, 2018 at 11:39 AM, Abdul Patel wrote:
Thanks.
So what's the ideal number at which we should stop? Say 100?
On Friday, May 4, 2018, Jeff Jirsa wrote:
Cluster. The overhead is per cluster.

There are two places you'll run into scaling pain here:

1) Size of the schema (which we have to serialize to send around) - too
many tables, or too many columns in tables, can cause serializing the schema
to get really expensive and cause problems

2) Too many memtables - every table has a memtable on each node, so heap
pressure grows with the total table count
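To make the first scaling concern above concrete, here is a toy back-of-envelope model; the per-table and per-column byte costs are illustrative assumptions, not real Cassandra serialization figures:

```python
# Toy estimate of serialized schema size as tables/columns grow.
# PER_TABLE_BYTES and PER_COLUMN_BYTES are assumed placeholder
# values for illustration only, not measured Cassandra numbers.
PER_TABLE_BYTES = 500    # assumed fixed metadata cost per table
PER_COLUMN_BYTES = 100   # assumed metadata cost per column

def schema_size_bytes(num_tables, cols_per_table):
    """Rough serialized-schema size for the whole cluster."""
    return num_tables * (PER_TABLE_BYTES + cols_per_table * PER_COLUMN_BYTES)

# Even with these modest assumptions, a few hundred wide tables yields
# a multi-megabyte schema blob that must be shipped on every schema change.
for tables in (50, 200, 1000):
    size_mb = schema_size_bytes(tables, cols_per_table=50) / 1e6
    print(f"{tables:5d} tables -> ~{size_mb:.1f} MB of schema")
```

The exact constants don't matter; the point is that the cost is multiplicative in tables and columns, so wide schemas hit the wall sooner.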
I have 3 projects in the pipeline; adding 3 different clusters across all
environments would be too costly an option :)
So is that 200 tables per keyspace or per cluster?
On Friday, May 4, 2018, Durity, Sean R wrote:
The issue is more with the number of tables, not the number of keyspaces.
Because each table has a memtable, there is a practical limit to the number of
memtables that a node can hold in its memory. (And scaling out doesn't help,
because every node still has a memtable for every table.) The prac
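A minimal sketch of the per-node memtable arithmetic described above; the per-memtable overhead figure is an assumed placeholder, since real heap usage depends on write volume and flush settings:

```python
# Back-of-envelope: every node holds one memtable per table, so the
# heap floor scales with total table count, not with node count.
# The 1 MB overhead is an assumption for illustration, not measured.
ASSUMED_MEMTABLE_OVERHEAD_MB = 1

def min_memtable_heap_mb(total_tables):
    """Lower-bound heap consumed by memtables on ONE node."""
    return total_tables * ASSUMED_MEMTABLE_OVERHEAD_MB

# Scaling out does not help: with 200 tables, every node pays the
# same floor regardless of how many nodes are in the cluster.
print(min_memtable_heap_mb(200))  # 200
```

This is why the limit is per cluster rather than per keyspace: adding nodes changes nothing in this calculation, only reducing the table count does.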
MV yes,
SASI not sure, I would guess yes.
On 2 May 2018 at 18:00, Hannu Kröger wrote:
> Ah, you are correct!
>
> However, it’s not being updated anymore AFAIK. Do you know if it supports
> the latest 3.x features? SASI, MV, etc.?
>
> Hannu
>
>
> On 24 Apr 2018, at 03:45, Christophe Schmitz
> wrote: