It is the total table count, across all keyspaces. Memory is memory.
-- Jack Krupansky
On Tue, Mar 1, 2016 at 6:26 PM, Brian Sam-Bodden wrote:
> Eric,
> Is the keyspace as a multitenancy solution as bad as the many tables
> pattern? Is the memory overhead of keyspaces as heavy as that of tables?
Eric,
Is the keyspace as a multitenancy solution as bad as the many tables
pattern? Is the memory overhead of keyspaces as heavy as that of tables?
Cheers,
Brian
On Tuesday, March 1, 2016, Eric Stevens wrote:
> It's definitely not true for every use case of a large number of tables,
> but for …
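For illustration, the keyspace-per-tenant pattern Brian is asking about looks roughly like the sketch below (keyspace and table names are hypothetical, not from this thread). Since the limit Jack describes is the total table count across all keyspaces, N tenants times M tables per keyspace still adds up to N*M tables cluster-wide.

    -- Keyspace-per-tenant sketch (hypothetical names).
    -- Each tenant gets its own keyspace containing the same tables,
    -- so the cluster-wide table count grows linearly with tenants.
    CREATE KEYSPACE tenant_acme
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
    CREATE TABLE tenant_acme.events (
      id      timeuuid PRIMARY KEY,
      payload text
    );

    CREATE KEYSPACE tenant_globex
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
    CREATE TABLE tenant_globex.events (
      id      timeuuid PRIMARY KEY,
      payload text
    );
    -- 500 tenants x 5 tables each = 2,500 tables, well past "a few hundred".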
It's definitely not true for every use case of a large number of tables,
but for many uses where you'd be tempted to do that, adding whatever would
have driven your table naming instead as a column in your partition key on
a smaller number of tables will meet your needs. This is especially true if …
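A minimal CQL sketch of the alternative Eric describes, with hypothetical names: whatever value would otherwise have driven the table (or keyspace) naming, e.g. a tenant identifier, becomes a column in the partition key of one shared table.

    -- Fold the would-be table name into the partition key instead
    -- (hypothetical keyspace/table/column names).
    CREATE TABLE app.events (
      tenant_id text,        -- what would have been the table/keyspace name
      id        timeuuid,
      payload   text,
      PRIMARY KEY ((tenant_id), id)
    );

    -- Per-tenant reads stay efficient because tenant_id is the partition key,
    -- and the cluster-wide table count stays fixed as tenants are added.
    SELECT * FROM app.events
     WHERE tenant_id = 'acme'
       AND id > minTimeuuid('2016-03-01');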
I don't think Cassandra was "purposefully developed" for some target number
of tables - there is no evidence of any such explicit intent. Instead, it
would be fair to say that Cassandra was "not purposefully developed" with a
goal of supporting "large numbers of tables." Sometimes features and …
> If your Jira search fu is strong enough
And it is! )
> you should be able to find it yourself
And I did! )
I see that this issue originates from a problem with the Java GC's design,
but judging by the dates that was in the Java 6 era. Now we have Java 8 with
a new GC mechanism. Does this problem still exist with Java 8? Any ch…
Hi Jack
Being purposefully developed to only handle up to “a few hundred” tables is
reason enough. I accept that, and likely a use case with many tables was never
really considered. But I would still like to understand the design choices made,
so that perhaps we can gain some confidence in this upper limit.
I'll defer to one of the senior committers as to whether they want that
information disseminated any further than it already is. It was
intentionally not documented since it is not recommended. If your Jira
search fu is strong enough you should be able to find it yourself, but
again, its use is strongly not recommended.
Hi Jack,
> you can reduce the overhead per table an undocumented Jira
Can you please point to this Jira number?
> it is strongly not recommended
What are the consequences of this (besides performance degradation, if any)?
Thanks.
On Tuesday, March 1, 2016 7:23 AM, Jack Krupansky wrote:
> 3,000 entries? …
I don't think there are any "reasons behind it." It is simply empirical
experience - as reported here.
Cassandra scales in two dimensions - number of rows per node and number of
nodes. If some source of information led you to believe otherwise, please
point out the source so that we can endeavor to …
Hi Tommaso
It’s not that I _need_ a large number of tables. This approach maps easily to
the problem we are trying to solve, but it’s becoming clear it’s not the right
approach.
At the moment I’m trying to understand the limitations in Cassandra regarding
the number of tables and the reasons behind it.
Hi Fernando,
I used to have a cluster with ~300 tables (1 keyspace) on C* 2.0, and it was
a real pain in terms of operations. Repairs were terribly slow, C* startup
slowed down, and in general tracking table metrics became a bit more work.
Why do you need this high number of tables?
Tommaso
On Tue, Ma…
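Keeping an eye on how many tables a cluster has accumulated can be done from the schema tables; a quick sketch, assuming Cassandra 3.x (on a 2.x cluster like Tommaso's, the equivalent table is system.schema_columnfamilies):

    -- List every table in every keyspace (Cassandra 3.x and later).
    SELECT keyspace_name, table_name FROM system_schema.tables;

    -- Total table count across all keyspaces - the number that the
    -- "few hundred" guidance in this thread refers to.
    SELECT COUNT(*) FROM system_schema.tables;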
Hi Jack
By entry I mean row.
Apologies for the “obsolete terminology”. When I first looked at Cassandra it
was still on CQL2, and now that I’m looking at it again I’ve defaulted to the
terms I already knew. I will bear it in mind and call them tables from now on.
Is there any documentation about …
3,000 entries? What's an "entry"? Do you mean row, column, or... what?
You are using the obsolete terminology of CQL2 and Thrift - column family.
With CQL3 you should be creating "tables". The practical recommendation of
an upper limit of a few hundred tables across all keyspaces remains.
Technically …
Yes, there is memory overhead for each column family, effectively limiting the
number of column families. The general wisdom is that you should limit yourself
to a few hundred.
Robert
On Feb 29, 2016, at 10:30 AM, Fernando Jimenez wrote:
> Hi all
> I have a use case for Cassandra that would require creating a large number of …
Hi all
I have a use case for Cassandra that would require creating a large number of
column families. I have found references to early versions of Cassandra where
each column family would require a fixed amount of memory on all nodes,
effectively imposing an upper limit on the total number of column families.