In that case, just don't delete the dead node (which is what I think you
should do anyway; I'm pretty sure it can't be deleted if you're going to
replace it with "-Dcassandra.replace_address=...").
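For concreteness, the replace flag is normally passed through the JVM options in cassandra-env.sh on the replacement node; the IP shown here is a placeholder for the dead node's address:

```shell
# In cassandra-env.sh on the NEW node, before its first start.
# Replace 10.0.0.5 with the dead node's listen address.
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.5"
```

The new node then bootstraps into the dead node's token ranges instead of taking new ones; remove the flag after the replacement completes.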
I was speaking about the case where you _do_ want it replaced. You can
just delete it and bootstrap a new node.
Hi,
I am doing a stress test on DataStax Cassandra Community 2.1.2, not using the
provided stress test tool, but my own stress-test client code instead (I
wrote some C++ stress test code). My Cassandra cluster is deployed on Amazon
EC2, using the provided DataStax Community AMI (HVM instances) i
B would work better in the case where you need to do sequential or ranged-style
reads on the id, particularly if id has any significant sparseness
(e.g., id is a timeuuid). You can compute the buckets and do reads of entire
buckets within your range. However, if you're doing random access by id,
the
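A minimal sketch of the bucketed approach described above — the table and column names are hypothetical, and the bucket here is assumed to be a day string derived from the timeuuid's timestamp:

```sql
-- Hypothetical bucketed table: the partition key is a computed bucket,
-- so a ranged read becomes a scan of one (or a few) whole partitions.
CREATE TABLE events_by_bucket (
    bucket  text,      -- e.g. '2014-12-06', derived from the timeuuid
    id      timeuuid,
    payload text,
    PRIMARY KEY ((bucket), id)
);

-- Read an entire bucket within a time range, using CQL's
-- minTimeuuid/maxTimeuuid functions to bound the clustering column.
SELECT id, payload FROM events_by_bucket
 WHERE bucket = '2014-12-06'
   AND id > maxTimeuuid('2014-12-06 00:00+0000')
   AND id < minTimeuuid('2014-12-07 00:00+0000');
```

To cover a wider range, the client computes the list of buckets in the range and issues one such query per bucket.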
The official recommendation is 100k:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/install/installRecommendSettings.html
I wonder if there's an advantage to this over unlimited if you're running
servers which are dedicated to your Cassandra cluster (which you should be
for anything
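For reference, the linked recommended-settings page expresses the 100k limit (and related limits) as /etc/security/limits.conf entries along these lines — check the page for your exact version, as the values shown here are from memory of the 2.0 documentation:

```
# /etc/security/limits.conf (user running Cassandra)
cassandra - memlock unlimited
cassandra - nofile  100000
cassandra - nproc   32768
cassandra - as      unlimited
```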
It depends on the size of your data, but if your data is reasonably small,
there should be no trouble including thousands of records under the same
partition key. So a data model using PRIMARY KEY ((seq_id), seq_type)
ought to work fine.
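As a sketch, that key layout looks like this in CQL (table and column names other than seq_id/seq_type are hypothetical):

```sql
-- seq_id is the partition key, seq_type the clustering column:
-- all seq_types for one seq_id live on a single partition, sorted.
CREATE TABLE sequences (
    seq_id   text,
    seq_type text,
    value    blob,
    PRIMARY KEY ((seq_id), seq_type)
);

-- One-partition read of every record for a given seq_id.
SELECT seq_type, value FROM sequences WHERE seq_id = 'abc';
```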
If the data size per partition exceeds some threshold that rep
Based on recent conversations with DataStax engineers, the recommendation
is definitely still to run a finite and reasonable set of column families.
The best way I know of to support multitenancy is to include tenant id in
all of your partition keys.
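A minimal sketch of that pattern, with hypothetical table and column names — one shared table, with the tenant id as the first component of every partition key so each tenant's rows land on their own partitions:

```sql
-- tenant_id in the composite partition key isolates tenants' data
-- physically, while every tenant shares the same small set of tables.
CREATE TABLE user_events (
    tenant_id text,
    user_id   text,
    event_id  timeuuid,
    payload   text,
    PRIMARY KEY ((tenant_id, user_id), event_id)
);

-- Every query is naturally scoped to one tenant.
SELECT event_id, payload FROM user_events
 WHERE tenant_id = 'acme' AND user_id = 'u42';
```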
On Fri Dec 05 2014 at 7:39:47 PM Kai Wang wrote:
On Sat, Dec 6, 2014 at 8:05 AM, Eric Stevens wrote:
> The official recommendation is 100k:
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/install/installRecommendSettings.html
>
> I wonder if there's an advantage to this over unlimited if you're running
> servers which are dedicated
On Sat, Dec 6, 2014 at 11:18 AM, Eric Stevens wrote:
> It depends on the size of your data, but if your data is reasonably small,
> there should be no trouble including thousands of records on the same
> partition key. So a data model using PRIMARY KEY ((seq_id), seq_type)
> ought to work fine.
On Sat, Dec 6, 2014 at 11:22 AM, Eric Stevens wrote:
> Based on recent conversations with DataStax engineers, the recommendation
> is definitely still to run a finite and reasonable set of column families.
>
> The best way I know of to support multitenancy is to include tenant id in
> all of your partition keys.
Generally, limit a Cassandra cluster to the low hundreds of tables, regardless of
the number of keyspaces. Beyond the low hundreds is certainly an “expert” feature
and requires great care. Sure, maybe you can have 500 or 750 or maybe even 1,000
tables in a cluster, but don’t be surprised if you start running i
There are two categorically distinct forms of multi-tenancy: 1) you control the
apps and simply want client data isolation, and 2) the clients have their own
apps, access the cluster directly, and rely on access control at the
table level to isolate their data.
Using a tenant ID in th
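For the second form, table-level isolation might be sketched with CQL's user and permission statements (syntax as of Cassandra 2.1, which uses users rather than the roles introduced in 2.2; the user and table names are hypothetical):

```sql
-- One dedicated table per tenant, with permissions granted only on
-- that tenant's table. Requires authentication/authorization enabled
-- in cassandra.yaml (PasswordAuthenticator, CassandraAuthorizer).
CREATE USER tenant_a WITH PASSWORD 'secret' NOSUPERUSER;

GRANT SELECT ON TABLE appdata.tenant_a_events TO tenant_a;
GRANT MODIFY ON TABLE appdata.tenant_a_events TO tenant_a;
```

The trade-off, per the table-count advice above, is that one-table-per-tenant stops scaling once you have more than a few hundred tenants.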
+1 well said Jack!
On Sun, Dec 7, 2014 at 6:13 AM, Jack Krupansky wrote:
> Generally, limit a Cassandra cluster to the low hundreds of tables, regardless
> of the number of keyspaces. Beyond the low hundreds is certainly an “expert”
> feature and requires great care. Sure, maybe you can have 500 or 750 or
>