Hi Matija,
Thanks for your help! The downtime is minimal, usually less than five minutes.
Since it is so short, we're not too concerned about the downed node missing
data; we just want to make sure that before it goes down it replays all the
hints it has so that there won't be any gaps in data.
Your best bet is to use 256-bit AES via "TLS_RSA_WITH_AES_256_CBC_SHA", since
that is (usually) hardware accelerated on recent CPUs.
The security page on the docs site has a lot of good information:
http://cassandra.apache.org/doc/latest/operating/security.html
The above contains a link to the foll
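On the client side, here is a minimal sketch of wiring that suite into the
DataStax Java driver (assuming driver 3.x; the contact point and truststore
setup are hypothetical, and the suite list must overlap the cipher_suites
configured in cassandra.yaml's client_encryption_options):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.JdkSSLOptions;
import com.datastax.driver.core.Session;

import javax.net.ssl.SSLContext;

public class SslConnect {
    public static void main(String[] args) throws Exception {
        // Uses the JVM default SSLContext; assumes the node's certificate is
        // already in the default truststore (e.g. imported with keytool).
        SSLContext context = SSLContext.getDefault();

        // Restrict the handshake to the suite recommended above. This list
        // must intersect the server's configured cipher_suites, or the
        // handshake will fail.
        JdkSSLOptions sslOptions = JdkSSLOptions.builder()
                .withSSLContext(context)
                .withCipherSuites(new String[] { "TLS_RSA_WITH_AES_256_CBC_SHA" })
                .build();

        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // hypothetical contact point
                .withSSL(sslOptions)
                .build();
        Session session = cluster.connect();
        System.out.println("Connected to: " + cluster.getClusterName());
        cluster.close();
    }
}

The same suite name goes into server_encryption_options in cassandra.yaml for
the internode side.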
I'm trying to enable SSL (internode + client).
I need to specify the cipher suites, but I don't know which ones are supported
by C*.
Any pointers much appreciated.
thx
--
-eric ho
Hi Jerome,
The node being drained stops listening for requests, but the other nodes,
acting as coordinators for requests in that window, will store hints for the
downed node for a configured period of time (max_hint_window_in_ms, 3 hours by
default). If the downed node is back online within this time window, it will
receive those hints and catch up on the writes it missed.
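A rough sketch of the pre-shutdown check this thread is after, assuming a
pre-3.0 cluster where pending hints live in the node-local system.hints table
(in 3.0+, hints moved to flat files on disk). The address is hypothetical, and
the query is pinned to the node being drained because each node only reports
its own hint backlog:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.RoundRobinPolicy;
import com.datastax.driver.core.policies.WhiteListPolicy;

import java.net.InetSocketAddress;
import java.util.Collections;

public class PendingHintsCheck {
    public static void main(String[] args) {
        // The node about to be drained (hypothetical address/port).
        InetSocketAddress node = new InetSocketAddress("10.0.0.5", 9042);

        // Whitelist the node so it coordinates the query itself:
        // system.hints is local, so only the coordinator's own backlog
        // is visible.
        Cluster cluster = Cluster.builder()
                .addContactPoint(node.getHostString())
                .withLoadBalancingPolicy(new WhiteListPolicy(
                        new RoundRobinPolicy(), Collections.singletonList(node)))
                .build();
        try (Session session = cluster.connect()) {
            Row row = session.execute("SELECT count(*) FROM system.hints").one();
            System.out.println("Hints still queued: " + row.getLong(0));
            // Only proceed with the shutdown once this reaches 0.
        } finally {
            cluster.close();
        }
    }
}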
On Fri, Sep 2, 2016 at 9:33 AM, Mark Rose wrote:
> Hi Kevin,
>
> The tombstones will live in an sstable until it gets compacted. Do you
> have a lot of pending compactions? If so, increasing the number of
> parallel compactors may help.
Nope, we keep up with compactions pretty well. Only ever
Hello,
As part of routine maintenance for our cluster, my colleagues and I will run a
nodetool drain before stopping a Cassandra node, performing maintenance, and
bringing it back up. We run maintenance as a cron job, with a lock stored in a
different cluster to ensure only one node is ever down at a time.
I am debugging an issue in our cluster and trying to find its root cause
based on our application's behavior.
I used the WhiteList policy (I asked a question about this some time back),
but it was stated that it cannot guarantee the desired behavior.
Yes, I forgot to mention: I was referring to the Java driver.
Forwarding to the user@cassandra.apache.org list as this list is specific
for cassandra-development, not general cassandra questions.
Can you check that the repository you built the snapshot from contains the
commit 01d5fa8acf05973074482eda497677c161a311ac?
Is Java 1.8.0_101 on your $env:PATH? Can yo
Also, if you can get to at least 2.0, you can use
TimeWindowCompactionStrategy, which works a lot better than STCS for time
series data with TTLs (a sketch of the switch follows below the quote).
On Fri, Sep 2, 2016 at 9:53 AM Jonathan Haddad wrote:
> What's your gc_grace_seconds set to? Is it possible you have a lot of
> tombstones that haven't reached the GC grace time yet?
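For reference, switching a table to TWCS is a single schema change; a sketch
with the DataStax Java driver, using a hypothetical metrics.readings table
and one-day windows:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class SwitchToTwcs {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try (Session session = cluster.connect()) {
            // One-day windows suit write-once, TTL-expired time series:
            // once everything in a window has expired, the whole sstable
            // can be dropped without compaction.
            session.execute("ALTER TABLE metrics.readings WITH compaction = {"
                    + " 'class': 'TimeWindowCompactionStrategy',"
                    + " 'compaction_window_unit': 'DAYS',"
                    + " 'compaction_window_size': '1' }");
        } finally {
            cluster.close();
        }
    }
}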
What's your gc_grace_seconds set to? Is it possible you have a lot of
tombstones that haven't reached the GC grace time yet?
On Thu, Sep 1, 2016 at 12:54 AM Kevin O'Connor wrote:
> We're running C* 1.2.11 and have two CFs, one called OAuth2AccessToken and
> one OAuth2AccessTokensByUser. OAuth2A
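One easy way to check the current value is the driver's schema metadata,
which avoids version-specific system tables. A sketch, assuming a driver
release that can still speak your cluster's native protocol (for C* 1.2 that
means a 2.0-era driver; the metadata calls are the same) and a hypothetical
keyspace name:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.TableMetadata;

public class GcGraceCheck {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try {
            cluster.init(); // fetch metadata without opening a session
            // Mixed-case CF names need embedded double quotes.
            TableMetadata table = cluster.getMetadata()
                    .getKeyspace("auth") // hypothetical keyspace
                    .getTable("\"OAuth2AccessToken\"");
            System.out.println("gc_grace_seconds = "
                    + table.getOptions().getGcGraceInSeconds());
        } finally {
            cluster.close();
        }
    }
}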
Hi Kevin,
The tombstones will live in an sstable until it gets compacted. Do you
have a lot of pending compactions? If so, increasing the number of
parallel compactors may help. You may also be able to tune the STCS
parameters. Here's a good explanation of how it works:
https://shrikantbang.wordpre
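And if it turns out to be bucketing rather than backlog, the STCS knobs can be
adjusted per table. A sketch of the schema change (hypothetical table; the
values shown are only there to illustrate the option names):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class TuneStcs {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try (Session session = cluster.connect()) {
            session.execute("ALTER TABLE app.tokens WITH compaction = {"
                    + " 'class': 'SizeTieredCompactionStrategy',"
                    + " 'min_threshold': '4',"    // sstables per bucket before compacting
                    + " 'max_threshold': '32',"   // max sstables merged at once
                    + " 'bucket_high': '1.5' }"); // how wide a "similar size" bucket is
        } finally {
            cluster.close();
        }
    }
}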
If I understand the way replication is done, the node in us-east-1d has
all the (data) replicas, right?
No, for this to be correct you'd need one DC per AZ, which is not the case
here since you have a single DC encompassing multiple AZs. Right now,
replicas will be spread across 3 distinct AZs,
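To make the placement concrete: with Ec2Snitch the region becomes the DC
("us-east") and each AZ becomes a rack, and NetworkTopologyStrategy places
replicas on distinct racks when it can. A sketch with a hypothetical keyspace:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class AzAwareKeyspace {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try (Session session = cluster.connect()) {
            // 3 replicas in the single "us-east" DC; NetworkTopologyStrategy's
            // rack-aware placement spreads them across 3 AZs (racks).
            session.execute("CREATE KEYSPACE IF NOT EXISTS app WITH replication"
                    + " = { 'class': 'NetworkTopologyStrategy', 'us-east': 3 }");
        } finally {
            cluster.close();
        }
    }
}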
These sound like driver-side questions that might be better addressed to
your specific driver's mailing list. But from the terminology I'd guess
you're using a DataStax driver, possibly the Java one.
If so, you can look at WhiteListPolicy if you want to target specific
node(s). However aside fro
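A sketch of that with the Java driver (hypothetical addresses; native
transport port 9042 assumed). The whitelist caps which nodes the driver will
ever use as coordinators:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.RoundRobinPolicy;
import com.datastax.driver.core.policies.WhiteListPolicy;

import java.net.InetSocketAddress;
import java.util.Arrays;

public class WhiteListedClient {
    public static void main(String[] args) {
        // Only whitelisted hosts are eligible coordinators; every other node
        // in the cluster is ignored by this Cluster instance.
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")
                .withLoadBalancingPolicy(new WhiteListPolicy(
                        new RoundRobinPolicy(),
                        Arrays.asList(new InetSocketAddress("10.0.0.1", 9042),
                                      new InetSocketAddress("10.0.0.2", 9042))))
                .build();
        cluster.connect().close();
        cluster.close();
    }
}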
Hi,
I have DC1 (3 nodes) and DC2 (3 nodes), with RF 3 in both DCs.
Question 1: when I create my LoadBalancingPolicy and override newQueryPlan, is
the list of hosts returned by newQueryPlan the candidate coordinator list?
Question 2: Can I force the coordinator to be a particular Cassandra node
only? I used con
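On question 1: yes, as far as I understand the driver; the iterator returned
by newQueryPlan is the ordered candidate coordinator list, and the driver
tries hosts in that order until one is available. For question 2, a policy
that always yields one node first gets close to pinning the coordinator. A
minimal sketch (driver 3.x interface; the pinned address is hypothetical):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.policies.LoadBalancingPolicy;

import java.net.InetAddress;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Always offers one pinned node first; the driver walks the iterator in
// order, so the pinned node coordinates whenever it is up.
public class PinnedCoordinatorPolicy implements LoadBalancingPolicy {

    private final List<Host> hosts = new CopyOnWriteArrayList<Host>();
    private final InetAddress pinned; // the preferred coordinator

    public PinnedCoordinatorPolicy(InetAddress pinned) {
        this.pinned = pinned;
    }

    @Override
    public void init(Cluster cluster, Collection<Host> initialHosts) {
        hosts.addAll(initialHosts);
    }

    @Override
    public HostDistance distance(Host host) {
        return HostDistance.LOCAL; // simplification for the sketch
    }

    @Override
    public Iterator<Host> newQueryPlan(String loggedKeyspace, Statement statement) {
        // This list IS the candidate coordinator list, tried in order.
        List<Host> plan = new ArrayList<Host>(hosts.size());
        for (Host h : hosts) {
            if (h.getAddress().equals(pinned)) plan.add(0, h);
            else plan.add(h);
        }
        return plan.iterator();
    }

    @Override public void onAdd(Host host)    { hosts.add(host); }
    @Override public void onUp(Host host)     { if (!hosts.contains(host)) hosts.add(host); }
    @Override public void onDown(Host host)   { hosts.remove(host); }
    @Override public void onRemove(Host host) { hosts.remove(host); }
    @Override public void close() { }
}

Register it with Cluster.builder().withLoadBalancingPolicy(new
PinnedCoordinatorPolicy(InetAddress.getByName("10.0.0.7"))) (hypothetical
address). Keep in mind the pinned node becomes a hotspot and an extra hop for
any data it doesn't own.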