Re: Is it possible to replay hints after running nodetool drain?

2016-09-02 Thread jerome
Hi Matija, Thanks for your help! The downtime is minimal, usually less than five minutes. Since it is so short, we're not so concerned about the down node missing data; we just want to make sure that before it goes down it replays all the hints it has, so that there won't be any gaps in d...

Re: What cipher suites are supported in Cassandra 3.7?

2016-09-02 Thread Nate McCall
Your best bet is to use 256-bit AES via "TLS_RSA_WITH_AES_256_CBC_SHA", since that is (usually) hardware accelerated on recent CPUs. The security page on the docs site has a lot of good information: http://cassandra.apache.org/doc/latest/operating/security.html The above contains a link to the foll...
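For reference, cipher suites are pinned in cassandra.yaml under the encryption options. A minimal sketch of the server side (paths and passwords are placeholders; check the security page linked above for the full option set):

```yaml
# cassandra.yaml (excerpt) -- illustrative values only
server_encryption_options:
    internode_encryption: all
    keystore: conf/.keystore
    keystore_password: changeme        # placeholder
    truststore: conf/.truststore
    truststore_password: changeme      # placeholder
    # Standard JSSE suite names; the JVM must actually support them
    # (256-bit AES may require the JCE unlimited-strength policy files
    # on older Java 8 releases).
    cipher_suites: [TLS_RSA_WITH_AES_256_CBC_SHA]
```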

What cipher suites are supported in Cassandra 3.7?

2016-09-02 Thread Eric Ho
I'm trying to enable SSL (internode + client). I need to specify the cipher suites, but I don't know which ones are supported by C*. Any pointers much appreciated. thx -- -eric ho

Re: Is it possible to replay hints after running nodetool drain?

2016-09-02 Thread Matija Gobec
Hi Jerome, The node being drained stops listening for requests, but the other nodes, acting as coordinators for requests destined to it, will store hints for that downed node for a configured period of time (max_hint_window_in_ms, 3 hours by default). If the downed node is back online within this time window, it will...
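The window mentioned above is set in cassandra.yaml; a minimal excerpt with the defaults (3 hours = 10,800,000 ms):

```yaml
# cassandra.yaml (excerpt)
hinted_handoff_enabled: true
# How long coordinators keep storing hints for a node after it goes down.
max_hint_window_in_ms: 10800000   # 3 hours
```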

Re: STCS Compaction with wide rows & TTL'd data

2016-09-02 Thread Kevin O'Connor
On Fri, Sep 2, 2016 at 9:33 AM, Mark Rose wrote:
> Hi Kevin,
>
> The tombstones will live in an sstable until it gets compacted. Do you
> have a lot of pending compactions? If so, increasing the number of
> parallel compactors may help.

Nope, we are pretty well managed on compactions. Only ever...

Is it possible to replay hints after running nodetool drain?

2016-09-02 Thread jerome
Hello, As part of routine maintenance for our cluster, my colleagues and I will run a nodetool drain before stopping a Cassandra node, performing maintenance, and bringing it back up. We run maintenance as a cron job, with a lock stored in a different cluster to ensure only one node is ever down at...

Re: Return value of newQueryPlan

2016-09-02 Thread Siddharth Verma
I am debugging an issue on our cluster, trying to find its root cause based on our application's behavior. I used the WhiteList policy (I asked a question about this some time back), but it was stated that it cannot guarantee the desired behavior. Yes, I forgot to mention, I was referring to the Java driv...

Re: CASSANDRA-12278

2016-09-02 Thread Paulo Motta
Forwarding to the user@cassandra.apache.org list, as this list is specific to cassandra-development, not general Cassandra questions. Can you check that the repository you built the snapshot from contains the commit 01d5fa8acf05973074482eda497677c161a311ac? Is Java 1.8.0_101 on your $env:PATH? Can yo...

Re: STCS Compaction with wide rows & TTL'd data

2016-09-02 Thread Jonathan Haddad
Also, if you can get to at least 2.0 you can use TimeWindowCompactionStrategy, which works a lot better with time-series data w/ TTLs than STCS.

On Fri, Sep 2, 2016 at 9:53 AM Jonathan Haddad wrote:
> What's your gc_grace_seconds set to? Is it possible you have a lot of
> tombstones that haven't...
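For reference, moving a table to TWCS is a schema change; a hedged sketch (keyspace/table names and the one-day window are placeholders, and the strategy must be available in the running Cassandra version):

```sql
-- Pick a window roughly matching how much data expires together.
ALTER TABLE ks.events
  WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': 1
  };
```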

Re: STCS Compaction with wide rows & TTL'd data

2016-09-02 Thread Jonathan Haddad
What's your gc_grace_seconds set to? Is it possible you have a lot of tombstones that haven't reached the GC grace time yet?

On Thu, Sep 1, 2016 at 12:54 AM Kevin O'Connor wrote:
> We're running C* 1.2.11 and have two CFs, one called OAuth2AccessToken and
> one OAuth2AccessTokensByUser. OAuth2A...
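gc_grace_seconds is a per-table setting. A sketch of checking and lowering it (names and the one-day value are illustrative; the system_schema query is for 3.x -- on 1.2 use DESCRIBE TABLE instead -- and lowering it is only safe if repairs complete within the new window):

```sql
-- Inspect the current value (Cassandra 3.x system tables).
SELECT gc_grace_seconds FROM system_schema.tables
 WHERE keyspace_name = 'ks' AND table_name = 'oauth2accesstoken';

-- Lower it, e.g. to one day, for a TTL-only table with no explicit deletes.
ALTER TABLE ks.oauth2accesstoken WITH gc_grace_seconds = 86400;
```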

Re: STCS Compaction with wide rows & TTL'd data

2016-09-02 Thread Mark Rose
Hi Kevin, The tombstones will live in an sstable until it gets compacted. Do you have a lot of pending compactions? If so, increasing the number of parallel compactors may help. You may also be able to tune the STCS parameters. Here's a good explanation of how it works: https://shrikantbang.wordpre...

Re: nodetool repair uses options '-local' and '-pr' together

2016-09-02 Thread Paulo Motta
> If I understand the way replication is done, the node in us-east-1d has
> all the (data) replicas, right?

No, for this to be correct you'd need to have one DC per AZ, which is not the case here since you have a single DC encompassing multiple AZs. Right now, replicas will be spread across 3 distinct AZs,...
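With the EC2 snitches, each AZ is treated as a rack within the region's DC, and NetworkTopologyStrategy then places replicas on distinct racks where possible. A sketch (keyspace and DC names are placeholders; the DC name must match what the snitch reports):

```sql
-- One DC spanning the AZs; RF=3 places one replica in each of 3 distinct
-- AZs when at least three AZs have nodes.
CREATE KEYSPACE ks
  WITH replication = {'class': 'NetworkTopologyStrategy', 'us-east': 3};
```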

Re: Return value of newQueryPlan

2016-09-02 Thread Eric Stevens
These sound like driver-side questions that might be better addressed to your specific driver's mailing list. But from the terminology, I'd guess you're using a DataStax driver, possibly the Java one. If so, you can look at WhiteListPolicy if you want to target specific node(s). However, aside fro...
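On the first question: in the Java driver, LoadBalancingPolicy#newQueryPlan returns a per-query Iterator&lt;Host&gt;, and the driver tries hosts in exactly that order, so the iterator is effectively the candidate-coordinator list. A self-contained sketch of that idea, using plain strings instead of the driver's Host type (the class name and host addresses below are hypothetical, for illustration only):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Simplified model of a load balancing policy: newQueryPlan() returns an
// ordered iterator of candidate coordinators; the driver contacts the first
// element first and falls back to later elements on failure.
public class PinnedQueryPlan {
    private final List<String> preferred;

    public PinnedQueryPlan(List<String> preferred) {
        this.preferred = preferred;
    }

    // Analogous to LoadBalancingPolicy#newQueryPlan in the real driver.
    public Iterator<String> newQueryPlan() {
        return preferred.iterator();
    }

    public static void main(String[] args) {
        PinnedQueryPlan plan =
            new PinnedQueryPlan(Arrays.asList("10.0.0.1", "10.0.0.2"));
        Iterator<String> it = plan.newQueryPlan();
        // The driver would contact 10.0.0.1 first, then 10.0.0.2 as fallback.
        System.out.println(it.next()); // prints 10.0.0.1
    }
}
```

WhiteListPolicy achieves a similar effect inside the real driver by restricting the hosts another policy may return, which is why it comes up for "hit only this node" use cases.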

Return value of newQueryPlan

2016-09-02 Thread Siddharth Verma
Hi, I have DC1 (3 nodes) and DC2 (3 nodes), with RF=3 on both DCs.
Question 1: When I create my LoadBalancingPolicy and override newQueryPlan, is the list of hosts returned by newQueryPlan the candidate coordinator list?
Question 2: Can I force the coordinator to hit a particular Cassandra node only? I used con...