Insert-only application repair

2018-05-12 Thread onmstester onmstester
In an insert-only use case with TTL (6 months), should I run this command every 5-7 days on all nodes of the production cluster (according to this: http://cassandra.apache.org/doc/latest/operating/repair.html )? nodetool repair -pr --full When none of the nodes was down in 4 months (ever sin
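A sketch of the schedule the linked docs describe: a staggered, full, primary-range repair, one node at a time. The host names below are placeholders, not from the thread:

```shell
# Run a full, primary-range repair on each node in turn.
# -pr repairs only each node's primary token ranges, so running it
# on every node covers the whole ring exactly once.
# node1..node3 are placeholder hosts for this sketch.
for host in node1 node2 node3; do
    ssh "$host" nodetool repair -pr --full
done
```

Running the nodes sequentially keeps repair load off the cluster as a whole; each pass should complete well within gc_grace_seconds.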

Re: Insert-only application repair

2018-05-12 Thread Nitan Kainth
If you have RF > CL, then repair needs to be run to make sure data is in sync.

Re: Insert-only application repair

2018-05-12 Thread onmstester onmstester
Thank you Nitan. That's exactly my case (RF > CL). But as long as there is no node outage, shouldn't hinted handoff handle data consistency?

Re: Insert-only application repair

2018-05-12 Thread Jeff Jirsa
In a TTL-only use case with no explicit deletes, if read CL + write CL > RF you can likely avoid repairs, with a few huge caveats: 1) read repair may mess up your TTL expiration if you're using TWCS; 2) if you lose a host you probably need to run repairs or you may not see some data after replacem
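Jeff's "read CL + write CL > RF" condition is the standard quorum-overlap rule: if the number of replicas a read must contact plus the number a write must ack exceeds RF, every read set intersects every write set. A minimal sketch (function and variable names are mine, not from the thread):

```python
def overlap_guaranteed(read_replicas: int, write_replicas: int, rf: int) -> bool:
    """True if any read set must intersect any write set, i.e. a read
    is guaranteed to contact at least one replica that saw the write."""
    return read_replicas + write_replicas > rf

# With RF=3: QUORUM writes (2 acks) + QUORUM reads (2 replicas) overlap...
print(overlap_guaranteed(2, 2, 3))  # True
# ...but ONE writes + ONE reads give no such guarantee.
print(overlap_guaranteed(1, 1, 3))  # False
```

This is only the happy-path guarantee; as the caveats note, a failed write that reached some replicas, or a replaced host, still leaves divergence that only repair fixes.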

Re: Error after 3.1.0 to 3.11.2 upgrade

2018-05-12 Thread Abdul Patel
Yeah, found that all had replication factor 3 and system_auth had 1; changed it to 3 now. So was this issue due to the system_auth replication factor mismatch? On Saturday, May 12, 2018, Hannu Kröger wrote: > Hi, > > Did you check replication strategy and amounts of replicas of system_auth > keyspace?
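The change being described would look something like this, assuming NetworkTopologyStrategy with a single data center (the DC name 'dc1' is a placeholder):

```sql
ALTER KEYSPACE system_auth
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
```

After altering the keyspace, the new replicas don't have the data yet, so a `nodetool repair system_auth` on each node is typically needed for the change to take full effect.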

Re: Error after 3.1.0 to 3.11.2 upgrade

2018-05-12 Thread Jeff Jirsa
RF of one means all auth requests go to the same node, so they're more likely to time out if that host is overloaded or restarts. Increasing it distributes the queries among more hosts. -- Jeff Jirsa

Re: Cassandra upgrade from 2.1 to 3.0

2018-05-12 Thread Jeff Jirsa
I haven't seen this before, but I have a guess. What client/driver are you using? Are you using a prepared statement that has every column listed for the update, and leaving the un-set columns as null? If so, the null is being translated into a delete, which is clearly not what you want. The dif
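One common way to avoid the accidental-tombstone trap described here is to build the UPDATE from only the columns you actually intend to change, instead of binding null into a fixed all-columns statement. A hypothetical helper, not from the thread:

```python
def build_update(table: str, key_col: str, changes: dict) -> tuple[str, list]:
    """Build a CQL UPDATE that touches only the supplied columns.

    Columns absent from `changes` are never bound at all, so they are
    never bound as null (which Cassandra treats as a delete of that
    cell, writing a tombstone).
    """
    assignments = ", ".join(f"{col} = ?" for col in changes)
    cql = f"UPDATE {table} SET {assignments} WHERE {key_col} = ?"
    return cql, list(changes.values())

cql, params = build_update("users", "id", {"email": "a@b.c"})
print(cql)  # UPDATE users SET email = ? WHERE id = ?
```

Newer drivers also support leaving a prepared-statement parameter explicitly unset (e.g. the DataStax Python driver's UNSET_VALUE sentinel), which skips the column without generating a tombstone; check your driver's documentation for the equivalent.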