Re: Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Gábor Auth
Hi, On Fri, May 17, 2024 at 6:18 PM Jon Haddad wrote: > I strongly suggest you don't use materialized views at all. There are > edge cases that in my opinion make them unsuitable for production, both in > terms of cluster stability as well as data integrity. > Oh, there is already an open and

Re: Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Gábor Auth
Gábor AUTH > On Fri, May 17, 2024 at 8:58 AM Gábor Auth wrote: > >> Hi, >> >> I know, I know, the materialized view is experimental... :) >> >> So, I ran into a strange error. Among others, I have a very small 4-nodes >> cluster, with very minimal data

Re: Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Jon Haddad
w, I know, the materialized view is experimental... :) > > So, I ran into a strange error. Among others, I have a very small 4-node > cluster, with very minimal data (~100 MB in all), the keyspace's > replication factor is 3, everything works fine... except: if I restart a > node, I get

Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Gábor Auth
Hi, I know, I know, the materialized view is experimental... :) So, I ran into a strange error. Among others, I have a very small 4-node cluster, with very minimal data (~100 MB in all), the keyspace's replication factor is 3, everything works fine... except: if I restart a node, I get

Re: write on ONE node vs replication factor

2023-07-16 Thread Anurag Bisht
Thank you Dipan, it makes sense now. Cheers, Anurag On Sun, Jul 16, 2023 at 12:43 AM Dipan Shah wrote: > Hello Anurag, > > In Cassandra, Strong consistency is guaranteed when "R + W > N" where R is > Read consistency, W is Write consistency and N is the Replication Fa

Re: write on ONE node vs replication factor

2023-07-16 Thread Dipan Shah
Hello Anurag, In Cassandra, Strong consistency is guaranteed when "R + W > N" where R is Read consistency, W is Write consistency and N is the Replication Factor. So in your case, R(2) + W(1) = 3 which is NOT greater than your replication factor(3) so you will not be able to gua
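
A worked example of the R + W > N rule (a minimal sketch; the keyspace and table names are hypothetical, not from the thread):

  -- RF = 3 on a throwaway keyspace
  CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
  CREATE TABLE IF NOT EXISTS demo.kv (k text PRIMARY KEY, v int);
  -- R(2) + W(2) = 4 > RF(3): every QUORUM read overlaps the QUORUM that acked the write
  CONSISTENCY QUORUM;
  INSERT INTO demo.kv (k, v) VALUES ('a', 1);
  SELECT v FROM demo.kv WHERE k = 'a';
  -- R(2) + W(1) = 3, not > RF(3): a QUORUM read after a ONE write can still miss it
  CONSISTENCY ONE;
  INSERT INTO demo.kv (k, v) VALUES ('a', 2);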

Re: write on ONE node vs replication factor

2023-07-15 Thread Anurag Bisht
thank you Jeff, it makes more sense now. How about I write with ONE consistency, replication factor = 3 and read consistency QUORUM? I am guessing in that case I will not have the empty read even if it happened immediately after the write request, let me know your thoughts? Cheers, Anurag

Re: write on ONE node vs replication factor

2023-07-15 Thread Jeff Jirsa
Consistency level controls when queries acknowledge/succeed Replication factor is where data lives / how many copies If you write at consistency ONE and replication factor 3, the query finishes successfully when the write is durable on one of the 3 copies. It will get sent to all 3, but it’ll
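
One way to see the "where data lives" half of this (a sketch; keyspace, table and key are the hypothetical ones from the example above): nodetool getendpoints lists the replica nodes that own a given partition key, and with RF = 3 it prints three addresses regardless of the consistency level the write used.

  # which nodes hold the replicas of partition key 'a'?
  nodetool getendpoints demo kv a
  # prints three node addresses when replication_factor is 3;
  # CONSISTENCY ONE only changed how many acks the coordinator waited for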

write on ONE node vs replication factor

2023-07-15 Thread Anurag Bisht
Hello Users, I am new to Cassandra and trying to understand its architecture. If I write to ONE node for a particular key and have a replication factor of 3, would the written key get replicated to the other two nodes? Let me know if I am thinking incorrectly. Thanks, Anurag

RE: Trouble After Changing Replication Factor

2021-10-13 Thread Isaeed Mohanna
Replication Factor The most likely explanation is that repair failed and you didn't notice. Or that you didn't actually repair every host / every range. Which version are you using? How did you run repair? On Tue, Oct 12, 2021 at 4:33 AM Isaeed Mohanna wrote: Hi

Re: Trouble After Changing Replication Factor

2021-10-12 Thread Jeff Jirsa
request will actually return a correct result? > > > > Thanks > > > > *From:* Bowen Song > *Sent:* Monday, October 11, 2021 5:13 PM > *To:* user@cassandra.apache.org > *Subject:* Re: Trouble After Changing Replication Factor > > > > You have RF=3

Re: Trouble After Changing Replication Factor

2021-10-12 Thread Dmitry Saprykin
; > Thanks > > > > *From:* Bowen Song > *Sent:* Monday, October 11, 2021 5:13 PM > *To:* user@cassandra.apache.org > *Subject:* Re: Trouble After Changing Replication Factor > > > > You have RF=3 and both read & write CL=1, which means you are asking >

Re: Trouble After Changing Replication Factor

2021-10-12 Thread Bowen Song
onday, October 11, 2021 5:13 PM *To:* user@cassandra.apache.org *Subject:* Re: Trouble After Changing Replication Factor You have RF=3 and both read & write CL=1, which means you are asking Cassandra to give up strong consistency in order to gain higher availability and perhaps slight faster s

RE: Trouble After Changing Replication Factor

2021-10-12 Thread Isaeed Mohanna
request will actually return a correct result? Thanks From: Bowen Song Sent: Monday, October 11, 2021 5:13 PM To: user@cassandra.apache.org Subject: Re: Trouble After Changing Replication Factor You have RF=3 and both read & write CL=1, which means you are asking Cassandra to give up st

Re: Trouble After Changing Replication Factor

2021-10-11 Thread Bowen Song
write CL) > RF. On 10/10/2021 11:55, Isaeed Mohanna wrote: Hi We had a cluster with 3 Nodes with Replication Factor 2 and we were using read with consistency Level One. We recently added a 4^th node and changed the replication factor to 3, once this was done apps reading from DB with CL1 w

Trouble After Changing Replication Factor

2021-10-10 Thread Isaeed Mohanna
Hi We had a cluster with 3 Nodes with Replication Factor 2 and we were using read with consistency Level One. We recently added a 4th node and changed the replication factor to 3, once this was done apps reading from DB with CL1 would receive an empty record, Looking around I was surprised to

Re: Anti-entropy repair with a 4 node cluster replication factor 4

2020-10-27 Thread manish khandelwal
If you run full repair then it should be fine, since all the replicas are present on all the nodes. If you are using -pr option then you need to run on all the nodes. On Tue, Oct 27, 2020 at 4:11 PM Fred Al wrote: > Hello! > Running Cassandra 2.2.9 with a 4 node cluster with replication
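
A sketch of the two options being contrasted (keyspace name hypothetical; flag spellings as in the 2.2 nodetool, so verify against your version):

  # full repair from a single node -- with RF = N every node holds every replica
  nodetool repair --full my_keyspace
  # primary-range repair -- only covers this node's primary ranges,
  # so it has to be run on each of the 4 nodes in turn
  nodetool repair -pr my_keyspace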

Anti-entropy repair with a 4 node cluster replication factor 4

2020-10-27 Thread Fred Al
Hello! Running Cassandra 2.2.9 with a 4 node cluster with replication factor 4. When running anti-entropy repair is it required to run repair on all 4 nodes or is it sufficient to run it on only one node? Since all data is replicated on all nodes i.m.o. only one node would need to be repaired to

Re: any risks with changing replication factor on live production cluster without downtime and service interruption?

2020-05-27 Thread Leena Ghatpande
: Tuesday, May 26, 2020 11:33 PM To: user@cassandra.apache.org Subject: Re: any risks with changing replication factor on live production cluster without downtime and service interruption? By retry logic, I’m going to guess you are doing some kind of version consistency trick where you have a

Re: any risks with changing replication factor on live production cluster without downtime and service interruption?

2020-05-26 Thread Reid Pinchback
to LOCAL_QUORUM until you’re done to buffer yourself from that risk. From: Leena Ghatpande Reply-To: "user@cassandra.apache.org" Date: Tuesday, May 26, 2020 at 1:20 PM To: "user@cassandra.apache.org" Subject: Re: any risks with changing replication factor on live production clust

Re: any risks with changing replication factor on live production cluster without downtime and service interruption?

2020-05-26 Thread Leena Ghatpande
From: Leena Ghatpande Sent: Friday, May 22, 2020 11:51 AM To: cassandra cassandra Subject: any risks with changing replication factor on live production cluster without downtime and service interruption? We are on Cassandra 3.7 and have a 12 node cluster , 2DC, with 6 nodes in

Re: any risks with changing replication factor on live production cluster without downtime and service interruption?

2020-05-24 Thread Oleksandr Shulgin
On Fri, May 22, 2020 at 9:51 PM Jeff Jirsa wrote: > With those consistency levels it’s already possible you don’t see your > writes, so you’re already probably seeing some of what would happen if you > went to RF=5 like that - just less common > > If you did what you describe you’d have a 40% cha
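
Filling in the arithmetic behind that figure (my reading of the numbers quoted above): with RF = 5 per DC, a LOCAL_QUORUM write is acknowledged by 3 replicas, while a LOCAL_ONE read touches only 1 of the 5.

  replicas acked by a LOCAL_QUORUM write (RF = 5)   =  floor(5/2) + 1  =  3
  replicas touched by a LOCAL_ONE read              =  1
  chance the read lands on a not-yet-updated replica right after the write  =  (5 - 3) / 5  =  40%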

Re: any risks with changing replication factor on live production cluster without downtime and service interruption?

2020-05-22 Thread Jeff Jirsa
and thinking of changing > the replication factor to 5 for each DC. > > Our application uses the below consistency level > read-level: LOCAL_ONE > write-level: LOCAL_QUORUM > > if we change the RF=5 on live cluster, and run full repairs, would we see > read/write errors w

any risks with changing replication factor on live production cluster without downtime and service interruption?

2020-05-22 Thread Leena Ghatpande
We are on Cassandra 3.7 and have a 12 node cluster , 2DC, with 6 nodes in each DC. RF=3 We have around 150M rows across tables. We are planning to add more nodes to the cluster, and thinking of changing the replication factor to 5 for each DC. Our application uses the below consistency level

Re: system_auth keyspace replication factor

2018-11-26 Thread Sam Tunnicliffe
> I suspect some of the intermediate queries (determining role, etc) happen at > quorum in 2.2+, but I don’t have time to go read the code and prove it. This isn’t true. Aside from when using the default superuser, only CRM::getAllRoles reads at QUORUM (because the resultset would include the

Re: system_auth keyspace replication factor

2018-11-26 Thread Oleksandr Shulgin
On Fri, Nov 23, 2018 at 5:38 PM Vitali Dyachuk wrote: > > We have recently met a problem when we added 60 nodes in 1 region to the > cluster > and set an RF=60 for the system_auth ks, following this documentation > https://docs.datastax.com/en/cql/3.3/cql/cql_using/useUpdateKeyspaceRF.html > Sad

Re: system_auth keyspace replication factor

2018-11-23 Thread Vitali Dyachuk
Attaching the runner log snippet, where we can see that "Rebuilding token map" took most of the time. getAllRoles is using quorum, don't know if it is used during login: https://github.com/apache/cassandra/blob/cc12665bb7645d17ba70edcf952ee6a1ea63127b/src/java/org/apache/cassandra/auth/CassandraRoleManag

Re: system_auth keyspace replication factor

2018-11-23 Thread Jeff Jirsa
I suspect some of the intermediate queries (determining role, etc) happen at quorum in 2.2+, but I don’t have time to go read the code and prove it. In any case, RF > 10 per DC is probably excessive Also want to crank up the validity times so it uses cached info longer -- Jeff Jirsa > On N

Re: system_auth keyspace replication factor

2018-11-23 Thread Vitali Dyachuk
no, it's not the cassandra user, and as I understood all other users log in at LOCAL_ONE. On Fri, 23 Nov 2018, 19:30 Jonathan Haddad wrote: Any chance you’re logging in with the Cassandra user? It uses quorum > reads. > > > On Fri, Nov 23, 2018 at 11:38 AM Vitali Dyachuk > wrote: > >> Hi, >> We have recently me

Re: system_auth keyspace replication factor

2018-11-23 Thread Jonathan Haddad
Any chance you’re logging in with the Cassandra user? It uses quorum reads. On Fri, Nov 23, 2018 at 11:38 AM Vitali Dyachuk wrote: > Hi, > We have recently met a problem when we added 60 nodes in 1 region to the > cluster > and set an RF=60 for the system_auth ks, following this documentation >

system_auth keyspace replication factor

2018-11-23 Thread Vitali Dyachuk
Hi, We have recently met a problem when we added 60 nodes in 1 region to the cluster and set an RF=60 for the system_auth ks, following this documentation https://docs.datastax.com/en/cql/3.3/cql/cql_using/useUpdateKeyspaceRF.html However we've started to see increased login latencies in the cluste

Re: Tuning Replication Factor - All, Consistency ONE

2018-07-11 Thread Jürgen Albersdorfer
Data by Key, but not for searching. High Availability is a nice giveaway here. If you end up having only one Table in C*, maybe something like Redis would work for your needs, too. Some hints from my own experience with it - if you choose to use Cassandra: Have at least as many Racks as Replication F

Re: Tuning Replication Factor - All, Consistency ONE

2018-07-10 Thread Jeff Jirsa
adjacent racks, the link between the two racks goes down, but both are otherwise functional - a query at ONE in either rack would be able to read and write data, but it would diverge between the two racks for some period of time). > > When I go to set up the database though, I am requi

Tuning Replication Factor - All, Consistency ONE

2018-07-10 Thread Code Wiget
around 1s, then there shouldn’t be an issue. When I go to set up the database though, I am required to set a replication factor to a number - 1,2,3,etc. So I can’t just say “ALL” and have it replicate to all nodes. Right now, I have a 2 node cluster with replication factor 3. Will this cause

Re: Reducing the replication factor

2018-01-09 Thread Jeff Jirsa
Run repair first to ensure the data is properly replicated, then cleanup. -- Jeff Jirsa > On Jan 9, 2018, at 9:36 AM, Alessandro Pieri wrote: > > Dear Everyone, > > We are running Cassandra v2.0.15 on our production cluster. > > We would like to reduce the replicati
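
A sketch of that sequence for an RF 3 -> 2 change (keyspace name and SimpleStrategy are illustrative):

  # 1. make sure the copies that will remain are complete and consistent
  nodetool repair my_keyspace                     # on every node
  # 2. lower the replication factor
  cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication =
    {'class': 'SimpleStrategy', 'replication_factor': 2};"
  # 3. drop the data for ranges each node no longer owns
  nodetool cleanup my_keyspace                    # on every node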

Reducing the replication factor

2018-01-09 Thread Alessandro Pieri
Dear Everyone, We are running Cassandra v2.0.15 on our production cluster. We would like to reduce the replication factor from 3 to 2 but we are not sure if it is a safe operation. We would like to get some feedback from you guys. Has anybody tried to shrink the replication factor? Does

Cassandra Replication Factor change from 2 to 3 for each data center

2017-12-15 Thread Harika Vangapelli -T (hvangape - AKRAYA INC at Cisco)
This is just a basic question, but it is worth asking. We changed the replication factor from 2 to 3 in our production cluster. We have 2 data centers. Is nodetool repair -dcpar from a single node in one data center sufficient for the whole replication to take effect? Please confirm. Do I

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Nate McCall
Regardless, if you are not modifying users frequently (with five you most likely are not), make sure to turn the permission cache way up. In 2.1 that is just: permissions_validity_in_ms (default is 2000 or 2 seconds). Feel free to set it to 1 day or some such. The corresponding async update para
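
A minimal cassandra.yaml sketch of that advice (the one-day value is the example given above; the update-interval line is my assumption about the "corresponding async update" parameter mentioned at the end):

  # cassandra.yaml -- auth cache tuning, values in milliseconds
  permissions_validity_in_ms: 86400000         # cache permissions for ~1 day
  permissions_update_interval_in_ms: 3600000   # assumed: refresh cached entries in the background hourly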

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Erick Ramirez
8-30 > 10:51:25.015000 | xx.xx.xx.113 | 601190 > > REQUEST_RESPONSE message received from > /xx.xx.xx.116 [MessagingService-Incoming-/xx.xx.xx.116] | 2017-08-30 > 10:51:25.015000 | xx.xx.xx.113 | 601771 > > > Processing re

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread kurt greaves
For that many nodes mixed with vnodes you probably want a lower RF than N per datacenter. 5 or 7 would be reasonable. The only down side is that auth queries may take slightly longer as they will often have to go to other nodes to be resolved, but in practice this is likely not a big deal as the da

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Chuck Reynolds
Request complete | 2017-08-30 10:51:25.014874 | xx.xx.xx.113 | 601874 From: Oleksandr Shulgin Reply-To: "user@cassandra.apache.org" Date: Wednesday, August 30, 2017 at 10:42 AM To: User Subject: Re: system_auth replication

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Oleksandr Shulgin
On Wed, Aug 30, 2017 at 6:40 PM, Chuck Reynolds wrote: > How many users do you have (or expect to be found in system_auth.users)? > > 5 users. > > What are the current RF for system_auth and consistency level you are > using in cqlsh? > > 135 in one DC and 227 in the other DC. Consistency lev

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Chuck Reynolds
TRACING ON)? Tracing timeout even though I increased it to 120 seconds. From: Oleksandr Shulgin Reply-To: "user@cassandra.apache.org" Date: Wednesday, August 30, 2017 at 10:19 AM To: User Subject: Re: system_auth replication factor in Cassandra 2.1 On Wed, Aug 30, 2017 at 5:50 PM, Chuc

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Oleksandr Shulgin
On Wed, Aug 30, 2017 at 6:20 PM, Chuck Reynolds wrote: > So I tried to run a repair with the following on one of the server. > > nodetool repair system_auth -pr –local > > > > After two hours it hadn’t finished. I had to kill the repair because of > another issue and haven’t tried again. > > > >

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Chuck Reynolds
if I set the RF back to a lower number like 5? Thanks From: on behalf of Sam Tunnicliffe Reply-To: "user@cassandra.apache.org" Date: Wednesday, August 30, 2017 at 10:10 AM To: "user@cassandra.apache.org" Subject: Re: system_auth replication factor in Cassandra 2.1 It'

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Oleksandr Shulgin
On Wed, Aug 30, 2017 at 5:50 PM, Chuck Reynolds wrote: > So I’ve read that if you're using authentication in Cassandra 2.1 that your > replication factor should match the number of nodes in your datacenter. > > > > *Is that true?* > > > > I have two datacenter clus

Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Sam Tunnicliffe
tup of your own users & superusers, the link above also has info on this. Thanks, Sam On 30 August 2017 at 16:50, Chuck Reynolds wrote: > So I’ve read that if you're using authentication in Cassandra 2.1 that your > replication factor should match the number of nodes in your datacenter

RE: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Jonathan Baynes
into a secure cluster, set the replication factor of the system_auth and dse_security keyspaces to a value that is greater than 1. In a multi-node cluster, using the default of 1 prevents logging into any node when the node that stores the user data is down. From: Chuck Reynolds [mailto:creyno
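
A sketch of what that usually looks like on a multi-DC cluster (DC names illustrative; the modest per-DC RF follows the "5 or 7 would be reasonable" advice elsewhere in this thread):

  cqlsh -e "ALTER KEYSPACE system_auth WITH replication =
    {'class': 'NetworkTopologyStrategy', 'dc1': 5, 'dc2': 5};"
  # then repair just this keyspace so the new replicas receive the auth data
  nodetool repair system_auth                   # on every node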

system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Chuck Reynolds
So I’ve read that if you're using authentication in Cassandra 2.1, your replication factor should match the number of nodes in your datacenter. Is that true? I have a two-datacenter cluster, 135 nodes in datacenter 1 & 227 nodes in an AWS datacenter. Why do I want to replicate the system_

Re: Dropping down replication factor

2017-08-15 Thread Erick Ramirez
ng and possible OOM > bugs due to metadata writing at end of streaming (sorry don't have ticket > handy). I'm worried I might not be able to do much with these since the > disk space usage is high and they are under a lot of load given the small > number of them for this rack.

Re: Dropping down replication factor

2017-08-13 Thread Brian Spindler
Thanks Kurt. We had one sstable from a cf of ours. I am actually running a repair on that cf now and then plan to try and join the additional nodes as you suggest. I deleted the opscenter corrupt sstables as well but will not bother repairing that before adding capacity. Been keeping an eye acr

Re: Dropping down replication factor

2017-08-13 Thread kurt greaves
On 14 Aug. 2017 00:59, "Brian Spindler" wrote: Do you think with the setup I've described I'd be ok doing that now to recover this node? The node died trying to run the scrub; I've restarted it but I'm not sure it's going to get past a scrub/repair, this is why I deleted the other files as a bru

Re: Dropping down replication factor

2017-08-13 Thread Brian Spindler
for >>> sure. ‘nodetool compactionstats’ if you’re able to provide it. The jstack >>> probably not necessary, streaming is being marked as failed and it’s >>> turning itself off. Not sure why streaming is marked as failing, though, >>> anything on the sending sides? >

Re: Dropping down replication factor

2017-08-13 Thread Jeff Jirsa
econdary index build. Hard to say for >>> sure. ‘nodetool compactionstats’ if you’re able to provide it. The jstack >>> probably not necessary, streaming is being marked as failed and it’s >>> turning itself off. Not sure why streaming is marked as failing, though, >>> anythin

Re: Dropping down replication factor

2017-08-13 Thread Brian Spindler
e jstack >> probably not necessary, streaming is being marked as failed and it’s >> turning itself off. Not sure why streaming is marked as failing, though, >> anything on the sending sides? >> >> >> >> >> >> From: Brian Spindler >> Reply-T

Re: Dropping down replication factor

2017-08-12 Thread Brian Spindler
> > > From: Brian Spindler > Reply-To: > Date: Saturday, August 12, 2017 at 6:34 PM > To: > Subject: Re: Dropping down replication factor > > Thanks for replying Jeff. > > Responses below. > > On Sat, Aug 12, 2017 at 8:33 PM Jeff Jirsa wrote: > >> Answers

Re: Dropping down replication factor

2017-08-12 Thread Jeffrey Jirsa
itself off. Not sure why streaming is marked as failing, though, anything on the sending sides? From: Brian Spindler Reply-To: Date: Saturday, August 12, 2017 at 6:34 PM To: Subject: Re: Dropping down replication factor Thanks for replying Jeff. Responses below. On Sat, Aug 12, 2017 at

Re: Dropping down replication factor

2017-08-12 Thread Brian Spindler
RANGE_SLICE 15 _TRACE 0 MUTATION 2949001 COUNTER_MUTATION 0 BINARY 0 REQUEST_RESPONSE 0 PAGED_RANGE 0 READ_REPAIR 8571 I can get a jstack if needed. > > > >

Re: Dropping down replication factor

2017-08-12 Thread Jeff Jirsa
not busy doing something? Like building secondary index or similar? jstack thread dump would be useful, or at least nodetool tpstats > > Rather than troubleshoot this further, what I was thinking about doing was: > - drop the replication factor on our keyspace to two Repair before y

Dropping down replication factor

2017-08-12 Thread brian . spindler
what I was thinking about doing was: - drop the replication factor on our keyspace to two - hopefully this would reduce load on these two remaining nodes - run repairs/cleanup across the cluster - then shoot these two nodes in the 'c' rack - run repairs/cleanup across the cluster W

RE: Question about replica and replication factor

2016-09-20 Thread Jun Wu
Great explanation! For the single partition read, it makes sense to read data from only one replica. Thank you so much Ben! Jun From: ben.sla...@instaclustr.com Date: Tue, 20 Sep 2016 05:30:43 + Subject: Re: Question about replica and replication factor To: wuxiaomi...@hotmail.com CC: user

Re: Question about replica and replication factor

2016-09-19 Thread Ben Slater
one replica, and operate read repair for the left >> replicas. >> >> Also, how could read accross all nodes in the cluster? >> >> Thanks! >> >> Jun >> >> >> From: ben.sla...@instaclustr.com >> Date: Tue, 20 Sep 2016 04:18:59 +

Re: Question about replica and replication factor

2016-09-19 Thread Jun Wu
words in the post shows that the coordinator only >> contact/read data from one replica, and operate read repair for the left >> replicas. >> >> Also, how could read accross all nodes in the cluster? >> >> Thanks! >> >> Jun >

Re: Question about replica and replication factor

2016-09-19 Thread Ben Slater
Date: Tue, 20 Sep 2016 04:18:59 + > Subject: Re: Question about replica and replication factor > To: user@cassandra.apache.org > > > Each individual read (where a read is a single row or single partition) > will read from one node (ignoring read repairs) as each partition will

RE: Question about replica and replication factor

2016-09-19 Thread Jun Wu
Jun From: ben.sla...@instaclustr.com Date: Tue, 20 Sep 2016 04:18:59 + Subject: Re: Question about replica and replication factor To: user@cassandra.apache.org Each individual read (where a read is a single row or single partition) will read from one node (ignoring read repairs) as each partiti

Re: Question about replica and replication factor

2016-09-19 Thread Ben Slater
distributed across all the nodes in your cluster). Cheers Ben On Tue, 20 Sep 2016 at 14:09 Jun Wu wrote: > Hi there, > > I have a question about the replica and replication factor. > > For example, I have a cluster of 6 nodes in the same data center. > Replication factor R
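
A quick way to observe this from cqlsh (a sketch; the table and keys are hypothetical): with tracing on, the trace of a single-partition read at CONSISTENCY ONE shows the coordinator contacting just one replica, and different keys hash to different replica sets, which is how reads spread across the cluster.

  CONSISTENCY ONE;
  TRACING ON;
  SELECT * FROM demo.kv WHERE k = 'a';   -- trace shows a single replica serving the read
  SELECT * FROM demo.kv WHERE k = 'b';   -- a different key may be served by different replicas
  TRACING OFF;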

Question about replica and replication factor

2016-09-19 Thread Jun Wu
Hi there, I have a question about the replica and replication factor. For example, I have a cluster of 6 nodes in the same data center. Replication factor RF is set to 3 and the consistency level is default 1. According to this calculator http://www.ecyrd.com/cassandracalculator

Re: Increasing replication factor and repair doesn't seem to work

2016-05-25 Thread Luke Jolly
e when I added it in it never synced data I >>>>>> guess? It was at around 50 MB when it first came up and transitioned to >>>>>> "UN". After it was in I did the 1->2 replication change and tried repair >>>>>> but it didn't fix i

Re: Increasing replication factor and repair doesn't seem to work

2016-05-25 Thread Luke Jolly
p and transitioned to >>>>> "UN". After it was in I did the 1->2 replication change and tried repair >>>>> but it didn't fix it. From what I can tell all the data on it is stuff >>>>> that has been written since it came up. We never del

Re: Increasing replication factor and repair doesn't seem to work

2016-05-24 Thread Mike Yeap
t; |/ State=Normal/Leaving/Joining/Moving >>>>>> -- Address Load Tokens Owns (effective) Host ID >>>>>> Rack >>>>>> UN 10.142.0.14 6.4 GB 256 100.0% >>>>>> c3a5c39d-e1c9-4

Re: Increasing replication factor and repair doesn't seem to work

2016-05-24 Thread Bryan Cheng
lower and then of course 10.128.0.20 which is missing >>> over 5 GB of data. I tried running nodetool -local on both DCs and it >>> didn't fix either one. >>> >>> Am I running into a bug of some kind? >>> >>> On Tue, May 24, 2016 at 4:06 P

Re: Increasing replication factor and repair doesn't seem to work

2016-05-24 Thread kurt Greaves
t;> >> Am I running into a bug of some kind? >> >> On Tue, May 24, 2016 at 4:06 PM Bhuvan Rawal wrote: >> >>> Hi Luke, >>> >>> You mentioned that replication factor was increased from 1 to 2. In that >>> case was the node bearing ip 10.128.0.20

Re: Increasing replication factor and repair doesn't seem to work

2016-05-24 Thread Bhuvan Rawal
nd then of course 10.128.0.20 which is missing over > 5 GB of data. I tried running nodetool -local on both DCs and it didn't > fix either one. > > Am I running into a bug of some kind? > > On Tue, May 24, 2016 at 4:06 PM Bhuvan Rawal wrote: > >> Hi Luke, >

Re: Increasing replication factor and repair doesn't seem to work

2016-05-24 Thread Luke Jolly
ke, > > You mentioned that replication factor was increased from 1 to 2. In that > case was the node bearing ip 10.128.0.20 carried around 3GB data earlier? > > You can run nodetool repair with option -local to initiate repair local > datacenter for gce-us-central1. > > Also

Re: Increasing replication factor and repair doesn't seem to work

2016-05-24 Thread Bhuvan Rawal
Hi Luke, You mentioned that replication factor was increased from 1 to 2. In that case was the node bearing ip 10.128.0.20 carried around 3GB data earlier? You can run nodetool repair with option -local to initiate repair local datacenter for gce-us-central1. Also you may suspect that if a lot
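
The DC-local variants, for reference (keyspace name hypothetical; flag spellings as in that nodetool line, so check --help on your version):

  # repair only within the datacenter of the node you run this on
  nodetool repair -local my_keyspace
  # or name the datacenter explicitly, as in the original post
  nodetool repair -dc gce-us-central1 my_keyspace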

Re: Increasing replication factor and repair doesn't seem to work

2016-05-24 Thread Luke Jolly
t; On 23 May 2016 at 19:31, Luke Jolly wrote: > >> I am running 3.0.5 with 2 nodes in two DCs, gce-us-central1 and >> gce-us-east1. I increased the replication factor of gce-us-central1 from 1 >> to 2. Then I ran 'nodetool repair -dc gce-us-central1'. The "Own

Re: Increasing replication factor and repair doesn't seem to work

2016-05-23 Thread kurt Greaves
t1. I increased the replication factor of gce-us-central1 from 1 > to 2. Then I ran 'nodetool repair -dc gce-us-central1'. The "Owns" for > the node switched to 100% as it should but the Load showed that it didn't > actually sync the data. I then ran a full

Increasing replication factor and repair doesn't seem to work

2016-05-23 Thread Luke Jolly
I am running 3.0.5 with 2 nodes in two DCs, gce-us-central1 and gce-us-east1. I increased the replication factor of gce-us-central1 from 1 to 2. Then I ran 'nodetool repair -dc gce-us-central1'. The "Owns" for the node switched to 100% as it should but the Load showed tha

Re: Replication Factor Change

2015-11-05 Thread Yulian Oifa
hange your > availability model). > > On Thu, Nov 5, 2015 at 8:01 AM Yulian Oifa wrote: > >> Hello to all. >> I am planning to change replication factor from 1 to 3. >> Will it cause data read errors in time of nodes repair? >> >> Best regards >> Yulian Oifa >> >

RE: Replication Factor Change

2015-11-05 Thread aeljami.ext
Hello, If the current CL = ONE, be careful on production at the time of changing the replication factor: 3 nodes will be queried while data is being transformed ==> so data read errors! From: Yulian Oifa [mailto:oifa.yul...@gmail.com] Sent: Thursday, 5 November 2015 16:02 To: user@cassandra.apache.

Re: Replication Factor Change

2015-11-05 Thread Eric Stevens
e for a node failure, so that doesn't really change your availability model). On Thu, Nov 5, 2015 at 8:01 AM Yulian Oifa wrote: > Hello to all. > I am planning to change replication factor from 1 to 3. > Will it cause data read errors in time of nodes repair? > > Best regards > Yulian Oifa >

Replication Factor Change

2015-11-05 Thread Yulian Oifa
Hello to all. I am planning to change replication factor from 1 to 3. Will it cause data read errors in time of nodes repair? Best regards Yulian Oifa

Re: Re : will Unsafeassaniate a dead node maintain the replication factor

2015-11-01 Thread sai krishnam raju potturi
tion of 3 is maintained? >>>>> >>>>> On Sat, Oct 31, 2015, 11:14 Surbhi Gupta >>>>> wrote: >>>>> >>>>>> You have to do few things before unsafe as sanitation . First run the >>>>>> nodetool decom

Re: Re : will Unsafeassaniate a dead node maintain the replication factor

2015-10-31 Thread Surbhi Gupta
removenode did not work. We did not >>>>>>> capture the tokens of the dead node. Any way we could make sure the >>>>>>> replication of 3 is maintained? >>>>>>> >>>>>>> >>>>>>>> On Sat,

Re: Re : will Unsafeassaniate a dead node maintain the replication factor

2015-10-31 Thread sai krishnam raju potturi
sure the >>>> replication of 3 is maintained? >>>> >>>> On Sat, Oct 31, 2015, 11:14 Surbhi Gupta >>>> wrote: >>>> >>>>> You have to do few things before unsafe as sanitation . First run the >>>>> nodetool decommissio

Re: Re : will Unsafeassaniate a dead node maintain the replication factor

2015-10-31 Thread Surbhi Gupta
2015, 11:14 Surbhi Gupta >>>>>> wrote: >>>>>> You have to do few things before unsafe as sanitation . First run the >>>>>> nodetool decommission if the node is up and wait till streaming happens >>>>>> . You can check is th

Re: Re : will Unsafeassaniate a dead node maintain the replication factor

2015-10-31 Thread sai krishnam raju potturi
ned? >>> >>> On Sat, Oct 31, 2015, 11:14 Surbhi Gupta >>> wrote: >>> >>>> You have to do few things before unsafe as sanitation . First run the >>>> nodetool decommission if the node is up and wait till streaming happens . >>>>

Re: Re : will Unsafeassaniate a dead node maintain the replication factor

2015-10-31 Thread Surbhi Gupta
detool decommission if the node is up and wait till streaming happens . >>> You can check is the streaming is completed by nodetool netstats . If >>> streaming is completed you can do unsafe assanitation . >>> >>> To answer your question unsafe assanitation will

Re: Re : will Unsafeassaniate a dead node maintain the replication factor

2015-10-31 Thread sai krishnam raju potturi
ion . >> >> To answer your question unsafe assanitation will not take care of >> replication factor . >> It is like forcing a node out from the cluster . >> >> Hope this helps. >> >> Sent from my iPhone >> >> > On Oct 31, 2015, at 5:

Re: Re : will Unsafeassaniate a dead node maintain the replication factor

2015-10-31 Thread Surbhi Gupta
; You can check is the streaming is completed by nodetool netstats . If >> streaming is completed you can do unsafe assanitation . >> >> To answer your question unsafe assanitation will not take care of >> replication factor . >> It is like forcing a node out from the c

Re: Re : will Unsafeassaniate a dead node maintain the replication factor

2015-10-31 Thread sai krishnam raju potturi
the > nodetool decommission if the node is up and wait till streaming happens . > You can check is the streaming is completed by nodetool netstats . If > streaming is completed you can do unsafe assanitation . > > To answer your question unsafe assanitation will not take care of > replica

Re: Re : will Unsafeassaniate a dead node maintain the replication factor

2015-10-31 Thread Surbhi Gupta
unsafe assassination will not take care of replication factor. It is like forcing a node out from the cluster. Hope this helps. Sent from my iPhone > On Oct 31, 2015, at 5:12 AM, sai krishnam raju potturi > wrote: > > hi; > would unsafe-assassinating a dead node maintain
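
A sketch of the order of preference described above (nodetool assassinate exists from 2.2 on; older versions expose the same operation only through the Gossiper JMX MBean):

  # live node: let it stream its data to the remaining replicas first
  nodetool decommission              # run on the node being removed
  nodetool netstats                  # watch until streaming has completed
  # dead node: removenode re-replicates its ranges from the surviving replicas
  nodetool removenode <host-id>      # host ID as shown by 'nodetool status'
  # last resort only: drops the node without re-replicating, so RF is NOT preserved
  nodetool assassinate <ip-address>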

Re : will Unsafeassaniate a dead node maintain the replication factor

2015-10-31 Thread sai krishnam raju potturi
hi; would unsafe-assassinating a dead node maintain the replication factor like the decommission or removenode process? thanks

Re: Re : Replication factor for system_auth keyspace

2015-10-16 Thread sai krishnam raju potturi
thanks guys for the advice. We were running parallel repairs earlier, with cassandra version 2.0.14. As pointed out, having set the replication factor really high for system_auth was causing the repair to take really long. thanks Sai On Fri, Oct 16, 2015 at 9:56 AM, Victor Chen wrote: >

Re: Re : Replication factor for system_auth keyspace

2015-10-16 Thread Victor Chen
DC. >> For the system_auth keyspace, what should be the ideal replication_factor >> set? >> >> We tried setting the replication factor equal to the number of nodes in a >> datacenter, and the repair for the system_auth keyspace took really long. >> Your suggestions would be of great help. >> > > More than 1 and a lot less than 48. > > =Rob > >

Re: Re : Replication factor for system_auth keyspace

2015-10-15 Thread Robert Coli
On Thu, Oct 15, 2015 at 10:24 AM, sai krishnam raju potturi < pskraj...@gmail.com> wrote: > we are deploying a new cluster with 2 datacenters, 48 nodes in each DC. > For the system_auth keyspace, what should be the ideal replication_factor > set? > > We tried setting t

Re : Replication factor for system_auth keyspace

2015-10-15 Thread sai krishnam raju potturi
hi; we are deploying a new cluster with 2 datacenters, 48 nodes in each DC. For the system_auth keyspace, what should be the ideal replication_factor set? We tried setting the replication factor equal to the number of nodes in a datacenter, and the repair for the system_auth keyspace took

Run which repair cmd when increasing replication factor

2015-03-06 Thread 曹志富
I want to increase the replication factor in my C* 2.1.3 cluster (RF change from 2 to 3 for some keyspaces). I read the doc on Updating the replication factor <http://www.datastax.com/documentation/cql/3.1/cql/cql_using/update_ks_rf_t.html>. Step two is to run nodetool repair. But as

Re: Changing replication factor of Cassandra cluster

2015-01-06 Thread Robert Coli
On Tue, Jan 6, 2015 at 4:40 PM, Pranay Agarwal wrote: > Thanks Robert. Also, I have seen the node-repair operation to fail for > some nodes. What are the chances of the data getting corrupt if node-repair > fails? > If repair does not complete before gc_grace_seconds, chance of data getting corr
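
For context, gc_grace_seconds is a per-table setting and the usual guidance is to finish repairs within that window; a sketch of checking and widening it (table name hypothetical):

  -- show the current table options, including gc_grace_seconds
  DESCRIBE TABLE my_keyspace.my_table;
  -- default is 864000 s (10 days); widen it if repairs routinely take longer
  ALTER TABLE my_keyspace.my_table WITH gc_grace_seconds = 1209600;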

Re: Changing replication factor of Cassandra cluster

2015-01-06 Thread Pranay Agarwal
Thanks Robert. Also, I have seen the node-repair operation fail for some nodes. What are the chances of the data getting corrupt if node-repair fails? I am okay with data availability issues for some time as long as I don't lose or corrupt data. Also, is there a way to restore the graph without h
