Hi,
On Fri, May 17, 2024 at 6:18 PM Jon Haddad wrote:
> I strongly suggest you don't use materialized views at all. There are
> edge cases that in my opinion make them unsuitable for production, both in
> terms of cluster stability as well as data integrity.
>
Oh, there is already an open and
Gábor AUTH
Hi,
I know, I know, the materialized view is experimental... :)
So, I ran into a strange error. Among others, I have a very small 4-node
cluster with very minimal data (~100 MB in total), the keyspace's
replication factor is 3, and everything works fine... except: if I restart a
node, I get
Thank you Dipan, it makes sense now.
Cheers,
Anurag
On Sun, Jul 16, 2023 at 12:43 AM Dipan Shah wrote:
Hello Anurag,
In Cassandra, strong consistency is guaranteed when "R + W > N", where R is
the read consistency, W is the write consistency and N is the replication factor.
So in your case, R(2) + W(1) = 3, which is NOT greater than your replication
factor (3), so you will not be able to guarantee strong consistency.
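To make the arithmetic concrete, here is a minimal cqlsh sketch (keyspace and
table names are invented for illustration): reading and writing at QUORUM
against RF=3 gives R + W = 2 + 2 = 4 > 3, so every read overlaps at least one
replica that acknowledged the write.

    -- assumes: CREATE KEYSPACE demo WITH replication =
    --          {'class': 'NetworkTopologyStrategy', 'dc1': 3};
    --          CREATE TABLE demo.kv (k text PRIMARY KEY, v text);
    CONSISTENCY QUORUM;                           -- applies to the statements below
    INSERT INTO demo.kv (k, v) VALUES ('a', '1'); -- W: 2 of 3 replicas must ack
    SELECT v FROM demo.kv WHERE k = 'a';          -- R: 2 of 3 replicas must answer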
Thank you Jeff,
it makes more sense now. How about if I write with ONE consistency,
replication factor = 3, and read consistency QUORUM? I am guessing that in
that case I will not get the empty read even if the read happens
immediately after the write request. Let me know your thoughts?
Cheers,
Anurag
Consistency level controls when queries acknowledge/succeed.
Replication factor is where data lives / how many copies.
If you write at consistency ONE and replication factor 3, the query finishes
successfully when the write is durable on one of the 3 copies.
It will get sent to all 3, but it'll only wait for one of them to acknowledge.
Hello Users,
I am new to Cassandra and trying to understand its architecture. If I
write to ONE node for a particular key and have a replication factor of 3,
will the written key get replicated to the other two nodes? Let me
know if I am thinking about this incorrectly.
Thanks,
Anurag
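For illustration, a small cqlsh sketch of the point made in the replies above
(all names are placeholders): the replication factor is a property of the
keyspace, so the write below is still sent to all three replicas; CONSISTENCY
ONE only controls how many acknowledgements the coordinator waits for before
reporting success.

    CREATE KEYSPACE demo
      WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
    CREATE TABLE demo.kv (k text PRIMARY KEY, v text);
    CONSISTENCY ONE;                                  -- wait for one ack only
    INSERT INTO demo.kv (k, v) VALUES ('key1', 'v1'); -- still replicated to 3 nodes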
Replication Factor
The most likely explanation is that repair failed and you didn't notice.
Or that you didn't actually repair every host / every range.
Which version are you using?
How did you run repair?
On Tue, Oct 12, 2021 at 4:33 AM Isaeed Mohanna <isa...@xsense.co> wrote:
> Hi
> request will actually return a correct result?
>
> Thanks
>
> *From:* Bowen Song
> *Sent:* Monday, October 11, 2021 5:13 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: Trouble After Changing Replication Factor
*From:* Bowen Song
*Sent:* Monday, October 11, 2021 5:13 PM
*To:* user@cassandra.apache.org
*Subject:* Re: Trouble After Changing Replication Factor
You have RF=3 and both read & write CL=1, which means you are asking
Cassandra to give up strong consistency in order to gain higher
availability and perhaps slightly faster s
write CL) > RF.
On 10/10/2021 11:55, Isaeed Mohanna wrote:
Hi
We had a cluster with 3 nodes with replication factor 2 and we were using
reads with consistency level ONE.
We recently added a 4th node and changed the replication factor to 3; once this
was done, apps reading from the DB with CL1 would receive an empty record. Looking
around, I was surprised to
If you run a full repair then it should be fine, since all the replicas are
present on all the nodes. If you are using the -pr option then you need to run
it on all the nodes.
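For reference, a rough sketch of the two options being discussed (the keyspace
name is a placeholder; on 2.2+ plain "nodetool repair" defaults to incremental,
hence the explicit -full):

    # full repair of one keyspace; with RF = number of nodes, one node holds
    # every replica, so a single run covers all the data
    nodetool repair -full my_keyspace

    # primary-range repair: cheaper per run, but must then be run on every node
    nodetool repair -full -pr my_keyspace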
On Tue, Oct 27, 2020 at 4:11 PM Fred Al wrote:
Hello!
Running Cassandra 2.2.9 with a 4 node cluster with replication factor 4.
When running anti-entropy repair, is it required to run repair on all 4
nodes, or is it sufficient to run it on only one node?
Since all data is replicated on all nodes, i.m.o. only one node would need
to be repaired to
Sent: Tuesday, May 26, 2020 11:33 PM
To: user@cassandra.apache.org
Subject: Re: any risks with changing replication factor on live production
cluster without downtime and service interruption?
By retry logic, I’m going to guess you are doing some kind of version
consistency trick where you have a
to LOCAL_QUORUM until you’re done to buffer yourself from that risk.
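Pulling the advice in this thread together, a hedged sketch of the sequence
(keyspace and DC names are placeholders, not taken from the original mails):
switch reads up to LOCAL_QUORUM first, raise the RF, repair, and only then
consider dropping the read level back down.

    -- application side: move reads from LOCAL_ONE to LOCAL_QUORUM first

    -- cqlsh: raise the replication factor per DC
    ALTER KEYSPACE my_ks
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 5, 'DC2': 5};

    # shell: stream the new replicas into place (run per node)
    nodetool repair -full my_ks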
From: Leena Ghatpande
Reply-To: "user@cassandra.apache.org"
Date: Tuesday, May 26, 2020 at 1:20 PM
To: "user@cassandra.apache.org"
Subject: Re: any risks with changing replication factor on live production
cluster without downtime and service interruption?
From: Leena Ghatpande
Sent: Friday, May 22, 2020 11:51 AM
To: cassandra cassandra
Subject: any risks with changing replication factor on live production cluster
without downtime and service interruption?
We are on Cassandra 3.7 and have a 12 node cluster , 2DC, with 6 nodes in
On Fri, May 22, 2020 at 9:51 PM Jeff Jirsa wrote:
> With those consistency levels it’s already possible you don’t see your
> writes, so you’re already probably seeing some of what would happen if you
> went to RF=5 like that - just less common
>
> If you did what you describe you’d have a 40% cha
and thinking of changing
> the replication factor to 5 for each DC.
>
> Our application uses the below consistency level
> read-level: LOCAL_ONE
> write-level: LOCAL_QUORUM
>
> if we change the RF=5 on live cluster, and run full repairs, would we see
> read/write errors w
We are on Cassandra 3.7 and have a 12 node cluster, 2 DCs, with 6 nodes in each
DC. RF=3
We have around 150M rows across tables.
We are planning to add more nodes to the cluster, and thinking of changing the
replication factor to 5 for each DC.
Our application uses the below consistency level
> I suspect some of the intermediate queries (determining role, etc) happen at
> quorum in 2.2+, but I don’t have time to go read the code and prove it.
This isn’t true. Aside from when using the default superuser, only
CRM::getAllRoles reads at QUORUM (because the resultset would include the
On Fri, Nov 23, 2018 at 5:38 PM Vitali Dyachuk wrote:
Sad
Attaching the runner log snippet, where we can see that "Rebuilding token
map" took most of the time.
getAllRoles is using quorum, don't know if it is used during login
https://github.com/apache/cassandra/blob/cc12665bb7645d17ba70edcf952ee6a1ea63127b/src/java/org/apache/cassandra/auth/CassandraRoleManag
I suspect some of the intermediate queries (determining role, etc) happen at
quorum in 2.2+, but I don’t have time to go read the code and prove it.
In any case, RF > 10 per DC is probably excessive
Also want to crank up the validity times so it uses cached info longer
--
Jeff Jirsa
> On N
No, it's not the cassandra user and, as I understood, all other users log in
with LOCAL_ONE.
On Fri, 23 Nov 2018, 19:30 Jonathan Haddad wrote:
> Any chance you're logging in with the Cassandra user? It uses quorum
> reads.
Hi,
We have recently met a problem when we added 60 nodes in 1 region to the
cluster
and set an RF=60 for the system_auth ks, following this documentation
https://docs.datastax.com/en/cql/3.3/cql/cql_using/useUpdateKeyspaceRF.html
However we've started to see increased login latencies in the cluste
Data by Key, but not for searching. High
Availability is a nice giveaway here.
If you end up having only one table in C*, maybe something like Redis would
work for your needs, too.
Some hints from my own experience with it - if you choose to use Cassandra:
Have at least as many racks as Replication F
adjacent racks, the link between the
two racks goes down, but both are otherwise functional - a query at ONE in
either rack would be able to read and write data, but it would diverge
between the two racks for some period of time).
>
> When I go to set up the database though, I am requi
around 1s, then there shouldn’t be an issue.
When I go to set up the database though, I am required to set a replication
factor to a number - 1,2,3,etc. So I can’t just say “ALL” and have it replicate
to all nodes. Right now, I have a 2 node cluster with replication factor 3.
Will this cause
Run repair first to ensure the data is properly replicated, then cleanup.
--
Jeff Jirsa
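A rough sketch of that order of operations for shrinking the RF (keyspace and
DC names are placeholders; on a 2.0-era cluster a plain nodetool repair is
already a full repair):

    # 1. make sure the surviving replicas are consistent
    nodetool repair my_ks            # run on each node

    -- 2. lower the replication factor
    ALTER KEYSPACE my_ks
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 2};

    # 3. drop the data each node no longer owns
    nodetool cleanup my_ks           # run on each node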
> On Jan 9, 2018, at 9:36 AM, Alessandro Pieri wrote:
Dear Everyone,
We are running Cassandra v2.0.15 on our production cluster.
We would like to reduce the replication factor from 3 to 2, but we are not
sure if it is a safe operation. We would like to get some feedback from you
guys.
Has anybody tried to shrink the replication factor?
Does
This is just a basic question, but it is worth asking.
We changed the replication factor from 2 to 3 in our production cluster. We have 2
data centers.
Is nodetool repair -dcpar from a single node in one data center sufficient
for the whole replication change to take effect? Please confirm.
Do I
Regardless, if you are not modifying users frequently (with five you most
likely are not), make sure to turn the permission cache waaay up.
In 2.1 that is just: permissions_validity_in_ms (default is 2000, or 2
seconds). Feel free to set it to 1 day or some such. The corresponding
async update para
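For reference, a sketch of what turning the cache "way up" looks like in
cassandra.yaml (the values are illustrative, and the async refresh option named
here is my assumption about which parameter the truncated sentence refers to):

    # cassandra.yaml (2.1-era option names)
    permissions_validity_in_ms: 86400000        # cache entries valid for 1 day
    permissions_update_interval_in_ms: 3600000  # assumed: background refresh hourly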
> REQUEST_RESPONSE message received from
> /xx.xx.xx.116 [MessagingService-Incoming-/xx.xx.xx.116] | 2017-08-30
> 10:51:25.015000 | xx.xx.xx.113 | 601771
>
> Processing re
For that many nodes mixed with vnodes you probably want a lower RF than N
per datacenter. 5 or 7 would be reasonable. The only down side is that auth
queries may take slightly longer as they will often have to go to other
nodes to be resolved, but in practice this is likely not a big deal as the
da
Request complete | 2017-08-30 10:51:25.014874 | xx.xx.xx.113 | 601874
From: Oleksandr Shulgin
Reply-To: "user@cassandra.apache.org"
Date: Wednesday, August 30, 2017 at 10:42 AM
To: User
Subject: Re: system_auth replication
On Wed, Aug 30, 2017 at 6:40 PM, Chuck Reynolds
wrote:
> How many users do you have (or expect to be found in system_auth.users)?
>
> 5 users.
>
> What are the current RF for system_auth and consistency level you are
> using in cqlsh?
>
> 135 in one DC and 227 in the other DC. Consistency lev
TRACING ON)?
Tracing timeout even though I increased it to 120 seconds.
From: Oleksandr Shulgin
Reply-To: "user@cassandra.apache.org"
Date: Wednesday, August 30, 2017 at 10:19 AM
To: User
Subject: Re: system_auth replication factor in Cassandra 2.1
On Wed, Aug 30, 2017 at 5:50 PM, Chuc
On Wed, Aug 30, 2017 at 6:20 PM, Chuck Reynolds wrote:
> So I tried to run a repair with the following on one of the servers.
>
> nodetool repair system_auth -pr -local
>
> After two hours it hadn't finished. I had to kill the repair because of
> another issue and haven't tried again.
if I set the RF back to a lower number like 5?
Thanks
From: on behalf of Sam Tunnicliffe
Reply-To: "user@cassandra.apache.org"
Date: Wednesday, August 30, 2017 at 10:10 AM
To: "user@cassandra.apache.org"
Subject: Re: system_auth replication factor in Cassandra 2.1
It'
tup of your own users &
superusers, the link above also has info on this.
Thanks,
Sam
into a secure cluster, set
the replication factor of the system_auth and dse_security keyspaces to a value
that is greater than 1. In a multi-node cluster, using the default of 1
prevents logging into any node when the node that stores the user data is down.
From: Chuck Reynolds [mailto:creyno
So I've read that if you're using authentication in Cassandra 2.1, your
replication factor should match the number of nodes in your datacenter.
Is that true?
I have a two-datacenter cluster, 135 nodes in datacenter 1 & 227 nodes in an AWS
datacenter.
Why do I want to replicate the system_
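For context, the replies earlier in this thread point at a much smaller per-DC
value plus a repair of the auth keyspace; a hedged sketch (DC names are
placeholders, and 5 is simply the "reasonable" figure mentioned in the thread):

    ALTER KEYSPACE system_auth
      WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 5, 'dc2': 5};

    # then, on each node
    nodetool repair system_auth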
ng and possible OOM
> bugs due to metadata writing at end of streaming (sorry don't have ticket
> handy). I'm worried I might not be able to do much with these since the
> disk space usage is high and they are under a lot of load given the small
> number of them for this rack.
Thanks Kurt.
We had one sstable from a cf of ours. I am actually running a repair on
that cf now and then plan to try and join the additional nodes as you
suggest. I deleted the opscenter corrupt sstables as well but will not
bother repairing that before adding capacity.
Been keeping an eye acr
On 14 Aug. 2017 00:59, "Brian Spindler" wrote:
Do you think with the setup I've described I'd be ok doing that now to
recover this node?
The node died trying to run the scrub; I've restarted it but I'm not sure
it's going to get past a scrub/repair, this is why I deleted the other
files as a bru
>>> secondary index build. Hard to say for
>>> sure. 'nodetool compactionstats' if you're able to provide it. The jstack
>>> probably not necessary, streaming is being marked as failed and it's
>>> turning itself off. Not sure why streaming is marked as failing, though,
>>> anything on the sending sides?
From: Brian Spindler
Reply-To:
Date: Saturday, August 12, 2017 at 6:34 PM
To:
Subject: Re: Dropping down replication factor
Thanks for replying Jeff.
Responses below.
On Sat, Aug 12, 2017 at
RANGE_SLICE 15
_TRACE 0
MUTATION 2949001
COUNTER_MUTATION 0
BINARY 0
REQUEST_RESPONSE 0
PAGED_RANGE 0
READ_REPAIR 8571
I can get a jstack if needed.
>
> >
>
not busy doing something? Like
building secondary index or similar? jstack thread dump would be useful, or at
least nodetool tpstats
>
> Rather than troubleshoot this further, what I was thinking about doing was:
> - drop the replication factor on our keyspace to two
Repair before y
what I was thinking about doing was:
- drop the replication factor on our keyspace to two
- hopefully this would reduce load on these two remaining nodes
- run repairs/cleanup across the cluster
- then shoot these two nodes in the 'c' rack
- run repairs/cleanup across the cluster
W
Great explanation!
For the single partition read, it makes sense to read data from only one
replica.
Thank you so much Ben!
Jun
From: ben.sla...@instaclustr.com
Date: Tue, 20 Sep 2016 05:30:43 +
Subject: Re: Question about replica and replication factor
To: wuxiaomi...@hotmail.com
CC: user
words in the post shows that the coordinator only
>> contact/read data from one replica, and operate read repair for the left
>> replicas.
>>
>> Also, how could a read go across all nodes in the cluster?
>>
>> Thanks!
>>
>> Jun
>
Jun
From: ben.sla...@instaclustr.com
Date: Tue, 20 Sep 2016 04:18:59 +
Subject: Re: Question about replica and replication factor
To: user@cassandra.apache.org
Each individual read (where a read is a single row or single partition) will
read from one node (ignoring read repairs) as each partiti
distributed across all the nodes in your cluster).
Cheers
Ben
On Tue, 20 Sep 2016 at 14:09 Jun Wu wrote:
Hi there,
I have a question about the replica and replication factor.
For example, I have a cluster of 6 nodes in the same data center.
Replication factor RF is set to 3 and the consistency level is default 1.
According to this calculator http://www.ecyrd.com/cassandracalculator
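As a way to see this for yourself, a small sketch (keyspace, table and key are
invented): with RF=3 on a 6-node cluster each partition key maps to exactly 3
replicas, and a CL ONE read only needs to contact one of them.

    # print the replica nodes that own partition key 'user42' in demo.users
    nodetool getendpoints demo users user42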
>>>>>> when I added it in it never synced data I
>>>>>> guess? It was at around 50 MB when it first came up and transitioned to
>>>>>> "UN". After it was in I did the 1->2 replication change and tried repair
>>>>>> but it didn't fix it. From what I can tell all the data on it is stuff
>>>>>> that has been written since it came up. We never del
>>>>>> |/ State=Normal/Leaving/Joining/Moving
>>>>>> --  Address      Load    Tokens  Owns (effective)  Host ID  Rack
>>>>>> UN  10.142.0.14  6.4 GB  256     100.0%            c3a5c39d-e1c9-4
> lower and then of course 10.128.0.20 which is missing over
> 5 GB of data. I tried running nodetool -local on both DCs and it didn't
> fix either one.
>
> Am I running into a bug of some kind?
>
> On Tue, May 24, 2016 at 4:06 PM Bhuvan Rawal wrote:
Hi Luke,
You mentioned that the replication factor was increased from 1 to 2. In that
case, was the node bearing IP 10.128.0.20 carrying around 3GB of data earlier?
You can run nodetool repair with the -local option to initiate repair for the
local datacenter gce-us-central1.
Also you may suspect that if a lot
On 23 May 2016 at 19:31, Luke Jolly wrote:
I am running 3.0.5 with 2 nodes in two DCs, gce-us-central1 and
gce-us-east1. I increased the replication factor of gce-us-central1 from 1
to 2. Then I ran 'nodetool repair -dc gce-us-central1'. The "Owns" for
the node switched to 100% as it should but the Load showed that it didn't
actually sync the data. I then ran a full
Hello,
If the current CL = ONE, be careful in production at the time of changing the
replication factor: 3 nodes will be queried while the data is still being
replicated ==> so data read errors!
From: Yulian Oifa [mailto:oifa.yul...@gmail.com]
Sent: Thursday, 5 November 2015 16:02
To: user@cassandra.apache.
e for a node failure, so that doesn't really change your
availability model).
On Thu, Nov 5, 2015 at 8:01 AM Yulian Oifa wrote:
Hello to all.
I am planning to change the replication factor from 1 to 3.
Will it cause data read errors while the nodes are being repaired?
Best regards
Yulian Oifa
removenode did not work. We did not capture the tokens of the dead node. Any
way we could make sure the replication of 3 is maintained?
On Sat, Oct 31, 2015, 11:14 Surbhi Gupta wrote:
> You have to do a few things before unsafe assassination. First run
> nodetool decommission if the node is up and wait till streaming happens.
> You can check if the streaming is completed by nodetool netstats. If
> streaming is completed you can do unsafe assassination.
>
> To answer your question, unsafe assassination will not take care of the
> replication factor.
> It is like forcing a node out from the cluster.
>
> Hope this helps.
>
> Sent from my iPhone
On Oct 31, 2015, at 5:12 AM, sai krishnam raju potturi wrote:
hi;
would unsafe-assassinating a dead node maintain the replication factor
like the decommission or removenode process does?
thanks
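As a reference for the sequence Surbhi describes above (the host ID and IP are
placeholders; a "nodetool assassinate" command only exists in newer releases,
older ones expose it as the JMX unsafeAssassinateEndpoint operation):

    # node still up: decommission streams its data away before it leaves
    nodetool decommission
    nodetool netstats                # watch streaming progress

    # node already dead: removenode re-replicates its ranges from surviving replicas
    nodetool removenode <host-id-from-nodetool-status>

    # last resort only; it does NOT re-replicate anything, so follow with a repair
    nodetool assassinate 10.0.0.12
    nodetool repair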
thanks guys for the advice. We were running parallel repairs earlier, with
cassandra version 2.0.14. As pointed out having set the replication factor
really huge for system_auth was causing the repair to take really long.
thanks
Sai
On Fri, Oct 16, 2015 at 9:56 AM, Victor Chen wrote:
>
> More than 1 and a lot less than 48.
>
> =Rob
>
On Thu, Oct 15, 2015 at 10:24 AM, sai krishnam raju potturi <
pskraj...@gmail.com> wrote:
hi;
we are deploying a new cluster with 2 datacenters, 48 nodes in each DC.
For the system_auth keyspace, what should be the ideal replication_factor
set?
We tried setting the replication factor equal to the number of nodes in a
datacenter, and the repair for the system_auth keyspace took really long.
Your suggestions would be of great help.
I want to increase the replication factor in my C* 2.1.3 cluster (RF change from
2 to 3 for some keyspaces).
I read the doc of Updating the replication factor
<http://www.datastax.com/documentation/cql/3.1/cql/cql_using/update_ks_rf_t.html>
.
Step two is to run nodetool repair. But as
On Tue, Jan 6, 2015 at 4:40 PM, Pranay Agarwal
wrote:
> Thanks Robert. Also, I have seen the node-repair operation to fail for
> some nodes. What are the chances of the data getting corrupt if node-repair
> fails?
>
If repair does not complete before gc_grace_seconds, chance of data getting
corr
Thanks Robert. Also, I have seen the node-repair operation fail for some
nodes. What are the chances of the data getting corrupt if node-repair
fails? I am okay with data availability issues for some time as long as I
don't lose or corrupt data. Also, is there a way to restore the graph
without h