Hi Ben
Thanks a lot. From my analysis of the code it looks like you are right.
When global read repair kicks in all live endpoints are queried for data,
regardless of consistency level. Only EACH_QUORUM is treated differently.
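To make the behaviour above concrete, here is a toy sketch (not Cassandra's actual code; the function and parameter names are illustrative assumptions) of how a coordinator could pick target endpoints when global read repair fires:

```python
# Hypothetical sketch of endpoint selection under global read repair.
# live_endpoints and replication_factor_per_dc are made-up names.

def endpoints_for_read(live_endpoints, consistency, replication_factor_per_dc):
    """live_endpoints: dict mapping datacenter -> list of live replicas.

    With global read repair, every live endpoint is queried regardless of
    consistency level; EACH_QUORUM is the one exception, where only a
    quorum per datacenter is contacted.
    """
    if consistency == "EACH_QUORUM":
        result = []
        for dc, nodes in live_endpoints.items():
            quorum = replication_factor_per_dc[dc] // 2 + 1
            result.extend(nodes[:quorum])
        return result
    # Any other consistency level: query all live replicas in all DCs.
    return [n for nodes in live_endpoints.values() for n in nodes]

live = {"dc1": ["a", "b", "c"], "dc2": ["d", "e", "f"]}
rf = {"dc1": 3, "dc2": 3}
print(endpoints_for_read(live, "ONE", rf))          # all six replicas
print(endpoints_for_read(live, "EACH_QUORUM", rf))  # two per DC
```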
Cheers
Grzegorz
2018-04-22 1:45 GMT+02:00 Ben Slater :
I haven't checked the code to make sure this is still the case, but last
time I checked:
- For any read, if an inconsistency between replicas is detected then this
inconsistency will be repaired. This obviously wouldn’t apply with CL=ONE
because you’re not reading multiple replicas to find inconsistencies.
I haven't asked about "regular" repairs. I just wanted to know how read
repair behaves in my configuration (or whether it is doing anything at all).
2018-04-21 14:04 GMT+02:00 Rahul Singh :
Read repairs are one anti-entropy measure. Continuous repair is another. If
you do repairs via Reaper or your own method it will resolve your discrepancies.
On Apr 21, 2018, 3:16 AM -0400, Grzegorz Pietrusza wrote:
> Hi all
>
> I'm a bit confused with how read repair works in my case, which i
8 Jul 2015 15:06:46 -0700
Subject: Re: Read Repair
From: rc...@eventbrite.com
To: user@cassandra.apache.org; naidusp2...@yahoo.com
On Wed, Jul 8, 2015 at 2:07 PM, Saladi Naidu wrote:
> Suppose I have a row of existing data with set of values for attributes I
> call this State1, and issue an update to some columns with Quorum
> consistency. If the write succeeded on one node, Node1, and failed
> on remaining nodes. As
The request would return with the latest data.
The read request would fire against node 1 and node 3. The coordinator would
get answers from both and would merge the answers and return the latest.
Then read repair might run to update node 3.
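The merge-then-repair step described above can be sketched as a toy model (not the real Cassandra read path; replica answers are represented as made-up (value, write_timestamp) pairs):

```python
# Toy model: the coordinator merges replica answers by timestamp,
# returns the newest value, and notes which replicas need read repair.

def quorum_read(replicas):
    """Return the newest value and the list of replicas whose copy is
    stale and would receive a read-repair write."""
    newest = max(replicas.values(), key=lambda vt: vt[1])
    stale = [node for node, vt in replicas.items() if vt[1] < newest[1]]
    return newest[0], stale

# Node 1 took the newer write; node 3 still has the old value.
answers = {"node1": ("new", 10), "node3": ("old", 5)}
value, to_repair = quorum_read(answers)
print(value)      # "new" -- the coordinator returns the latest data
print(to_repair)  # ["node3"] -- read repair then updates node 3
```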
QUORUM does not take into consideration whether an
On Wed, Nov 19, 2014 at 4:51 PM, Jimmy Lin wrote:
>
> #
> When you said send "read digest request" to the rest of the replica, do
> you mean all replica(s) in current and other DC? or just the one last
> replica in my current DC and one of the co-ordinate node in other DC?
>
> (our read and write
Tyler,
thanks for the detailed explanation.
I still have a few questions in my mind.
#
When you said send "read digest request" to the rest of the replica, do you
mean all replica(s) in current and other DC? or just the one last replica
in my current DC and one of the co-ordinate node in other DC?
(
On Sun, Nov 16, 2014 at 5:13 PM, Jimmy Lin wrote:
> I have read that read repair is supposed to run in the background, but
> does the co-ordinator node need to wait for the response (along with other
> normal read tasks) before returning the entire result back to the caller?
>
For the 10% of request
Yes, it helps. Thanks
--- Original Message ---
From: "Aaron Morton"
Sent: October 31, 2013 3:51 AM
To: "Cassandra User"
Subject: Re: Read repair
(assuming RF 3 and NTS is putting a replica in each rack)
> Rack1 goes down and some writes happen in quorum against ra
> mins, there is no quorum until failed rack comes back up.
>
> Hope this explains the scenario.
> From: Aaron Morton
> Sent: 10/28/2013 2:42 AM
> To: Cassandra User
> Subject: Re: Read repair
>
>> As soon as it came back up, due to some human error, rack1 goes
hour and 30 mins,
there is no quorum until failed rack comes back up.
Hope this explains the scenario.
From: Aaron Morton<mailto:aa...@thelastpickle.com>
Sent: 10/28/2013 2:42 AM
To: Cassandra User<mailto:user@cassandra.apache.org>
Subject: Re: Read
> As soon as it came back up, due to some human error, rack1 goes down. Now for
> some rows it is possible that Quorum cannot be established.
Not sure I follow here.
If the first rack has come up, I assume all nodes are available; if you then
lose a different rack, I assume you have 2/3 of the n
> CL.ONE : this is primarily for performance reasons …
This makes reasoning about "correct" behaviour a little harder.
If there is any way you can run some tests with R + W > N strong consistency, I
would encourage you to do so. You will then have a baseline of what works.
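The R + W > N rule above can be checked with a few lines of arithmetic: with replication factor N, a read of R replicas is guaranteed to overlap a write of W replicas whenever R + W > N, so the read always sees the latest acknowledged write.

```python
# Minimal sketch of the R + W > N strong-consistency rule.

def quorum(n):
    """Quorum size for replication factor n."""
    return n // 2 + 1

def is_strongly_consistent(r, w, n):
    """True when every read set must intersect every write set."""
    return r + w > n

N = 3
R = W = quorum(N)                       # QUORUM write + QUORUM read
print(is_strongly_consistent(R, W, N))  # True: 2 + 2 > 3
print(is_strongly_consistent(1, 1, N))  # False: CL=ONE both ways can miss writes
```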
> (say I make 100 requ
Hi Aaron,
Many thanks for your reply - answers below.
Cheers,
Brian
> What CL are you using for reads and writes?
> I would first build a test case to ensure correct operation when using strong
> consistency. i.e. QUOURM write and read. Because you are using RF 2 per DC I
> assume you
> I’d request data, nothing would be returned, I would then re-request the data
> and it would correctly be returned:
>
What CL are you using for reads and writes?
> I see a number of dropped ‘MUTATION’ operations : just under 5% of the total
> ‘MutationStage’ count.
>
Dropped mutations in a m
inline...
On Mon, Oct 1, 2012 at 7:46 PM, Hiller, Dean wrote:
> Thanks (actually I knew it was configurable), BUT what I don't get is why I
> have to run a repair. If all nodes became consistent on the delete, it
> should not be possible to get a forgotten delete, correct? The forgotten
> delete w
Oh, and I have been reading Aaron Morton's article here
http://thelastpickle.com/2011/05/15/Deletes-and-Tombstones/
On 10/1/12 12:46 PM, "Hiller, Dean" wrote:
>Thanks (actually I knew it was configurable), BUT what I don't get is why I
>have to run a repair. If all nodes became consistent on the
Thanks (actually I knew it was configurable), BUT what I don't get is why I
have to run a repair. If all nodes became consistent on the delete, it
should not be possible to get a forgotten delete, correct? The forgotten
delete will only occur if I have a node down and out for 10 days and it
comes ba
the 10 days is actually configurable... look into gc_grace.
Basically, you always need to run repair once per gc_grace period.
You won't see empty/deleted rows go away until they're compacted away.
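A toy model of the "forgotten delete" being discussed (purely illustrative, not Cassandra's storage engine): a tombstone is kept for gc_grace seconds and compaction purges it afterwards, so a replica that was down for the whole gc_grace window never hears about the delete and can later resurrect the row unless repair ran in time.

```python
# Toy simulation of tombstone garbage collection after gc_grace.

GC_GRACE = 10 * 24 * 3600  # ten days, the old gc_grace_seconds default

def compact(sstable, now):
    """Drop tombstones older than gc_grace (toy compaction)."""
    return {
        key: entry
        for key, entry in sstable.items()
        if not (entry["tombstone"] and now - entry["ts"] > GC_GRACE)
    }

node_a = {"row1": {"tombstone": True, "ts": 0}}    # saw the delete
node_b = {"row1": {"tombstone": False, "ts": -5}}  # was down, kept the old row

now = GC_GRACE + 1             # more than ten days later
node_a = compact(node_a, now)  # tombstone is purged...
print("row1" in node_a)        # False
print("row1" in node_b)        # True -- the delete is "forgotten"
```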
On Mon, Oct 1, 2012 at 6:32 PM, Hiller, Dean wrote:
> I know there is a 10 day limit if you have a
> sorry to be dense, but which is it? do i get the old version or the new
> version? or is it indeterminate?
Indeterminate, depending on which nodes happen to be participating in
the read. Eventually you should get the new version, unless the node
that took the new version permanently crashed wi
sorry to be dense, but which is it? do i get the old version or the new
version? or is it indeterminate?
On 02/02/2012 01:42, Peter Schuller wrote:
i have RF=3, my row/column lives on 3 nodes right? if (for some reason, eg
a timed-out write at quorum) node 1 has a 'new' version of the row/co
Peter Schuller wrote:
>> i have RF=3, my row/column lives on 3 nodes right? if (for some reason, eg
>> a timed-out write at quorum) node 1 has a 'new' version of the row/column
>> (eg clock = 10), but node 2 and 3 have 'old' versions (clock = 5), when i
>> try to read my row/column at quorum,
> i have RF=3, my row/column lives on 3 nodes right? if (for some reason, eg
> a timed-out write at quorum) node 1 has a 'new' version of the row/column
> (eg clock = 10), but node 2 and 3 have 'old' versions (clock = 5), when i
> try to read my row/column at quorum, what do i get back?
You eithe
The digest is based on the results of the same query as applied on
different replicas. See the following for more details:
http://wiki.apache.org/cassandra/ReadRepair
http://www.datastax.com/docs/1.0/dml/data_consistency
On Wed, Nov 30, 2011 at 11:38 PM, Thorsten von Eicken
wrote:
> Looking at th
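The digest comparison those links describe can be sketched as follows (a simplified model; real Cassandra computes an MD5 digest over the serialized query result, and the helper below is a stand-in):

```python
# Sketch: the coordinator asks one replica for full data and the others
# for a digest (a hash of the same query result). A digest mismatch
# triggers a full data read from all replicas plus read repair.

import hashlib

def digest(rows):
    """Toy digest: hash a canonical representation of the result rows."""
    return hashlib.md5(repr(sorted(rows.items())).encode()).hexdigest()

data_replica = {"k": "v2"}
digest_replicas = [{"k": "v2"}, {"k": "v1"}]  # the last one is stale

mismatch = any(digest(r) != digest(data_replica) for r in digest_replicas)
print(mismatch)  # True: a full data read + read repair would follow
```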
tacenters?
>
> From: Jonathan Ellis [jbel...@gmail.com]
> Sent: Monday, December 27, 2010 6:59 PM
> To: user
> Subject: Re: read repair across datacenters?
>
> https://issues.apache.org/jira/browse/CASSANDRA-982
>
> On Mon, Dec 27, 2010 at 5
: Monday, December 27, 2010 6:59 PM
To: user
Subject: Re: read repair across datacenters?
https://issues.apache.org/jira/browse/CASSANDRA-982
On Mon, Dec 27, 2010 at 5:55 PM, Shu Zhang wrote:
> Brandon, for a read with quorum CL, a response is returned to the client
> after half (rounded u
> To: user@cassandra.apache.org
> Subject: Re: read repair across datacenters?
>
> On Mon, Dec 27, 2010 at 4:44 PM, Narendra Sharma
> mailto:narendra.sha...@gmail.com>> wrote:
> The comment in the cassandra.yaml says:
> "specifies the probability with which read repa
plicas are not RR'ed?
From: Brandon Williams [dri...@gmail.com]
Sent: Monday, December 27, 2010 3:00 PM
To: user@cassandra.apache.org
Subject: Re: read repair across datacenters?
On Mon, Dec 27, 2010 at 4:44 PM, Narendra Sharma
mailto:narendra.s
On Mon, Dec 27, 2010 at 4:44 PM, Narendra Sharma
wrote:
> The comment in the cassandra.yaml says:
> "specifies the probability with which read repairs should be invoked on *
> non-quorum* reads"
>
> Does this mean RR chance is applicable only for non-quorum reads?
>
Yes, because on quorum or grea
The comment in the cassandra.yaml says:
"specifies the probability with which read repairs should be invoked on *
non-quorum* reads"
Does this mean RR chance is applicable only for non-quorum reads?
Another question on same topic:
Will RR use one of the node in the other datacenter as coordinat
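The cassandra.yaml comment being quoted can be sketched as a probability gate (a toy model; the quoted comment is from an old Cassandra version, and the read_repair_chance setting was removed in later releases):

```python
# Toy model of a read_repair_chance-style gate on non-quorum reads.

import random

def should_read_repair(consistency, rr_chance, rng=random.random):
    # Quorum-or-greater reads already compare enough replicas to repair;
    # for weaker levels the configured chance decides.
    if consistency in ("QUORUM", "LOCAL_QUORUM", "EACH_QUORUM", "ALL"):
        return True
    return rng() < rr_chance

print(should_read_repair("QUORUM", 0.1, rng=lambda: 0.99))  # True regardless
print(should_read_repair("ONE", 1.0, rng=lambda: 0.5))      # True: 100% chance
print(should_read_repair("ONE", 0.1, rng=lambda: 0.5))      # False: 0.5 >= 0.1
```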
On Mon, Dec 27, 2010 at 3:17 PM, Shu Zhang wrote:
> Hi, I'm pretty new to cassandra and read a couple of contradictory things
> on this topic. Does read repair get triggered across datacenters if you
> query with a consistency level of local_quorum?
>
If the RR chance is 100% (default), it's tri