Hi,
The day before yesterday, I issued a full sequential repair on one of the
nodes in a 5-node Cassandra cluster, for a single table, using the command
below.
nodetool repair -full -seq -tr >
Now the node on which the command was issued was repaired properly, as can
be inferred from the below com
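For reference, the full form of that command with the keyspace and table
spelled out looks roughly like this (the names here are placeholders, not the
ones from the original command):

    nodetool repair -full -seq -tr my_keyspace my_table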
Probably you want to read this blog:
https://academy.datastax.com/support-blog/read-repair
-Arvinder
On Tue, Jan 1, 2019, 12:43 PM Jeff Jirsa wrote:
> There are two types of read repair
>
> - Blocking/foreground, due to reading at your consistency level (local_quorum
> for you) and the responses not matching
We (The Last Pickle) maintain an open-source tool for dealing with this:
http://cassandra-reaper.io
On Tue, Jan 1, 2019 at 12:31 PM Rahul Reddy wrote:
> Hello,
>
> Is it possible to find the subranges needed for repair in Apache Cassandra,
> like DSE does with dsetool list_subranges, as in the doc below?
>
You can select the token for the key (select token()), and then repair the
surrounding range.
Don't try to repair a single token; try to repair some small range, like 2^10
above/below the token you care about.
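A minimal sketch of that approach (the keyspace, table, key, and token value
below are placeholders):

    -- find the token that owns the key
    SELECT token(id) FROM my_keyspace.my_table WHERE id = 'some-key';
    -- suppose it returns 123456789

    # then repair a small range around that token
    nodetool repair -full -st 123455765 -et 123457813 my_keyspace my_table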
--
Jeff Jirsa
> On Jan 1, 2019, at 12:31 PM, Rahul Reddy wrote:
>
> Hello,
>
> I
There are two types of read repair
- Blocking/foreground, due to reading at your consistency level (local_quorum for
you) and the responses not matching
- Probabilistic read repair, which queries extra hosts in advance and read-repairs
them if they mismatch, AFTER responding to the caller/client
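For what it's worth, the probabilistic kind is driven by per-table options; a
sketch with a placeholder table name (these are the Cassandra 3.x settings):

    ALTER TABLE my_keyspace.my_table
      WITH read_repair_chance = 0.0
      AND dclocal_read_repair_chance = 0.1;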
Hello,
Is it possible to find the subranges needed for repair in Apache Cassandra,
like DSE does with dsetool list_subranges, as in the doc below?
https://docs.datastax.com/en/archived/datastax_enterprise/4.8/datastax_enterprise/srch/srchRepair.html?hl=repair
Hi, thanks for the answer.
What I don't understand is:
- why are there attempts at read repair if the read repair chances are 0.0?
- what can be the cause of the big mutation size?
- why didn't hinted handoffs prevent the inconsistency? (because of the big
mutation size?)
Thanks.
On Tuesday, January 1, 2019 9:41 PM,
Read repair due to digest mismatch and speculative retry can both cause
some behaviors that are hard to reason about (usually seen if a host stops
accepting writes due to a bad disk, which you haven't described), but generally
speaking, there are times when reads will block on writing to extra
replicas.
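If you want to see what those knobs are currently set to for the table, the
settings live in the schema tables (keyspace/table names below are
placeholders):

    SELECT speculative_retry, read_repair_chance, dclocal_read_repair_chance
    FROM system_schema.tables
    WHERE keyspace_name = 'my_keyspace' AND table_name = 'my_table';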
Also, I see
WARN [ReadRepairStage:341] 2018-12-31 17:57:58,594 DataResolver.java:507 -
Encountered an oversized (37264537/16777216) read repair mutation for table
for about several hours (5-7) after the cluster restart, then it disappeared.
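The 16777216 in that warning appears to be the 16 MiB mutation size cap, which
in cassandra.yaml defaults to half the commit log segment size; the relevant
settings (3.11 defaults shown here only as a reference):

    # cassandra.yaml
    commitlog_segment_size_in_mb: 32
    # max_mutation_size_in_kb defaults to half the segment size:
    # max_mutation_size_in_kb: 16384   (16384 KB = 16777216 bytes)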
On Tuesday, January 1, 2019 1:10 PM, Vlad wrote:
Actually, what happened is that the Cassandra instances were upgraded to a bigger
size one by one, with downtime of about one to two minutes each. There are logs
like:
INFO [HintsDispatcher:3] 2018-12-31 11:23:48,305
HintsDispatchExecutor.java:282 - Finished hinted handoff of file
d2c7bb82-3d7a-43b2-8791-ef9c7c02b
Hi All, and Happy New Year!!!
This year started with Cassandra 3.11.3 sometimes forcing consistency level ALL
despite the query level being LOCAL_QUORUM (actually there is only one DC), and
failing with a timeout.
As far as I understand, it can be caused by read repair attempts (we see
"DigestMismatch" errors in Cassandra