In effect you're saying "I require data centers to be consistent at write
time except when they can't". Basically you've gotten the worst of both
worlds: bad performance during healthy times and less-than-desired
consistency during unhealthy times.
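To make the trade-off concrete, here is a minimal sketch (not driver code; the function names are mine) of how many replica acks the coordinator must collect under each consistency level, assuming RF=3 in each of two DCs:

```python
def quorum(rf):
    # Cassandra's quorum is floor(rf / 2) + 1
    return rf // 2 + 1

def required_acks(cl, rf_by_dc, local_dc):
    """Replica acks the coordinator must collect before acknowledging a write.
    `cl` is 'LOCAL_QUORUM' or 'EACH_QUORUM'; `rf_by_dc` maps DC name -> RF.
    Illustrative helper only -- not part of any driver API."""
    if cl == "LOCAL_QUORUM":
        return {local_dc: quorum(rf_by_dc[local_dc])}
    if cl == "EACH_QUORUM":
        # Must reach quorum in EVERY DC, so any partitioned DC fails the write
        return {dc: quorum(rf) for dc, rf in rf_by_dc.items()}
    raise ValueError("unsupported CL: " + cl)
```

With RF=3 per DC, LOCAL_QUORUM needs 2 local acks, while EACH_QUORUM needs 2 acks from each DC, which is why a cross-DC partition makes every EACH_QUORUM write fail.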
I believe you may have some misconceptions abo
@Jonathan. I reread my previous message: the GC grace period is 10 days
(default), not 10 sec, my bad. Repairs are run every 7 days, so I should be
fine on this front.
@Ryan
Indeed I might want to use EACH_QUORUM with a customised fallback to
LOCAL_QUORUM + alerting in case of partition (like a whole clus
Your gc grace should be longer than your repair schedule. You're likely
going to have deleted data resurface.
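The rule being applied here can be sketched as a one-line check (my own illustrative helper, not a Cassandra tool): repairs must complete within gc_grace_seconds, otherwise tombstones can be collected before every replica has seen the delete, and the deleted data can resurface.

```python
def deletes_safe(gc_grace_seconds, repair_interval_days):
    """True if a full repair cycle completes before tombstones become
    eligible for garbage collection. If the repair interval exceeds
    gc_grace, a replica that missed the delete can re-propagate old data."""
    return repair_interval_days * 86400 < gc_grace_seconds

# Values from this thread: gc_grace of 10 days, repairs every 7 days.
print(deletes_safe(10 * 86400, 7))
```

Note the check compares the *interval* only; in practice the repair must also finish (not just start) inside the grace window.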
On Fri Dec 19 2014 at 8:31:13 AM Alain RODRIGUEZ wrote:
> All that you said matches the idea I had of how it works except this part:
>
> "The request blocks however until all CL is satis
replies inline
On Fri, Dec 19, 2014 at 10:30 AM, Alain RODRIGUEZ
wrote:
>
> All that you said matches the idea I had of how it works except this part:
>
> "The request blocks however until all CL is satisfied" --> Does this mean
> that the client will see an error if the local DC writes the data cor
All that you said matches the idea I had of how it works except this part:
"The request blocks however until all CL is satisfied" --> Does this mean
that the client will see an error if the local DC writes the data correctly
(i.e. CL reached) but the remote DC fails? This is not the idea I had of
so
More accurately, the write path of Cassandra in a multi-DC sense is kinda
like the following:
1. write goes to a node which acts as coordinator
2. writes go out to all replicas in that DC, and then one write per remote
DC goes out to another node which takes responsibility for writing to all
replicas
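The two steps above can be sketched as follows (a toy model, not driver or server code; names are mine). The key point is that the coordinator sends one message per local replica but only a single cross-WAN message per remote DC:

```python
def coordinator_messages(replicas_by_dc, coordinator_dc):
    """Messages the coordinator itself sends for one write: one per replica
    in its own DC, but only ONE per remote DC, addressed to a forwarding
    replica that then fans out within that DC. Purely illustrative."""
    out = []
    for dc, replicas in replicas_by_dc.items():
        if dc == coordinator_dc:
            out.extend((dc, r) for r in replicas)          # direct local writes
        else:
            out.append((dc, replicas[0]))                   # single WAN hop
    return out

msgs = coordinator_messages({"dc1": ["a", "b", "c"],
                             "dc2": ["d", "e", "f"]}, "dc1")
```

With RF=3 in both DCs, the coordinator sends four messages total: three local and one across the WAN, which keeps inter-DC traffic to one write per remote DC.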
Hi Jens, thanks for your insight.
"Replication lag in Cassandra terms is probably “Hinted handoff”" --> Well I
think hinted handoffs are only used when a node is down, and they are not even
necessarily enabled. I guess that cross-DC async replication is something
else, that has nothing to do with hinted handoff
Alain,
AFAIK, the DC replication is not linearizable. That is, writes are not
replicated according to a binlog or similar, as in MySQL. They are replicated
concurrently.
To answer your questions:
1 - Replication lag in Cassandra terms is probably “Hinted handoff”. You’d want
to check t
Hi guys,
We expanded our cluster to a multiple-DC configuration.
Now I am wondering if there is any way to know:
1 - The replication lag between these 2 DCs (OpsCenter, nodetool, other?)
2 - How to make sure that the sync is OK at any time
I guess big companies running Cassandra are interested in these ki
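One pragmatic way to estimate cross-DC lag, in the absence of a built-in metric, is a canary write: write a timestamped value through a connection pinned to one DC, then poll a connection pinned to the other until it appears. A minimal sketch, where `write_local` and `read_remote` are caller-supplied callables you would implement with driver queries pinned to each DC (their names and signatures are my own):

```python
import time

def measure_replication_lag(write_local, read_remote, key,
                            timeout=10.0, poll=0.1):
    """Write a unique canary value via `write_local`, then poll
    `read_remote` until the value is visible; the elapsed wall-clock
    time approximates cross-DC replication lag. Returns None if the
    canary never appears within `timeout` seconds."""
    token = str(time.time())           # unique-enough canary payload
    write_local(key, token)
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if read_remote(key) == token:
            return time.monotonic() - start
        time.sleep(poll)
    return None
```

Run periodically and fed into your monitoring, this gives a rough upper bound on lag rather than an exact figure, since it includes client round-trip time.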