Because if you lose a node, you may lose some data permanently if it had
not yet been replicated to other nodes.

--
Jacques-Henri Berthemet

From: Jonathan Haddad [mailto:j...@jonhaddad.com]
Sent: Friday, March 25, 2016 19:37
To: user@cassandra.apache.org
Subject: Re: How many nodes do we require

Why would using CL-ONE make your cluster fragile? This isn't obvious to me. 
It's the most practical setting for high availability, which very much says 
"not fragile".
On Fri, Mar 25, 2016 at 10:44 AM Jacques-Henri Berthemet 
<jacques-henri.berthe...@genesys.com> wrote:
I found this calculator very convenient:
http://www.ecyrd.com/cassandracalculator/

Regardless of your other DCs, you need RF=3 if you write at LOCAL_QUORUM, 
and RF=2 if you write/read at ONE.
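
For example, a per-DC RF=3 keyspace would look roughly like this (untested
sketch; the keyspace and DC names are placeholders):

  -- 3 replicas in each DC; LOCAL_QUORUM then needs floor(3/2) + 1 = 2
  -- replicas in the local DC, so it tolerates one node down per DC
  CREATE KEYSPACE my_ks
    WITH replication = {
      'class': 'NetworkTopologyStrategy',
      'DC1': 3,
      'DC2': 3
    };

  -- in cqlsh, reads/writes can then be issued at LOCAL_QUORUM:
  CONSISTENCY LOCAL_QUORUM;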

Obviously using ONE as CL makes your cluster very fragile.
--
Jacques-Henri Berthemet


-----Original Message-----
From: Rakesh Kumar 
[mailto:rakeshkumar46...@gmail.com]
Sent: Friday, March 25, 2016 18:14
To: user@cassandra.apache.org
Subject: Re: How many nodes do we require

On Fri, Mar 25, 2016 at 11:45 AM, Jack Krupansky
<jack.krupan...@gmail.com> wrote:
> It depends on how much data you have. A single node can store a lot of data,
> but the more data you have the longer a repair or node replacement will
> take. How long can you tolerate for a full repair or node replacement?

At this time, and for the foreseeable future, the size of the data
will not be significant, so we can safely disregard that as a
decision factor.

>
> Generally, RF=3 is both sufficient and recommended.

Are you suggesting SimpleStrategy with RF=3,
or NetworkTopologyStrategy with RF=3?
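
To make the question concrete, here is roughly what I understand the two
options to look like (keyspace and DC names are just placeholders):

  -- SimpleStrategy: one replication factor, data centers are ignored
  CREATE KEYSPACE my_ks
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

  -- NetworkTopologyStrategy: replication factor set per data center
  CREATE KEYSPACE my_ks
    WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};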


taken from:

https://docs.datastax.com/en/cassandra/2.0/cassandra/architecture/architectureDataDistributeReplication_c.html

"
Three replicas in each data center: This configuration tolerates
either the failure of one node per replication group at a strong
consistency level of LOCAL_QUORUM or multiple node failures per data
center using consistency level ONE."

In our case, with only 3 nodes in each DC, wouldn't RF=3 effectively mean ALL?

I will state our requirement clearly:

If we go with six nodes (3 in each DC), we should be able to write
even with the loss of one DC and the loss of one node in the
surviving DC. I am open to hearing what compromise we would have to
make on reads while a DC is down. For us, writes are more critical
than reads.

Maybe this is not possible with 6 nodes and requires more. Please advise.
