Maybe that's a great definition of a modern distributed cluster: each
person (node) has a different notion of priority.

I'll wait for the next user email in which they complain that their data is
"too stable" (missing updates).

-- Jack Krupansky

On Thu, Mar 31, 2016 at 12:04 PM, Jacques-Henri Berthemet <
jacques-henri.berthe...@genesys.com> wrote:

> You’re right. I meant data integrity; I understand it’s not
> everybody’s priority!
>
>
>
> *--*
>
> *Jacques-Henri Berthemet*
>
>
>
> *From:* Jonathan Haddad [mailto:j...@jonhaddad.com]
> *Sent:* jeudi 31 mars 2016 17:48
>
> *To:* user@cassandra.apache.org
> *Subject:* Re: How many nodes do we require
>
>
>
> Losing a write is very different from having a fragile cluster.  A fragile
> cluster implies that the whole thing will fall apart, that it breaks easily.
> Writing at CL=ONE gives you a pretty damn stable cluster, at the potential
> risk of losing a write that hasn't replicated (but has been ack'ed), which
> for a lot of people is preferable to downtime.  CL=ONE gives you the *most
> stable* cluster you can have.
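>
> For what it's worth, the consistency level is chosen per statement by the
> client, so an application can mix levels. A minimal sketch with the DataStax
> Python driver (contact points, keyspace, and table are made-up placeholders):
>
>     from uuid import uuid4
>     from cassandra import ConsistencyLevel
>     from cassandra.cluster import Cluster
>     from cassandra.query import SimpleStatement
>
>     cluster = Cluster(['10.0.0.1', '10.0.0.2'])  # placeholder contact points
>     session = cluster.connect('my_keyspace')     # hypothetical keyspace
>
>     # Fast, highly available write: ack'ed as soon as one replica accepts it.
>     fast_write = SimpleStatement(
>         "INSERT INTO events (id, payload) VALUES (%s, %s)",
>         consistency_level=ConsistencyLevel.ONE)
>     session.execute(fast_write, (uuid4(), 'observed'))
>
>     # Stronger write: a majority of replicas in the local DC must ack,
>     # trading some availability and latency for a stronger durability story.
>     safe_write = SimpleStatement(
>         "INSERT INTO events (id, payload) VALUES (%s, %s)",
>         consistency_level=ConsistencyLevel.LOCAL_QUORUM)
>     session.execute(safe_write, (uuid4(), 'durable'))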
>
> On Tue, Mar 29, 2016 at 12:57 AM Jacques-Henri Berthemet <
> jacques-henri.berthe...@genesys.com> wrote:
>
> Because if you lose a node, you risk losing some data forever if
> it was not yet replicated.
>
>
>
> *--*
>
> *Jacques-Henri Berthemet*
>
>
>
> *From:* Jonathan Haddad [mailto:j...@jonhaddad.com]
> *Sent:* vendredi 25 mars 2016 19:37
>
>
> *To:* user@cassandra.apache.org
> *Subject:* Re: How many nodes do we require
>
>
>
> Why would using CL=ONE make your cluster fragile? This isn't obvious to
> me. It's the most practical setting for high availability, which very much
> says "not fragile".
>
> On Fri, Mar 25, 2016 at 10:44 AM Jacques-Henri Berthemet <
> jacques-henri.berthe...@genesys.com> wrote:
>
> I found this calculator very convenient:
> http://www.ecyrd.com/cassandracalculator/
>
> Regardless of your other DCs, you need RF=3 if you write at LOCAL_QUORUM,
> or RF=2 if you write/read at ONE.
>
> Obviously using ONE as CL makes your cluster very fragile.
> --
> Jacques-Henri Berthemet
>
>
> -----Original Message-----
> From: Rakesh Kumar [mailto:rakeshkumar46...@gmail.com]
> Sent: vendredi 25 mars 2016 18:14
> To: user@cassandra.apache.org
> Subject: Re: How many nodes do we require
>
> On Fri, Mar 25, 2016 at 11:45 AM, Jack Krupansky
> <jack.krupan...@gmail.com> wrote:
> > It depends on how much data you have. A single node can store a lot of
> > data, but the more data you have, the longer a repair or node replacement
> > will take. How long can you tolerate a full repair or node replacement
> > taking?
>
> At this time, and for the foreseeable future, the size of the data will not
> be significant, so we can safely disregard the above as a decision
> factor.
>
> >
> > Generally, RF=3 is both sufficient and recommended.
>
> Are you suggesting SimpleStrategy replication with RF=3,
> or NetworkTopologyStrategy with RF=3?
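>
> (For illustration only, with made-up keyspace and data-center names, the two
> replication strategies would be declared roughly like this; it is
> NetworkTopologyStrategy that lets you set a replica count per data center.)
>
>     from cassandra.cluster import Cluster
>
>     session = Cluster(['10.0.0.1']).connect()  # placeholder contact point
>
>     # SimpleStrategy: one cluster-wide RF, no data-center awareness.
>     session.execute("""
>         CREATE KEYSPACE IF NOT EXISTS demo_simple
>         WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
>     """)
>
>     # NetworkTopologyStrategy: an RF per data center
>     # (the DC names must match what the snitch reports).
>     session.execute("""
>         CREATE KEYSPACE IF NOT EXISTS demo_nts
>         WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3}
>     """)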
>
>
> taken from:
>
>
> https://docs.datastax.com/en/cassandra/2.0/cassandra/architecture/architectureDataDistributeReplication_c.html
>
> "
> Three replicas in each data center: This configuration tolerates
> either the failure of one node per replication group at a strong
> consistency level of LOCAL_QUORUM or multiple node failures per data
> center using consistency level ONE."
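>
> As a rough sanity check of that statement (plain arithmetic, assuming an
> RF=3-per-DC layout as discussed above):
>
>     # Quorum sizes for an illustrative {'DC1': 3, 'DC2': 3} replication layout.
>     def local_quorum(rf_local):
>         # LOCAL_QUORUM only counts replicas in the coordinator's data center.
>         return rf_local // 2 + 1
>
>     def global_quorum(rf_per_dc):
>         # QUORUM counts replicas across all data centers.
>         return sum(rf_per_dc.values()) // 2 + 1
>
>     rf = {'DC1': 3, 'DC2': 3}
>     print(local_quorum(3))    # 2 -> with RF=3, LOCAL_QUORUM survives 1 node down per DC
>     print(global_quorum(rf))  # 4 -> needs replies from replicas in both DCs
>     # CL=ONE needs only a single live replica, hence the "multiple node
>     # failures" tolerance quoted above.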
>
> In our case, with only 3 nodes in each DC, wouldn't RF=3 effectively
> mean ALL?
>
> I will state our requirement clearly:
>
> If we go with six nodes (3 in each DC), we should be able to
> write even with the loss of one DC and the loss of one node in the surviving
> DC. I am open to hearing what compromises we would have to make on reads
> while a DC is down. For us, writes are more critical than reads.
>
> Maybe this is not possible with 6 nodes and requires more. Please advise.
>
>
