Re: [ceph-users] Ceph replication factor of 2

2018-05-25 Thread Paul Emmerich
If you are so worried about storage efficiency: why not use erasure coding? EC performs really well with Luminous in our experience. Yes, you generate more IOPS, somewhat more CPU load, and higher latency, but it's often worth a try. Simple example for everyone considering 2/1 replicas: co…
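To make that comparison concrete (a sketch with an assumed EC profile of k=4, m=2, since the mail is cut off here): raw-space overhead and failures tolerated, side by side:

    # Back-of-envelope: storage overhead vs. failures tolerated.
    # Replica N stores N full copies; EC k+m stores (k+m)/k of the data
    # and tolerates m lost chunks.

    def replica_overhead(n):
        return n  # raw bytes written per logical byte

    def ec_overhead(k, m):
        return (k + m) / k

    for label, overhead, tolerated in [
        ("replica 2", replica_overhead(2), 1),
        ("replica 3", replica_overhead(3), 2),
        ("EC k=4 m=2", ec_overhead(4, 2), 2),
    ]:
        print(f"{label:12s} overhead {overhead:.2f}x, survives {tolerated} failure(s)")

EC 4+2 survives as many failures as replica 3 while using less raw space than replica 2, which is the trade-off being pointed at.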

Re: [ceph-users] Ceph replication factor of 2

2018-05-25 Thread Donny Davis
Nobody cares about their data until they don't have it anymore. Using replica 3 is the same logic as RAID6. It's likely that if one drive has crapped out, more will meet their maker soon. If you care about your data, then do what you can to keep it around. If it's a lab like mine, who cares, it's all ephe…

Re: [ceph-users] Ceph replication factor of 2

2018-05-25 Thread Janne Johansson
On Fri 25 May 2018 at 00:20, Jack wrote: > On 05/24/2018 11:40 PM, Stefan Kooman wrote: >> What are your thoughts, would you run 2x replication factor in >> Production and in what scenarios? > Me neither, mostly because I have yet to read a technical point of view from someone who read and…

Re: [ceph-users] Ceph replication factor of 2

2018-05-24 Thread Jack
On 05/24/2018 11:40 PM, Stefan Kooman wrote: >> What are your thoughts, would you run 2x replication factor in >> Production and in what scenarios? Me neither, mostly because I have yet to read a technical point of view from someone who read and understands the code. I do not buy Janne's "trust me,…

Re: [ceph-users] Ceph replication factor of 2

2018-05-24 Thread Stefan Kooman
Quoting Anthony Verevkin (anth...@verevkin.ca): > My thoughts on the subject are that even though checksums do allow finding > which replica is corrupt without having to figure out which 2 out of > 3 copies are the same, this is not the only reason min_size=2 was > required. Even if you are running all…
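To make the min_size point concrete, a minimal sketch (a deliberate simplification, not Ceph source) of the per-PG rule: a placement group serves I/O only while at least min_size replicas are up.

    # Sketch of Ceph's per-PG availability rule (simplified):
    # a PG serves I/O only while len(acting_set) >= min_size.

    def pg_serves_io(alive_replicas: int, min_size: int) -> bool:
        return alive_replicas >= min_size

    # size=2, min_size=1: one OSD down -> still writing to a single copy,
    # so the next failure loses acknowledged writes.
    print(pg_serves_io(alive_replicas=1, min_size=1))  # True  (risky)
    # size=3, min_size=2: one OSD down -> still two copies behind every ack.
    print(pg_serves_io(alive_replicas=2, min_size=2))  # True  (safer)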

Re: [ceph-users] Ceph replication factor of 2

2018-05-24 Thread Alexandre DERUMIER
" À: c...@jack.fr.eu.org Cc: "ceph-users" Envoyé: Jeudi 24 Mai 2018 08:33:32 Objet: Re: [ceph-users] Ceph replication factor of 2 Den tors 24 maj 2018 kl 00:20 skrev Jack < [ mailto:c...@jack.fr.eu.org | c...@jack.fr.eu.org ] >: Hi, I have to say, this is a common yet wor

Re: [ceph-users] Ceph replication factor of 2

2018-05-23 Thread Daniel Baumann
Hi, I couldn't agree more, but just to re-emphasize what others already said: the point of replica 3 is not to have extra safety against (human|software|server) failures, but to have enough data around to allow rebalancing the cluster when disks fail. After a certain amount of disks in a cluste…
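A quick worked example of that rebalancing headroom (all figures below are assumed for illustration): when an OSD dies, its data must fit on the survivors without pushing them past the full ratio.

    # Can the survivors absorb a failed OSD's data before hitting full_ratio?
    # All figures below are assumptions for illustration.

    osds, osd_size_tb, used_fraction, full_ratio = 10, 4.0, 0.70, 0.85

    data_to_move = osd_size_tb * used_fraction          # TB on the dead OSD
    survivor_free = (osds - 1) * osd_size_tb * (full_ratio - used_fraction)

    print(f"need {data_to_move:.1f} TB, headroom {survivor_free:.1f} TB:",
          "rebalance fits" if data_to_move <= survivor_free else "cluster fills up")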

Re: [ceph-users] Ceph replication factor of 2

2018-05-23 Thread Janne Johansson
On Thu 24 May 2018 at 00:20, Jack wrote: > Hi, > > I have to say, this is a common yet worthless argument. > If I have 3000 OSDs, using 2 or 3 replicas will not change much: the > probability of losing 2 devices is still "high". > On the other hand, if I have a small cluster, less than a hundred OS…
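A back-of-envelope model of that probability argument (the failure rate, recovery window, and peer count are all assumed figures): what matters is a second failure among the peers holding the other copies, inside the recovery window.

    # Back-of-envelope: chance that another disk (sharing PGs with the dead one)
    # fails before recovery completes. AFR, window, and peer count are assumed.

    afr = 0.03               # annual failure rate per disk (assumed)
    recovery_hours = 6       # time to re-replicate the lost copies (assumed)
    peers = 100              # OSDs holding the other copies (assumed)

    p_one = afr * recovery_hours / (365 * 24)   # one peer dying in the window
    p_any = 1 - (1 - p_one) ** peers            # any of them dying

    print(f"P(overlapping failure) ~ {p_any:.4%}")
    # With replica 2 this overlap *is* data loss; with replica 3 you still
    # have a copy left, so you'd need a third overlapping failure.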

Re: [ceph-users] Ceph replication factor of 2

2018-05-23 Thread Jack
Hi, About Bluestore: sure, there are checksums, but are they fully used? Rumors say that on a replicated pool, during recovery, they are not. > My thoughts on the subject are that even though checksums do allow finding > which replica is corrupt without having to figure out which 2 out of 3 copies a…

[ceph-users] Ceph replication factor of 2

2018-05-23 Thread Anthony Verevkin
This week at the OpenStack Summit Vancouver I could hear people entertaining the idea of running Ceph with a replication factor of 2. Karl Vietmeier of Intel suggested that we use 2x replication because Bluestore comes with checksums. https://www.openstack.org/summit/vancouver-2018/summit-schedule/ev…
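The checksum argument in miniature (a hedged sketch, not Bluestore's actual code path): with a stored per-object checksum, two replicas are enough to identify the corrupt copy; without one, two differing copies can only be resolved by three-way voting.

    import zlib

    # With a stored checksum, 2 replicas suffice to identify the bad copy;
    # without one, two differing copies are ambiguous (hence 3-way voting).

    stored_crc = zlib.crc32(b"hello ceph")   # checksum written at store time
    replica_a = b"hello ceph"
    replica_b = b"hellX ceph"                # bit rot on one replica

    for name, data in [("a", replica_a), ("b", replica_b)]:
        ok = zlib.crc32(data) == stored_crc
        print(f"replica {name}: {'ok' if ok else 'corrupt'}")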