> Redhat/Micron/Samsung/Supermicro have all put out white papers backing the 
> idea of 2 copies on NVMe's as safe for production.

It's not like you can just jump from "unsafe" to "safe" -- it's about
weighing the probability of losing data against how valuable that
data is.
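
For the record, the knobs in question here are the pool's "size" and
"min_size".  A rough sketch of checking and setting them (the pool
name "mypool" is just a placeholder -- substitute your own):

  # show size / min_size for every pool
  ceph osd pool ls detail

  # what the rest of the thread is arguing for: 3 copies, and block
  # I/O rather than accept writes when fewer than 2 replicas are up
  ceph osd pool set mypool size 3
  ceph osd pool set mypool min_size 2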

A vendor's decision on size -- when they have a vested interest in
keeping the price lower than the competition's -- may be different
from the decision you would make as the person who stands to lose
your data and potentially your career.  And I say this as someone who
works for a hardware vendor... listen to their advice but make your
own decision.

I have lost data on a size 2 cluster before and learned first-hand how
easy it is for this to happen.  Luckily it was just my home NAS.  But
if anyone has Roger Federer's 2018 tennis matches archived, we need to
talk :D

Mark



On Wed, Feb 3, 2021 at 8:50 AM Adam Boyhan <ad...@medent.com> wrote:
>
> Isn't this somewhat reliant on the OSD type?
>
> Redhat/Micron/Samsung/Supermicro have all put out white papers backing the 
> idea of 2 copies on NVMe's as safe for production.
>
>
> From: "Magnus HAGDORN" <magnus.hagd...@ed.ac.uk>
> To: pse...@avalon.org.ua
> Cc: "ceph-users" <ceph-users@ceph.io>
> Sent: Wednesday, February 3, 2021 4:43:08 AM
> Subject: [ceph-users] Re: Worst thing that can happen if I have size= 2
>
> On Wed, 2021-02-03 at 09:39 +0000, Max Krasilnikov wrote:
> > > if an OSD becomes unavailable (broken disk, rebooting server) then
> > > all I/O to the PGs stored on that OSD will block until replication
> > > level of 2 is reached again. So, for a highly available cluster you
> > > need a replication level of 3
> >
> >
> > AFAIK, with min_size 1 it is possible to write even to the only
> > active OSD serving the PG
> >
> yes, that's correct but then you seriously risk trashing your data
>
> The University of Edinburgh is a charitable body, registered in Scotland, 
> with registration number SC005336.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
