Adam;

I'd like to see that / those white papers.
I suspect what they're advocating is running multiple OSD daemon processes per 
NVMe device.  That can improve performance.  Though I've never done it myself, 
I believe you partition the device and then create each OSD pointing at a 
partition.
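
For reference, a minimal sketch of how this is commonly done with ceph-volume 
(assuming a release that supports --osds-per-device; the device path 
/dev/nvme0n1 is just an example):

    # Create two OSDs on a single NVMe device in one step
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1

    # Or, after partitioning manually, create one OSD per partition
    ceph-volume lvm create --data /dev/nvme0n1p1
    ceph-volume lvm create --data /dev/nvme0n1p2

With lvm batch, ceph-volume carves up the device itself, so manual 
partitioning usually isn't required.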

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
dhils...@performair.com 
www.PerformAir.com

-----Original Message-----
From: Adam Boyhan [mailto:ad...@medent.com] 
Sent: Wednesday, February 3, 2021 8:50 AM
To: Magnus HAGDORN
Cc: ceph-users
Subject: [ceph-users] Re: Worst thing that can happen if I have size= 2

Isn't this somewhat dependent on the OSD type? 

Red Hat/Micron/Samsung/Supermicro have all put out white papers backing the idea 
of two copies on NVMe drives as safe for production. 


From: "Magnus HAGDORN" <magnus.hagd...@ed.ac.uk> 
To: pse...@avalon.org.ua 
Cc: "ceph-users" <ceph-users@ceph.io> 
Sent: Wednesday, February 3, 2021 4:43:08 AM 
Subject: [ceph-users] Re: Worst thing that can happen if I have size= 2 

On Wed, 2021-02-03 at 09:39 +0000, Max Krasilnikov wrote: 
> > if an OSD becomes unavailable (broken disk, rebooting server) then
> > all I/O to the PGs stored on that OSD will block until a replication
> > level of 2 is reached again. So, for a highly available cluster you
> > need a replication level of 3
> 
> AFAIK, with min_size 1 it is possible to write even to the only
> active OSD serving the PG
> 
yes, that's correct, but then you seriously risk trashing your data 
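
For context, the settings being discussed are per-pool; a minimal sketch of 
the commonly recommended values (the pool name "mypool" is just an example):

    # Three replicas; I/O blocks if fewer than two copies are available
    ceph osd pool set mypool size 3
    ceph osd pool set mypool min_size 2

Dropping min_size to 1 keeps I/O flowing with a single surviving copy, which 
is exactly the scenario where data can be lost or diverge.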

The University of Edinburgh is a charitable body, registered in Scotland, with 
registration number SC005336. 
_______________________________________________ 
ceph-users mailing list -- ceph-users@ceph.io 
To unsubscribe send an email to ceph-users-le...@ceph.io 
