I'm pretty certain that a write is acknowledged to the client only after 
every OSD in the PG's acting set has committed it, regardless of min_size.
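
For anyone wanting to check the two knobs side by side, here is a minimal 
sketch assuming a replicated pool named "rbd" (substitute your own pool 
name):

    $ ceph osd pool get rbd size       # copies every write goes to
    size: 3
    $ ceph osd pool get rbd min_size   # copies that must be up for the PG to accept I/O
    min_size: 2

That is, size determines how many replicas a write must reach before it is 
acknowledged; min_size only determines whether a degraded PG will serve I/O 
at all.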

________________________________

David Turner | Cloud Operations Engineer | StorageCraft Technology 
Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943


________________________________
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Oliver 
Humpage [oli...@watershed.co.uk]
Sent: Friday, December 09, 2016 2:31 PM
To: ceph-us...@ceph.com
Subject: Re: [ceph-users] 2x replication: A BIG warning


On 7 Dec 2016, at 15:01, Wido den Hollander <w...@42on.com> wrote:

I would always run with min_size = 2 and manually switch to min_size = 1 if the 
situation really requires it at that moment.
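
(A minimal sketch of that manual switch, assuming a replicated pool named 
"rbd"; pool settings take effect immediately on a live cluster:

    $ ceph osd pool set rbd min_size 1   # emergency only: let PGs accept I/O with one replica
    $ ceph osd pool set rbd min_size 2   # revert as soon as recovery finishes
)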

Thanks for this thread, it’s been really useful.

I might have misunderstood, but does min_size=2 also mean that writes have to 
wait for at least 2 OSDs to have data written before the write is confirmed? I 
always assumed this would have a noticeable effect on performance and so left 
it at 1.

Our use case is RBDs being exported as iSCSI for ESXi. OSDs are journalled on 
enterprise SSDs, servers are linked with 10Gb, and we’re generally getting very 
acceptable speeds. Any idea as to how upping min_size to 2 might affect things, 
or should we just try it and see?
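
If you do try it and see, rados bench gives a quick before/after number; a 
sketch, assuming it's acceptable to write test objects to the "rbd" pool:

    $ rados bench -p rbd 60 write -t 16   # 60 seconds of 4 MB object writes, 16 in flight

If writes already wait for every replica in the acting set regardless of 
min_size (as noted above), the numbers should be essentially unchanged.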

Oliver.

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
