Yes, I realized that; I've updated it to 3.
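
For reference, the replica count lives on the pool rather than in the
CRUSH rule, so something along these lines should cover it (the pool
name is just a placeholder):

    ceph osd pool set <pool> size 3       # replica count
    ceph osd pool set <pool> min_size 2   # min replicas needed to serve I/O
    ceph osd pool get <pool> size         # verify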

On 10/7/2017 8:41 PM, Sinan Polat wrote:
> You are talking about the min_size, which should be 2 according to
> your text.
>
> Please be aware that the min_size in your CRUSH rule is _not_ the replica
> size. The replica size is set on your pools.
>
> On 7 Oct 2017, at 19:39, Peter Linder
> <peter.lin...@fiberdirekt.se <mailto:peter.lin...@fiberdirekt.se>>
> wrote:
>
>> On 10/7/2017 7:36 PM, Дробышевский, Владимир wrote:
>>> Hello!
>>>
>>> 2017-10-07 19:12 GMT+05:00 Peter Linder <peter.lin...@fiberdirekt.se
>>> <mailto:peter.lin...@fiberdirekt.se>>:
>>>
>>>     The idea is to select an NVMe OSD, and
>>>     then select the rest from HDD OSDs in different datacenters (see
>>>     the CRUSH map below for the hierarchy).
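
A minimal sketch of the kind of rule I mean, assuming Luminous device
classes and a root named "default" (the rule and bucket names here are
illustrative, not my actual map):

    rule hybrid_nvme_hdd {
            id 3
            type replicated
            min_size 1
            max_size 3
            # pick one NVMe OSD from some datacenter
            step take default class nvme
            step chooseleaf firstn 1 type datacenter
            step emit
            # fill the remaining replicas with HDD OSDs,
            # each from a different datacenter
            step take default class hdd
            step chooseleaf firstn -1 type datacenter
            step emit
    }
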
>>>
>>> It's a bit of an aside, but why do you want to mix
>>> SSDs and HDDs in the same pool? Do you have a read-intensive workload,
>>> and are you going to use primary-affinity to get all reads from NVMe?
>>>  
>>>
>> Yes, this is pretty much the idea: getting the read performance of NVMe
>> while still maintaining triple redundancy at a reasonable cost.
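
If primary-affinity ends up being part of this, the per-OSD knob is set
like so (the OSD ids are made up for illustration):

    ceph osd primary-affinity osd.12 1.0   # nvme, preferred as primary
    ceph osd primary-affinity osd.34 0     # hdd, never primary if avoidable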
>>
>>
>>> -- 
>>> Regards,
>>> Vladimir
>>
>>


