On 10/7/2017 8:08 PM, David Turner wrote:
>
> Just to make sure you understand: the reads will happen on the
> primary osd for the PG and not the nearest osd, meaning that reads
> will go between the datacenters. Also, each write will not ack
> until all 3 writes happen, adding that latency to both writes and reads.
>
>

Yes, I understand this. It is actually fine: the datacenters have been
selected so that they are about 10-20 km apart. That yields only around
0.1-0.2 ms of round-trip time, simply because the speed of light is
finite. Network latency shouldn't be a problem, and it's all a dedicated
40G TRILL network for the moment.
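
(Rough arithmetic: light in fiber propagates at about 200,000 km/s, so a
20 km path is 0.1 ms one way and 0.2 ms round trip; 10-20 km between
sites therefore gives roughly 0.1-0.2 ms of RTT from propagation alone,
before any switching overhead.)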

I just want to be able to select 1 SSD and 2 HDDs, all spread out across
datacenters. I can do that, but one of the HDDs ends up in the same
datacenter as the SSD, probably because I'm using the "take" command
twice (does each "take" reset the set of already-chosen buckets?). A
rough sketch of the kind of rule I mean is below.
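
Something along these lines (just a sketch, not my actual rule; the
"default" root and the "nvme"/"hdd" device class names are placeholders
for whatever is in the real map):

rule nvme_plus_hdd {
    id 1
    type replicated
    min_size 3
    max_size 3
    # first pass: pick 1 nvme osd, leaf-chosen under a datacenter
    step take default class nvme
    step chooseleaf firstn 1 type datacenter
    step emit
    # second pass: pick the remaining 2 replicas on hdd osds, spread
    # over datacenters. This selection starts from scratch, so nothing
    # stops it from landing in the same datacenter as the nvme above.
    step take default class hdd
    step chooseleaf firstn -1 type datacenter
    step emit
}

Since the nvme OSD comes out of the first emit, it should end up first
in the acting set and so act as primary (with default primary affinity),
which is where the nvme reads would come from.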


> On Sat, Oct 7, 2017, 1:48 PM Peter Linder <peter.lin...@fiberdirekt.se> wrote:
>
>     On 10/7/2017 7:36 PM, Дробышевский, Владимир wrote:
>>     Hello!
>>
>>     2017-10-07 19:12 GMT+05:00 Peter Linder
>>     <peter.lin...@fiberdirekt.se>:
>>
>>         The idea is to select an nvme osd, and
>>         then select the rest from hdd osds in different datacenters
>>         (see crush
>>         map below for hierarchy). 
>>
>>     It's a little bit aside from the question, but why do you want to
>>     mix SSDs and HDDs in the same pool? Do you have a read-intensive
>>     workload and are you going to use primary-affinity to get all
>>     reads from nvme?
>>      
>>
>     Yes, this is pretty much the idea: getting the read performance of
>     NVMe while still maintaining triple redundancy at a reasonable
>     cost.
>
>
>>     -- 
>>     Regards,
>>     Vladimir
>
>

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
