Hi,

> On Sep 18, 2017, at 10:06 AM, Christian Theune <c...@flyingcircus.io> wrote:
> 
> We’re doing the typical SSD/non-SSD pool separation. Currently we effectively 
> only use 2 pools: rbd.hdd and rbd.ssd. The ~4TB OSDs in the rbd.hdd pool are 
> “capacity endurance” SSDs (Micron S610DC). We have 10 machines at the moment 
> with 10 OSDs on average (2 SSD, 1-2 capacity SSD and 6-7 HDDs).

Maybe that was a bit confusing with regard to how our pools are structured, so I'll try 
to clear it up again:

We have a pool “rbd.ssd” which uses the OSDs in “datacenter rzob-ssd”.
This is an all-flash pool with inline journals, running on Intel DC S3610 drives.
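
(To make the separation explicit: the pool is pinned to that part of the CRUSH tree 
with a rule roughly like the one below. This is a sketch from memory, not copied from 
our actual map; the rule name and numbers are illustrative only.)

    rule ssd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take rzob-ssd
        step chooseleaf firstn 0 type host
        step emit
    }

The pool is then pointed at it with "ceph osd pool set rbd.ssd crush_ruleset 1" 
(crush_rule on Luminous).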

The other pool is “rbd.hdd”, which uses a mix of different disks (primary affinities 
set as sketched after this list):

* 2 TB 7.2k SATA HDDs, which have a primary affinity of 0
* a couple of 8x600 GB SAS II HGST 3.5" 15k drives, which have a small primary affinity
* 1-2 Micron S610DC 3.8TB with a primary affinity of 1
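
(The affinities are just set per OSD, roughly like this; the OSD ids are made up and 
stand in for a SATA HDD, a 15k SAS drive and one of the Microns. IIRC older releases 
also need "mon osd allow primary affinity = true" on the mons.)

    ceph osd primary-affinity osd.12 0      # SATA HDD: never acts as primary
    ceph osd primary-affinity osd.13 0.2    # 15k SAS: rarely primary
    ceph osd primary-affinity osd.14 1      # Micron SSD: preferred primary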

The HDD pool has grown over time and we’re slowly moving it towards “capacity 
endurance” SSD models (using external journals on Intel NVMe). That’s why it isn’t a 
single, uniform OSD configuration.
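
(For reference, adding one of those SSD OSDs with its journal on the NVMe looks 
roughly like this for us; the device names are placeholders and we use ceph-disk, so 
adjust for your tooling. ceph-disk carves a journal partition out of the NVMe by 
itself, sized via "osd journal size".)

    ceph-disk prepare /dev/sdf /dev/nvme0n1   # data on the Micron, journal partition on the NVMe
    ceph-disk activate /dev/sdf1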

Hope this helps,
Christian

Kind regards,
Christian Theune

--
Christian Theune · c...@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · http://flyingcircus.io
Forsterstraße 29 · 06112 Halle (Saale) · Germany
HR Stendal HRB 21169 · Managing Directors: Christian Theune, Christian Zagrodnick

