That PDF specifically calls for P3700 NVMe SSDs, not the consumer 750. You generally need high-endurance drives for this.

I’m using 1x400GB Intel P3700 per 9 OSDs (so 4xP3700 per 36 disk chassis).
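The arithmetic behind that ratio can be sketched with Ceph's documented journal-sizing rule of thumb (osd journal size = 2 * expected throughput * filestore max sync interval). The throughput and sync-interval values below are illustrative assumptions, not figures from this thread:

```python
# Rough journal-sizing sketch for 1x 400GB P3700 serving 9 OSDs.
# ASSUMPTIONS: per-OSD throughput and sync interval are example values.

nvme_capacity_gb = 400          # one Intel P3700
osds_per_nvme = 9               # ratio used above

expected_throughput_mb_s = 200  # assumed spinner throughput per OSD
sync_interval_s = 5             # default filestore max sync interval

journal_mb = 2 * expected_throughput_mb_s * sync_interval_s  # per OSD
total_gb = osds_per_nvme * journal_mb / 1024.0

print(f"journal per OSD: {journal_mb} MB")
print(f"total on NVMe:   {total_gb:.1f} GB of {nvme_capacity_gb} GB")
```

Even with generous headroom, 9 journals sized this way use only a small fraction of a 400GB drive; endurance and sustained write performance, not capacity, are the limiting factors.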

> On 11 Feb 2016, at 17:56, Michael <mabarkd...@gmail.com> wrote:
> 
> Alex Leake <A.M.D.Leake@...> writes:
> 
>> 
>> Hello Michael,
>> 
>> I maintain a small Ceph cluster at the University of Bath; our cluster
>> consists of:
>> 
>> Monitors:
>> 3 x Dell PowerEdge R630
>> 
>> - 2x Intel(R) Xeon(R) CPU E5-2609 v3
>> - 64GB RAM
>> - 4x 300GB SAS (RAID 10)
>> 
>> OSD Nodes:
>> 6 x Dell PowerEdge R730XD & MD1400 Shelves
>> 
>> - 2x Intel(R) Xeon(R) CPU E5-2650
>> - 128GB RAM
>> - 2x 600GB SAS (OS - RAID1)
>> - 2x 200GB SSD (PERC H730)
>> - 14x 6TB NL-SAS (PERC H730)
>> - 12x 4TB NL-SAS (PERC H830 - MD1400)
>> 
>> Please let me know if you want any more info.
>> 
>> In my experience thus far, I've found this ratio is not useful for cache
>> tiering etc. - the SSDs are in a separate pool.
>> 
>> If I could start over, I'd go for fewer OSDs per host - and no SSDs (or a
>> much better ratio, like 4:1).
>> 
>> Kind Regards,
>> Alex.
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> 
> 
> I'm really glad you noted this - I was just following the Red Hat/Supermicro
> deployment reference architecture
> (https://www.redhat.com/en/files/resources/en-rhst-cephstorage-supermicro-INC0270868_v2_0715.pdf);
> page 11 notes 12 disks per 7xx-series Intel SSD, so I was debating whether
> it might be suitable. I try to have only 4 spinning disks per SSD cache.
> 
> If I get 4TB NL-SAS drives, how big would the SSD need to be?
