The RocksDB levels are 256MB, 2.5GB, 25GB, and 250GB.  Unless you have a 
workload that uses a lot of metadata, covering the first 3 levels and providing 
room for compaction should be fine.  To allow for compaction room, 60GB should 
be sufficient.  Add 4GB to accommodate the WAL and you're at a nice power of 
two, 64GB.
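A quick back-of-the-envelope check of the budget above (a sketch using the approximate level sizes quoted in this post, not official Ceph guidance):

```python
# Approximate RocksDB level sizes from the post above, in GB.
levels_gb = [0.25, 2.5, 25]            # 256 MB, 2.5 GB, 25 GB

budget_gb = 60                          # proposed block.db budget
headroom_gb = budget_gb - sum(levels_gb)  # left over for compaction
wal_gb = 4
total_gb = budget_gb + wal_gb

print(f"first three levels: {sum(levels_gb):.2f} GB")
print(f"compaction headroom within {budget_gb} GB: {headroom_gb:.2f} GB")
print(f"total with WAL: {total_gb} GB")
```

So roughly 28GB of live data plus ~32GB of compaction headroom, and the WAL brings the partition to 64GB.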

David Byte
Sr. Technology Strategist
SCE Enterprise Linux 
SCE Enterprise Storage
Alliances and SUSE Embedded
db...@suse.com
918.528.4422

On 1/31/20, 8:16 AM, "ad...@medent.com" <ad...@medent.com> wrote:

    vitalif@yourcmc.ru wrote:
    > I think 800 GB NVMe per 2 SSDs is overkill. 1 OSD usually only 
    > requires 30 GB of block.db, so 400 GB per OSD is a lot. On the other 
    > hand, does the 7300 have twice the IOPS of the 5300? In fact, I'm not 
    > sure if a 7300 + 5300 OSD will perform better than just a 5300 OSD at all.
    > 
    > It would be interesting if you could benchmark & compare it though :)
    
    The documentation I read said it was 4% of the block device.  I've also 
been told the rule of thumb is basically 3/30/300 GB.  
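The two rules of thumb quoted above can give quite different answers; a small sketch comparing them (the 7.68TB capacity and the tier-rounding helper are illustrative assumptions, not from any Ceph tooling):

```python
# "4% of the block device" rule vs. the 3/30/300 GB rule of thumb.
device_tb = 7.68                        # e.g. a 7.68 TB data SSD
four_percent_gb = device_tb * 1000 * 0.04

def tier_3_30_300(db_gb):
    """Round a block.db budget down to the nearest 3/30/300 GB tier
    (hypothetical helper for illustration only)."""
    for tier in (300, 30, 3):
        if db_gb >= tier:
            return tier
    return db_gb

print(f"4% rule: {four_percent_gb:.1f} GB")          # 307.2 GB
print(f"3/30/300 tier: {tier_3_30_300(four_percent_gb)} GB")
```

For a 7.68TB drive the 4% rule lands at ~307GB, i.e. in the 300GB tier, which is far more than the ~30GB per OSD mentioned earlier in the thread.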
    
    The 7.68TB 5300 PRO does 11k random write IOPS; the 800GB 7300 MAX NVMe 
does 60k random write IOPS.  The Micron white paper uses 9200 MAXes with 
5210 SATA SSDs.  The only reason I am going for the 5300s is a bit more 
write endurance.
    _______________________________________________
    ceph-users mailing list -- ceph-users@ceph.io
    To unsubscribe send an email to ceph-users-le...@ceph.io
    
