Hi,

I'm running a 3-node Ceph cluster for VM block storage (Proxmox/KVM).

Replication is set to 3.

Previously, we were running 1 x Intel Optane 905P 960GB
<https://ark.intel.com/content/www/us/en/ark/products/129834/intel-optane-ssd-905p-series-960gb-1-2-height-pcie-x4-20nm-3d-xpoint.html>
disk per node, with 4 x OSDs per drive, for total usable storage of 960 GB.
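For reference, we carved each Optane into 4 OSDs with something along these
lines (quoting from memory rather than our deployment notes; device name is
a placeholder):

    ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1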

Performance was good even without significant tuning, which I assume is
largely thanks to the Optane disks.

However, we need more storage space.

We have some old 800 GB SSDs we could potentially use (Intel S3610
<https://ark.intel.com/content/www/us/en/ark/products/82936/intel-ssd-dc-s3610-series-800gb-2-5in-sata-6gb-s-20nm-mlc.html>).

I know it's possible to put the WAL/RocksDB on an Optane disk and use
normal SSDs as the OSD data devices. I assume we'd go down to a single OSD
per disk if running normal SATA SSDs. However, others report that the
performance gain from this isn't that great (e.g.
https://yourcmc.ru/wiki/Ceph_performance).
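If we did go that route, my understanding is that creating each OSD would
look roughly like this (device names and the pre-made DB partition on the
Optane are placeholders for our layout, untested):

    # S3610 as the data device, a partition on the Optane for RocksDB;
    # BlueStore puts the WAL on the DB device when only --block.db is given
    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.db /dev/nvme0n1p1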

Each of our 3 nodes has 8 drive bays, so we could populate them with 24 x
800 GB SSDs in total (19.2 TB raw, or roughly 6.4 TB usable with 3x
replication, versus 960 GB today). My questions are:

   1. For the Intel S3610s, should we still run 1 OSD per disk?
   2. How does performance (IOPS and latency) scale as the number of disks
   increases? (This is for VM block storage.)
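In case it helps frame answers to question 2, this is roughly how we plan
to compare configurations from a client node (pool and image names are
placeholders):

    # cluster-level 4K random write baseline
    rados bench -p rbd 60 write -b 4096 -t 16

    # per-RBD-image numbers, closer to what a VM actually sees
    fio --name=rbdtest --ioengine=rbd --pool=rbd --rbdname=testimg \
        --rw=randwrite --bs=4k --iodepth=32 --time_based --runtime=60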

Thanks,
Victor