On 13/11/2025 at 18:25, Matthias Riße wrote:
> Hey Patrick! Thanks, that's very insightful.
> We also have HDDs only and 25 Gb/s Ethernet between the machines, so
> it is similar to your setup, except we have >=8 disks per machine.
> Your workload seems more sequential than what I am expecting from our
> VMs, so maybe that favors EC for you. Out of curiosity, did you try
> the same benchmarks with 3x replication too?
Typical use of this storage is infrequent I/O operations, but
potentially by several hundred processes on several dozen HDF5 files.
My nodes can be upgraded to up to 12 OSDs/node.
Replication is not a strategy for this storage as it significantly
increases the storage cost/TB. I have another cluster, 3 nodes, running
Proxmox and hosting VMs on replicated Ceph storage. This allows very
fast VM migration/restart when a node crashes. But that cluster is
quite old (>10 years) and benchmarking it would not give any pertinent
information. It will be replaced in the next few years.
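For what it's worth, the cost gap is easy to put in numbers. Here is a
minimal sketch (plain Python; 8+4 and 7+5 are the profiles discussed
later in this thread, and 3x replication is just the k=1, m=2 case):

    # Usable fraction of raw capacity: k data chunks out of k+m stored.
    def usable_fraction(k, m):
        return k / (k + m)

    for label, k, m in [("3x replication", 1, 2),
                        ("EC 8+4", 8, 4),
                        ("EC 7+5", 7, 5)]:
        print(f"{label}: {usable_fraction(k, m):.0%} usable,"
              f" {(k + m) / k:.2f}x raw per usable TB")

So 3x replication needs twice the raw disk of EC 8+4 for the same
usable capacity, which is the cost/TB point above.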
> The raw numbers look promising. What I am seeing currently is about
> 350 MB/s max sequential writes with the local RAID6s we have, which
> Ceph was able to saturate with a single OSD on an LV for testing. You
> already get about double that, and we have twice as many disks, so we
> should get even more.
> Have you tried 8+4 or 7+5 with 3 OSDs per host too?
No, I have not. With such a setup each PG would rely on more OSDs, and
with few HDDs the PGs would overlap on the same OSDs more frequently?
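To put that overlap concern in numbers, here is a rough sketch. It
assumes uniform random placement, ignores CRUSH failure-domain rules,
and uses hypothetical 18-OSD (6 hosts x 3) and 36-OSD layouts, not my
actual cluster:

    from math import comb

    def p_disjoint(total_osds, width):
        # Chance that two independently placed PGs of the given width
        # (width = k+m) share no OSD, under uniform random placement.
        if 2 * width > total_osds:
            return 0.0
        return comb(total_osds - width, width) / comb(total_osds, width)

    print(p_disjoint(18, 12))   # 0.0    -> any two width-12 PGs must overlap
    print(p_disjoint(36, 12))   # ~0.002 -> still near-certain overlap

With an 8+4 or 7+5 profile (width 12), virtually every pair of PGs
shares OSDs even on 36 OSDs, so the overlap seems unavoidable with few
HDDs.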
Patrick