> On Jul 10, 2025, at 9:47 AM, Peter Eisch <pe...@boku.net> wrote:
>
> My storage is NVMe, U.2 in R640s (Intel P4600/P4610s). The cluster started
> with Luminous and is running Reef now -- it's been migrated through many
> types of orchestrators.
>
> If there are any studies on multiple OSDs per drive, I am open to learning.
I suspected so. You may find this blog post useful: https://ceph.io/en/news/blog/2023/reef-osds-per-nvme/

TL;DR: with Reef the benefit should be gone, and you can save CPU and memory by deploying one OSD per drive, especially at that capacity. With 60+ TB QLC-class SSDs I speculate that the dynamics may be different.

> Did you select 3DWPD SSDs empirically, or out of an abundance of caution?

I asked this because both the 3.2 TB nominal size and the SKUs you mentioned are 3 DWPD models, aka "mixed use". These are identical hardware to the 1 DWPD "read intensive" SKUs, just with the overprovisioning slider adjusted. I have yet to see a Ceph cluster that would burn through "read intensive" endurance in less than 8 years, YMMV.

A few years back I had a conversation with a certain system manufacturer who was spec'ing mixed-use drives. I advised them that they could improve their COGS and customer pricing by switching to read-intensive, and in many cases QLC, SKUs. The response was that they needed high endurance because of balancing and scrubs. Balancing contributes a trivial volume of writes, and scrubs are only reads, so that was very much a non sequitur.

That said, if you haven't updated the firmware on those drives to the latest, I highly recommend doing so. You can download SST (the Solidigm Storage Tool) from the Solidigm web site; if your drives came from Dell, it may refuse to update them, and you might need to use DSU instead.

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
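P.S. The endurance point is easy to sanity-check with back-of-the-envelope arithmetic. A quick sketch below; the 1 TB/day host-write rate is a purely hypothetical figure, and the capacities, DWPD ratings, and 5-year warranty term are illustrative assumptions, not vendor data:

```python
def endurance_years(capacity_tb: float, dwpd: float,
                    host_writes_tb_per_day: float,
                    warranty_years: float = 5.0) -> float:
    """Years until the rated endurance (TBW) is exhausted at a given write rate.

    Rated TBW = capacity * DWPD * 365 days * warranty years.
    """
    tbw = capacity_tb * dwpd * 365 * warranty_years
    return tbw / host_writes_tb_per_day / 365

# A 3.2 TB mixed-use (3 DWPD) drive vs. a 3.84 TB read-intensive
# (1 DWPD) sibling, both absorbing a hypothetical 1 TB of host writes/day:
print(round(endurance_years(3.2, 3.0, 1.0), 1))   # mixed use -> 48.0 years
print(round(endurance_years(3.84, 1.0, 1.0), 1))  # read intensive -> 19.2 years
```

Even the read-intensive SKU would take roughly two decades to wear out at that write rate, which is why mixed-use endurance is rarely worth the capacity and cost trade-off.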