The OP's numbers suggest, IIRC, 120GB-ish for WAL+DB, though depending on the 
workload, spillover could of course still be a thing.
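
One way to keep an eye on that: BlueFS exposes per-OSD counters for how much 
DB/WAL data has leaked onto the slow device, and the cluster raises a 
BLUEFS_SPILLOVER health warning once it happens. Below is a rough sketch, not 
a polished tool, that reads those counters from a local OSD's admin socket. 
The counter names (db_used_bytes / db_total_bytes / slow_used_bytes) are what 
recent releases expose, but double-check them against your version's perf 
dump output.

    #!/usr/bin/env python3
    # Rough spillover check for one OSD. Assumes it runs on the OSD host
    # with access to the admin socket, and that the bluefs counter names
    # match your Ceph release -- verify with `ceph daemon osd.N perf dump`.
    import json
    import subprocess
    import sys

    def bluefs_counters(osd_id: int) -> dict:
        """Fetch the 'bluefs' section of perf dump via the admin socket."""
        out = subprocess.check_output(
            ["ceph", "daemon", f"osd.{osd_id}", "perf", "dump"]
        )
        return json.loads(out)["bluefs"]

    def main() -> None:
        osd_id = int(sys.argv[1])
        b = bluefs_counters(osd_id)
        gib = 2 ** 30
        db_used = b["db_used_bytes"] / gib
        db_total = b["db_total_bytes"] / gib
        slow_used = b["slow_used_bytes"] / gib
        print(f"osd.{osd_id}: DB {db_used:.1f}/{db_total:.1f} GiB used, "
              f"{slow_used:.1f} GiB spilled to the slow device")
        if slow_used > 0:
            print("  -> spillover present; the DB volume is too small "
                  "(or overdue for compaction) for this workload")

    if __name__ == "__main__":
        main()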

> 
> I have certainly seen cases where the OMAPs have not stayed within the 
> RocksDB/WAL NVMe space and have been going down to disk.
> 
> This was on a large cluster with a lot of objects, but the disks that were 
> being used for the non-EC pool were seeing a lot more actual disk activity 
> than the other disks in the system.
> 
> Moving the non-EC pool onto NVMe helped with a lot of operations that needed 
> to be done to clean up a lot of orphaned objects.
> 
> Yes, this was admittedly a large cluster with a lot of ingress data.
> 
> Darren Soothill
> 
> Want a meeting with me: https://calendar.app.google/MUdgrLEa7jSba3du9
> 
> Looking for help with your Ceph cluster? Contact us at https://croit.io/
> 
> croit GmbH, Freseniusstr. 31h, 81247 Munich 
> CEO: Martin Verges - VAT-ID: DE310638492 
> Com. register: Amtsgericht Munich HRB 231263 
> Web: https://croit.io/ | YouTube: https://goo.gl/PGE1Bx
> 
> 
> 
> 
>> On 29 May 2024, at 21:24, Anthony D'Atri <a...@dreamsnake.net> wrote:
>> 
>> 
>> 
>>> You also have the metadata pools used by RGW that ideally need to be on 
>>> NVMe.
>> 
>> The OP seems to intend shared NVMe for WAL+DB, so that the omaps are on NVMe 
>> that way.
>> 
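
Re Darren's point above about moving the non-EC / index pool onto NVMe: the 
usual route is a CRUSH rule restricted to the nvme device class, then 
repointing the pool at it. A minimal sketch below; the rule and pool names 
are just placeholders, and it only prints the commands rather than running 
them, so nothing rebalances until you do it deliberately.

    #!/usr/bin/env python3
    # Print (not run) the commands to steer a pool onto NVMe-class OSDs.
    # Rule and pool names are placeholders -- substitute your own.
    import shlex

    RULE = "meta-nvme"                      # placeholder rule name
    POOL = "default.rgw.buckets.index"      # substitute the pool to move

    commands = [
        # replicated rule: root 'default', host failure domain, nvme class
        ["ceph", "osd", "crush", "rule", "create-replicated",
         RULE, "default", "host", "nvme"],
        # repoint the pool; this starts a backfill onto the NVMe OSDs
        ["ceph", "osd", "pool", "set", POOL, "crush_rule", RULE],
    ]

    for cmd in commands:
        print(" ".join(shlex.quote(part) for part in cmd))

Swapping the crush_rule will of course shuffle the pool's PGs onto the NVMe 
OSDs, so plan for the backfill traffic.
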
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
