[ceph-users] Re: tuning for backup target cluster

2024-06-04 Thread Lukasz Borek
> You could check if your devices support NVMe namespaces and create more than one namespace on the device.

Wow, tricky. Will give it a try. Thanks!

Łukasz Borek
luk...@borek.org.pl

[ceph-users] Re: tuning for backup target cluster

2024-06-04 Thread Robert Sander
Hi,

On 6/4/24 16:15, Anthony D'Atri wrote:
> I've wondered for years what the practical differences are between using a namespace and a conventional partition.

Namespaces show up as separate block devices in the kernel. The orchestrator will not touch any devices that contain a partition table.
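A minimal sketch of how cephadm could then consume such namespaces, assuming a drive-group spec that takes the HDDs as data devices and the non-rotational namespace(s) as shared DB devices (the host pattern and filters are placeholders; size or path filters can narrow which namespace is picked up):

    # Hypothetical OSD service spec: HDDs become data devices, the NVMe
    # namespace(s) become shared RocksDB/WAL devices.
    cat > osd-spec.yml <<'EOF'
    service_type: osd
    service_id: hdd-with-nvme-db
    placement:
      host_pattern: 'osd*'
    spec:
      data_devices:
        rotational: 1
      db_devices:
        rotational: 0
    EOF
    ceph orch apply osd -i osd-spec.yml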

[ceph-users] Re: tuning for backup target cluster

2024-06-04 Thread Anthony D'Atri
Or partition, or use LVM. I've wondered for years what the practical differences are between using a namespace and a conventional partition.

> On Jun 4, 2024, at 07:59, Robert Sander wrote:
>
> On 6/4/24 12:47, Lukasz Borek wrote:
>> Using cephadm, is it possible to cut part of the NVME drive for OSD and leave rest space for RocksDB/WAL?
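If LVM is used, a minimal sketch, assuming a 12-OSD host, a single /dev/nvme0n1 and ~120 GB DB LVs (names and sizes are illustrative; with cephadm the LVs would instead be referenced from an OSD spec):

    # Carve the NVMe into a volume group with one RocksDB/WAL LV per HDD OSD.
    pvcreate /dev/nvme0n1
    vgcreate ceph-db /dev/nvme0n1
    for i in $(seq 0 11); do
        lvcreate -L 120G -n db-$i ceph-db
    done

    # Manual (non-cephadm) pairing of one HDD with one DB LV:
    ceph-volume lvm create --data /dev/sda --block.db ceph-db/db-0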

[ceph-users] Re: tuning for backup target cluster

2024-06-04 Thread Robert Sander
On 6/4/24 12:47, Lukasz Borek wrote:
> Using cephadm, is it possible to cut part of the NVME drive for OSD and leave rest space for RocksDB/WAL?

Not out of the box. You could check if your devices support NVMe namespaces and create more than one namespace on the device. The kernel then sees multiple block devices.
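A rough sketch of the nvme-cli steps, assuming the controller reports namespace-management support (the --nsze/--ncap values are block counts that depend on the LBA format, so the numbers below are placeholders, and this wipes the drive):

    # Does the controller support namespace management, and how many
    # namespaces / which controller ID does it report?
    nvme id-ctrl /dev/nvme0 -H | grep -i 'ns management'
    nvme id-ctrl /dev/nvme0 | grep -E '^nn|^cntlid'

    # Replace the factory full-size namespace with two smaller ones.
    nvme delete-ns /dev/nvme0 -n 1
    nvme create-ns /dev/nvme0 --nsze=390625000 --ncap=390625000 --flbas=0
    nvme create-ns /dev/nvme0 --nsze=390625000 --ncap=390625000 --flbas=0
    nvme attach-ns /dev/nvme0 -n 1 -c 0   # use the cntlid reported above
    nvme attach-ns /dev/nvme0 -n 2 -c 0
    nvme reset /dev/nvme0                 # rescan so /dev/nvme0n1 and /dev/nvme0n2 appear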

[ceph-users] Re: tuning for backup target cluster

2024-06-04 Thread Lukasz Borek
> I have certainly seen cases where the OMAPS have not stayed within the RocksDB/WAL NVME space and have been going down to disk.

How can I monitor OMAP size and check that it does not spill out of the NVMe?

> The OP's numbers suggest IIRC like 120GB-ish for WAL+DB, though depending on workload spillover could of course still be a thing.
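A few ways to watch for that, as a sketch (osd.0 is a placeholder; `ceph daemon` has to run on the host carrying that OSD):

    # Per-OSD OMAP and metadata usage at a glance.
    ceph osd df tree

    # Ceph raises a BLUEFS_SPILLOVER health warning once the DB overflows
    # onto the slow (HDD) device.
    ceph health detail | grep -i spillover

    # BlueFS counters: compare db_used_bytes against slow_used_bytes.
    ceph daemon osd.0 perf dump bluefs | grep -E 'db_used_bytes|slow_used_bytes'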

[ceph-users] Re: tuning for backup target cluster

2024-06-03 Thread Anthony D'Atri
The OP's numbers suggest IIRC like 120GB-ish for WAL+DB, though depending on workload spillover could of course still be a thing.

> I have certainly seen cases where the OMAPS have not stayed within the RocksDB/WAL NVME space and have been going down to disk.
>
> This was on a large cluster with a lot of objects.

[ceph-users] Re: tuning for backup target cluster

2024-06-03 Thread Darren Soothill
I have certainly seen cases where the OMAPS have not stayed within the RocksDB/WAL NVME space and have been going down to disk.

This was on a large cluster with a lot of objects, but the disks that were being used for the non-EC pool were seeing a lot more actual disk activity than the other disks.

[ceph-users] Re: tuning for backup target cluster

2024-05-29 Thread Anthony D'Atri
> You also have the metadata pools used by RGW that ideally need to be on NVME.

The OP seems to intend shared NVMe for WAL+DB, so that the omaps are on NVMe that way.
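If some NVMe capacity were instead set aside as its own OSDs, pinning those pools would look roughly like this (pool names are the stock defaults for the "default" zone and may differ):

    # Replicated CRUSH rule restricted to OSDs with device class "nvme".
    ceph osd crush rule create-replicated nvme-only default host nvme

    # Move the small, omap-heavy RGW pools onto that rule.
    ceph osd pool set default.rgw.buckets.index crush_rule nvme-only
    ceph osd pool set default.rgw.meta crush_rule nvme-only
    ceph osd pool set default.rgw.log crush_rule nvme-only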

[ceph-users] Re: tuning for backup target cluster

2024-05-29 Thread Darren Soothill
So, a few questions I have around this. What is the network you have for this cluster?

Changing bluestore_min_alloc_size would be the last thing I would even consider. In fact I wouldn't be changing it at all, as you would be in untested territory. The challenge with making these sorts of things perform i
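For reference, the current defaults can simply be inspected rather than changed (the value is baked into each OSD at creation time, so a config change would only affect newly created OSDs); a minimal check:

    # 4096 bytes for both HDD- and SSD-backed OSDs in recent releases.
    ceph config get osd bluestore_min_alloc_size_hdd
    ceph config get osd bluestore_min_alloc_size_ssd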

[ceph-users] Re: tuning for backup target cluster

2024-05-27 Thread Anthony D'Atri
>> Is this a chassis with universal slots, or is that NVMe device maybe M.2 or rear-cage?
>
> 12 * HDD via LSI jbod + 1 PCI NVME.

All NVMe devices are PCI ;).

> Now it's 1.6TB, for the production plan is to use 3.2TB.
>
>> `ceph df`
>> `ceph osd dump | grep pool`
>> So we can see wh
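A sketch of that inspection, with the erasure-code-profile commands added as an assumption (useful when the data pool is EC):

    # Capacity and per-pool usage.
    ceph df

    # Pool definitions: replicated vs. EC, pg_num, crush rule, application.
    ceph osd dump | grep pool

    # If the data pool is erasure coded, show its k/m layout and failure domain.
    ceph osd erasure-code-profile ls
    ceph osd erasure-code-profile get default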

[ceph-users] Re: tuning for backup target cluster

2024-05-27 Thread Lukasz Borek
Anthony, Darren,

Thanks for the response. Answering your questions:

> What is the network you have for this cluster?

25GB/s

> Is this a chassis with universal slots, or is that NVMe device maybe M.2 or rear-cage?

12 * HDD via LSI jbod + 1 PCI NVME. Now it's 1.6TB, for the production plan is to use 3.2TB.

[ceph-users] Re: tuning for backup target cluster

2024-05-25 Thread Anthony D'Atri
> Hi Everyone,
>
> I'm putting together a HDD cluster with an EC pool dedicated to the backup environment. Traffic via s3. Version 18.2, 7 OSD nodes, 12 * 12TB HDD + 1 NVMe each.

QLC, man. QLC. That said, I hope you're going to use that single NVMe SSD for at least the index pool. Is this a chassis with universal slots, or is that NVMe device maybe M.2 or rear-cage?
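For context, a rough sketch of what the EC data pool for such a 7-node backup cluster might look like — the profile name, k=4/m=2 and pg_num are assumptions, not the OP's actual settings:

    # 4+2 erasure coding fits 7 hosts with host as the failure domain.
    ceph osd erasure-code-profile set backup-ec k=4 m=2 crush-failure-domain=host

    # RGW bucket data pool on that profile; index/meta pools stay replicated
    # (ideally on NVMe, per the advice above).
    ceph osd pool create default.rgw.buckets.data 256 256 erasure backup-ec
    ceph osd pool application enable default.rgw.buckets.data rgw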