[ceph-users] Best practice and expected benefits of using separate WAL and DB devices with Bluestore

2024-04-19 Thread Niklaus Hofer
[...] use cases. I am looking for best practices and in general just trying to avoid any obvious mistakes. Any advice is much appreciated. Sincerely, Niklaus Hofer
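A minimal sketch of the usual layout, using ceph-volume with hypothetical device names; a separate WAL device is generally only worth specifying when it is faster than the DB device, since the WAL otherwise lives on the DB device:

    # HDD for data, a partition/LV on NVMe for the RocksDB metadata
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
    # add --block.wal only if an even faster device is available:
    # ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1 --block.wal /dev/nvme1n1p1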

[ceph-users] Autoscale warnings despite autoscaler being off

2025-06-19 Thread Niklaus Hofer
[...] my pools!)? Sincerely, Niklaus Hofer
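A quick sketch of the checks that usually narrow this down (the pool name is a placeholder):

    ceph osd pool autoscale-status
    ceph osd pool get <pool> pg_autoscale_mode
    # default mode applied to newly created pools
    ceph config get mon osd_pool_default_pg_autoscale_mode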

[ceph-users] Re: RadosGW: Even more large omap objects after resharding

2025-06-19 Thread Niklaus Hofer
[...] too. Glad to hear others are thinking alike. I think I saw an object map with 1.3M object references, so I guess 50'000 might still be too high. But we'll probably do 50'000 anyway at first and see whether it helps at all. I'll definitely let you know how it's going! [...]
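For anyone following along, a hedged sketch of inspecting bucket shard fill and resharding manually rather than waiting for dynamic resharding (bucket name and shard count are placeholders):

    # list buckets with their objects-per-shard counts and fill status
    radosgw-admin bucket limit check
    # reshard a specific bucket to a higher shard count
    radosgw-admin bucket reshard --bucket=my-bucket --num-shards=211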

[ceph-users] Re: RadosGW: Even more large omap objects after resharding

2025-06-19 Thread Niklaus Hofer
[...] that pool. Now I have 167 omap objects that are not quite as big, but still too large. Sincerely, Niklaus Hofer On 19/06/2025 14.48, Eugen Block wrote: Hi, the warnings about large omap objects are reported when deep-scrubs happen. So if you resharded the bucket (or Ceph did that for you [...]
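As a rough sketch of identifying the remaining offenders and re-evaluating the warning after resharding (the index pool name is an assumption, the exact log wording varies between releases, and pool-wide deep-scrub needs a newer release):

    ceph health detail | grep -i 'large omap'
    # the OSDs log the offending object, roughly: "Large omap object found. Object: ..."
    grep -i 'large omap object found' /var/log/ceph/ceph-osd.*.log
    # re-running deep scrubs on the index pool re-evaluates the warning
    ceph osd pool deep-scrub default.rgw.buckets.index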

[ceph-users] RadosGW: Even more large omap objects after resharding

2025-06-19 Thread Niklaus Hofer
[...] reduce `rgw_max_objs_per_shard` from 100'000 to something like 10'000 to have the buckets resharded more aggressively? But then again, that assumes a lot. For example, that assumes that the num_objects counter in the bucket stats does not count up on versioned objects. So [...]
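If that route is taken, a minimal sketch of lowering the threshold for the RGW daemons and then watching the reshard queue (10'000 is just the figure discussed above, not a recommendation):

    ceph config set client.rgw rgw_max_objs_per_shard 10000
    ceph config get client.rgw rgw_max_objs_per_shard
    # dynamic resharding queues affected buckets; inspect the queue with:
    radosgw-admin reshard list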

[ceph-users] Re: Autoscale warnings despite autoscaler being off

2025-06-19 Thread Niklaus Hofer
Dear Eugen My hero! This resolved this issue - the warnings are now gone. We did try to restart the mons before, but never thought to restart the mgrs... Sincerely Niklaus Hofer On 19/06/2025 14.52, Eugen Block wrote: Default question: have you tried to fail the mgr? ;-) ceph mgr fail
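For the archive, the full sequence is just the failover plus a health re-check (this assumes at least one standby mgr is running; older releases require naming the active mgr explicitly):

    ceph mgr fail          # hand the active role to a standby mgr
    ceph -s                # confirm a new mgr has taken over
    ceph health detail     # the stale autoscale warnings should now be gone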

[ceph-users] Re: RadosGW: Even more large omap objects after resharding

2025-06-19 Thread Niklaus Hofer
[...] PGs into the queue, it doesn't mean they will be scrubbed immediately. And depending on the PG size, the scrubbing can take some time, too. I did check the OSD logs at the time, so yes, I can confirm that they all went through. Sincerely, Niklaus Hofer Quoting Niklaus Hofer: [...]
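A hedged sketch of confirming the deep scrubs went through without reading every OSD log (the pool name is a placeholder; the column layout and log paths differ a little between releases):

    # per-PG state including the last (deep-)scrub timestamps for the index pool
    ceph pg ls-by-pool default.rgw.buckets.index
    # or look for the completion messages in the cluster log
    grep 'deep-scrub ok' /var/log/ceph/ceph.log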

[ceph-users] Re: RadosGW buckets stuck in lifecycle "PROCESSING"

2025-06-25 Thread Niklaus Hofer
Hi, just letting you know the situation has been resolved. The bucket is no longer in status PROCESSING. I didn't end up needing to do anything, just wait for long enough. In the end it was like 60 hours. Sincerely, Niklaus Hofer On 24/06/2025 08.35, Niklaus Hofer wrote: Dear all [...]

[ceph-users] RadosGW buckets stuck in lifecycle "PROCESSING"

2025-06-24 Thread Niklaus Hofer
[...] question has been asked before on this ML [1] but in that thread, the bucket reverted back to UNINITIAL. I was hoping that maybe it would be the same here, but after waiting for a good while, I've lost hope on that... Many thanks in advance, Niklaus Hofer Links: [1] https://www.mail-arch
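For anyone hitting the same state, a rough sketch of inspecting and nudging lifecycle processing (the bucket name is a placeholder; per-bucket lc process needs a reasonably recent release):

    # show per-bucket lifecycle status (UNINITIAL / PROCESSING / COMPLETE)
    radosgw-admin lc list
    # manually kick off lifecycle processing for one bucket
    radosgw-admin lc process --bucket=my-bucket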