Hi,
You may want to check out this doc:
https://docs.ceph.com/en/quincy/radosgw/config-ref/#lifecycle-settings
As I understand it in short:
- if there are thousands of buckets, we should increase rgw_lc_max_worker.
- if there are a few buckets that have hundreds of thousands of objects, we
should increase rgw_lc_max_wq_size (see the sketch below).
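For illustration, a minimal sketch of raising both settings; the client.rgw
section and the values are assumptions for illustration, not tested
recommendations:

    # more lifecycle worker threads, so more buckets are processed in parallel
    ceph config set client.rgw rgw_lc_max_worker 8
    # larger per-worker work queue, for buckets with very large object counts
    ceph config set client.rgw rgw_lc_max_wq_size 12
    # a restart of the RGW daemons may be needed for the new values to take effect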
I'm hoping to see at least one more, if not more than that, but I have no
crystal ball. I definitely support this idea, and strongly suggest it's given
some thought. There have been a lot of delays/missed releases due to all of the
lab issues, and they have significantly impacted the release cadence.
Hi Experts,
We plan to set up a Ceph Object Gateway to support an S3 workload that will
need to delete 100M files daily via lifecycle.
We would appreciate your suggestions on settings to handle this kind of scenario.
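For reference, a minimal example of the kind of lifecycle rule involved; the
bucket name, endpoint, and the 1-day expiration below are only placeholders:

    aws s3api put-bucket-lifecycle-configuration \
      --endpoint-url http://rgw.example.com:8080 \
      --bucket my-bucket \
      --lifecycle-configuration '{
        "Rules": [
          {"ID": "expire-daily", "Status": "Enabled",
           "Filter": {"Prefix": ""},
           "Expiration": {"Days": 1}}
        ]
      }'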
Best Regards,
Ha
Hi,
> On 17 Jul 2023, at 12:53, Ponnuvel Palaniyappan wrote:
>
> The typical EOL date (2023-06-01) has already passed for Pacific. Just
> wondering if there's going to be another Pacific point release (16.2.14) in
> the pipeline.
Good point! At least for the possibility of upgrading RBD clusters from N
Hi all,
now that host masks seem to work, could somebody please shed some light on the
relative priority of these settings:
ceph config set osd memory_target X
ceph config set osd/host:A memory_target Y
ceph config set osd/class:B memory_target Z
Which one wins for an OSD on host A in class B?
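In case it helps, one way I could check empirically which value a given OSD
ends up with (osd.0 below is a placeholder for an OSD on host A with device
class B, and I'm using the full option name osd_memory_target):

    # value the mon config database resolves for this daemon
    ceph config get osd.0 osd_memory_target
    # value the running daemon reports as currently in effect
    ceph config show osd.0 osd_memory_target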
Hi,
The typical EOL date (2023-06-01) has already passed for Pacific. Just
wondering if there's going to be another Pacific point release (16.2.14) in
the pipeline.
--
Regards,
Ponnuvel P
It does indeed look to be that bug that I hit.
Thanks.
Luis Domingues
Proton AG
--- Original Message ---
On Monday, July 17th, 2023 at 07:45, Sridhar Seshasayee
wrote:
> Hello Luis,
>
> Please see my response below:
>
> But when I took a look at the memory usage of my OSDs, I was below