Hi,
can you tell us a bit more about your setup? Are RGWs and OSDs colocated
on the same servers? Are there any signs of server overload, like OOM
kills or anything else related to the recovery? Are the disks saturated?
Is this cephadm-managed? What's the current ceph status?
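In case it helps, something along these lines should answer most of that
(just a rough sketch, to be run on the affected hosts while the problem is
happening):

  ceph -s                                   # overall state and recovery progress
  ceph health detail                        # active warnings during the hang
  ceph orch ps | grep rgw                   # if cephadm-managed: RGW daemon state/restarts
  dmesg -T | grep -iE 'oom|out of memory'   # OOM killer activity on the RGW/OSD hosts
  iostat -x 1 10                            # disk saturation (%util, await) during recovery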
Thanks,
Eugen
The only thing I see in the rgw logs during that "hang time" is something like:
2024-07-23T20:00:45.666+0200 7fc751496700 2 rgw data changes log:
RGWDataChangesLog::ChangesRenewThread: start
2024-07-23T20:00:57.072+0200 7fc740c75700 20 rgw notify: INFO: next queues
processing will happen at: Tue Jul 23 20:01:2
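When it hangs the next time, it might also be worth looking at what the
radosgw process is actually waiting on. A rough sketch (the instance name is
a placeholder; with cephadm the "ceph daemon" calls have to be run inside the
RGW container, e.g. via "cephadm enter --name rgw.<service>.<host>.<id>"):

  # raise rgw verbosity at runtime
  ceph config set client.rgw.<instance> debug_rgw 20
  ceph config set client.rgw.<instance> debug_ms 1

  # via the daemon's admin socket
  ceph daemon client.rgw.<instance> objecter_requests   # requests stuck waiting on OSDs?
  ceph daemon client.rgw.<instance> perf dump            # queue lengths and latencies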
Hi,
we've just updated from Pacific (16.2.15) to Quincy (17.2.7) and everything
seems to work, however after some time radosgw stops responding and we have
to restart it.
At first glance, it seems that radosgw sometimes stops responding during
recovery.
Does this maybe have something to do with mclock?
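Since mclock_scheduler became the default op queue in Quincy, checking it
(and, if needed, re-prioritising client traffic) is cheap. A sketch:

  ceph config get osd osd_op_queue          # mclock_scheduler vs. wpq
  ceph config get osd osd_mclock_profile    # balanced / high_client_ops / high_recovery_ops
  # if recovery traffic is starving client (and therefore RGW) I/O,
  # prioritising client ops is one thing to try:
  ceph config set osd osd_mclock_profile high_client_ops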
Hi Patrick,
Thanks for pointing out this issue; it looks consistent with the timing of
the RHEL9 kernel client update on our side.
We are going to confirm this by using only older clients on CentOS7.
Cheers,
Adrien
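In case it is useful for that verification: the MDS reports the kernel
version of each connected kernel client in its session metadata, so you can
check that only the older CentOS 7 clients are mounting the filesystem. A
rough sketch (the MDS name is a placeholder and the jq filter is just one way
to pull the field out):

  ceph tell mds.<name> session ls > sessions.json
  jq '.[] | {id: .id, kernel: .client_metadata.kernel_version}' sessions.json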
On 22/07/2024 at 16:23, Patrick Donnelly wrote:
Hi Adrien,
On Mon, Jul 22, 2024 at 5
> Why would you want to do that?
You would want to do that to have minimal data movement, that is, to limit
the wear on the disks to the bare minimum. If you replace a disk and
re-deploy the OSD with the same ID on the same host with the same device
class, only the PGs that land on this OSD need to be backfilled.
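For what it's worth, the orchestrator's --replace flow is the usual way to get
that behaviour: the OSD ID stays reserved (marked "destroyed") while the disk
is swapped, and it is reused on the next deployment. A sketch, with the OSD id
and device as placeholders:

  # drain and destroy the OSD, but keep its ID reserved for reuse
  ceph orch osd rm <id> --replace

  # after the new disk is in place, re-add it (or let the existing drive group spec pick it up)
  ceph orch daemon add osd <host>:<device>

  # outside the orchestrator, ceph-volume can be told explicitly to reuse a destroyed ID
  ceph-volume lvm prepare --data /dev/<device> --osd-id <id>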
On 23/07/24 08:41, Robert Sander wrote:
> On 7/23/24 08:24, Iztok Gregori wrote:
>> Am I missing something obvious, or is there no way with the Ceph
>> orchestrator to specify an ID during OSD creation?
> Why would you want to do that?
For me there wasn't a "real need", but I could imagine a scenario in