log:
RGWDataChangesLog::ChangesRenewThread: start
2022-12-31T20:09:32.028+ 7fea52ab9700 2 rgw data changes log:
RGWDataChangesLog::ChangesRenewThread: start
2022-12-31T20:09:54.027+ 7fea52ab9700 2 rgw data changes log:
RGWDataChangesLog::ChangesRenewThread: start
On Sat, Dec 31, 2022, Pavin Joseph wrote:
Hey there,
Sorry for the late reply.
If the pg issue isn't solved yet, could you run these:
ceph pg repeer <pgid>
ceph pg repair <pgid>
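(Both commands take the PG ID. If it isn't handy, a rough sketch of how to find it, assuming the PG is reported as stuck or inconsistent:

# list PGs the cluster considers stuck
ceph pg dump_stuck unclean
# inconsistent PGs and scrub errors also show up here
ceph health detail

Then pass the ID reported there, e.g. "2.1a" as a placeholder, to the two commands above.)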
Pavin.
On 29-Dec-22 4:08 AM, Deep Dish wrote:
Hi Pavin,
The following are additional developments. There's one PG that's
stuck and unable to recover. I've attached
fe36debb700 1 mds.fs01.ceph02mon03.rjcxat Updating MDS map to version
131280 from mon.4
I suspect that the file in the log above isn't the culprit. How can I get
to the root cause of MDS slowdowns?
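(For reference, a sketch of commands commonly used to look at slow MDS requests; not a definitive procedure. The daemon name below is taken from the log line above, and admin-socket access on the MDS host is assumed:

ceph health detail
ceph fs status
# in-flight and recent slow requests on the active MDS
ceph daemon mds.fs01.ceph02mon03.rjcxat dump_ops_in_flight
ceph daemon mds.fs01.ceph02mon03.rjcxat dump_historic_ops

Blocked ops listed there usually point at the client, inode, or OSD holding things up.)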
On Tue, Dec 27, 2022 at 3:32 PM Pavin Joseph wrote:
Interesting, the logs show the cras
to primary after it comes online and switching back. Too complicated IMHO.
[0]: https://docs.ceph.com/en/quincy/rados/operations/crush-map/
Kind regards,
Pavin Joseph.
On 28-Dec-22 11:27 AM, Isaiah Tang Yue Shun wrote:
Hi all,
From the documentation, I can only find Ceph Object Gateway multi-site
ack from only that node? I am not too worried
about it as the data is on two other nodes in a 3x replication setup.
Thank you.
Kind regards,
Pavin Joseph.