Hello,
I believe you are hitting https://tracker.ceph.com/issues/50249. I've
also ended up configuring my rgw instances directly using
/etc/ceph/ceph.conf for the time being.
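For reference, a minimal sketch of such a section; the instance name and
values below are placeholders, not taken from my actual setup:

    [client.rgw.myhost.rgw0]
        host = myhost
        rgw_frontends = beast port=8080
        rgw_realm = myrealm
        rgw_zone = myzone

The rgw daemon needs a restart afterwards to pick the section up.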
Hope this helps.
Arnaud
On Fri, 14 May 2021 at 22:04, Jan Kasprzak wrote:
>
> Hello,
>
> I have just upgraded my cluster
On Fri, May 14, 2021 at 09:12:07PM +0200, Mark Schouten wrote:
> It seems (documentation was no longer available, so it took some
> searching) that I needed to run ceph mds deactivate $fs:$rank for every
> MDS I wanted to deactivate.
Ok, so that helped for one of the MDSes. Trying to deactivate a
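For concreteness, with a filesystem named cephfs and rank 1 (both
placeholder values), the old invocation is:

    ceph mds deactivate cephfs:1

On releases where that command has since been removed, the replacement is
to shrink the MDS cluster via max_mds instead, e.g.:

    ceph fs set cephfs max_mds 1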
Hi,
Today I had a very similar case: two NVMe OSDs went down and out. I had a
freshly installed 16.2.1 cluster. Before the failure the disks were under
some load, ~1.5k read IOPS + ~600 write IOPS. When they failed, nothing
helped. After every attempt to restart them I found log messages containing
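In case it helps with triage: assuming the crash module is enabled (the
default on Pacific), the backtraces can usually be pulled with something
like the following; the unit name differs on cephadm/container deployments:

    ceph crash ls
    ceph crash info <crash-id>
    journalctl -u ceph-osd@<osd-id> -n 1000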
Actually, neither of our solutions works very well. Frequently the same OSD
was chosen for multiple chunks:
8.72 9751 0 0 0 408955125760 0 1302 active+clean 2h 224790'12801 225410:49810 [13,1,14,11,18,2,19,13]p13
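One way to check whether the CRUSH rule itself can produce duplicate
mappings, instead of eyeballing pg ls output, is to test the compiled map
offline; the rule id and replica count below are placeholders for the EC
pool's actual values:

    ceph osd getcrushmap -o crushmap.bin
    crushtool -i crushmap.bin --test --rule 1 --num-rep 8 --show-mappings
    crushtool -i crushmap.bin --test --rule 1 --num-rep 8 --show-bad-mappings

A rule with multiple take/choose blocks can legitimately pick the same OSD
in more than one block, and that shows up directly in the --show-mappings
output.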
Hi,
The user deleted 20-30 snapshots and clones from the cluster, and this
seems to be slowing down the whole system.
I've set the snaptrim parameters as low as possible and set
bluefs_buffered_io to true so the user at least gets some speed, but I can
see the object removal from the cluster is s
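For anyone searching the archives later: the snaptrim throttles usually
meant by "as low as possible" are along these lines; the values are
illustrative, not a recommendation:

    ceph config set osd osd_snap_trim_sleep 5
    ceph config set osd osd_pg_max_concurrent_snap_trims 1
    ceph config set osd osd_snap_trim_priority 1
    ceph config set osd bluefs_buffered_io true

Higher sleep and lower priority/concurrency slow trimming down further in
exchange for client I/O.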