maps from the mon, burning CPU while
parsing them.
I wasn't able to find any good documentation on the OSDMap, in
particular why its historical versions need to be kept and why the OSD
seemingly needs so many of them. Can anybody point me in the right
direction? Or is something
Hi Rich,
> I've noticed this a couple of times on Nautilus after doing some large
> backfill operations. It seems the osd map doesn't clear properly after
> the cluster returns to Health OK and builds up on the mons. I do a
> "du" on the mon folder e.g. du -shx /var/lib/ceph/mon/ and this shows
>
Hi Konstantin,
I mean freshly deployed OSDs. Restarted OSDs don't exhibit that behavior.
Best regards,
Jan-Philipp
Hi,
since I just read that documentation page [1] on Friday, I can't tell
you anything that isn't on that page. But that particular problem of
which monitor gets elected should be solvable simply by using
connectivity election mode [2], shouldn't it?
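If I read the docs correctly, switching is a single command (strategy
names as listed in [2]):

  ceph mon set election_strategy connectivity
  ceph mon dump | grep election_strategy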
Apart from the latency to the mon, the stretch
Hi again,
turns out the long bootstrap time was my own fault. I had some down+out
OSDs for quite a long time, which prevented the monitors from trimming
old OSD maps. Makes sense when I think about it, but I didn't before.
Rich's hint to get the cluster back to HEALTH_OK first pointed me in the
right direction.
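For anyone else running into this: the range of osdmap epochs the mons
still keep can be checked with something like (field names as they
appear in the `ceph report` output):

  ceph report 2>/dev/null | grep -E 'osdmap_(first|last)_committed'
  ceph osd tree down

If the gap between first and last committed keeps growing while OSDs
stay down+out, that's the maps piling up.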
Hi Jay,
I'm having the same problem; the setting doesn't affect the warning at all.
I'm currently muting the warning every week or so (because it doesn't
even seem to be present consistently, and every time it disappears for a
moment, the mute is cancelled) with
ceph health mute BLUESTORE_SP
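In case it helps: `ceph health mute` also takes a TTL and a --sticky
flag, which as far as I understand keeps the mute active even if the
alert briefly clears and comes back (the alert code below is just a
placeholder):

  ceph health mute <ALERT_CODE> 4w --sticky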
That package probably contains the vfs_ceph module for Samba. However,
further down, the same page says:
> The above share configuration uses the Linux kernel CephFS client, which is
> recommended for performance reasons.
> As an alternative, the Samba vfs_ceph module can also be used to
> communicate with the Ceph cluster.
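For comparison, a vfs_ceph-based share would look roughly like this
(option names from the vfs_ceph man page; the share name, path and
cephx user are placeholders):

  [cephfs]
      path = /
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba
      read only = no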
Hi everyone,
I had the autoscale_mode set to "on" and the autoscaler went to work and
started adjusting the number of PGs in that pool. Since this implies a
huge shift in data, the reweights that the balancer had carefully
adjusted (in crush-compat mode) are now rubbish, and more and more OSDs
become nearfull.
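To at least stop the autoscaler from making it worse, it can be
switched off per pool (pool name is a placeholder):

  ceph osd pool set <pool> pg_autoscale_mode off
  ceph osd pool autoscale-status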
>>> the reweights
>>> (visible in "ceph osd tree"), whereas the balancer adjusts the "compat
>>> weight-set", which I don't know how to convert back to the old-style
>>> reweights.
>>>
>>> Best regards,
>>> Jan-Philipp
>>
You are basically listing all the reasons one shouldn't have too much
misplacement at once. ;-)
Your best bet is probably pgremapper [1], which I recently learned
about on this list. With `cancel-backfill`, you could stop any running
backfill. With `undo-upmaps` you could then specifically start
Hi everyone,
hope this is the right place to raise this issue.
I stumbled upon a tracker issue [1] that has been stuck in state
"Pending Backport" for 11 months, without even a single backport issue
created - unusually long in my (limited) experience.
Upon investigation, I found that according t
Hey Angelo,
what you're asking for is "Live Migration".
https://docs.ceph.com/en/latest/rbd/rbd-live-migration/ says:
The live-migration copy process can safely run in the background while the new
target image is in use. There is currently a requirement to temporarily stop
using the source image.
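The basic flow, with image/pool names as placeholders, is roughly:

  rbd migration prepare <pool>/<src-image> <pool>/<dst-image>
  rbd migration execute <pool>/<dst-image>
  rbd migration commit <pool>/<dst-image>

(execute copies the data in the background; commit finalizes the
migration once the copy is done.)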