IIRC it can be changed and takes effect immediately. The message is only
an implementation detail: there is no observer registered that explicitly
takes some action when the value is changed, but it is re-read anyway.
It has been some time since I last had to change this value at runtime,
but I'm pretty sure it took effect.
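If you want a quick sanity check, the value can be read back from a
running OSD's admin socket (assuming here that osd.0 runs on the local
host):

# ceph daemon osd.0 config get osd_snap_trim_sleep

That only confirms what the running daemon has stored, not that every
code path re-reads it, but it rules out the injectargs call simply being
ignored.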
It seems that this might be interesting - unfortunately this cannot be
changed dynamically:
# ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.025'
osd.0: osd_snap_trim_sleep = '0.025000' (not observed, change may require restart)
osd.1: osd_snap_trim_sleep = '0.025000' (not observed, change may require restart)
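If it really does need a restart on your version, it is probably worth
also putting the value into ceph.conf so it is picked up whenever the
OSDs are restarted anyway, e.g.:

[osd]
osd_snap_trim_sleep = 0.025

(the value here is just an example, adjust it to whatever you end up
tuning to).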
It's usually the snapshot deletion that triggers slowness. Are you also
deleting/rotating old snapshots when creating new ones?
If that's the case, try increasing osd_snap_trim_sleep a little. Even a
value of 0.025 can help a lot when many snapshots are deleted
concurrently. (That's what we set as the default for our clusters.)
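For context, a rotation as I understand it looks roughly like this (pool,
image and snapshot names below are just placeholders):

# rbd snap create rbd/vm-disk-1@daily-2018-06-27
# rbd snap ls rbd/vm-disk-1
# rbd snap rm rbd/vm-disk-1@daily-2018-06-24

It is the 'rbd snap rm' step that queues snapshot trimming work on the
OSDs, and osd_snap_trim_sleep throttles exactly that work.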
Hi Gregory,
thanks for the link - very interesting talk.
You mentioned the following settings in your talk, but I was not able to
find any documentation for them in the OSD config reference:
(http://docs.ceph.com/docs/luminous/rados/configuration/osd-config-ref/)
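(The current values can at least be dumped from a running OSD's admin
socket, e.g. with something like

# ceph daemon osd.0 config show | grep snap_trim

but that only gives the numbers, not an explanation of what the settings
actually do.)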
My cluster's settings look like this (Luminous):
You may find my talk at OpenStack Boston’s Ceph day last year to be useful:
https://www.youtube.com/watch?v=rY0OWtllkn8
-Greg
On Wed, Jun 27, 2018 at 9:06 AM Marc Schöchlin wrote:
> Hello list,
>
> I currently hold 3 snapshots per rbd image for my virtual systems.
>
> What I miss in the current d
Hello list,
I currently hold 3 snapshots per rbd image for my virtual systems.
What I miss in the current documentation:
* details about the implementation of snapshots
  o implementation details
  o which scenarios create high overhead per snapshot
  o what causes the really short
Hi John
Have you looked at the Ceph documentation?
RBD: http://docs.ceph.com/docs/luminous/rbd/rbd-snapshot/
The Ceph project documentation is really good for most areas. Have a
look at what you can find, then come back with more specific questions!
Thanks
Brian
On Wed, Jun 27, 2018 at 2:24 PM,