Does anybody know about changes to the rbd feature 'striping'? Maybe it is
a deprecated feature? What I mean:
I have volume created by Jewel client on Luminous cluster.
# rbd --user=cinder info solid_rbd/volume-12b5df1e-df4c-4574-859d-22a88415aaf7
rbd image 'volume-12b5df1e-df4c-4574-859d-22a88415aaf7':
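For comparison, this is roughly how an image with explicit (non-default)
striping would be created so that the stripe settings show up in the info
output. As far as I understand, the 'striping' feature is only really
meaningful when non-default stripe parameters are used; the pool and image
names below are just examples:
---
rbd --user=cinder create solid_rbd/test-striping --size 10G \
    --stripe-unit 65536 --stripe-count 4
rbd --user=cinder info solid_rbd/test-striping
---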
Hi,
I used these settings and there are no more slow requests in the cluster.
---
ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'
ceph tell osd.* injectargs '--osd_scrub_load_threshold 0.3'
ceph tell osd.* injectargs '--osd_scrub_chunk_max 6'
---
Yes, scrubbing is slower now, but
Use a get with the second syntax to see the currently running config.
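For example, to see what a running OSD currently has in effect (osd.21 is
just an example id, and this has to be run on the node hosting that OSD,
since it goes through the admin socket):
---
ceph daemon osd.21 config get osd_scrub_sleep
ceph daemon osd.21 config show | grep scrub
---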
On Sun, Jan 28, 2018, 3:41 AM Karun Josy wrote:
> Hello,
>
> Sorry for bringing this up again.
>
> What is the proper way to adjust the scrub settings ?
> Can I use injectargs ?
> ---
> ceph tell osd.* injectargs '--osd_sc
In my case I was able to assume that all writes in the 24 hours before the
problem were bad, delete them, and re-write them once I got things back up.
For the historical record, I was able to fix this at the osd level by choosing
some PGs on each osd and using the ceph-objectstore-tool to export-remove
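For anyone who hits this later, the export-remove step looks roughly like
the sketch below. The osd id, pgid and file path are made up, the OSD has
to be stopped while the tool runs, filestore OSDs also need --journal-path,
and older builds without the combined export-remove op need a separate
export followed by remove:
---
systemctl stop ceph-osd@12
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
    --pgid 3.1f --op export-remove --file /root/3.1f.export
systemctl start ceph-osd@12
---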
The issue is still continuing. Has anyone else noticed it?
When this happens, the Ceph Dashboard GUI gets stuck and we have to restart
the manager daemon to make it work again.
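For reference, what we do to recover is roughly this (the mgr id is usually
the hostname of the node running the active mgr, as shown by ceph -s):
---
systemctl restart ceph-mgr@<active mgr id>
# or, if a standby mgr is configured, just fail over to it
ceph mgr fail <active mgr id>
---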
Karun Josy
On Wed, Jan 17, 2018 at 6:16 AM, Karun Josy wrote:
> Hello,
>
> In one of our cluster set up, there is fr
On Sat, Jan 27, 2018 at 3:47 PM, David Turner wrote:
> I looked up the procedure to rebuild the metadata pool for CephFS and it
> looks really doable. I have the added complication of the cache tier in
> here. I was curious if it's possible to create a new CephFS with an
> existing data pool
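In case it is useful, the rough shape of pointing a fresh metadata pool at
an existing data pool is sketched below. This is heavily simplified: pool
and fs names are placeholders, the exact flags vary by release, the cache
tier is not covered at all, and the full procedure is in the CephFS
disaster recovery docs, so it should be tested somewhere disposable first.
---
# new, empty metadata pool (pg count is just an example)
ceph osd pool create cephfs_metadata_new 64
# attach it to the existing data pool; --force may be needed because the
# data pool is not empty, and the old filesystem has to be removed first
ceph fs new cephfs_recovery cephfs_metadata_new <existing data pool> --force
# rebuild metadata from the objects in the data pool
cephfs-data-scan init --force-init
cephfs-data-scan scan_extents <existing data pool>
cephfs-data-scan scan_inodes <existing data pool>
---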
Hello,
Sorry for bringing this up again.
What is the proper way to adjust the scrub settings?
Can I use injectargs?
---
ceph tell osd.* injectargs '--osd_scrub_sleep .1'
---
Or do I have to set it manually on each osd daemon?
---
ceph daemon osd.21 config set osd_scrub_sleep 0.1
While
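Also, am I right that both injectargs and the admin socket only change the
running daemons, and that to keep the values across OSD restarts they also
need to go into ceph.conf on the OSD hosts? Something like:
---
[osd]
osd scrub sleep = 0.1
osd scrub load threshold = 0.3
osd scrub chunk max = 6
---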