Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-18 Thread Nils Fahldieck - Profihost AG
Hello Mark, I'm answering on behalf of Stefan. On 18.01.19 at 00:22, Mark Nelson wrote: > > On 1/17/19 4:06 PM, Stefan Priebe - Profihost AG wrote: >> Hello Mark, >> >> after reading >> http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/ >> >> again I'm really confused how
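For context, the cache behaviour being discussed is driven by a handful of BlueStore options. A minimal sketch, assuming Luminous 12.2.x option names; the values below are illustrative only, not the documented defaults and not a recommendation from this thread, and they are typically applied via ceph.conf followed by an OSD restart:

[osd]
# Total BlueStore cache per OSD; 0 means fall back to the device-class value.
bluestore_cache_size = 0
bluestore_cache_size_hdd = 1073741824    # 1 GiB for HDD-backed OSDs
bluestore_cache_size_ssd = 3221225472    # 3 GiB for SSD-backed OSDs
# Split of the cache between RocksDB key/value data and onode metadata;
# whatever is left over is used for data buffers.
bluestore_cache_kv_ratio = 0.5
bluestore_cache_meta_ratio = 0.5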

Re: [ceph-users] Troubleshooting hanging storage backend whenever there is any cluster change

2018-10-12 Thread Nils Fahldieck - Profihost AG
's what you've meant. If I got you wrong, would you mind pointing to one of those threads you mentioned? Thanks :) On 12.10.2018 at 14:03, Burkhard Linke wrote: > Hi, > > > On 10/12/2018 01:55 PM, Nils Fahldieck - Profihost AG wrote: >> I rebooted a Ceph host and lo
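For anyone hitting the same symptom: a common way to reduce the disruption of a planned host reboot is to set the maintenance flags first. A minimal sketch for a single-host maintenance window (general practice, not taken from this thread):

ceph osd set noout        # do not mark the host's OSDs out while it is down
ceph osd set norebalance  # avoid data movement during the maintenance window
# ... reboot the host and wait for its OSDs to rejoin ...
ceph osd unset norebalance
ceph osd unset noout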

Re: [ceph-users] Troubleshooting hanging storage backend whenever there is any cluster change

2018-10-12 Thread Nils Fahldieck - Profihost AG
e `ceph status` and `ceph health detail` outputs > will be helpful while it's happening. > > On Thu, Oct 11, 2018 at 3:02 PM Nils Fahldieck - Profihost AG > mailto:n.fahldi...@profihost.ag>> wrote: > > Thanks for your reply. I'll capture a `ceph status` the n
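One way to capture the requested outputs while the hang is actually happening is a small timestamped loop on one of the monitor hosts. A minimal sketch; the file names and the 10-second interval are arbitrary:

while true; do
  date >> ceph-status.log
  ceph status >> ceph-status.log
  ceph health detail >> ceph-health.log
  sleep 10
done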

Re: [ceph-users] Troubleshooting hanging storage backend whenever there is any cluster change

2018-10-11 Thread Nils Fahldieck - Profihost AG
,d84ce~1,d84d0~1,d84d2~2,d84d6~2,d84db~1,d84dd~2,d84e2~2,d84e6~1,d84e9~1,d84eb~4,d84f0~4] pool 6 'cephfs_cephstor1_data' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 1214952 flags hashpspool stripe_width 0 application cephfs pool 7 '
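Worth noting in the pool dump above: the pool runs with replicated size 3 but min_size 1, which allows I/O to be acknowledged while only a single replica is available. If that is not intentional, min_size can be checked and raised per pool; a minimal sketch using the pool name from the dump (review the availability impact during recovery before changing it):

ceph osd pool get cephfs_cephstor1_data min_size
ceph osd pool set cephfs_cephstor1_data min_size 2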

[ceph-users] Troubleshooting hanging storage backend whenever there is any cluster change

2018-10-11 Thread Nils Fahldieck - Profihost AG
Hi everyone, for some time we have been experiencing service outages in our Ceph cluster whenever there is any change to the HEALTH status, e.g. swapping storage devices, adding storage devices, rebooting Ceph hosts, or during backfills etc. Just now I had a situation where several VMs hung after I r
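While the root cause is being investigated, a common mitigation for client I/O stalls during backfill and recovery is to throttle recovery traffic. A minimal sketch, assuming Luminous-era option names; the values are conservative examples rather than tuned recommendations:

ceph tell osd.* injectargs '--osd_max_backfills 1'
ceph tell osd.* injectargs '--osd_recovery_max_active 1'
ceph tell osd.* injectargs '--osd_recovery_sleep 0.1'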