Hello Mark,
I'm answering on behalf of Stefan.
On 18.01.19 at 00:22, Mark Nelson wrote:
>
> On 1/17/19 4:06 PM, Stefan Priebe - Profihost AG wrote:
>> Hello Mark,
>>
>> after reading
>> http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
>>
>> again I'm really confused how […]
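
For reference, the effective values of the options that docs page covers can
be read back from a running OSD's admin socket (a sketch, not from the
original thread: osd.0 is assumed, and the command must run on the host
carrying that OSD):

    # Show what osd.0 actually uses for the BlueStore cache / memory options
    ceph daemon osd.0 config show | grep -E 'bluestore_cache|osd_memory_target'
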
[…]'s what you've meant. If I got you wrong,
would you mind pointing me to one of those threads you mentioned?
Thanks :)
On 12.10.2018 at 14:03, Burkhard Linke wrote:
> Hi,
>
>
> On 10/12/2018 01:55 PM, Nils Fahldieck - Profihost AG wrote:
>> I rebooted a Ceph host and lo[…]
> […] the `ceph status` and `ceph health detail` outputs
> will be helpful while it's happening.
>
> On Thu, Oct 11, 2018 at 3:02 PM Nils Fahldieck - Profihost AG
> <n.fahldi...@profihost.ag> wrote:
>
> Thanks for your reply. I'll capture a `ceph status` the n[…]
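
A minimal capture loop for that (a sketch, assuming the ceph CLI and an
admin keyring are available on the host; the 10 s interval and the file
names are illustrative):

    # Take timestamped snapshots until interrupted (Ctrl-C)
    while true; do
        ts=$(date +%Y%m%d-%H%M%S)
        ceph status        > "ceph-status-$ts.txt"
        ceph health detail > "ceph-health-$ts.txt"
        sleep 10
    done
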
[…],d84ce~1,d84d0~1,d84d2~2,d84d6~2,d84db~1,d84dd~2,d84e2~2,d84e6~1,d84e9~1,d84eb~4,d84f0~4]
pool 6 'cephfs_cephstor1_data' replicated size 3 min_size 1 crush_rule 0
object_hash rjenkins pg_num 128 pgp_num 128 last_change 1214952 flags
hashpspool stripe_width 0 application cephfs
pool 7 '[…]
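
For reference, the per-pool values shown in such a dump can also be queried
individually (a sketch reusing the pool name from the output above; `all`
lists every setting for the pool):

    ceph osd pool get cephfs_cephstor1_data size
    ceph osd pool get cephfs_cephstor1_data min_size
    ceph osd pool get cephfs_cephstor1_data all
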
Hi everyone,
for some time now we have been experiencing service outages in our Ceph
cluster whenever there is any change to the HEALTH status, e.g. swapping
storage devices, adding storage devices, rebooting Ceph hosts, during
backfills etc.
Just now I ran into such a situation again, where several VMs hung after I
r[…]