you provide from the time leading up to when the issue was first seen?
>
> Cheers
>
> Andrei
> ----- Original Message -----
>> From: "Brad Hubbard"
>> To: "Andrei Mikhailovsky"
>> Cc: "ceph-users"
>> Sent: Thursday, 28 June, 2018
>> "hash": 1156456354,
>> "key": "",
>> "oid": ".dir.default.80018061.2",
>> "namespace": "",
>> "snapid": -2,
>>
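The oid/key/snapid/hash/namespace fields above match the object descriptors that ceph-objectstore-tool prints; a minimal sketch of producing such a listing, assuming osd.21 and its default data path (the OSD must be stopped first):

    # Stop the OSD so the tool gets exclusive access to its store.
    systemctl stop ceph-osd@21

    # List all objects in PG 18.2 as JSON descriptors.
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-21 \
        --pgid 18.2 --op list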
>"created": 24431,
>>"last_epoch_clean": 121145,
>>"parent": "0.0",
>> "parent_split_bits": 0,
>>"last_scrub": "121131'654251"
"stats_invalid": false,
>"dirty_stats_invalid": false,
>"omap_stats_invalid": false,
> "hitset_stats_invalid": false,
>"hitset_bytes_stats_invalid": false,
>"pin
"num_bytes_recovered": 0,
"num_keys_recovered": 9482826,
"num_objects_omap": 60,
"num_objects_hit_set_archive": 0,
"num_bytes_hit_set_archive": 0,
"n
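The created/last_epoch_clean/last_scrub fields and the num_* counters above are all part of the PG stats that ceph pg query returns; a quick way to pull out just those sections, assuming jq is available:

    # Save the full query output, then extract the stats and counters.
    ceph pg 18.2 query > pg-18.2-query.json
    jq '.info.stats | {created, last_epoch_clean, last_scrub}' pg-18.2-query.json
    jq '.info.stats.stat_sum' pg-18.2-query.json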
> rbd_children, allow rwx
> pool=ssdcs
>
> mgr.arh-ibstorage1-ib
>     caps: [mds] allow *
>     caps: [mon] allow profile mgr
>     caps: [osd] allow *
> mgr.arh-ibstorage2-ib
>     caps: [mds] allow *
>     caps: [mon] allow profile mgr
>     caps:
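Should those mgr caps ever need to be reapplied, the usual command shape, using the daemon names from the listing above, would be something like:

    # Reapply the standard mgr daemon caps shown in the listing.
    ceph auth caps mgr.arh-ibstorage1-ib \
        mds 'allow *' mon 'allow profile mgr' osd 'allow *'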
rd"
> To: "Andrei Mikhailovsky"
> Cc: "ceph-users"
> Sent: Tuesday, 26 June, 2018 01:10:34
> Subject: Re: [ceph-users] fixing unrepairable inconsistent PG
> Interesting...
>
> Can I see the output of "ceph auth list" and can you test whether
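For reference, the commands in question; the keyring path is the stock default, not something confirmed in this thread:

    # Dump every entity and its caps (needs admin privileges).
    ceph auth list

    # Inspect only the admin key the CLI would normally use.
    ceph auth get client.admin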
.203:6828/43673 0x7fe240180f20
> 2018-06-25 10:59:12.112549 7fe244b28700 5 -- 192.168.168.201:0/3046734987
> shutdown_connections mark down 192.168.168.202:6789/0 0x7fe240176dc0
> 2018-06-25 10:59:12.112554 7fe244b28700 5 -- 192.168.168.201:0/3046734987
> shutdown_connections delete 0x7fe224
> 192.168.168.201:0/3046734987 conn(0x7fe240167220 :-1 s=STATE_NONE pgs=0 cs=0
> l=0)._stop
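Client-side messenger traces like the ones above can be reproduced by raising debug_ms for a single CLI invocation; the log file name below is arbitrary:

    # Client debug output lands on stderr, so redirect it to a file.
    ceph --debug-ms=5 pg 18.2 query 2> /tmp/ceph-client-debug.log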
--
Thanks
----- Original Message -----
> From: "Brad Hubbard"
> To: "Andrei Mikhailovsky"
> Cc: "ceph-users"
> Sent: Monday, 25 June, 2018 02:28:55
>
_calc_signature seq 1 front_crc_ =
> 2696387361 middle_crc = 0 data_crc = 0 sig = 929021353460216573
> 2018-06-22 10:47:27.679026 7f70eda45700 20 Putting signature in client
> message(seq # 1): sig = 929021353460216573
> 2018-06-22 10:47:27.679520 7f70eda45700 10 _calc_signature seq 1 fron
>> Hi Brad,
>>
>> Yes, but it doesn't show much:
>>
>> ceph pg 18.2 query
>> Error EPERM: problem getting command descriptions from pg.18.2
>>
>> Cheers
>>
>>
>>
>> ----- Original Message -----
>>> From: "Brad Hubbard"
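An EPERM from pg query usually points at cephx permissions; one way to rule out the CLI silently picking up the wrong keyring, assuming the stock admin keyring path:

    # Name the admin identity and keyring explicitly.
    ceph --name client.admin \
        --keyring /etc/ceph/ceph.client.admin.keyring \
        pg 18.2 query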
problem getting command descriptions from pg.18.2
>
> Cheers
>
>
>
> ----- Original Message -----
>> From: "Brad Hubbard"
>> To: "andrei"
>> Cc: "ceph-users"
>> Sent: Wednesday, 20 June, 2018 00:02:07
>> Subject: Re: [ceph-users] fixing unrepairable inconsistent PG
d no
>> 'snapset' attr
>> 2018-06-19 13:51:09.810878 osd.21 osd.21 192.168.168.203:6828/24339 7 :
>> cluster [ERR] 18.2 repair 4 errors, 0 fixed
>>
>> It mentions that there is an incorrect omap_digest. How do I go about
>> fixing this?
>>
>> Cheers
>>
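On the omap_digest question itself, the standard Luminous-era starting point (these commands are general tooling, not advice given earlier in the thread) is to ask the cluster which shard disagrees and then re-trigger repair:

    # Show exactly which shards of which objects are inconsistent.
    rados list-inconsistent-obj 18.2 --format=json-pretty

    # Re-run a deep scrub, then repair, and watch the cluster log.
    ceph pg deep-scrub 18.2
    ceph pg repair 18.2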
From: "andrei"
To: "ceph-users"
Sent: Tuesday, 19 June, 2018 11:16:22
Subject: [ceph-users] fixing unrepairable inconsistent PG

Hello everyone
I am having trouble repairing one inconsistent and stubborn PG. I get the
following error in ceph.log:
2018-06-19 11:00:00.000225 mon.arh-ibstorage1-ib mon.0 192.168.168.201:6789/0
675 : cluster [ERR] overall HEALTH_ERR noout flag(s) set; 4 scrub errors;
Possible data damage
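For anyone hitting the same HEALTH_ERR, the usual first diagnostic steps look like this; the pool name is a placeholder:

    # Identify which PGs are inconsistent and why health is ERR.
    ceph health detail

    # List the inconsistent PGs in the affected pool.
    rados list-inconsistent-pg <pool-name>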