Hello,
I managed to resolve the issue. OSD 21 had corrupted data. I removed it from
the cluster, formatted the hard drive, and then re-added it to the cluster.
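
For anyone hitting the same problem, a typical removal/re-add sequence from
the ceph-disk era looks roughly like the following (the osd id is from this
thread; the device path is a placeholder, adjust to your deployment):

  # stop the OSD and remove it from the cluster
  systemctl stop ceph-osd@21
  ceph osd out 21
  ceph osd crush remove osd.21
  ceph auth del osd.21
  ceph osd rm 21

  # wipe the disk and recreate the OSD
  ceph-disk zap /dev/sdX
  ceph-disk prepare /dev/sdX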

After backfill finished, I ran repair again and that fixed the problem.
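
In case it is useful, the follow-up amounts to waiting for recovery to finish
and then repairing the affected pgs (pg 7.67a below is just one of the
inconsistent pgs from the original report):

  # watch recovery/backfill progress until only the scrub errors remain
  ceph -s

  # deep-scrub and repair each inconsistent pg
  ceph pg deep-scrub 7.67a
  ceph pg repair 7.67a

  # confirm the cluster returns to HEALTH_OK
  ceph health detail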

--
Lomayani

On Tue, Apr 25, 2017 at 11:42 AM, Lomayani S. Laizer <lomlai...@gmail.com>
wrote:

> Hello,
> I am having an error in my cluster: inconsistent pgs due to
> attr_value_mismatch. It looks like all pgs with this error are hosting one vm
> with ID 3fb4c238e1f29. I am using replication of 3 with min_size of 2.
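>
> (For context: an rbd_data prefix like the one above can be traced back to an
> image with rbd info; the pool and image names here are placeholders.)
>
>   rbd info <pool>/<image> | grep block_name_prefix
>   # expected output line: block_name_prefix: rbd_data.3fb4c238e1f29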
>
> Pg repair is not working. Any suggestions for resolving this issue would be
> appreciated. More logs are available at http://www.heypasteit.com/clip/0BOJ36
>
>  ceph health detail
> HEALTH_ERR 12 pgs inconsistent; 16 scrub errors
> pg 7.765 is active+clean+inconsistent, acting [16,21,3]
> pg 7.6e7 is active+clean+inconsistent, acting [12,21,4]
> pg 7.335 is active+clean+inconsistent, acting [8,17,21]
> pg 7.304 is active+clean+inconsistent, acting [14,6,21]
> pg 7.2e0 is active+clean+inconsistent, acting [21,17,6]
> pg 7.138 is active+clean+inconsistent, acting [11,17,21]
> pg 7.6c is active+clean+inconsistent, acting [21,11,14]
> pg 7.102 is active+clean+inconsistent, acting [21,5,12]
> pg 7.198 is active+clean+inconsistent, acting [14,11,21]
> pg 7.5fc is active+clean+inconsistent, acting [6,16,21]
> pg 7.65b is active+clean+inconsistent, acting [21,17,2]
> pg 7.67a is active+clean+inconsistent, acting [16,21,6]
>
> rados list-inconsistent-obj 7.67a --format=json-pretty
> {
>     "epoch": 5699,
>     "inconsistents": [
>         {
>             "object": {
>                 "name": "rbd_data.3fb4c238e1f29.0000000000017bef",
>                 "nspace": "",
>                 "locator": "",
>                 "snap": "head",
>                 "version": 346953
>             },
>             "errors": [
>                 "object_info_inconsistency",
>                 "attr_value_mismatch"
>             ],
>             "union_shard_errors": [],
>             "selected_object_info": "7:5e76a45a:::rbd_data.3fb4c238e1f29.0000000000017bef:head(5640'346953 client.2930592.0:2368795 dirty|omap_digest s 3792896 uv 346953 od ffffffff)",
>             "shards": [
>                 {
>                     "osd": 6,
>                     "errors": [],
>                     "size": 3792896,
>                     "object_info": "7:5e76a45a:::rbd_data.3fb4c238e1f29.0000000000017bef:head(5640'346953 client.2930592.0:2368795 dirty|omap_digest s 3792896 uv 346953 od ffffffff)",
>                     "attrs": [
>
>
> 2017-04-25 08:56:23.333835 7f8a0835e700 -1 log_channel(cluster) log [ERR] : 7.102 shard 21: soid 7:4081eee7:::rbd_data.3fb4c238e1f29.0000000000017b03:head size 3076096 != size 2633728 from auth oi 7:4081eee7:::rbd_data.3fb4c238e1f29.0000000000017b03:head(5640'990157 client.2930592.0:2367433 dirty|omap_digest s 2633728 uv 990157 od ffffffff), size 3076096 != size 2633728 from shard 5, attr value mismatch '_'
>
> --
> Lomayani
>
>