On Fri, Feb 16, 2018 at 12:17 PM Graham Allan <g...@umn.edu> wrote:
> On 02/16/2018 12:31 PM, Graham Allan wrote:
> >
> > If I set debug rgw=1 and debug ms=1 before running the "object stat"
> > command, it seems to stall in a loop of trying to communicate with
> > osds for pool 96, which is .rgw.control
> >
> >> 10.32.16.93:0/2689814946 --> 10.31.0.68:6818/8969 --
> >> osd_op(unknown.0.0:541 96.e 96:7759931f:::notify.3:head [watch ping
> >> cookie 139709246356176] snapc 0=[] ondisk+write+known_if_redirected
> >> e507695) v8 -- 0x7f10ac033610 con 0
> >> 10.32.16.93:0/2689814946 <== osd.38 10.31.0.68:6818/8969 59 ====
> >> osd_op_reply(541 notify.3 [watch ping cookie 139709246356176] v0'0
> >> uv3933745 ondisk = 0) v8 ==== 152+0+0 (2536111836 0 0) 0x7f1158003e20
> >> con 0x7f117afd8390
> >
> > Prior to that, probably more relevant, this was the only communication
> > logged with the primary osd of the pg:
> >
> >> 10.32.16.93:0/1552085932 --> 10.31.0.71:6838/66301 --
> >> osd_op(unknown.0.0:96 70.438s0
> >> 70:1c20c157:::default.325674.85_bellplants_images%2f1042066.jpg:head
> >> [getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected e507695)
> >> v8 -- 0x7fab79889fa0 con 0
> >> 10.32.16.93:0/1552085932 <== osd.175 10.31.0.71:6838/66301 1 ====
> >> osd_backoff(70.438s0 block id 1
> >> [70:1c20c157:::default.325674.85_bellplants_images%2f1042066.jpg:head,70:1c20c157:::default.325674.85_bellplants_images%2f1042066.jpg:head)
> >> e507695) v1 ==== 209+0+0 (1958971312 0 0) 0x7fab5003d3c0 con
> >> 0x7fab79885980
> >> 10.32.16.93:0/1552085932 --> 10.31.0.71:6838/66301 --
> >> osd_backoff(70.438s0 ack-block id 1
> >> [70:1c20c157:::default.325674.85_bellplants_images%2f1042066.jpg:head,70:1c20c157:::default.325674.85_bellplants_images%2f1042066.jpg:head)
> >> e507695) v1 -- 0x7fab48065420 con 0
> >
> > so I guess the backoff message above is saying the object is
> > unavailable. OK, that certainly makes sense. Not sure that it helps me
> > understand how to fix the inconsistencies...
>
> If I restart the primary osd for the pg, that makes it forget its state
> and return to active+clean+inconsistent. I can then download the
> previously-unfound objects again, as well as run "radosgw-admin object
> stat".
>
> So the interesting bit is probably figuring out why it decides these
> objects are unfound, when clearly they aren't.
>
> What would be the best place to enable additional logging to understand
> this - perhaps the primary osd?
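A minimal sketch of the logging steps Graham asks about, assuming PG 70.438
and primary osd.175 as seen in the osd_backoff trace above; the bucket and
object names are placeholders to fill in:

    # Map the PG to its acting set to confirm which OSD is primary
    # (assumed here to be osd.175, per the trace above).
    ceph pg map 70.438

    # Raise debug levels on the primary before reproducing the stall.
    ceph tell osd.175 injectargs '--debug-osd 20 --debug-ms 1'

    # Reproduce the stalled read (placeholder bucket/object names).
    radosgw-admin object stat --bucket=<bucket> --object=<object>

    # Drop the debug levels back to the Luminous defaults afterwards.
    ceph tell osd.175 injectargs '--debug-osd 1/5 --debug-ms 0/5'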
David, this sounds like one of the bugs where an OSD can mark objects as
inconsistent locally but then doesn't actually trigger recovery on them.
Or it doesn't like any copy but doesn't persist that. Do any known issues
around that apply to 12.2.2?
-Greg

> Thanks for all your help,
>
> Graham
> --
> Graham Allan
> Minnesota Supercomputing Institute - g...@umn.edu
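For inspecting the inconsistency state Greg describes, a rough sketch,
again assuming PG 70.438 from the trace; note that list-inconsistent-obj
only has data to report after a scrub of the PG:

    # Show which PGs are flagged inconsistent and why.
    ceph health detail

    # Dump the per-object scrub errors recorded for the PG.
    rados list-inconsistent-obj 70.438 --format=json-pretty

    # Re-run a deep scrub to refresh the recorded inconsistency state.
    ceph pg deep-scrub 70.438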