Hello,
Thanks for your reply.

I have stopped this OSD and the cluster managed to recover. All has been well
for the past 4 hours.
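
For anyone hitting the same problem, this is roughly the sequence involved
(a sketch, assuming a systemd deployment and OSD id 15 as in the logs below):

  # stop the failing OSD daemon on its host
  systemctl stop ceph-osd@15

  # mark it out so its data is rebalanced onto the remaining OSDs
  ceph osd out 15

  # watch recovery until the cluster returns to HEALTH_OK
  ceph -s
  ceph health detail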

My understanding of unfound objects was wrong. I thought it meant the
object can't be found in any of the replicas.
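(In fact, "unfound" only means that none of the currently reachable OSDs has
an up-to-date copy; a good copy may still exist on an OSD that is down or
out.) For reference, a minimal way to inspect unfound objects; the PG id 2.4
here is just a placeholder:

  # which PGs report unfound objects
  ceph health detail

  # list the unfound objects in a given PG
  ceph pg 2.4 list_unfound

  # see which OSDs the PG is still probing for the missing copies
  ceph pg 2.4 query

  # last resort only: give up on the unfound objects
  # (revert rolls back to a previous version where possible)
  # ceph pg 2.4 mark_unfound_lost revert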

On Thu, Apr 29, 2021 at 9:47 AM Stefan Kooman <ste...@bit.nl> wrote:

> On 4/29/21 4:58 AM, Lomayani S. Laizer wrote:
> > Hello,
> >
> > Any advice on this? I'm stuck because one VM is not working now. It
> > looks like there is a read error on the primary OSD (15) for this PG.
> > Should I mark OSD 15 down or out? Is there any risk in doing this?
> >
> > Apr 28 20:22:31 ceph-node3 kernel: [369172.974734] sd 0:2:4:0: [sde] tag#358 CDB: Read(16) 88 00 00 00 00 00 51 be e7 80 00 00 00 80 00 00
> > Apr 28 20:22:31 ceph-node3 kernel: [369172.974739] blk_update_request: I/O error, dev sde, sector 1371465600 op 0x0:(READ) flags 0x0 phys_seg 16 prio class 0
> > Apr 28 21:14:11 ceph-node3 kernel: [372273.275801] sd 0:2:4:0: [sde] tag#28 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=0s
> > Apr 28 21:14:11 ceph-node3 kernel: [372273.275809] sd 0:2:4:0: [sde] tag#28 CDB: Read(16) 88 00 00 00 00 00 51 be e7 80 00 00 00 80 00 00
> > Apr 28 21:14:11 ceph-node3 kernel: [372273.275813] blk_update_request: I/O error, dev sde, sector 1371465600 op 0x0:(READ) flags 0x0 phys_seg 16 prio class 0
>
> So this looks like a broken disk. I would take it out and let the
> cluster recover (ceph osd out 15).
>
> Gr. Stefan
>