Hi,

You should CRUSH-reweight this OSD (sde) to zero; Ceph will then remap all its PGs onto other OSDs. Once the OSD has drained, you can replace the drive.
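
A rough sketch of the commands, assuming your failing sde really is osd.15 as suggested by your log (adjust the ID to match your cluster):

    ceph osd crush reweight osd.15 0   # CRUSH weight 0: PGs start backfilling onto other OSDs
    ceph -s                            # watch recovery/backfill progress
    ceph osd df tree                   # confirm osd.15 has drained (0 PGs, ~0% used)

When it is empty you can stop the daemon, mark the OSD out, and swap the disk.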



k

Sent from my iPhone

> On 29 Apr 2021, at 06:00, Lomayani S. Laizer <lomlai...@gmail.com> wrote:
> 
> Any advice on this? I'm stuck because one VM is not working now. It looks
> like there is a read error on the primary OSD (15) for this PG. Should I mark
> OSD 15 down or out? Is there any risk in doing this?
> 
> Apr 28 20:22:31 ceph-node3 kernel: [369172.974734] sd 0:2:4:0: [sde]
> tag#358 CDB: Read(16) 88 00 00 00 00 00 51 be e7 80 00 00 00 80 00 00
> Apr 28 20:22:31 ceph-node3 kernel: [369172.974739] blk_update_request: I/O
> error, dev sde, sector 1371465600 op 0x0:(READ) flags 0x0 phys_seg 16 prio
> class 0
> Apr 28 21:14:11 ceph-node3 kernel: [372273.275801] sd 0:2:4:0: [sde] tag#28
> FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=0s
> Apr 28 21:14:11 ceph-node3 kernel: [372273.275809] sd 0:2:4:0: [sde] tag#28
> CDB: Read(16) 88 00 00 00 00 00 51 be e7 80 00 00 00 80 00 00
> Apr 28 21:14:11 ceph-node3 kernel: [372273.275813] blk_update_request: I/O
> error, dev sde, sector 1371465600 op 0x0:(READ) flags 0x0 phys_seg 16 prio
> class 0
