Update: I wonder if I can follow the advice here:
http://cephnotes.ksperis.com/blog/2014/07/04/remove-big-rbd-image

It shows how to delete rbd objects directly via rados:
$ rados -p rbd rm rbd_id.rbdname
$ rados -p rbd rm rbd_header.18b3c2ae8944a
$ rados -p rbd ls | grep '^rbd_data.18b3c2ae8944a.'
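Following that blog post's approach, the listed data objects would then be removed in batches; a sketch assuming the same pool ("rbd") and the block-name prefix ("18b3c2ae8944a") from the slow-request log below, not yet tried on my cluster:

```shell
# List all data objects belonging to the broken image and delete them
# in batches of 200, as the blog post suggests. Pool name and prefix
# are taken from my case above - adjust to match your image.
rados -p rbd ls | grep '^rbd_data.18b3c2ae8944a.' | xargs -n 200 rados -p rbd rm
```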

Could that help, and can I run it in parallel with the stuck "rbd rm"?
I can reboot the client that issued the rbd rm command (if rbd rm is
not simply a killable process), but would the ceph cluster drop that
operation, or would I end up with operations hung forever inside the
cluster? I need to delete that rbd anyway.

Ugis


2015-06-06 8:53 GMT+03:00 Ugis <ugi...@gmail.com>:
> Hi,
>
> I had a recent problem with a flapping hdd, and as a result I need to
> delete the broken rbd.
> The problem is that all operations towards this rbd get stuck. I cannot
> even delete the rbd - it sits at 6% done, and I found this line in one
> of the osd logs:
> 2015-06-06 08:03:31.770812 7fe5002c2700  0 log_channel(default) log
> [WRN] : slow request 30720.717642 seconds old, received at 2015-06-05
> 23:31:31.032740: osd_op(client.2457394.0:8430
> rbd_data.18b3c2ae8944a.00000000000020e5 [delete] 4.fac8e26
> ack+ondisk+write+known_if_redirected e136905) currently reached_pg
>
> How can I remove the broken rbd? A fast way would be desirable, but
> any approach that eventually deletes the rbd will do.
>
> Ugis
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com