To speed up the deletion, you can remove the image's header object first (only if the image is empty) and then remove the image itself.

For example:

$ rados -p rbd ls
huge.rbd
rbd_directory


$ rados -p rbd rm huge.rbd
$ time rbd rm huge
2013-12-10 09:35:44.168695 7f9c4a87d780 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
Removing image: 100% complete...done.
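
If you want to do the same for your images, here is a minimal sketch assuming format 1 images in your high_value pool (for format 1 the header object is named "<image>.rbd", but do confirm the real object names with "rados ls" first), and only if the image really is empty, otherwise its data objects will be left behind:

$ rados -p high_value ls | grep '\.rbd$'    # check which header objects actually exist
$ rados -p high_value rm test-disk.rbd      # assumes the header object is test-disk.rbd
$ rbd rm test-disk -p high_value            # should now finish almost instantly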

Cheers.
–––– 
Sébastien Han 
Cloud Engineer 

"Always give 100%. Unless you're giving blood.” 

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien....@enovance.com 
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance 

On 21 Apr 2014, at 17:03, Gonzalo Aguilar Delgado <gagui...@aguilardelgado.com> 
wrote:

> Hi, 
> 
> I made my first really big mistake... I created an rbd disk of about 300 TB, yes, 300 TB:
> rbd info test-disk -p high_value
> rbd image 'test-disk':
>       size 300 TB in 78643200 objects
>       order 22 (4096 kB objects)
>       block_name_prefix: rb.0.18d7.2ae8944a
>       format: 1
> 
> And it gets worse: I made a mistake with the name (I thought it was 300GB), so I 
> deleted it and created it again. 
> 
> 
> rbd info homes -p high_value
> rbd image 'homes':
>       size 300 TB in 78643200 objects
>       order 22 (4096 kB objects)
>       block_name_prefix: rb.0.193e.238e1f29
>  format: 1
> 
> Great mistake, eh?!
> 
> When I realized, I deleted them. But it takes a long time to remove even one. 
> 
> Removing image: 21% complete... (1-2h)
> 
> What's incredible is that ceph didn't break. 
> 
> The question is: how can I delete them without waiting, and without breaking something?
> 
> I'm also moving my 300GB disk to the ceph cluster:
> 
> /dev/sdd1 307468468 265789152 26037716 92% /mnt/temp
> /dev/rbd1 309506048 4888396 304601268 2% /mnt/rbd/homes
> 
> So I have:
> [1]   Running                 rbd rm test-disk -p high_value &  (wd: ~)
> [2]-  Running                 rbd rm homes -p high_value &  (wd: ~)
> [3]+  Running                 cp -rapx * /mnt/rbd/homes/ &  (wd: /mnt/temp)
> 
> 
> It has copied about 4GB but it's taking a long time. I don't know if it's because of the 
> "rm" or because of the btrfs problem Michael told me about. 
> 
> Any help with this as well?
> 
> Best regards,


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
