I don’t know the packages, but to me it looks like a bug…
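
In the meantime you can spot (and clean up) the leftovers by hand. A minimal sketch, assuming admin credentials and the stock nova/rbd CLIs; the "volumes" pool name and the <uuid>_disk* naming come from your output below, and /tmp/active-uuids is just an illustrative scratch file. Verify every candidate manually before removing anything:

  # Collect the UUIDs of instances that still exist
  nova list --all-tenants | grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}' | sort -u > /tmp/active-uuids
  # Flag <uuid>_disk* images whose instance is gone
  for img in $(rbd ls volumes | grep '_disk'); do
      grep -q "${img%%_disk*}" /tmp/active-uuids || echo "orphan: volumes/$img"
  done
  # Only after checking each candidate by hand:
  #   rbd rm volumes/<image-name>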

–––– 
Sébastien Han 
Cloud Engineer 

"Always give 100%. Unless you're giving blood.” 

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien....@enovance.com 
Address: 11 bis, rue Roquépine - 75008 Paris
Web: www.enovance.com - Twitter: @enovance

On 04 Apr 2014, at 09:56, Mariusz Gronczewski 
<mariusz.gronczew...@efigence.com> wrote:

> Nope, the one from the RDO packages: http://openstack.redhat.com/Main_Page
> 
> On Thu, 3 Apr 2014 23:22:15 +0200, Sebastien Han
> <sebastien....@enovance.com> wrote:
> 
>> Are you running Havana with Josh’s branch?
>> (https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd)
>> 
>> –––– 
>> Sébastien Han 
>> Cloud Engineer 
>> 
>> "Always give 100%. Unless you're giving blood.” 
>> 
>> Phone: +33 (0)1 49 70 99 72 
>> Mail: sebastien....@enovance.com 
>> Address: 11 bis, rue Roquépine - 75008 Paris
>> Web: www.enovance.com - Twitter: @enovance
>> 
>> On 03 Apr 2014, at 13:24, Mariusz Gronczewski 
>> <mariusz.gronczew...@efigence.com> wrote:
>> 
>>> Hi,
>>> 
>>> some time ago I built a small OpenStack cluster with Ceph as the main/only
>>> storage backend. I managed to get all the parts working (removing/adding
>>> volumes works in cinder/glance/nova).
>>> 
>>> I get no errors in the logs, but I've noticed that after deleting an
>>> instance (booted from an image) I get leftover RBD volumes:
>>> 
>>> hqblade201(hqstack1):~☠ nova list
>>> +--------------------------------------+-------------------------------+--------+------------+-------------+-------------------------+
>>> | ID                                   | Name                          | Status | Task State | Power State | Networks                |
>>> +--------------------------------------+-------------------------------+--------+------------+-------------+-------------------------+
>>> | 5c6261a5-0290-4db6-89a2-f0c81f47d044 | template.devops.non.3dart.com | ACTIVE | None       | Running     | ext_vlan_102=10.0.102.2 |
>>> +--------------------------------------+-------------------------------+--------+------------+-------------+-------------------------+
>>> 
>>> [10:25:00]hqblade201(hqstack1):~☠ nova volume-list
>>> +--------------------------------------+-----------+-----------------+------+-------------+-------------+
>>> | ID                                   | Status    | Display Name    | Size | Volume Type | Attached to |
>>> +--------------------------------------+-----------+-----------------+------+-------------+-------------+
>>> | 11aae1a0-48c9-4606-a2be-f44624adb583 | available | stackdev.root   | 10   | None        |             |
>>> | 4dacfa9c-dfea-4a15-8ede-0cbdebb5a2e5 | available | cloud-init-test | 10   | None        |             |
>>> | ecf26742-e79e-4d7a-b8a4-9b4dc85dd41f | available | deb-net3        | 10   | None        |             |
>>> | 91ec34e3-d597-49e9-80f6-364f5879c6c0 | available | deb-net2        | 10   | None        |             |
>>> | 2acee1b6-16ec-4409-b5ad-3af7903f7d5c | available | deb-net1        | 10   | None        |             |
>>> | dba790ec-60a3-48ef-ba40-dfb5946a6a1d | available | deb3            | 10   | None        |             |
>>> | 57600343-b488-4da6-beb6-94ed351f4f6a | available | deb2            | 10   | None        |             |
>>> | 8ff0be71-a36e-40f8-84ad-a8dffa1157fd | available | cvcxvcxv        | 10   | None        |             |
>>> | 32a1a61d-698c-4131-bb60-75d95b487b9a | available | deb             | 10   | None        |             |
>>> | 5faae133-3e9e-4048-b2bb-ba636f74e8d1 | available | sr              | 3    | None        |             |
>>> +--------------------------------------+-----------+-----------------+------+-------------+-------------+
>>> 
>>> hqblade201(hqstack1):~☠ rbd ls volumes
>>> # those are "orphaned" volumes
>>> 003c2a30-240c-4a42-930c-9a81bc9f743d_disk
>>> 003c2a30-240c-4a42-930c-9a81bc9f743d_disk.local
>>> 003c2a30-240c-4a42-930c-9a81bc9f743d_disk.swap
>>> 1026039e-2cb9-4ff1-8f3d-2b270a765858_disk
>>> 1026039e-2cb9-4ff1-8f3d-2b270a765858_disk.local
>>> 1026039e-2cb9-4ff1-8f3d-2b270a765858_disk.swap
>>> 1986fb8e-df4a-40a8-9d1e-762665e60db2_disk
>>> 1a0500ad-9311-472b-9c7a-82046ac7aeab_disk
>>> 1a0500ad-9311-472b-9c7a-82046ac7aeab_disk.local
>>> 1a0500ad-9311-472b-9c7a-82046ac7aeab_disk.swap
>>> 1d87569d-db74-480e-af6c-68716460010c_disk
>>> 1d87569d-db74-480e-af6c-68716460010c_disk.local
>>> 1d87569d-db74-480e-af6c-68716460010c_disk.swap
>>> ...
>>> 5c6261a5-0290-4db6-89a2-f0c81f47d044_disk
>>> 5c6261a5-0290-4db6-89a2-f0c81f47d044_disk.local
>>> 5c6261a5-0290-4db6-89a2-f0c81f47d044_disk.swap
>>> ...
>>> fc9bff9c-fa37-4412-992e-5d1c9d5f4fac_disk
>>> fc9bff9c-fa37-4412-992e-5d1c9d5f4fac_disk.local
>>> fc9bff9c-fa37-4412-992e-5d1c9d5f4fac_disk.swap
>>> volume-11aae1a0-48c9-4606-a2be-f44624adb583
>>> volume-2acee1b6-16ec-4409-b5ad-3af7903f7d5c
>>> volume-32a1a61d-698c-4131-bb60-75d95b487b9a
>>> volume-4dacfa9c-dfea-4a15-8ede-0cbdebb5a2e5
>>> volume-57600343-b488-4da6-beb6-94ed351f4f6a
>>> volume-5faae133-3e9e-4048-b2bb-ba636f74e8d1
>>> volume-8ff0be71-a36e-40f8-84ad-a8dffa1157fd
>>> volume-91ec34e3-d597-49e9-80f6-364f5879c6c0
>>> volume-dba790ec-60a3-48ef-ba40-dfb5946a6a1d
>>> volume-ecf26742-e79e-4d7a-b8a4-9b4dc85dd41f
>>> 
>>> 
>>> Is that something specific to the RBD backend, or is it just nova not
>>> deleting volumes after instance deletion?
>>> 
>>> 
>>> -- 
>>> Mariusz Gronczewski, Administrator
>>> 
>>> Efigence S. A.
>>> ul. Wołoska 9a, 02-583 Warszawa
>>> T: [+48] 22 380 13 13
>>> F: [+48] 22 380 13 14
>>> E: mariusz.gronczew...@efigence.com
>> 
> 
> 
> 
> -- 
> Mariusz Gronczewski, Administrator
> 
> Efigence S. A.
> ul. Wołoska 9a, 02-583 Warszawa
> T: [+48] 22 380 13 13
> F: [+48] 22 380 13 14
> E: mariusz.gronczew...@efigence.com


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
