[Openstack] [Cinder] Attach the volume in Local Disk failed. The log in nova/compute.log said "libvirtError: unsupported configuration: disk bus 'ide' cannot be hotplugged"

2013-09-18 Thread Qixiaozhen

Hi, all

In my experiment, OpenStack was running on a single server. The volume group 
named 'cinder-volumes' was built on a local disk partition, '/dev/sda2'.

A volume was created in the dashboard, and I tried to attach it to a running 
instance. However, the operation failed.

The exception in nova/compute.log said that the disk bus ide could not be 
hotplugged.

As is well known, an ide bus device cannot be hotplugged into a running instance.

It seems that the current version uses 'ide' as the default bus type for attached volumes.

In this case, none of the volumes created in 'cinder-volumes' can be hotplugged 
into a running VM. You can never attach a volume to a suspended instance in the 
dashboard either; there is no option for this in the portal.
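
(For reference, the attach operation goes through libvirt's hotplug API, and the 
same operation is accepted when the target bus is 'virtio'. Below is a minimal 
python-libvirt sketch of such an attach; the domain name and volume path are 
only placeholders, not values from my environment.)

import libvirt

# Attach a block device to a running guest using the virtio bus.
# Domain name and source device below are placeholders for illustration.
conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')

disk_xml = """
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/cinder-volumes/volume-0000'/>
  <target dev='vdb' bus='virtio'/>
</disk>"""

# With <target ... bus='ide'/> the same call fails with:
# "libvirtError: unsupported configuration: disk bus 'ide' cannot be hotplugged"
dom.attachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)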

How can I choose 'virtio' as the bus type when attaching a volume?

[root@localhost nova]# nova --version
2.13.0
[root@localhost nova]# uname -a
Linux localhost 3.9.4-200.fc18.x86_64 #1 SMP Fri May 24 20:10:49 UTC 2013 
x86_64 x86_64 x86_64 GNU/Linux
[root@localhost nova]#


Thank you.





[Openstack] 'livecd iso' of the image is the reason. RE: [Cinder] Attach the volume in Local Disk failed. The log in nova/compute.log said "libvirtError: unsupported configuration: disk bus 'ide' cannot be hotplugged"

2013-09-21 Thread Qixiaozhen
Thank you for your reply, Yannick.

On my host, the tgtd service and the configuration in /etc/cinder/cinder.conf 
are both normal.

With the kind help of Wanghao, I have found the reason.

The image on my server is a livecd ISO. The 'root_device_name' of the instances 
created from this image is '/dev/hda'.
mysql> select * from instances;

(output trimmed; the relevant columns of the instance created from this image are)

    id:               7
    uuid:             6ffed2fc-a9e6-4c0d-b958-bf302b01dbb1
    display_name:     test
    image_ref:        7c4faf22-b132-46eb-89ad-cee9ecea4c85
    instance_type_id: 5
    vm_state:         deleted
    memory_mb:        2048
    vcpus:            1
    host:             localhost
    root_device_name: /dev/hda
    root_gb:          20

The instance gets its root device name (which determines the disk bus type) from the image metadata, in nova/block_device.py:

def properties_root_device_name(properties):
    """get root device name from image meta data.
    If it isn't specified, return None.
    """
    root_device_name = None

    # NOTE(yamahata): see image_service.s3.s3create()
    for bdm in properties.get('mappings', []):
        if bdm['virtual'] == 'root':
            root_device_name = bdm['device']

    # NOTE(yamahata): register_image's command line can override
    # .manifest.xml
    if 'root_device_name' in properties:
        root_device_name = properties['root_device_name']

    return root_device_name
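
The libvirt driver then derives the disk bus from the device name prefix, so a 
root device of '/dev/hda' leads to the 'ide' bus while '/dev/vda' leads to 
'virtio'. A simplified sketch of that rule (an illustration, not the exact nova 
code):

def disk_bus_from_device_name(device_name):
    # Simplified illustration of how the libvirt driver guesses the disk
    # bus from a device name prefix; not the exact nova implementation.
    name = device_name.replace('/dev/', '')
    if name.startswith('hd'):
        return 'ide'     # e.g. /dev/hda -> ide, cannot be hotplugged
    if name.startswith('sd'):
        return 'scsi'
    if name.startswith('vd'):
        return 'virtio'  # e.g. /dev/vda -> virtio, hotplug works
    return 'virtio'      # assumed fallback, for illustration only

assert disk_bus_from_device_name('/dev/hda') == 'ide'
assert disk_bus_from_device_name('/dev/vda') == 'virtio'

So besides using a normal disk image instead of a livecd ISO, one workaround 
(if I remember the client syntax correctly) should be to override the image 
property that properties_root_device_name() reads, for example
'glance image-update --property root_device_name=/dev/vda <image-id>', so that 
new instances get a virtio root device.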

Best Regards,

Qi

From: Yannick Foeillet [mailto:yannick.foeil...@alterway.fr]
Sent: Wednesday, September 18, 2013 10:05 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] [Cinder] Attach the volume in Local Disk failed. The 
log in nova/compute.log said "libvirtError: unsupported configuration: disk bus 
'ide' cannot be hotplugged"

Hi,


[Openstack] the way of managing the shared block storage. RE: Announcing Manila Project (Shared Filesystems Management)

2013-09-26 Thread Qixiaozhen
Hi, all

Is there a common way to manage the block storage of an unknown-vendor SAN?

For example, a Linux server exports its local disks with target software 
(iscsitarget, LIO, etc.). The compute nodes are connected to the target via 
iSCSI sessions, and the LUNs have already been rescanned.

VMware introduced VMFS to manage the LUNs shared by the SAN. In oVirt, VDSM 
organizes the metadata of the volumes on the LUN with LVM2 and the 
StoragePoolManager. How does OpenStack handle this?

Best regards,

Qi


亓晓振 Qi Xiaozhen 
CLOUD OS PDU, IT Product Line, Huawei Enterprise Business Group 
Mobile: +86 13609283376 
Email: qixiaoz...@huawei.com 
中国(China)-西安(Xian)

-Original Message-
From: Caitlin Bestler [mailto:caitlin.best...@nexenta.com] 
Sent: Friday, September 27, 2013 7:31 AM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Announcing Manila Project (Shared Filesystems 
Management)

On 9/24/2013 5:50 PM, Swartzlander, Ben wrote:
> I'm proud to announce the official launching of the Manila project.
> Manila is a new service designed to do for shared filesystems what
> Cinder has done for block storage. The project provides a vendor
> neutral API for provisioning and attaching filesystem-based storage such
> as NFS, CIFS, and hopefully many other network filesystems. The actual
> code is heavily based on Cinder. The project has been under development
> for quite some time and is now on StackForge and ready for contributions
> from the wider community.
>
> Our project page is here: https://launchpad.net/manila
>
> We hold weekly meetings on Thursdays at 15:00 UTC in
> #openstack-meeting-alt. You can also find us more or less any time in
> #openstack-manila.
>
> thanks,
>
> Ben Swartzlander (bswartz)
>
> NetApp, Inc.
>

This project looks to be off to a solid start. Providing file as well as
object and block is really needed to provide complete storage services
for OpenStack.






Re: [Openstack] the way of managing the shared block storage. RE: Announcing Manila Project (Shared Filesystems Management)

2013-09-29 Thread Qixiaozhen

Is there any plan in OpenStack for managing shared block storage using only the 
data plane, just like VMFS does for VMware and "SPM+LVM2" does for oVirt?

If the management plane of the SAN is unreachable, how would OpenStack handle this?



Qi Xiaozhen 
CLOUD OS PDU, IT Product Line, Huawei Enterprise Business Group 
Mobile: +86 13609283376    Tel: +86 29-89191578
Email: qixiaoz...@huawei.com 
enterprise.huawei.com 

-Original Message-
From: Caitlin Bestler [mailto:caitlin.best...@nexenta.com] 
Sent: Saturday, September 28, 2013 12:08 AM
To: Qixiaozhen
Cc: openstack@lists.openstack.org
Subject: Re: the way of managing the shared block storage. RE: [Openstack] 
Announcing Manila Project (Shared Filesystems Management)

On 9/26/2013 7:09 PM, Qixiaozhen wrote:
> Hi, all
> 
> Is there a common way to manage the block storage of an unknown vendor san?
> 
> For example, a linux server shares its local disks by the target 
> software(iscsitarget, lio and etc.). The computing nodes are connected to the 
> target with iscsi session, and the LUNs are already rescaned.
> 
> VMFS is introduced in VMware to manage the LUNs shared by the san. Ovirt VDSM 
> organize the metadata of the volumes in the LUN with LVM2 and 
> StoragePoolManager. How about openstack?
> 
> Best regards,
> 

The standard protocols (iSCSI, NFS, CIFS, etc.) generally only address
the user plane and partially the control plane. Standardizing the
management plane is left to the user or vendors. One of the roles
of OpenStack is to fill that gap.

Cinder addresses block storage.
The proposed Manila project would deal with NAS.





[Openstack] Operation offload to the SAN. RE: Wiping of old cinder volumes

2013-11-03 Thread Qixiaozhen
Hi, all

David said: users will simply try to get rid of their volumes ALL at the same 
time, and this puts a lot of pressure on the SAN servicing those volumes; since 
the hardware isn't replying fast enough, the processes fall into the D state 
waiting for IOs to complete, which slows everything down.

The system must tolerate this kind of behavior: under pressure from the SAN, 
the "dd" processes will fall into the D state.

In my opinion, we should rethink the way the data in the volumes is wiped. 
Filling the device from /dev/zero with the "dd" command is the most primitive 
method. The standard SCSI command WRITE SAME should be taken into consideration.

Once the LBA range is provided and the command is sent to the SAN, the storage 
device can write the repeated pattern into the LUN or volume by itself. The 
"dd" operation can thus be offloaded to the storage array.
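
A rough sketch of what such an offloaded wipe could look like, using 
sg_write_same from sg3_utils (see reference 1 below). The device path is only a 
placeholder, and a 512-byte logical block size is assumed:

import subprocess

def wipe_with_write_same(device, block_size=512):
    # Sketch only: zero a LUN with a single SCSI WRITE SAME(16) command,
    # so the array writes the zero pattern itself instead of the host
    # streaming zeros through dd. Assumes a 512-byte logical block size
    # and a target that accepts the whole LBA range in one command.
    sectors = int(subprocess.check_output(['blockdev', '--getsz', device]))
    subprocess.check_call([
        'sg_write_same', '--16',
        '--in=/dev/zero', '--xferlen=%d' % block_size,
        '--lba=0', '--num=%d' % sectors,
        device])

# e.g. wipe_with_write_same('/dev/mapper/cinder--volumes-volume--xxxx')

Whether the command is really offloaded inside the array, or only emulated by 
its firmware, of course depends on the vendor.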

Thanks,

Qi


References:

1)   http://manpages.ubuntu.com/manpages/karmic/man8/sg_write_same.8.html

2)   http://storagegaga.wordpress.com/2012/01/06/why-vaai/



Qi Xiaozhen
CLOUD OS PDU, IT Product Line, Huawei Enterprise Business Group
Mobile: +86 13609283376    Tel: +86 29-89191578
Email: qixiaoz...@huawei.com 


From: David Hill [mailto:david.h...@ubisoft.com]
Sent: Saturday, November 02, 2013 6:21 AM
To: openstack@lists.openstack.org
Subject: [Openstack] Wiping of old cinder volumes

Hi guys,

I was wondering whether there is a better way of wiping the content 
of an old EBS volume before actually deleting the logical volume in Cinder? 
Or perhaps we could add the possibility to configure the number of parallel 
"dd" processes that are spawned at the same time...
Sometimes, users will simply try to get rid of their volumes ALL at the same 
time, and this puts a lot of pressure on the SAN servicing those volumes; since 
the hardware isn't replying fast enough, the processes fall into the D state 
waiting for IOs to complete, which slows everything down.
Since this process isn't (in my opinion) as critical as an EBS read or write, 
perhaps we should be able to throttle the speed of disk wiping, or the number 
of parallel wipes, to something that wouldn't affect the other reads/writes, 
which are most probably more critical.
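
For what it's worth, Cinder already has a couple of knobs for the wiping itself 
(though not for the number of parallel processes), if I remember the option 
names correctly, e.g. in /etc/cinder/cinder.conf:

[DEFAULT]
# How volumes are wiped on delete: none, zero or shred.
volume_clear = zero
# Wipe only the first N MiB of each volume instead of the whole device
# (0 means wipe everything).
volume_clear_size = 100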

Here is a small capture of the processes:
cinder   23782  0.7  0.2 248868 20588 ?SOct24  94:23 
/usr/bin/python /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf 
--logfile /var/log/cinder/volume.log
cinder   23790  0.0  0.5 382264 46864 ?SOct24   9:16  \_ 
/usr/bin/python /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf 
--logfile /var/log/cinder/volume.log
root 32672  0.0  0.0 175364  2648 ?S21:48   0:00  |   \_ sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--2e86d686--de67--4ee4--992d--72818c70d791 
count=102400 bs=1M co
root 32675  0.0  0.1 173636  8672 ?S21:48   0:00  |   |   \_ 
/usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd 
if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--2e86d686--de67--4ee4--992d--72818c70d7
root 32681  3.2  0.0 106208  1728 ?D21:48   0:47  |   |   
\_ /bin/dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--2e86d686--de67--4ee4--992d--72818c70d791 
count=102400 bs=1M conv=fdatasync
root 32674  0.0  0.0 175364  2656 ?S21:48   0:00  |   \_ sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--d54a1c96--63ca--45cb--a597--26194d45dcdf 
count=102400 bs=1M co
root 32676  0.0  0.1 173636  8672 ?S21:48   0:00  |   |   \_ 
/usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd 
if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--d54a1c96--63ca--45cb--a597--26194d45dc
root 32683  3.2  0.0 106208  1724 ?D21:48   0:47  |   |   
\_ /bin/dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--d54a1c96--63ca--45cb--a597--26194d45dcdf 
count=102400 bs=1M conv=fdatasync
root 32693  0.0  0.0 175364  2656 ?S21:48   0:00  |   \_ sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--048dae36--b225--4266--b21e--af4b66eae6cd 
count=102400 bs=1M co
root 32694  0.0  0.1 173632  8668 ?S21:48   0:00  |   |   \_ 
/usr/bin/python /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd 
if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--048dae36--b225--4266--b21e--af4b66eae6
root 32707  3.2  0.0 106208  1728 ?D21:48   0:46  |   |   
\_ /bin/dd if=/dev/zero 
of=/dev/mapper/cinder--volumes-volume--048dae36--b225--4266--b21e--af4b66eae6cd 
count=102400 bs=1M conv=fdatasync
root   342  0.0  0.0 175364  2648 ?S21:48   0:00  |   \_ sudo 
cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero 
of=/dev/mapper

[Openstack] [multipath] Could I use the multipath software provided by the SAN vendors instead of dm-multipath in openstack?

2013-12-16 Thread Qixiaozhen
Hi, all

The storage array used by Cinder in my experiment is produced by Huawei. The 
vendor ships its own multipath software, named Ultrapath, with the SAN.

Could I use Ultrapath instead of dm-multipath in OpenStack?

Best wishes,

Qi



Qi Xiaozhen
CLOUD OS PDU, IT Product Line, Huawei Enterprise Business Group
enterprise.huawei.com

