Dear git-harry,

You are right. This issue is now solved.

Thanks for your help.


Regards,
Johnson

-----Original Message-----
From: git harry [mailto:git-ha...@live.co.uk] 
Sent: Saturday, July 19, 2014 4:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] multiple backend issue

Ah, okay, I misunderstood. It looks like you've used the same config file on 
both the controller and compute nodes; notice how the output of cinder-manage 
gives you hosts corresponding to both backends on each of your two nodes.

 controller@lvmdriver-2 nova
 controller@lvmdriver-1 nova
 Compute@lvmdriver-1 nova
 Compute@lvmdriver-2 nova

Each cinder-volume service you are running has tried to set up both backends 
even though only one of the volume groups is available to it. enabled_backends 
should list only the backends that particular cinder-volume service is 
responsible for, so each node's config only needs the backend configuration 
group for the volume group actually present on that node.

controller:


enabled_backends=lvmdriver-1

[lvmdriver-1]

volume_group=cinder-volumes-1

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

volume_backend_name=LVM_iSCSI


compute:


enabled_backends=lvmdriver-2

[lvmdriver-2]

volume_group=cinder-volumes-2

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

volume_backend_name=LVM_iSCSI_b
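To make the split concrete, here is a minimal sketch (plain Python, not Cinder's actual startup code) of how enabled_backends determines which backend sections a given cinder-volume service will try to initialize; the group and volume-group names mirror the configs above:

```python
from configparser import ConfigParser

def backends_for(conf_text):
    """Return the backend config groups a cinder-volume service reading
    this conf would try to initialize (a simplification of Cinder's
    real startup logic, which walks enabled_backends the same way)."""
    cp = ConfigParser()
    cp.read_string(conf_text)
    raw = cp.get("DEFAULT", "enabled_backends", fallback="")
    return [name.strip() for name in raw.split(",") if name.strip()]

# Per-node config, as recommended above: the controller only lists
# the backend whose volume group it actually has.
controller_conf = """\
[DEFAULT]
enabled_backends = lvmdriver-1

[lvmdriver-1]
volume_group = cinder-volumes-1
"""

# The shared config from the original report: both nodes list both
# backends, so each service also tries the volume group it lacks.
shared_conf = """\
[DEFAULT]
enabled_backends = lvmdriver-1,lvmdriver-2

[lvmdriver-1]
volume_group = cinder-volumes-1

[lvmdriver-2]
volume_group = cinder-volumes-2
"""

print(backends_for(shared_conf))      # ['lvmdriver-1', 'lvmdriver-2']
print(backends_for(controller_conf))  # ['lvmdriver-1']
```

With the single shared file, every service tries to initialize both volume groups, which is exactly why each node logged an error for the group it doesn't have.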

----------------------------------------
> From: johnson.ch...@qsantechnology.com
> To: openstack-dev@lists.openstack.org
> Date: Fri, 18 Jul 2014 16:33:10 +0000
> Subject: Re: [openstack-dev] [Cinder] multiple backend issue
>
> Dear git-harry,
>
> My confusion is this: why can I successfully create volumes on both the 
> controller node and the compute node, while there is still an error message 
> in cinder-volume.log?
>
> The below is my environment:
> Controller node:
>   Installed cinder-api, cinder-scheduler and cinder-volume
>   Created the cinder-volumes-1 volume group
> Compute node:
>   Installed cinder-volume
>   Created the cinder-volumes-2 volume group
>
> The below is the output of "cinder extra-specs-list":
> +--------------------------------------+----------------+------------------------------------------+
> | ID                                   | Name           | extra_specs                              |
> +--------------------------------------+----------------+------------------------------------------+
> | 30faffa9-7955-484f-9c96-3f40507aa62e | lvm_compute    | {u'volume_backend_name': u'LVM_iSCSI_b'} |
> | c2341962-b15e-4003-882f-08a8a36d3a0f | lvm_controller | {u'volume_backend_name': u'LVM_iSCSI'}   |
> +--------------------------------------+----------------+------------------------------------------+
>
> The below is the output of "cinder-manage host list":
> host                    zone
> controller              nova
> Compute                 nova
> controller@lvmdriver-2  nova
> controller@lvmdriver-1  nova
> Compute@lvmdriver-1     nova
> Compute@lvmdriver-2     nova
>
> So I just want to make sure that everything is right in my environment.
>
> Regards,
> Johnson
>
>
> -----Original Message-----
> From: git harry [mailto:git-ha...@live.co.uk]
> Sent: Friday, July 18, 2014 4:08 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Cinder] multiple backend issue
>
> I don't know what you mean by side effects if the fact that it (lvmdriver-2) 
> doesn't work is not a problem for you. You will also continue to get entries 
> in the log informing you that the driver is uninitialised.
>
> The volume group needs to be on the same host as the cinder-volume service - 
> so it sounds like the service is running on your controller only. If you want 
> to locate volumes on the compute host you will need to install the service 
> there.
>
>
> ----------------------------------------
>> From: johnson.ch...@qsantechnology.com
>> To: openstack-dev@lists.openstack.org
>> Date: Thu, 17 Jul 2014 15:39:40 +0000
>> Subject: Re: [openstack-dev] [Cinder] multiple backend issue
>>
>> Dear git-harry,
>>
>> I have created a volume group "cinder-volume-1" at my controller node, and 
>> another volume group "cinder-volume-2" at my compute node.
>>
>> I can create volumes successfully on each dedicated backend.
>> Of course I can ignore the error message, but I have to know whether there 
>> are any side effects.
>>
>> Regards,
>> Johnson
>>
>> -----Original Message-----
>> From: git harry [mailto:git-ha...@live.co.uk]
>> Sent: Thursday, July 17, 2014 7:32 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Cinder] multiple backend issue
>>
>> You are using multibackend but it appears you haven't created both volume 
>> groups:
>>
>> Stderr: ' Volume group "cinder-volumes-2" not found\n'
>>
>> If you can create volumes, it suggests the other backend is correctly 
>> configured. So you can ignore the error if you want, but you will not be 
>> able to use the second backend you have attempted to set up.
>>
>> ________________________________
>>> From: johnson.ch...@qsantechnology.com
>>> To: openstack-dev@lists.openstack.org
>>> Date: Thu, 17 Jul 2014 11:03:41 +0000
>>> Subject: [openstack-dev] [Cinder] multiple backend issue
>>>
>>>
>>> Dear All,
>>>
>>>
>>>
>>> I have two machines as below,
>>>
>>> Machine1 (192.168.106.20): controller node (cinder node and volume node)
>>>
>>> Machine2 (192.168.106.30): compute node (volume node)
>>>
>>>
>>>
>>> I can successfully create a cinder volume, but there is an error in 
>>> cinder-volume.log.
>>>
>>> 2014-07-17 18:49:01.105 5765 AUDIT cinder.service [-] Starting cinder-volume node (version 2014.1)
>>>
>>> 2014-07-17 18:49:01.113 5765 INFO cinder.volume.manager [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Starting volume driver LVMISCSIDriver (2.0.0)
>>>
>>> 2014-07-17 18:49:01.114 5764 AUDIT cinder.service [-] Starting cinder-volume node (version 2014.1)
>>>
>>> 2014-07-17 18:49:01.124 5764 INFO cinder.volume.manager [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] Starting volume driver LVMISCSIDriver (2.0.0)
>>>
>>> 2014-07-17 18:49:01.965 5765 ERROR cinder.volume.manager [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Error encountered during initialization of driver: LVMISCSIDriver
>>>
>>> 2014-07-17 18:49:01.971 5765 ERROR cinder.volume.manager [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Unexpected error while running command.
>>>
>>> Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o name cinder-volumes-2
>>>
>>> Exit code: 5
>>>
>>> Stdout: ''
>>>
>>> Stderr: ' Volume group "cinder-volumes-2" not found\n'
>>>
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Traceback (most recent call last):
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 243, in init_host
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager self.driver.check_for_setup_error()
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py", line 83, in check_for_setup_error
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager executor=self._execute)
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/brick/local_dev/lvm.py", line 81, in __init__
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager if self._vg_exists() is False:
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/brick/local_dev/lvm.py", line 106, in _vg_exists
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager self.vg_name, root_helper=self._root_helper, run_as_root=True)
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/utils.py", line 136, in execute
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager return processutils.execute(*cmd, **kwargs)
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py", line 173, in execute
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager cmd=' '.join(cmd))
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager ProcessExecutionError: Unexpected error while running command.
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o name cinder-volumes-2
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Exit code: 5
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Stdout: ''
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Stderr: ' Volume group "cinder-volumes-2" not found\n'
>>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager
>>>
>>> 2014-07-17 18:49:03.236 5765 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672
>>>
>>> 2014-07-17 18:49:03.890 5764 INFO cinder.volume.manager [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] volume 5811b9af-b24a-44fe-a424-a61f011f7a4c: skipping export
>>>
>>> 2014-07-17 18:49:03.891 5764 INFO cinder.volume.manager [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] volume 8266e05b-6c87-421a-a625-f5d6e94f2c9f: skipping export
>>>
>>> 2014-07-17 18:49:03.892 5764 INFO cinder.volume.manager [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] Updating volume status
>>>
>>> 2014-07-17 18:49:04.081 5764 INFO oslo.messaging._drivers.impl_rabbit [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] Connected to AMQP server on controller:5672
>>>
>>> 2014-07-17 18:49:04.136 5764 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672
>>>
>>> 2014-07-17 18:49:18.258 5765 INFO cinder.volume.manager [req-00ee01b9-9601-42f5-baf7-169086ac53bb - - - - -] Updating volume status
>>>
>>> 2014-07-17 18:49:18.259 5765 WARNING cinder.volume.manager [req-00ee01b9-9601-42f5-baf7-169086ac53bb - - - - -] Unable to update stats, LVMISCSIDriver -2.0.0 (config name lvmdriver-2) driver is uninitialized.
>>>
>>>
>>>
>>>
>>>
>>> Should I ignore it?
>>>
>>>
>>>
>>> Here is my cinder.conf
>>>
>>> [DEFAULT]
>>>
>>> rootwrap_config = /etc/cinder/rootwrap.conf
>>>
>>> api_paste_config = /etc/cinder/api-paste.ini
>>>
>>> #iscsi_helper = tgtadm
>>>
>>> iscsi_helper = ietadm
>>>
>>> volume_name_template = volume-%s
>>>
>>> volume_group = cinder-volumes
>>>
>>> verbose = True
>>>
>>> auth_strategy = keystone
>>>
>>> #state_path = /var/lib/cinder
>>>
>>> #lock_path = /var/lock/cinder
>>>
>>> #volumes_dir = /var/lib/cinder/volumes
>>>
>>> iscsi_ip_address=192.168.106.20
>>>
>>>
>>>
>>> rpc_backend = cinder.openstack.common.rpc.impl_kombu
>>>
>>> rabbit_host = controller
>>>
>>> rabbit_port = 5672
>>>
>>> rabbit_userid = guest
>>>
>>> rabbit_password = demo
>>>
>>>
>>>
>>> glance_host = controller
>>>
>>>
>>>
>>> enabled_backends=lvmdriver-1,lvmdriver-2
>>>
>>> [lvmdriver-1]
>>>
>>> volume_group=cinder-volumes-1
>>>
>>> volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
>>>
>>> volume_backend_name=LVM_iSCSI
>>>
>>> [lvmdriver-2]
>>>
>>> volume_group=cinder-volumes-2
>>>
>>> volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
>>>
>>> volume_backend_name=LVM_iSCSI_b
>>>
>>>
>>>
>>> [database]
>>>
>>> connection = mysql://cinder:demo@controller/cinder
>>>
>>>
>>>
>>> [keystone_authtoken]
>>>
>>> auth_uri = http://controller:5000
>>>
>>> auth_host = controller
>>>
>>> auth_port = 35357
>>>
>>> auth_protocol = http
>>>
>>> admin_tenant_name = service
>>>
>>> admin_user = cinder
>>>
>>> admin_password = demo
>>>
>>>
>>>
>>>
>>>
>>> Regards,
>>>
>>> Johnson
>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________ OpenStack-dev 
>>> mailing list OpenStack-dev@lists.openstack.org 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>