Thanks, Josh. I will give your suggestion a try with multiple cinder-volume 
instances, though I am still not sure whether cinder-scheduler is smart enough 
to know which instance an API request should be routed to when a volume-type 
is specified.
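If I understand it correctly, the routing is supposed to come from a volume 
type extra spec that matches each backend's volume_backend_name; something 
along these lines should pin a type to one backend (the type name here is 
just a placeholder I made up):

  cinder type-create rbd-volumes-2
  cinder type-key rbd-volumes-2 set volume_backend_name=RBD_CINDER_VOLUMES_3
  cinder create --volume-type rbd-volumes-2 1
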
--weiguo

> Date: Fri, 28 Jun 2013 14:10:12 -0700
> From: josh.dur...@inktank.com
> To: ws...@hotmail.com
> CC: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Openstack Multi-rbd storage backend
> 
> On 06/27/2013 05:54 PM, w sun wrote:
> > Thanks, Josh. That explains it. So I guess that right now with Grizzly, you
> > can only use one rbd backend pool (assuming a different cephx key for each
> > pool) on a single Cinder node, unless you are willing to modify
> > cinder-volume.conf and restart the cinder service all the time.
> 
> cinder-volume can use a different config file with the --config-file
> option, and you can set a different CEPH_ARGS environment variable for
> each, so you could run more than one per host; it's just a bit more
> work to set up.
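> 
> For example, roughly (the config file names and ceph user ids below are
> just placeholders for whatever you set up):
> 
>   CEPH_ARGS="--id volume-a" cinder-volume --config-file /etc/cinder/cinder-a.conf
>   CEPH_ARGS="--id volume-b" cinder-volume --config-file /etc/cinder/cinder-b.conf
> 
> with each config file pointing at its own rbd_pool and rbd_secret_uuid.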
> 
> > --weiguo
> >
> >  > Date: Wed, 26 Jun 2013 15:08:56 -0700
> >  > From: josh.dur...@inktank.com
> >  > To: ws...@hotmail.com
> >  > CC: ceph-users@lists.ceph.com; sebastien....@enovance.com
> >  > Subject: Re: [ceph-users] Openstack Multi-rbd storage backend
> >  >
> >  > On 06/21/2013 09:48 AM, w sun wrote:
> >  > > Josh & Sebastien,
> >  > >
> >  > > Does either of you have any comments on this cephx issue with multi-rbd
> >  > > backend pools?
> >  > >
> >  > > Thx. --weiguo
> >  > >
> >  > >
> >  > > ------------------------------------------------------------------------
> >  > > From: ws...@hotmail.com
> >  > > To: ceph-users@lists.ceph.com
> >  > > Date: Thu, 20 Jun 2013 17:58:34 +0000
> >  > > Subject: [ceph-users] Openstack Multi-rbd storage backend
> >  > >
> >  > > Has anyone else seen the issue below?
> >  > >
> >  > > We are trying to test the multi-backend feature with two RBD pools on
> >  > > the Grizzly release. At this point, it seems that rbd.py does not use
> >  > > separate cephx users for the two RBD pools for authentication, as it
> >  > > defaults to the single ID defined in /etc/init/cinder-volume.conf,
> >  > > which is documented here with env CEPH_ARGS="--id volume":
> >  > >
> >  > > http://ceph.com/docs/master/rbd/rbd-openstack/#configuring-cinder-nova-volume
> >  > >
> >  > > It seems to us that rbd.py is ignoring the separate "rbd_user="
> >  > > configuration for each storage backend section,
> >  >
> >  > In Grizzly, this option is only used to tell nova which user to connect
> >  > as. cinder-volume requires CEPH_ARGS="--id user" to set the ceph user
> >  > you want it to use. This has changed in Havana, where the rbd_user
> >  > option is used by Cinder as well, but for Grizzly you'll need to set
> >  > the CEPH_ARGS environment variable differently if you want
> >  > different users for each backend.
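> >  > For instance, if you run one cinder-volume per backend, each instance's
> >  > upstart job (or wrapper script) can carry its own line along these lines
> >  > (the id below just mirrors your rbd_user; adjust as needed):
> >  >
> >  >   env CEPH_ARGS="--id stack-mgmt-openstack-volumes-2"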
> >  >
> >  > Josh
> >  >
> >  > > [svl-stack-mgmt-openstack-volumes-2]
> >  > > volume_driver=cinder.volume.drivers.rbd.RBDDriver
> >  > > rbd_pool=stack-mgmt-openstack-volumes-2
> >  > > rbd_user=stack-mgmt-openstack-volumes-2
> >  > > rbd_secret_uuid=e1124cad-55e8-d4ce-6c68-5f40491b15ef
> >  > > volume_backend_name=RBD_CINDER_VOLUMES_3
> >  > >
> >  > > Here is the error from cinder-volume.log,
> >  > >
> >  > > -----------------------------------------
> >  > >   File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 144, in delete_volume
> >  > >     volume['name'])
> >  > >   File "/usr/lib/python2.7/dist-packages/cinder/utils.py", line 190, in execute
> >  > >     cmd=' '.join(cmd))
> >  > > ProcessExecutionError: Unexpected error while running command.
> >  > > Command: rbd snap ls --pool svl-stack-mgmt-openstack-volumes-2
> >  > > volume-9f1735ae-b31f-4cd5-a279-f879692839c3
> >  > > Exit code: 1
> >  > > Stdout: ''
> >  > > Stderr: 'rbd: error opening image
> >  > > volume-9f1735ae-b31f-4cd5-a279-f879692839c3: (1) Operation not
> >  > > permitted\n2013-06-20 10:41:46.591363 7f68117a9780 -1 librbd::ImageCtx:
> >  > > error finding header: (1) Operation not permitted\n'
> >  > > -------------------------------------------
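> >  > >
> >  > > For comparison, the equivalent manual check would be something like the
> >  > > following (assuming "volume" is the id set in /etc/init/cinder-volume.conf):
> >  > >
> >  > >   rbd --id volume snap ls --pool svl-stack-mgmt-openstack-volumes-2 \
> >  > >       volume-9f1735ae-b31f-4cd5-a279-f879692839c3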
> >  >
> 
                                          