[ceph-users] Ceph block storage and OpenStack Cinder Scheduler issue
Hi there,

Can someone possibly shed some light on an issue we are experiencing with the way Cinder is scheduling Ceph volumes in our environment?

We are running cinder-volume on each of our compute nodes, and they are all configured to use our Ceph cluster. As far as we can tell the Ceph cluster is working as it should, but the problem we are having is that every Ceph volume gets attached to only one of the compute nodes. This is not ideal, as it will create a bottleneck on that one host.

From what I have read, the default Cinder scheduler should pick the cinder-volume node with the most available space, but since all compute nodes should report the same figure, namely the space available in the Ceph volume pool, how is this meant to work?

We have also tried the Cinder chance scheduler in the hope that Cinder would randomly pick another storage node, but this did not make any difference.

Has anyone else experienced the same issue or something similar? Is there perhaps a way that we can round-robin the volume attachments?

OpenStack version: Grizzly using Ubuntu LTS and the Cloud PPA.
Ceph version: Cuttlefish from the Ceph PPA.

Thanks in advance,
Gavin
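For reference, a minimal sketch of the kind of per-node cinder.conf this setup implies, alongside the two scheduler choices mentioned above. The option names follow the Grizzly-era Ceph/OpenStack integration docs rather than the original post, so treat them as an assumption and check them against the deployed release:

    # /etc/cinder/cinder.conf -- sketch only, not taken from the original post
    [DEFAULT]
    # RBD backend; this driver path is the Grizzly-era one, later releases
    # moved it to cinder.volume.drivers.rbd.RBDDriver.
    volume_driver = cinder.volume.driver.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret uuid>

    # Default scheduler: weighs backends by their reported free capacity ...
    scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
    # ... or pick a backend at random, as tried above:
    # scheduler_driver = cinder.scheduler.chance.ChanceScheduler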
[ceph-users] [SOLVED] Re: Ceph block storage and OpenStack Cinder Scheduler issue
On 19 September 2013 11:57, Gavin wrote:
> Hi there,
>
> Can someone possibly shed some light on an issue we are experiencing
> with the way Cinder is scheduling Ceph volumes in our environment?
>
> We are running cinder-volume on each of our compute nodes, and they
> are all configured to use our Ceph cluster.
>
> As far as we can tell the Ceph cluster is working as it should, but
> the problem we are having is that every Ceph volume gets attached to
> only one of the compute nodes. This is not ideal, as it will create a
> bottleneck on that one host.
>
> From what I have read, the default Cinder scheduler should pick the
> cinder-volume node with the most available space, but since all
> compute nodes should report the same figure, namely the space
> available in the Ceph volume pool, how is this meant to work?
>
> We have also tried the Cinder chance scheduler in the hope that
> Cinder would randomly pick another storage node, but this did not
> make any difference.
>
> Has anyone else experienced the same issue or something similar?
>
> Is there perhaps a way that we can round-robin the volume attachments?
>
> OpenStack version: Grizzly using Ubuntu LTS and the Cloud PPA.
>
> Ceph version: Cuttlefish from the Ceph PPA.

Hi,

Please excuse/disregard my previous email; I just needed to clarify my understanding of how this all fits together. I was kindly pointed in the right direction by a friendly gentleman from Rackspace. Thanks Darren. :)

The reason for my confusion was the way the volumes are displayed in the Horizon dashboard. The dashboard shows all volumes as attached to one compute node, which is what led to my initial concern. Now that I know the connections actually come from libvirt on the compute node where each instance lives, I have one less thing to worry about.

Thanks,
Gavin
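To see this for yourself, something along these lines can be run on each compute node. It is only a sketch (it assumes the libvirt Python bindings are installed and that Cinder volumes show up as rbd-backed network disks; none of it is from the original thread), but it lists which running guests on that node have RBD images open, which is where the attachments really are:

    # Sketch: list RBD-backed disks of the guests running on this compute node.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    for dom_id in conn.listDomainsID():          # running guests on this host
        dom = conn.lookupByID(dom_id)
        root = ET.fromstring(dom.XMLDesc(0))
        for disk in root.findall('./devices/disk'):
            source = disk.find('source')
            if disk.get('type') == 'network' and source is not None \
                    and source.get('protocol') == 'rbd':
                # e.g. "instance-0000002a volumes/volume-<uuid>"
                print('%s %s' % (dom.name(), source.get('name')))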
[ceph-users] radosgw: s3 lifecycle and swift delete-after support
Hi all,

I was reviewing the S3 and Swift API support matrices:

http://ceph.com/docs/master/radosgw/s3/
http://ceph.com/docs/master/radosgw/swift

and I noticed that there is no support for the S3 'lifecycle' or Swift 'delete-after' capabilities. I was curious to know whether these are on a roadmap, or whether they are not expected to be supported.

Cheers,
Gavin

[http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html]
[http://docs.openstack.org/api/openstack-object-storage/1.0/content/Expiring_Objects-e1e3228.html]
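For anyone unfamiliar with the two features being asked about, here is a rough illustration of what they look like against stock S3 and Swift endpoints (radosgw did not list support for either at the time of this thread). The endpoint, bucket/container names and credentials below are placeholders, not anything from the original post:

    # Sketch only: S3 bucket lifecycle (boto 2.x) and Swift X-Delete-After.
    import boto
    from boto.s3.lifecycle import Lifecycle, Expiration
    from swiftclient.client import Connection

    # S3 lifecycle: expire everything under logs/ after 30 days.
    s3 = boto.connect_s3('ACCESS_KEY', 'SECRET_KEY')
    bucket = s3.get_bucket('my-bucket')
    rules = Lifecycle()
    rules.add_rule(id='expire-logs', prefix='logs/', status='Enabled',
                   expiration=Expiration(days=30))
    bucket.configure_lifecycle(rules)

    # Swift delete-after: the object is removed 24 hours after upload.
    swift = Connection(authurl='https://swift.example.com/auth/v1.0',
                       user='account:user', key='SECRET_KEY')
    swift.put_object('my-container', 'tmp/report.csv', contents='...',
                     headers={'X-Delete-After': '86400'})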