[ceph-users] status of glance/cinder/nova integration in openstack grizzly

2013-09-10 Thread Darren Birkett
Hi All, tl;dr - do glance/rbd and cinder/rbd play together nicely in grizzly? I'm currently testing a ceph/rados back end with an openstack installation. I have the following things working OK: 1. cinder configured to create volumes in RBD 2. nova configured to boot from RBD-backed cinder vol
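
For context, a Grizzly-era cinder.conf RBD backend looks roughly like the sketch below (pool, user and secret values here are illustrative assumptions, not taken from the original post):

    # /etc/cinder/cinder.conf -- RBD backend (illustrative)
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = volumes
    rbd_secret_uuid = <libvirt secret uuid>
    glance_api_version = 2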

Re: [ceph-users] status of glance/cinder/nova integration in openstack grizzly

2013-09-10 Thread Darren Birkett
_direct_url = True) does work in Grizzly. It sounds like you are close. To check permissions, run 'ceph auth list', and reply with "client.images" and "client.volumes" (or whatever keys you use in Glance and Cinder). Cheers,
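
For reference, the caps commonly documented for those two keys at the time looked roughly like this (client names and pools are the conventional examples, not necessarily what was used here):

    ceph auth get-or-create client.images mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
    ceph auth get-or-create client.volumes mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'

The "_direct_url" fragment above is presumably Glance's show_image_direct_url = True option in glance-api.conf, which Cinder needs in order to clone RBD-backed images rather than downloading them.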

Re: [ceph-users] status of glance/cinder/nova integration in openstack grizzly

2013-09-10 Thread Darren Birkett
ed when cloning a glance image into a cinder volume is a bug? It means that the cinder client doesn't show the volume as bootable, though I'm not sure what other detrimental effect it actually has (clearly the volume can be booted from). Thanks Darren On 10 September 2013 21:04, Darre
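
If useful, the attribute being discussed can be checked directly from the cinder CLI (the volume ID is a placeholder):

    cinder show <volume-id> | grep bootable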

[ceph-users] live migration with rbd/cinder/nova - not supported?

2013-09-12 Thread Darren Birkett
Hi, It seems that the combination of libvirt and ceph will happily do live migrations. However, when using openstack, and a nova instance is booted from a cinder volume that itself exists in rbd, it appears from the nova code that nova itself does not have support for instance migration due to th

Re: [ceph-users] live migration with rbd/cinder/nova - not supported?

2013-09-12 Thread Darren Birkett
Hi Maciej, I'm using Grizzly, but the live migration doesn't appear to be changed even in trunk. It seems to check if you are using shared storage by writing a test file on the destination host (in /var/lib/nova/instances) and then trying to read it on the source host, and will fail if this test
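
In shell terms, the check described above amounts to something like the following sketch (the real code lives in nova's libvirt driver and uses a randomly named test file, so the path and name here are illustrative):

    # on the destination host
    touch /var/lib/nova/instances/shared-storage-probe
    # back on the source host: if the file is visible, nova treats the storage as shared
    test -e /var/lib/nova/instances/shared-storage-probe && echo shared || echo "not shared"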

Re: [ceph-users] live migration with rbd/cinder/nova - not supported?

2013-09-12 Thread Darren Birkett
13 15:15, Darren Birkett > wrote: > > Hi Maciej, > > > > I'm using Grizzly, but the live migration doesn't appear to be changed > even > > in trunk. It seems to check if you are using shared storage by writing a > > test file on the destination host (in /var

Re: [ceph-users] Help with radosGW

2013-09-18 Thread Darren Birkett
Hi Alexis, Great to hear you fixed your problem! Would you care to describe in more detail what the fix was, in case other people experience the same issues as you did. Thanks Darren On 18 September 2013 10:12, Alexis GÜNST HORN wrote: > Hello to all, > Thanks for your answers. > > Well... af

Re: [ceph-users] [SOLVED] Re: Ceph bock storage and Openstack Cinder Scheduler issue

2013-09-19 Thread Darren Birkett
On 19 September 2013 11:51, Gavin wrote: > > Hi, > > Please excuse/disregard my previous email, I just needed a > clarification on my understanding of how this all fits together. > > I was kindly pointed in the right direction by a friendly gentleman > from Rackspace. Thanks Darren. :) > > The re

Re: [ceph-users] OpenStack Grizzly Authentication (Keystone PKI) with RADOS Gateway

2013-10-03 Thread Darren Birkett
Hi Amit, It can, but at the moment there is some issue with keystone token caching (in Dumpling), so every auth call hits keystone and does not cache the token. See here: http://www.spinics.net/lists/ceph-users/msg04531.html and here: http://tracker.ceph.com/issues/6360 Thanks Darren On
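
For anyone wiring this up, the radosgw/keystone settings in ceph.conf for Dumpling looked roughly like this (section name, hosts and values are illustrative):

    [client.radosgw.gateway]
    rgw keystone url = http://keystone-host:35357
    rgw keystone admin token = <keystone admin token>
    rgw keystone accepted roles = Member, admin
    rgw keystone token cache size = 500
    rgw keystone revocation interval = 600
    nss db path = /var/lib/ceph/nss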

Re: [ceph-users] Rados gw upload problems

2013-10-04 Thread Darren Birkett
Hi Warren, Try using the Ceph-specific fastcgi module as detailed here: http://ceph.com/docs/next/radosgw/manual-install/ and see if that helps. There was a similar discussion on the list previously: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-March/000360.html Thanks Darren
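
For reference, the manual-install doc linked above pairs the Ceph-patched mod_fastcgi with an external FastCGI server roughly as follows (paths, socket and client name follow the doc's conventional examples and must match 'rgw socket path' in ceph.conf):

    # /var/www/s3gw.fcgi
    #!/bin/sh
    exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway

    # Apache vhost fragment
    FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock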

Re: [ceph-users] ceph access using curl

2013-10-04 Thread Darren Birkett
Try passing '--debug' to the swift command. It should output the equivalent curl command for you to use. - Darren "Snider, Tim" wrote: >I'm having pilot error with getting the path correct using curl. >Bucket listing using "radosgw-admin bucket list" works as does the >swift API. >Can som
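
A minimal example of the suggestion above (the auth URL, user and key are placeholders for a radosgw swift-style setup):

    swift --debug -A http://radosgw-host/auth/v1.0 -U testuser:swift -K <secret-key> list mybucket

With --debug the client prints each request it makes, which can then be replayed with curl by passing the returned X-Auth-Token as a header.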

[ceph-users] radosgw keystone authtoken caching still not working with 0.67.4

2013-10-07 Thread Darren Birkett
Hi All, In our prior tests with 0.67.3, keystone authtoken caching was broken, causing dreadful performance - see http://www.spinics.net/lists/ceph-users/msg04531.html We upgraded to release 0.67.4 as we wanted to test the apparent fix to authtoken caching that was included in the release notes.
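
One way to see whether tokens are actually being cached is the radosgw admin socket (the socket path depends on your client name, and the keystone cache counters may not be exposed in every 0.67.x build):

    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok perf dump | grep -i keystone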

Re: [ceph-users] radosgw keystone authtoken caching still not working with 0.67.4

2013-10-08 Thread Darren Birkett
ct set. Thanks, Darren On 7 October 2013 14:28, Darren Birkett wrote: > Hi All, > > In our prior tests with 0.67.3, keystone authtoken caching was broken > causing dreadful performance - see > http://www.spinics.net/lists/ceph-users/msg04531.html > > We upgraded to release 0.67

Re: [ceph-users] radosgw keystone authtoken caching still not working with 0.67.4

2013-10-10 Thread Darren Birkett
Is anyone else using keystone authentication with radosgw? Anyone having any luck getting the authtoken caching working? - Darren On 8 October 2013 10:17, Darren Birkett wrote: > Hi All, > > What's the best way to try and track down why this isn't working for us? >

Re: [ceph-users] cephforum.com

2013-10-11 Thread Darren Birkett
Hi, I'd have to say in general I agree with the other responders. Not really for reasons of preferring a ML over a forum necessarily, but just because the ML already exists. One of the biggest challenges for anyone new coming in to an open source project such as ceph is availability of informati

[ceph-users] Disk Density Considerations

2013-11-06 Thread Darren Birkett
Hi, I understand from various reading and research that there are a number of things to consider when deciding how many disks one wants to put into a single chassis: 1. Higher density means a larger failure domain (more data to re-replicate if you lose a node) 2. More disks means more CPU/memory ho
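
To put rough, purely illustrative numbers on point 1: with 90 x 4 TB disks behind a single node and the cluster two-thirds full, losing that node means re-replicating on the order of 240 TB across the remaining hosts, so recovery time and the network capacity it needs scale directly with per-node density.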

Re: [ceph-users] Disk Density Considerations

2013-11-06 Thread Darren Birkett
On 6 November 2013 14:08, Andrey Korolyov wrote: > > We are looking at building high density nodes for small scale 'starter' > > deployments for our customers (maybe 4 or 5 nodes). High density in this > > case could mean a 2u chassis with 2x external 45 disk JBOD containers > > attached. That'

[ceph-users] qemu-kvm packages for centos

2013-12-02 Thread Darren Birkett
Hi List, Any chance the following will be updated with the latest packages for dumpling/emperor: http://ceph.com/packages/qemu-kvm/centos/x86_64/ Using CentOS 6.4 and dumpling with OpenStack Havana, I am unable to boot from rbd volumes until I install an rbd-ified qemu-kvm. I have grabbed the l
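
A quick way to confirm whether a given qemu-kvm build has rbd support, whatever its packaging, is to check the format list qemu-img reports:

    qemu-img --help | grep rbd

If rbd does not appear among the supported formats, booting from RBD-backed volumes will fail no matter how nova and cinder are configured.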