Hi All,
tl;dr - do glance/rbd and cinder/rbd play together nicely in Grizzly?
I'm currently testing a ceph/rados back end with an openstack installation.
I have the following things working OK:
1. cinder configured to create volumes in RBD
2. nova configured to boot from an RBD-backed cinder volume
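For reference, the relevant bits of config look roughly like this (the pool and user names are just the ones I'm using - adjust to taste):

    # cinder.conf (Grizzly)
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = volumes
    rbd_secret_uuid = <uuid of the libvirt secret holding the client.volumes key>

    # glance-api.conf (Grizzly)
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = images
    show_image_direct_url = True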
> ... (show_image_direct_url = True) does work in Grizzly.
>
> It sounds like you are close. To check permissions, run 'ceph auth list',
> and reply with "client.images" and "client.volumes" (or whatever keys you
> use in Glance and Cinder).
>
> Cheers,
>
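For reference, the caps the ceph/openstack docs suggest for those two clients look something like this ('images' and 'volumes' pool names assumed):

    ceph auth get-or-create client.images mon 'allow r' \
      osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
    ceph auth get-or-create client.volumes mon 'allow r' \
      osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'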
I assume the fact that the bootable flag is not updated when cloning a
glance image into a cinder volume is a bug? It means that the cinder
client doesn't show the volume as bootable, though I'm not sure what
other detrimental effect it actually has (clearly the volume can be
booted from).
Thanks
Darren
On 10 September 2013 21:04, Darren Birkett wrote:
Hi,
It seems that the combination of libvirt and ceph will happily do live
migrations.
However, when using openstack, with a nova instance booted from a cinder
volume that itself lives in rbd, it appears from the nova code that nova
itself does not support live migration of such an instance, due to the
way it checks for shared storage.
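(At the libvirt level, something like the following works fine against rbd-backed disks - the domain and host names here are made up:)

    virsh migrate --live instance-00000001 qemu+ssh://dest-host/system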
Hi Maciej,
I'm using Grizzly, but the live migration code doesn't appear to have
changed even in trunk. It seems to check whether you are using shared
storage by writing a test file on the destination host (in
/var/lib/nova/instances) and then trying to read it on the source host,
and the migration will fail if this test fails.
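You can reproduce the check by hand, roughly (paths as per a default install):

    # on the destination host:
    touch /var/lib/nova/instances/migration-test-file

    # on the source host - nova expects to see the same file if storage is 'shared':
    ls /var/lib/nova/instances/migration-test-file

    # with rbd-backed instances and no shared /var/lib/nova/instances,
    # the file won't exist on the source host and the migration is refused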
13 15:15, Darren Birkett wrote:
> > Hi Maciej,
> >
> > I'm using Grizzly, but the live migration code doesn't appear to have
> > changed even in trunk. It seems to check whether you are using shared
> > storage by writing a test file on the destination host (in /var
Hi Alexis,
Great to hear you fixed your problem! Would you care to describe in more
detail what the fix was, in case other people experience the same issues
as you did?
Thanks
Darren
On 18 September 2013 10:12, Alexis GÜNST HORN wrote:
> Hello to all,
> Thanks for your answers.
>
> Well... af
On 19 September 2013 11:51, Gavin wrote:
>
> Hi,
>
> Please excuse/disregard my previous email, I just needed a
> clarification on my understanding of how this all fits together.
>
> I was kindly pointed in the right direction by a friendly gentleman
> from Rackspace. Thanks Darren. :)
>
> The re
Hi Amit,
It can, but at the moment there is an issue with keystone token caching
(in Dumpling), so every auth call hits keystone and the token is not
cached.
See here:
http://www.spinics.net/lists/ceph-users/msg04531.html
and here:
http://tracker.ceph.com/issues/6360
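For anyone following along, the keystone integration is configured roughly like this in ceph.conf (values illustrative):

    [client.radosgw.gateway]
    rgw keystone url = http://keystone-host:35357
    rgw keystone admin token = <keystone admin token>
    rgw keystone accepted roles = admin, Member
    rgw keystone token cache size = 500
    nss db path = /var/lib/ceph/nss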
Thanks
Darren
On
Hi Warren,
Try using the ceph-specific fastcgi module as detailed here:
http://ceph.com/docs/next/radosgw/manual-install/
And see if that helps.
There was a similar discussion on the list previously:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-March/000360.html
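The key bit from those docs is pointing Apache at radosgw via an external fastcgi server, something like (the socket path may differ on your install):

    FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock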
Thanks
Darren
Try passing '--debug' to the swift command. It should output the
equivalent curl command for you to use.
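Something like this (endpoint and credentials obviously made up):

    swift --debug -V 1.0 -A http://radosgw.example.com/auth -U testuser:swift -K <secret_key> list mybucket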
- Darren
"Snider, Tim" wrote:
>I'm having pilot error with getting the path correct using curl.
>Bucket listing using "radosgw-admin bucket list" works as does the
>swift API.
>Can som
Hi All,
In our prior tests with 0.67.3, keystone authtoken caching was broken,
causing dreadful performance - see
http://www.spinics.net/lists/ceph-users/msg04531.html
We upgraded to release 0.67.4 as we wanted to test the apparent fix to
authtoken caching that was included in the release notes.
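To see whether tokens are now being cached, we're doing something crude like the following (log path as per our install) - with caching working, repeated requests should stop generating keystone traffic:

    # on the keystone host, watch incoming token validations:
    tail -f /var/log/keystone/keystone.log

    # meanwhile, hit the gateway repeatedly with the same credentials:
    for i in $(seq 1 20); do
      swift -V 2.0 -A http://keystone-host:5000/v2.0 -U tenant:user -K <password> stat > /dev/null
    done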
ct set.
Thanks,
Darren
On 7 October 2013 14:28, Darren Birkett wrote:
> Hi All,
>
> In our prior tests with 0.67.3, keystone authtoken caching was broken,
> causing dreadful performance - see
> http://www.spinics.net/lists/ceph-users/msg04531.html
>
> We upgraded to release 0.67
Is anyone else using keystone authentication with radosgw? Anyone having
any luck getting the authtoken caching working?
- Darren
On 8 October 2013 10:17, Darren Birkett wrote:
> Hi All,
>
> What's the best way to try and track down why this isn't working for us?
>
Hi,
I'd have to say in general I agree with the other responders. Not really
for reasons of preferring a ML over a forum necessarily, but just because
the ML already exists. One of the biggest challenges for anyone new coming
into an open source project such as ceph is the availability of information.
Hi,
I understand from various reading and research that there are a number of
things to consider when deciding how many disks one wants to put into a
single chassis:
1. Higher density means a larger failure domain (more data to re-replicate
if you lose a node)
2. More disks mean more CPU/memory horsepower needed per node
On 6 November 2013 14:08, Andrey Korolyov wrote:
> > We are looking at building high density nodes for small scale 'starter'
> > deployments for our customers (maybe 4 or 5 nodes). High density in this
> > case could mean a 2u chassis with 2x external 45 disk JBOD containers
> > attached. That'
Hi List,
Any chance the following will be updated with the latest packages for
dumpling/emperor:
http://ceph.com/packages/qemu-kvm/centos/x86_64/
Using CentOS 6.4 and dumpling with OpenStack Havana, I am unable to boot
from rbd volumes until I install an rbd-ified qemu-kvm. I have grabbed the latest
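For what it's worth, an easy way to check whether a given qemu-img/qemu-kvm build has rbd support (pool/image names illustrative):

    # 'rbd' should appear in the supported formats list
    qemu-img --help | grep 'Supported formats'

    # or probe an image directly:
    qemu-img info rbd:volumes/test-volume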