Hello,
The problem was in the Ceph documentation: "default_store = rbd" must be in
the [glance_store] section and not in the [DEFAULT] section for OpenStack
Mitaka and Ceph Jewel.
Thanks,
Fran.
2016-06-15 11:54 GMT+02:00 Fran Barrera :
Hi,
Thanks for all the replies. I'm using KVM, so that should not be a problem.
If I use Glance without Ceph it works well, so the problem is with the Ceph
integration. The service is running, but glance-api does not appear to work
and gives the error "Unable to connect.."
Regards,
Fran.
Hi Jon,
Then this is not the issue; RBD has been supported on KVM for a long time.
Cheers, I
2016-06-14 21:40 GMT+02:00 Jonathan D. Proulx :
On Tue, Jun 14, 2016 at 05:48:11PM +0200, Iban Cabrillo wrote:
:Hi Jon,
: Which is the hypervisor used for your OpenStack deployment? We had lots
:of trouble with Xen until the latest libvirt (in libvirt < 1.3.2, the RBD
:driver was not supported).
we're using kvm (Ubuntu 14.04, libvirt 1.2.1
Hi Jon,
Which is the hypervisor used for your OpenStack deployment? We had lots of
trouble with Xen until the latest libvirt (in libvirt < 1.3.2, the RBD
driver was not supported).
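A quick way to check which libvirt and QEMU a compute node is actually
running, and whether the QEMU build knows about rbd at all (standard CLI
tools, nothing deployment-specific):
libvirtd --version
qemu-img --version
qemu-img --help | grep rbd   # rbd shows up in the supported-formats line if the build has RBD support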
Regards, I
2016-06-14 17:38 GMT+02:00 Jonathan D. Proulx :
On Tue, Jun 14, 2016 at 02:15:45PM +0200, Fran Barrera wrote:
:Hi all,
:
:I have a problem integrating Glance with Ceph.
:
:Openstack Mitaka
:Ceph Jewel
:
:I've been following the Ceph doc (
:http://docs.ceph.com/docs/jewel/rbd/rbd-openstack/) but when I try to list
:or create images, I have an error "U
On Tue, Jun 14, 2016 at 8:15 AM, Fran Barrera wrote:
> 2016-06-14 14:02:54.634 2256 DEBUG glance_store.capabilities [-] Store
> glance_store._drivers.rbd.Store doesn't support updating dynamic storage
> capabilities. Please overwrite 'update_capabilities' method of the store to
> implement updating
Hi all,
I have a problem integrating Glance with Ceph.
Openstack Mitaka
Ceph Jewel
I've been following the Ceph doc (
http://docs.ceph.com/docs/jewel/rbd/rbd-openstack/) but when I try to list
or create images, I have an error "Unable to establish connection to
http://IP:9292/v2/images";, and in the
Geddes,
If you are still struggling with this, ping me in IRC #CEPH ( ksingh ).
Karan Singh
Systems Specialist , Storage Platforms
CSC - IT Center for Science,
Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland
mobile: +358 50
Well, 100% may be overstating things. When I try to create a volume from an
image it fails. I'm digging through the logs right now. glance alone works
(I can upload and delete images) and cinder alone works (I can create and
delete volumes) but when cinder tries to get the glance service it fails,
Thanks Erik,
Maybe this is related, as I have:
[DEFAULT]
verbose = True
notification_driver = noop
default_store = rbd
show_image_direct_url = true
debug=True
[database]
connection = mysql://glance:glancepw@ps-sw-ctrl1/glance
[keystone_authtoken]
auth_uri = http://ps-sw-ctrl
On Thu, Apr 2, 2015 at 12:18 PM, Quentin Hartman <
qhart...@direwolfdigital.com> wrote:
Hm, even lacking the mentions of rbd in the glance docs, and the lack of
cephx auth information in the config, glance seems to be working after all.
So, hooray! It was probably working all along, I just hadn't gotten to
really testing it since I was getting blocked by my typo in the cinder
config.
Glance should just require something like the following under [DEFAULT]:
rbd_store_user=glance
rbd_store_pool=images
rbd_store_ceph_conf=/etc/ceph/ceph.conf
rbd_store_chunk_size=8
default_store=rbd
Also make sure the keyring is in /etc/ceph, and you may want to explicitly
define the user and keyring.
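The keyring setup from the Ceph guide looks roughly like this (run from a
node with admin credentials; {glance-host} is a placeholder, and
client.glance / images match the config above):
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.glance | ssh {glance-host} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {glance-host} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring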
The RDO glance-store package had a bug in it that miscalculated the chunk
size. I should hope that it's been patched by Red Hat now since the fix was
committed upstream before the first Juno release, but perhaps not. The
symptom of the bug was horribly slow uploads to glance.
Run this and send back
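For reference, checking which glance-store build is installed on an
RDO-style system is along these lines (package name assumed):
rpm -q python-glance-store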
As expected I had a typo in my config for cinder. Correcting that got
cinder working. Everything in glance looks correct according to the above
referenced page, but I'm not seeing any mention of rbd in the logs, and I
notice that the cephx authentication pieces that are present for cinder are
conspicuously absent for glance.
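For comparison, the cephx-related settings on the cinder side of that guide
look something like this (Juno-era option names; the secret UUID is whatever
was registered with libvirt):
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = {your-libvirt-secret-uuid}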
Oh, apologies, I missed the versions ...
# glance --version : 0.14.2
# cinder --version : 1.1.1
# ceph -v: ceph version 0.87.1
(283c2e7cfa2457799f534744d7d549f83ea1335e)
From rpm I can confirm that Cinder and Glance are both of the February 2014
vintage:
# rpm -qa |grep -e ceph -e
Thanks Karan/Quentin/Erik,
I admit up front that this is all new to me as my background is optical
transport rather than server/storage admin!
I'm reassured to know that it should work and this is why I'm completely
willing to believe that it's something that I'm doing wrong ... but
unfortunately
Fortunately Ceph Giant + OpenStack Juno works flawlessly for me.
If you have configured cinder / glance correctly, then after restarting the
cinder and glance services you should see something like this in the cinder
and glance logs.
Cinder logs:
volume.log:2015-04-02 13:20:43.943 2085 INFO cin
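Restart commands vary by distro; on an RDO-style install they are along
these lines:
service openstack-glance-api restart
service openstack-cinder-volume restart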
Can you both set Cinder and/or Glance logging to debug and provide some
logs? There was an issue with the first Juno release of Glance in some
vendor packages, so make sure you're fully updated to 2014.2.2
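Enabling debug is just a [DEFAULT] flag in each service's config, e.g. in
/etc/glance/glance-api.conf and /etc/cinder/cinder.conf, followed by a
service restart:
[DEFAULT]
debug = True
verbose = True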
On Apr 1, 2015 7:12 PM, "Quentin Hartman" wrote:
I am coincidentally going through the same process right now. The best
reference I've found is this: http://ceph.com/docs/master/rbd/rbd-openstack/
When I did Firefly / Icehouse, this (seemingly) same guide Just Worked(tm),
but now with Giant / Juno I'm running into similar trouble to that which
All,
Apologies for my ignorance but I don't seem to be able to search an
archive.
I've spent a lot of time trying but am having difficulty in integrating
Ceph (Giant) into Openstack (Juno). I don't appear to be recording any
errors anywhere, but simply don't seem to be writing to the cluster if I
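One quick sanity check when nothing seems to be written: look at the pools
directly with the rados CLI (pool names assumed to be the guide's defaults):
rados df                 # per-pool object and usage counts
rados -p images ls       # objects created by glance uploads
rados -p volumes ls      # objects created by cinder volumes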
On 17/04/13 10:53, Stuart Longland wrote:
> rbd and cephfs (unless you use FUSE) do live in the kernel, but I'm not
> sure about ceph-mon and ceph-ods.
Gah... s/ods/osd/g seems I'm having a dyslexic moment this morning. ;-)
--
Stuart Longland, Software Engineer
Hi John, thanks for your reply...
On 17/04/13 06:45, John Wilkins wrote:
> It's covered here too:
> http://ceph.com/docs/master/faq/#how-can-i-give-ceph-a-try
Yes I did see that. There used to be a big fat warning in the
quick-start guides which had me rather worried.
What I was curious about is
Stuart,
It's covered here too:
http://ceph.com/docs/master/faq/#how-can-i-give-ceph-a-try
That comment only applies to the quick start--e.g., someone spinning up a
Ceph cluster on their laptop to try it out. One of the things we've tried
to provide to the community is a way to try Ceph out on the
Hi all,
I've been doing quite a bit of research and planning for a new virtual
computing cluster that my company is building for their production
infrastructure.
We're looking to use OpenStack to manage the virtual machines across a
small cluster of nodes.
Currently we're looking at having 3 sto