Geddes

If you are still struggling with this, ping me in #ceph on IRC (ksingh).

****************************************************************
Karan Singh 
Systems Specialist , Storage Platforms
CSC - IT Center for Science,
Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland
mobile: +358 503 812758
tel. +358 9 4572001
fax +358 9 4572302
http://www.csc.fi/
****************************************************************

> On 02 Apr 2015, at 19:48, Quentin Hartman <qhart...@direwolfdigital.com> 
> wrote:
> 
> Well, 100% may be overstating things. When I try to create a volume from an 
> image, it fails. I'm digging through the logs right now. Glance alone works (I 
> can upload and delete images) and Cinder alone works (I can create and delete 
> volumes), but when Cinder tries to reach the Glance service it fails; it seems 
> to be trying to contact a completely wrong IP:
> 
> 2015-04-02 16:39:05.033 24986 TRACE cinder.api.middleware.fault CommunicationError: Error finding address for http://192.168.1.18:9292/v2/schemas/image: HTTPConnectionPool(host='192.168.1.18', port=9292): Max retries exceeded with url: /v2/schemas/image (Caused by <class 'socket.error'>: [Errno 111] ECONNREFUSED)
> 
> Which I would expect to fail, since my glance service is not on that machine. 
> I assume that cinder gets this information out of keystone's endpoint 
> registry, but that lists the correct IP for glance:
> 
> | cf833cf63944490ba69a49a7af7fa2f5 | office | http://glance-host:9292 | http://192.168.1.20:9292 | http://glance-host:9292 | a2a74e440b134e08bd526d6dd36540d2 |
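> 
> (A hedged guess, in case it helps: Cinder doesn't necessarily take the Glance 
> address from the Keystone catalog; it has its own glance settings, and in Juno 
> glance_host defaults to the local IP, which could explain the 192.168.1.18 
> above. A minimal cinder.conf sketch, assuming the stock Juno option names and 
> the internal endpoint listed above:
> 
>     [DEFAULT]
>     # point cinder at the real Glance API instead of the $my_ip default
>     glance_host = 192.168.1.20
>     glance_port = 9292
>     # or equivalently, a comma-separated list of API servers:
>     # glance_api_servers = 192.168.1.20:9292
> 
> Restart cinder-api and cinder-volume after changing this.)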
> 
> But this is probably something to move to an OpenStack list. Thanks for all 
> the ideas and for talking through things.
> 
> QH
> 
> On Thu, Apr 2, 2015 at 10:41 AM, Erik McCormick <emccorm...@cirrusseven.com> wrote:
> 
> 
> On Thu, Apr 2, 2015 at 12:18 PM, Quentin Hartman <qhart...@direwolfdigital.com> wrote:
> Hm, even without any mention of rbd in the glance docs and without cephx auth 
> information in the config, glance seems to be working after all. Soooo, hooray! 
> It was probably working all along; I just hadn't gotten around to really 
> testing it since I was blocked by my typo in the cinder config.
> 
> 
> 
> Glance sets defaults for almost everything, so just enabling the default 
> store will work. I thought you still needed to specify a username, but maybe 
> that's defaulted now as well. Glad it's working. So Quentin is 100% working 
> now, and Iain has no Cinder and slow Glance. Right?
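> 
> (For reference, a minimal glance-api.conf sketch along the lines of the 
> ceph.com rbd-openstack guide -- the option names match what shows up in the 
> debug dumps below, the values are just the usual examples:
> 
>     [glance_store]
>     default_store = rbd
>     stores = rbd
>     rbd_store_pool = images
>     rbd_store_user = glance
>     rbd_store_ceph_conf = /etc/ceph/ceph.conf
>     rbd_store_chunk_size = 8
> 
> If rbd_store_user is left out, I believe the driver falls back to whatever 
> client the local ceph.conf/keyring resolve to, which may be why it worked 
> without one.)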
> 
> 
> Erik - 
> 
> Here's my output for the requested grep (though I am on Ubuntu, so the path 
> was slightly different):
> 
>     cfg.IntOpt('rbd_store_chunk_size', default=DEFAULT_CHUNKSIZE,
>     def __init__(self, name, store, chunk_size=None):
>         self.chunk_size = chunk_size or store.READ_CHUNKSIZE
>                             length = min(self.chunk_size, bytes_left)
>             chunk = self.conf.glance_store.rbd_store_chunk_size
>             self.chunk_size = chunk * (1024 ** 2)
>             self.READ_CHUNKSIZE = self.chunk_size
>     def get(self, location, offset=0, chunk_size=None, context=None):
>         return (ImageIterator(loc.image, self, chunk_size=chunk_size),
>                 chunk_size or self.get_size(location))
> 
> 
> This all looks correct, so any slowness isn't the bug I was thinking of.  
> 
> QH
> 
> On Thu, Apr 2, 2015 at 10:06 AM, Erik McCormick <emccorm...@cirrusseven.com> wrote:
> The RDO glance-store package had a bug in it that miscalculated the chunk 
> size. I should hope that it's been patched by Red Hat now, since the fix was 
> committed upstream before the first Juno release, but perhaps not. The 
> symptom of the bug was horribly slow uploads to glance.
> 
> Run this and send back the output:
> 
> grep chunk_size /usr/lib/python2.7/site-packages/glance_store/_drivers/rbd.py
> 
> -Erik
> 
> On Thu, Apr 2, 2015 at 7:34 AM, Iain Geddes <iain.ged...@cyaninc.com> wrote:
> Oh, apologies, I missed the versions ...
> 
> # glance --version   :   0.14.2
> # cinder --version   :   1.1.1
> # ceph -v    :   ceph version 0.87.1 
> (283c2e7cfa2457799f534744d7d549f83ea1335e)
> 
> From rpm I can confirm that Cinder and Glance are both of the February 2015 
> vintage:
> 
> # rpm -qa |grep -e ceph -e glance -e cinder
> ceph-0.87.1-0.el7.x86_64
> libcephfs1-0.87.1-0.el7.x86_64
> ceph-common-0.87.1-0.el7.x86_64
> python-ceph-0.87.1-0.el7.x86_64
> openstack-cinder-2014.2.2-1.el7ost.noarch
> python-cinder-2014.2.2-1.el7ost.noarch
> python-cinderclient-1.1.1-1.el7ost.noarch
> python-glanceclient-0.14.2-2.el7ost.noarch
> python-glance-2014.2.2-1.el7ost.noarch
> python-glance-store-0.1.10-2.el7ost.noarch
> openstack-glance-2014.2.2-1.el7ost.noarch
> 
> On Thu, Apr 2, 2015 at 4:24 AM, Iain Geddes <iain.ged...@cyaninc.com> wrote:
> Thanks Karan/Quentin/Erik,
> 
> I admit up front that this is all new to me as my background is optical 
> transport rather than server/storage admin! 
> 
> I'm reassured to know that it should work, which is why I'm completely 
> willing to believe that it's something I'm doing wrong ... but unfortunately 
> I can't see it based on the RDO Havana/Ceph integration guide or 
> http://ceph.com/docs/master/rbd/rbd-openstack/. Essentially I have extracted 
> everything so that it can be copied and pasted, so I am guaranteed 
> consistency - and this has the added advantage that it's easy to compare what 
> was done with what was documented.
> 
> Just to keep everything clean, I've restarted the Cinder and Glance 
> processes and do indeed see them come up with the same log messages that you 
> showed:
> Cinder
> 
> 2015-04-02 10:50:54.990 16447 INFO cinder.openstack.common.service [-] Caught 
> SIGTERM, stopping children
> 2015-04-02 10:50:54.992 16447 INFO cinder.openstack.common.service [-] 
> Waiting on 1 children to exit
> 2015-04-02 10:52:25.273 17366 INFO cinder.openstack.common.service [-] 
> Starting 1 workers
> 2015-04-02 10:52:25.274 17366 INFO cinder.openstack.common.service [-] 
> Started child 17373
> 2015-04-02 10:52:25.275 17373 INFO cinder.service [-] Starting cinder-volume 
> node (version 2014.2.2)
> 2015-04-02 10:52:25.276 17373 INFO cinder.volume.manager 
> [req-1b0774ff-1bd6-43bb-a271-e6d030aaa5e1 - - - - -] Starting volume driver 
> RBDDriver (1.1.0)
> 
> Glance
> 
> 2015-04-02 10:58:37.141 18302 DEBUG glance.common.config [-] 
> glance_store.default_store     = rbd log_opt_values 
> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
> 2015-04-02 10:58:37.141 18302 DEBUG glance.common.config [-] 
> glance_store.rbd_store_ceph_conf = /etc/ceph/ceph.conf log_opt_values 
> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
> 2015-04-02 10:58:37.142 18302 DEBUG glance.common.config [-] 
> glance_store.rbd_store_chunk_size = 8 log_opt_values 
> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
> 2015-04-02 10:58:37.142 18302 DEBUG glance.common.config [-] 
> glance_store.rbd_store_pool    = images log_opt_values 
> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
> 2015-04-02 10:58:37.142 18302 DEBUG glance.common.config [-] 
> glance_store.rbd_store_user    = glance log_opt_values 
> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
> 2015-04-02 10:58:37.143 18302 DEBUG glance.common.config [-] 
> glance_store.stores            = ['rbd'] log_opt_values 
> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
> 
> 
> Debug logging of the API really doesn't reveal anything either, as far as I 
> can see. Attempting an image-create from the CLI:
> glance image-create --name "cirros-0.3.3-x86_64" --file cirros-0.3.3-x86_64-disk.raw --disk-format raw --container-format bare --is-public True --progress
> 
> returns the log entries in the attachment, which appear to show that the 
> process has started ... but progress never moves beyond 4% and I haven't seen 
> any further log messages. openstack-status shows all the processes to be up, 
> and the Glance images as saving. Given that the top one was done through the 
> GUI yesterday, I'm guessing it's not going to finish any time soon!
> 
> == Glance images ==
> +--------------------------------------+---------------------+-------------+------------------+----------+--------+
> | ID                                   | Name                | Disk Format | Container Format | Size     | Status |
> +--------------------------------------+---------------------+-------------+------------------+----------+--------+
> | f77429b2-17fd-4ef6-97a8-f710862182c6 | Cirros Raw          | raw         | bare             | 41126400 | saving |
> | 1b12e65a-01cd-4d05-91e8-9e9d86979229 | cirros-0.3.3-x86_64 | raw         | bare             | 41126400 | saving |
> | fd23c0f3-54b9-4698-b90b-8cdbd6e152c6 | cirros-0.3.3-x86_64 | raw         | bare             | 41126400 | saving |
> | db297a42-5242-4122-968e-33bf4ad3fe1f | cirros-0.3.3-x86_64 | raw         | bare             | 41126400 | saving |
> +--------------------------------------+---------------------+-------------+------------------+----------+--------+
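> 
> (One Ceph-side sanity check worth running while it sits at 4% -- a sketch, 
> assuming the client.glance keyring from the integration guide is readable on 
> this host:
> 
>     ceph --id glance -s
>     rbd --id glance -p images ls
>     rbd create smoke-test --size 128 --pool images --id glance
>     rbd rm smoke-test --pool images --id glance
> 
> If any of those hang or error out, the problem is on the Ceph/cephx side 
> rather than in Glance itself.)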
> 
> Was there a particular document that you referenced to perform your install, 
> Karan? This should be the easy part ... but I've been saying that about 
> nearly everything for the past month or two!!
> 
> Kind regards
> 
> 
> Iain
> 
> 
> 
> On Thu, Apr 2, 2015 at 3:28 AM, Karan Singh <karan.si...@csc.fi> wrote:
> Fortunately Ceph Giant + OpenStack Juno works flawlessly for me.
> 
> If you have configured cinder/glance correctly, then after restarting the 
> cinder and glance services, you should see something like this in the cinder 
> and glance logs.
> 
> 
> Cinder logs : 
> 
> volume.log:2015-04-02 13:20:43.943 2085 INFO cinder.volume.manager 
> [req-526cb14e-42ef-4c49-b033-e9bf2096be8f - - - - -] Starting volume driver 
> RBDDriver (1.1.0)
> 
> 
> Glance Logs:
> 
> api.log:2015-04-02 13:20:50.448 1266 DEBUG glance.common.config [-] 
> glance_store.default_store     = rbd log_opt_values 
> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-] 
> glance_store.rbd_store_ceph_conf = /etc/ceph/ceph.conf log_opt_values 
> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-] 
> glance_store.rbd_store_chunk_size = 8 log_opt_values 
> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-] 
> glance_store.rbd_store_pool    = images log_opt_values 
> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
> api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-] 
> glance_store.rbd_store_user    = glance log_opt_values 
> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
> api.log:2015-04-02 13:20:50.451 1266 DEBUG glance.common.config [-] 
> glance_store.stores            = ['rbd'] log_opt_values 
> /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
> 
> 
> If Cinder and Glance are able to initialize the RBD driver, then everything 
> should work like a charm.
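> 
> For completeness, the Cinder side boils down to something like this (a sketch 
> following the ceph.com rbd-openstack guide; the secret UUID is whatever you 
> registered with libvirt for the client.cinder key):
> 
>     [DEFAULT]
>     volume_driver = cinder.volume.drivers.rbd.RBDDriver
>     rbd_pool = volumes
>     rbd_user = cinder
>     rbd_ceph_conf = /etc/ceph/ceph.conf
>     rbd_flatten_volume_from_snapshot = false
>     rbd_max_clone_depth = 5
>     rbd_secret_uuid = <the-libvirt-secret-uuid>
>     glance_api_version = 2
> 
> Restart openstack-cinder-volume afterwards and the "Starting volume driver 
> RBDDriver" line above should show up.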
> 
> 
> ****************************************************************
> Karan Singh 
> Systems Specialist , Storage Platforms
> CSC - IT Center for Science,
> Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland
> mobile: +358 503 812758
> tel. +358 9 4572001
> fax +358 9 4572302
> http://www.csc.fi/
> ****************************************************************
> 
>> On 02 Apr 2015, at 03:10, Erik McCormick <emccorm...@cirrusseven.com> wrote:
>> 
>> Can you both set Cinder and/or Glance logging to debug and provide some 
>> logs? There was an issue with the first Juno release of Glance in some 
>> vendor packages, so make sure you're fully updated to 2014.2.2.
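>> 
>> (A minimal way to do that, assuming the usual oslo config options:
>> 
>>     # in /etc/glance/glance-api.conf and /etc/cinder/cinder.conf
>>     [DEFAULT]
>>     debug = True
>>     verbose = True
>> 
>> then restart openstack-glance-api, openstack-cinder-api and 
>> openstack-cinder-volume, and reproduce the failure.)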
>> 
>> On Apr 1, 2015 7:12 PM, "Quentin Hartman" <qhart...@direwolfdigital.com> wrote:
>> I am coincidentally going through the same process right now. The best 
>> reference I've found is this: http://ceph.com/docs/master/rbd/rbd-openstack/
>> 
>> When I did Firefly / Icehouse, this (seemingly) same guide Just Worked(tm), 
>> but now with Giant / Juno I'm running into trouble similar to what you 
>> describe. Everything _seems_ right, but creating volumes via OpenStack just 
>> sits and spins forever, never creating anything and (as far as I've found so 
>> far) not logging anything interesting. Normal RADOS operations work fine.
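>> 
>> (A few checks worth running on the cinder side -- a sketch, assuming the 
>> client.cinder key from the guide is on the cinder-volume host:
>> 
>>     cinder service-list              # is cinder-volume actually up?
>>     ceph --id cinder -s              # can the cinder user reach the cluster?
>>     rbd --id cinder -p volumes ls    # and see the volumes pool?
>> 
>> If cinder-volume can't reach the cluster, volume creates tend to sit in 
>> "creating" without logging much.)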
>> 
>> Feel free to hit me up off list if you want to confer and then we can return 
>> here if we come up with anything to be shared with the group.
>> 
>> QH
>> 
>> On Wed, Apr 1, 2015 at 3:43 PM, Iain Geddes <iain.ged...@cyaninc.com> wrote:
>> All,
>> 
>> Apologies for my ignorance, but I don't seem to be able to search the archive. 
>> 
>> I've spent a lot of time trying, but am having difficulty integrating Ceph 
>> (Giant) into OpenStack (Juno). I don't appear to be recording any errors 
>> anywhere, but I simply don't seem to be writing to the cluster when I try 
>> creating a new volume or importing an image. The cluster is healthy and I can 
>> create a static rbd mapping, so I know the key components are in place. My 
>> problem is almost certainly finger trouble on my part, but I am completely 
>> lost and wondered if there was a well-thumbed guide to the integration?
>> 
>> Thanks
>> 
>> 
>> Iain
>> 
> 
> 
> 
> 
> -- 
> Iain Geddes
> Customer Support Engineer
> 1383 North McDowell Blvd.
> Petaluma, CA 94954
> M +353 89 432 6811
> E iain.ged...@cyaninc.com
> www.cyaninc.com
> 
> 
> 
> 
> 
> 
> 
> 


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
