> Thanks a lot, Jens. Do I have to have cephx authentication enabled? Did you
> enable it? Which user from the node that contains cinder-api or glance-api
> are you using to create volumes and images? The documentation at
> http://ceph.com/docs/master/rbd/rbd-openstack/ mentions creating new users
> client.volumes and client.images for cinder and glance respectively. Did you
> do that?

We have cephx authentication enabled. Here's the /etc/ceph/ceph.conf file that our cluster uses (we have OSDs on our compute nodes - we shouldn't, but this is a test cluster only):

root@h1:~# cat /etc/ceph/ceph.conf
[global]
fsid = 6b3bd327-2f97-44f6-a8fc-xxxxxxxxxxxx
mon_initial_members = hxs, h0s, h1s
mon_host = xxxx:yyy:0:6::11c,xxxx:yyy:0:6::11e,xxxx:yyy:0:6::11d
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true
ms_bind_ipv6 = true
rgw_print_continue = false

[client]
rbd cache = true

[client.images]
keyring = /etc/ceph/ceph.client.images.keyring

[client.volumes]
keyring = /etc/ceph/ceph.client.volumes.keyring

[client.radosgw.gateway]
host = hxs
keyring = /etc/ceph/keyring.radosgw.gateway
rgw_socket_path = /tmp/radosgw.sock
log_file = /var/log/ceph/radosgw.log

Make sure that /etc/ceph/ceph.conf is readable by other processes - ceph-deploy sets it to 0600 or 0400, which makes nova really, really unhappy:

root@h1:~# ls -l /etc/ceph/ceph.conf
-rw-r--r-- 1 root root 592 Nov 8 16:32 /etc/ceph/ceph.conf

We have a volumes and an images user, as you can see, with the necessary rights on the volumes and images pools, as described in the ceph-openstack documentation.
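For reference, we created the two users more or less the way the ceph-openstack documentation shows - something like this (the capability strings below are quoted from the docs, double-check them against your own pool names before using):

ceph auth get-or-create client.volumes \
  mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images' \
  -o /etc/ceph/ceph.client.volumes.keyring
ceph auth get-or-create client.images \
  mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' \
  -o /etc/ceph/ceph.client.images.keyring

The resulting keyring files then have to be readable by the cinder and glance service users on the respective nodes.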
A really good overview of the current state of ceph and OpenStack Havana was posted by Sebastien Han yesterday: http://techs.enovance.com/6424/back-from-the-summit-cephopenstack-integration - it cleared up a bunch of things for me.

cheers
jc

> Thanks again!
> Narendra
>
> From: Jens-Christian Fischer [mailto:jens-christian.fisc...@switch.ch]
> Sent: Monday, November 25, 2013 8:19 AM
> To: Trivedi, Narendra
> Cc: ceph-users@lists.ceph.com; Rüdiger Rissmann
> Subject: Re: [ceph-users] Openstack Havana, boot from volume fails
>
> Hi Narendra
>
> rbd for cinder and glance is configured according to the ceph documentation here:
> http://ceph.com/docs/master/rbd/rbd-openstack/
>
> rbd for VM images is configured like so: https://review.openstack.org/#/c/36042/
>
> config sample (nova.conf):
>
> --- cut ---
> volume_driver=nova.volume.driver.RBDDriver
> rbd_pool=volumes
> rbd_user=volumes
> rbd_secret_uuid=xxxx-yyyy-zzzz
>
> libvirt_images_type=rbd
> # the RADOS pool in which rbd volumes are stored (string value)
> libvirt_images_rbd_pool=volumes
> # path to the ceph configuration file to use (string value)
> libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
>
> # don't inject stuff into partitions, RBD-backed partitions don't work that way
> libvirt_inject_partition = -2
> --- cut ---
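> The rbd_secret_uuid above is the uuid of a libvirt secret holding the client.volumes key on each compute node. It is set up roughly like this, following the ceph-openstack documentation (the uuid placeholder matches the one in the config sample):
>
> cat > secret.xml <<EOF
> <secret ephemeral='no' private='no'>
>   <usage type='ceph'>
>     <name>client.volumes secret</name>
>   </usage>
> </secret>
> EOF
> virsh secret-define --file secret.xml
> # virsh secret-define prints the uuid of the new secret - that uuid goes into nova.conf
> virsh secret-set-value --secret xxxx-yyyy-zzzz --base64 $(ceph auth get-key client.volumes)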
> and finally, we used the following files from this repository:
> https://github.com/jdurgin/nova/tree/havana-ephemeral-rbd
>
> image/glance.py
> virt/images.py
> virt/libvirt/driver.py
> virt/libvirt/imagebackend.py
> virt/libvirt/utils.py
>
> good luck :)
>
> cheers
> jc
>
> --
> SWITCH
> Jens-Christian Fischer, Peta Solutions
> Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
> phone +41 44 268 15 15, direct +41 44 268 15 71
> jens-christian.fisc...@switch.ch
> http://www.switch.ch
> http://www.switch.ch/socialmedia
>
> On 22.11.2013, at 17:41, "Trivedi, Narendra" <narendra.triv...@savvis.com> wrote:
>
> Hi Jean,
>
> Could you please tell me which link you followed to install RBD etc. for Havana?
>
> Thanks!
> Narendra
>
> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jens-Christian Fischer
> Sent: Thursday, November 21, 2013 8:06 AM
> To: ceph-users@lists.ceph.com
> Cc: Rüdiger Rissmann
> Subject: [ceph-users] Openstack Havana, boot from volume fails
>
> Hi all
>
> I'm playing with the boot-from-volume options in Havana and have run into problems.
>
> (OpenStack Havana, Ceph Dumpling (0.67.4), rbd for glance, cinder and experimental ephemeral disk support)
>
> The following things do work:
> - glance images are in rbd
> - cinder volumes are in rbd
> - creating a VM from an image works
> - creating a VM from a snapshot works
>
> However, booting from a volume fails.
>
> Steps to reproduce:
>
> Boot from image
> Create a snapshot from the running instance
> Create a volume from this snapshot
> Start a new instance with "boot from volume" and the volume just created (roughly the CLI sequence sketched below)
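> In CLI terms the sequence is roughly this (image names, flavors and IDs below are placeholders, not our actual values):
>
> nova boot --image <image-id> --flavor m1.small testvm
> nova image-create testvm testvm-snap
> cinder create --image-id <glance id of testvm-snap> --display-name testvol 10
> nova boot --flavor m1.small --block-device-mapping vda=<id of testvol>:::0 testvm-from-vol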
> The boot process hangs after around 3 seconds, and the console.log of the instance shows this:
>
> [ 0.000000] Linux version 3.11.0-12-generic (buildd@allspice) (gcc version 4.8.1 (Ubuntu/Linaro 4.8.1-10ubuntu7) ) #19-Ubuntu SMP Wed Oct 9 16:20:46 UTC 2013 (Ubuntu 3.11.0-12.19-generic 3.11.3)
> [ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-3.11.0-12-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
> ...
> [ 0.098221] Brought up 1 CPUs
> [ 0.098964] smpboot: Total of 1 processors activated (4588.94 BogoMIPS)
> [ 0.100408] NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
> [ 0.102667] devtmpfs: initialized
> ...
> [ 0.560202] Linux agpgart interface v0.103
> [ 0.562276] brd: module loaded
> [ 0.563599] loop: module loaded
> [ 0.565315] vda: vda1
> [ 0.568386] scsi0 : ata_piix
> [ 0.569217] scsi1 : ata_piix
> [ 0.569972] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0a0 irq 14
> [ 0.571289] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0a8 irq 15
> ...
> [ 0.742082] Freeing unused kernel memory: 1040K (ffff8800016fc000 - ffff880001800000)
> [ 0.746153] Freeing unused kernel memory: 836K (ffff880001b2f000 - ffff880001c00000)
> Loading, please wait...
> [ 0.764177] systemd-udevd[95]: starting version 204
> [ 0.787913] floppy: module verification failed: signature and/or required key missing - tainting kernel
> [ 0.825174] FDC 0 is a S82078B
> ...
> [ 1.448178] tsc: Refined TSC clocksource calibration: 2294.376 MHz
> error: unexpectedly disconnected from boot status daemon
> Begin: Loading essential drivers ... done.
> Begin: Running /scripts/init-premount ... done.
> Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
> Begin: Running /scripts/local-premount ... done.
> [ 2.384452] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null)
> Begin: Running /scripts/local-bottom ... done.
> done.
> Begin: Running /scripts/init-bottom ... done.
> [ 3.021268] init: mountall main process (193) killed by FPE signal
> General error mounting filesystems.
> A maintenance shell will now be started.
> CONTROL-D will terminate this shell and reboot the system.
> root@box-web1:~#
>
> The console is stuck, I can't get to the rescue shell.
>
> I can "rbd map" the volume and mount it from a physical host - the filesystem etc. is all in good order.
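> For reference, the check on the physical host was roughly this (the volume name is a placeholder for the actual cinder volume):
>
> rbd map volumes/volume-<uuid> --id volumes --keyring /etc/ceph/ceph.client.volumes.keyring
> mount /dev/rbd/volumes/volume-<uuid> /mnt
> # poke around /mnt: the filesystem mounts cleanly and the contents look intact
> umount /mnt
> rbd unmap /dev/rbd/volumes/volume-<uuid>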
> Any ideas?
>
> cheers
> jc
>
> --
> SWITCH
> Jens-Christian Fischer, Peta Solutions
> Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
> phone +41 44 268 15 15, direct +41 44 268 15 71
> jens-christian.fisc...@switch.ch
> http://www.switch.ch
> http://www.switch.ch/socialmedia

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com