Re: [ceph-users] MDS dying on cuttlefish

2013-05-30 Thread Giuseppe 'Gippa' Paterno'
... and BTW, I know it's my fault that I haven't done the mds newfs, but I think it would be better to print an error rather than going into a core dump with a trace. Just my eur 0.02 :) Cheers, Giuseppe ___ ceph-users mailing list ceph-users@lists.ceph.c

Re: [ceph-users] MDS dying on cuttlefish

2013-05-30 Thread Giuseppe 'Gippa' Paterno'
Hi Greg, just for your own information, ceph mds newfs has disappeared from the help screen of the "ceph" command and it was a nightmare to understand the syntax (that has changed)... luckily sources were there :) For the "flight log": ceph mds newfs --yes-i-really-mean-it Cheers, Gippa ___
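[Editor's note] For readers hitting the same wall, a minimal sketch of the invocation being described, assuming the cuttlefish-era syntax that takes the metadata and data pool IDs as arguments (the pool IDs below are placeholders; check yours first):

    # list pools to find the numeric IDs of the metadata and data pools
    ceph osd lspools
    # placeholders: 1 = metadata pool ID, 0 = data pool ID
    ceph mds newfs 1 0 --yes-i-really-mean-it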

[ceph-users] Fwd: Fwd: some problem install ceph-deploy(china)

2013-05-30 Thread Dan Mick
I think you meant this to go to ceph-users: Original Message Subject: Fwd: some problem install ceph-deploy(china) Date: Fri, 31 May 2013 02:54:56 +0800 From: 张鹏 To: dan.m...@inktank.com hello everyone I come from China, when I install ceph-deploy on my server

Re: [ceph-users] cephfs file system snapshots?

2013-05-30 Thread Gregory Farnum
On Thu, May 30, 2013 at 3:10 PM, K Richard Pixley wrote: > Hi. I've been following ceph from a distance for several years now. Kudos > on the documentation improvements and quick start stuff since the last time > I looked. > > However, I'm a little confused about something. > > I've been making
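[Editor's note] For readers finding this thread later: CephFS snapshots (still experimental at the time) are taken by creating a directory under the hidden .snap directory; a minimal sketch, with a placeholder snapshot name:

    # inside any directory on a mounted cephfs
    mkdir .snap/my-snapshot
    # and removed again with
    rmdir .snap/my-snapshot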

[ceph-users] cephfs file system snapshots?

2013-05-30 Thread K Richard Pixley
Hi. I've been following ceph from a distance for several years now. Kudos on the documentation improvements and quick start stuff since the last time I looked. However, I'm a little confused about something. I've been making heavy use of btrfs file system snapshots for several years now and

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin
On 05/30/2013 02:50 PM, Martin Mailand wrote: Hi Josh, now everything is working, many thanks for your help, great work. Great! I added those settings to http://ceph.com/docs/master/rbd/rbd-openstack/ so it's easier to figure out in the future. -martin On 30.05.2013 23:24, Josh Durgin wr

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh, now everything is working, many thanks for your help, great work. -martin On 30.05.2013 23:24, Josh Durgin wrote: >> I have two more things. >> 1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated, >> update your configuration to the new path. What is the new path? > > cind

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin
On 05/30/2013 02:18 PM, Martin Mailand wrote: Hi Josh, that's working. I have two more things. 1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated, update your configuration to the new path. What is the new path? cinder.volume.drivers.rbd.RBDDriver 2. I have in the glance-api.c
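[Editor's note] A minimal cinder.conf sketch reflecting Josh's answer; the pool, user and secret values are assumptions taken from the rbd-openstack guide linked earlier in the thread, not from this message:

    [DEFAULT]
    # new driver path (the old cinder.volume.driver.RBDDriver is deprecated)
    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    # illustrative values only
    rbd_pool=volumes
    rbd_user=volumes
    rbd_secret_uuid=<uuid of the libvirt secret>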

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh, that's working. I have two more things. 1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated, update your configuration to the new path. What is the new path? 2. I have in the glance-api.conf show_image_direct_url=True, but the volumes are not clones of the original which ar
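[Editor's note] A quick way to verify whether a given volume actually ended up as a copy-on-write clone, sketched with placeholder pool and volume names (the exact pools depend on your glance/cinder configuration):

    # a cloned volume reports a parent image/snapshot; a full copy does not
    rbd info volumes/volume-<volume-uuid>
    # look for a line like:  parent: images/<image-uuid>@<snapshot>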

Re: [ceph-users] increasing stability

2013-05-30 Thread Sage Weil
Hi everyone, I wanted to mention just a few things on this thread. The first is obvious: we are extremely concerned about stability. However, Ceph is a big project with a wide range of use cases, and it is difficult to cover them all. For that reason, Inktank is (at least for the moment) foc

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin
On 05/30/2013 01:50 PM, Martin Mailand wrote: Hi Josh, I found the problem, nova-compute tries to connect to the publicurl (xxx.xxx.240.10) of the keystone endpoints, this IP is not reachable from the management network. I thought the internalurl is the one, which is used for the internal communi

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh, I found the problem, nova-compute tries to connect to the publicurl (xxx.xxx.240.10) of the keystone endpoints, this IP is not reachable from the management network. I thought the internalurl is the one, which is used for the internal communication of the openstack components and the publi
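[Editor's note] A sketch of how to inspect what the registered endpoints currently look like, assuming the grizzly-era keystone CLI:

    # show the registered services and their public/internal/admin URLs
    keystone service-list
    keystone endpoint-list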

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh, On 30.05.2013 21:17, Josh Durgin wrote: > It's trying to talk to the cinder api, and failing to connect at all. > Perhaps there's a firewall preventing that on the compute host, or > it's trying to use the wrong endpoint for cinder (check the keystone > service and endpoint tables for the

Re: [ceph-users] rbd snap rollback does not show progress since cuttlefish

2013-05-30 Thread Stefan Priebe
On 30.05.2013 21:10, Josh Durgin wrote: On 05/30/2013 02:09 AM, Stefan Priebe - Profihost AG wrote: Hi, under bobtail rbd snap rollback shows the progress going on. Since cuttlefish I see no progress anymore. Listing the rbd help it only shows me a no-progress option but it seems no progress

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi, telnet is working. But how does nova know where to find the cinder-api? I have no cinder conf on the compute node, just nova. telnet 192.168.192.2 8776 Trying 192.168.192.2... Connected to 192.168.192.2. Escape character is '^]'. get Error response Error response Error code 400. Message: Ba
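[Editor's note] To Martin's question about how nova finds cinder: nova-compute looks the volume endpoint up in the keystone service catalog; in grizzly-era nova.conf the endpoint type it picks can be steered with a single option (a sketch, assuming that option is available in this deployment):

    [DEFAULT]
    # default is the public endpoint; switch to the internal one if the
    # public URL is not reachable from the management network
    cinder_catalog_info=volume:cinder:internalURL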

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Weiguo, my answers are inline. -martin On 30.05.2013 21:20, w sun wrote: > I would suggest on nova compute host (particularly if you have > separate compute nodes), > > (1) make sure "rbd ls -l -p " works and /etc/ceph/ceph.conf is > readable by user nova!! yes to both > (2) make sure you can

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread w sun
I would suggest on nova compute host (particularly if you have separate compute nodes), (1) make sure "rbd ls -l -p " works and /etc/ceph/ceph.conf is readable by user nova!! (2) make sure you can start up a regular ephemeral instance on the same nova node (i.e., nova-compute is working correctly) (
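[Editor's note] A sketch of check (1) as it might be run on the compute node (the pool name is a placeholder; the original message's pool argument was lost in the archive):

    # run the rbd listing as the nova user against the cinder pool
    sudo -u nova rbd ls -l -p volumes
    # confirm ceph.conf is readable by the nova user
    ls -l /etc/ceph/ceph.conf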

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin
On 05/30/2013 07:37 AM, Martin Mailand wrote: Hi Josh, I am trying to use ceph with openstack (grizzly), I have a multi host setup. I followed the instruction http://ceph.com/docs/master/rbd/rbd-openstack/. Glance is working without a problem. With cinder I can create and delete volumes without

Re: [ceph-users] rbd snap rollback does not show progress since cuttlefish

2013-05-30 Thread Josh Durgin
On 05/30/2013 02:09 AM, Stefan Priebe - Profihost AG wrote: Hi, under bobtail rbd snap rollback shows the progress going on. Since cuttlefish I see no progress anymore. Listing the rbd help it only shows me a no-progress option but it seems no progress is the default so I need a progress option.
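[Editor's note] For context, the command under discussion, with placeholder pool/image/snapshot names:

    # bobtail printed a progress bar for this; the cuttlefish help only
    # lists a --no-progress flag, which is what this thread is asking about
    rbd snap rollback rbd/myimage@mysnap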

Re: [ceph-users] MDS dying on cuttlefish

2013-05-30 Thread Gregory Farnum
On Wed, May 29, 2013 at 11:20 PM, Giuseppe 'Gippa' Paterno' wrote: > Hi Greg, >> Oh, not the OSD stuff, just the CephFS stuff that goes on top. Look at >> http://www.mail-archive.com/ceph-users@lists.ceph.com/msg00029.html >> Although if you were re-creating pools and things, I think that would >>

[ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh, I am trying to use ceph with openstack (grizzly), I have a multi host setup. I followed the instruction http://ceph.com/docs/master/rbd/rbd-openstack/. Glance is working without a problem. With cinder I can create and delete volumes without a problem. But I cannot boot from volumes. I do

Re: [ceph-users] RADOS Gateway Configuration

2013-05-30 Thread John Wilkins
Do you have your admin keyring in the /etc/ceph directory of your radosgw host? That sounds like step 1 here: http://ceph.com/docs/master/start/quick-rgw/#generate-a-keyring-and-key I think I encountered an issue there myself, and did a sudo chmod 644 on the keyring. On Wed, May 29, 2013 at 1:17
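[Editor's note] What John describes, spelled out as a sketch (the keyring filename is an assumption; use whatever step 1 of the quick-rgw guide created on your host):

    # make the keyring in /etc/ceph readable by the radosgw daemon
    sudo chmod 644 /etc/ceph/ceph.client.admin.keyring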

Re: [ceph-users] ceph-deploy

2013-05-30 Thread John Wilkins
Dewan, I encountered this too. I just did umount and reran the command and it worked for me. I probably need to add a troubleshooting section for ceph-deploy. On Fri, May 24, 2013 at 4:00 PM, John Wilkins wrote: > ceph-deploy does have an ability to push the client keyrings. I > haven't encounte

Re: [ceph-users] v0.63 released

2013-05-30 Thread Wido den Hollander
On 05/30/2013 03:26 PM, 大椿 wrote: Hi, Sage. I didn't find the 0.63 update for Debian/Ubuntu in http://ceph.com/docs/master/install/debian. The package version is still 0.61.2. Hi, The packages are there already: http://ceph.com/debian-testing/pool/main/c/ceph/ http://eu.ceph.com/debian-t
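[Editor's note] A sketch of pointing apt at the testing repository mentioned above (the release codename is an assumption; substitute your own, e.g. precise or wheezy):

    echo "deb http://ceph.com/debian-testing/ precise main" | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get install ceph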

Re: [ceph-users] v0.63 released

2013-05-30 Thread 大椿
Hi, Sage. I didn't find the 0.63 update for Debian/Ubuntu in http://ceph.com/docs/master/install/debian. The package version is still 0.61.2. Thanks! -- Original -- From: "Sage Weil"; Date: Wed, May 29, 2013 12:05 PM To: "ceph-devel"; "ceph-users"; Su

[ceph-users] rbd snap rollback does not show progress since cuttlefish

2013-05-30 Thread Stefan Priebe - Profihost AG
Hi, under bobtail rbd snap rollback shows the progress going on. Since cuttlefish I see no progress anymore. Listing the rbd help it only shows me a no-progress option but it seems no progress is the default so I need a progress option... Greets, Stefan ___