... and BTW, I know it's my fault that I haven't done the mds newfs, but
I think it would be better to print an error rather than dumping core
with a backtrace.
Just my eur 0.02 :)
Cheers,
Giuseppe
Hi Greg,
just for your information, ceph mds newfs has disappeared from the
help screen of the "ceph" command, and it was a nightmare to understand
the syntax (which has changed)... luckily the sources were there :)
For the "flight log":
ceph mds newfs --yes-i-really-mean-it
Cheers,
Gippa
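For the record, the syntax at that point appears to take the metadata and data
pool ids as positional arguments; the ids below are placeholders, list yours
with ceph osd lspools:

  ceph osd lspools
  ceph mds newfs <metadata-pool-id> <data-pool-id> --yes-i-really-mean-it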
I think you meant this to go to ceph-users:
Original Message
Subject:Fwd: some problem install ceph-deploy(china)
Date: Fri, 31 May 2013 02:54:56 +0800
From: 张鹏
To: dan.m...@inktank.com
Hello everyone,
I come from China. When I install ceph-deploy on my server
On Thu, May 30, 2013 at 3:10 PM, K Richard Pixley wrote:
> Hi. I've been following ceph from a distance for several years now. Kudos
> on the documentation improvements and quick start stuff since the last time
> I looked.
>
> However, I'm a little confused about something.
>
> I've been making
Hi. I've been following ceph from a distance for several years now.
Kudos on the documentation improvements and quick start stuff since the
last time I looked.
However, I'm a little confused about something.
I've been making heavy use of btrfs file system snapshots for several
years now and
On 05/30/2013 02:50 PM, Martin Mailand wrote:
Hi Josh,
now everything is working, many thanks for your help, great work.
Great! I added those settings to
http://ceph.com/docs/master/rbd/rbd-openstack/ so it's easier to figure
out in the future.
-martin
On 30.05.2013 23:24, Josh Durgin wr
Hi Josh,
now everything is working, many thanks for your help, great work.
-martin
On 30.05.2013 23:24, Josh Durgin wrote:
>> I have two more things.
>> 1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
>> update your configuration to the new path. What is the new path?
>
> cind
On 05/30/2013 02:18 PM, Martin Mailand wrote:
Hi Josh,
that's working.
I have two more things.
1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
update your configuration to the new path. What is the new path?
cinder.volume.drivers.rbd.RBDDriver
2. I have in the glance-api.c
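A minimal cinder.conf sketch with the updated driver path; the rbd_pool,
rbd_user and secret uuid values are assumptions, adjust them to your
deployment:

  # cinder.conf
  volume_driver=cinder.volume.drivers.rbd.RBDDriver
  rbd_pool=volumes
  rbd_user=cinder
  rbd_secret_uuid=<libvirt secret uuid>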
Hi Josh,
that's working.
I have two more things.
1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
update your configuration to the new path. What is the new path?
2. I have show_image_direct_url=True in glance-api.conf, but the
volumes are not clones of the original, which ar
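For the clone path to kick in, the rbd-openstack guide of that era also expects
the images themselves to live in rbd and cinder to talk to the glance v2 API; a
sketch of the relevant settings (the glance store option is an assumption about
the setup):

  # glance-api.conf
  default_store=rbd
  show_image_direct_url=True
  # cinder.conf
  glance_api_version=2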
Hi everyone,
I wanted to mention just a few things on this thread.
The first is obvious: we are extremely concerned about stability.
However, Ceph is a big project with a wide range of use cases, and it is
difficult to cover them all. For that reason, Inktank is (at least for
the moment) foc
On 05/30/2013 01:50 PM, Martin Mailand wrote:
Hi Josh,
I found the problem: nova-compute tries to connect to the publicurl
(xxx.xxx.240.10) of the keystone endpoints, and this IP is not reachable from
the management network.
I thought the internalurl is the one that is used for the internal
communi
Hi Josh,
I found the problem: nova-compute tries to connect to the publicurl
(xxx.xxx.240.10) of the keystone endpoints, and this IP is not reachable from
the management network.
I thought the internalurl is the one that is used for the internal
communication of the OpenStack components, and the publi
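If the publicurl really cannot be reached from the compute hosts, one
workaround (my assumption, not something confirmed in this thread) is to tell
nova to pick the internal endpoint from the catalog:

  # nova.conf on the compute nodes
  cinder_catalog_info=volume:cinder:internalURL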
Hi Josh,
On 30.05.2013 21:17, Josh Durgin wrote:
> It's trying to talk to the cinder api, and failing to connect at all.
> Perhaps there's a firewall preventing that on the compute host, or
> it's trying to use the wrong endpoint for cinder (check the keystone
> service and endpoint tables for the
On 30.05.2013 21:10, Josh Durgin wrote:
On 05/30/2013 02:09 AM, Stefan Priebe - Profihost AG wrote:
Hi,
under bobtail, rbd snap rollback showed its progress. Since
cuttlefish I see no progress anymore.
The rbd help only lists a no-progress option, but it seems
no progress
Hi,
telnet is working. But how does nova know where to find the cinder-api?
I have no cinder conf on the compute node, just nova.
telnet 192.168.192.2 8776
Trying 192.168.192.2...
Connected to 192.168.192.2.
Escape character is '^]'.
get
Error response
Error response
Error code 400.
Message: Ba
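nova does not read a cinder config at all; it looks the volume endpoint up in
the keystone service catalog. A quick way to see what it will find, using the
old keystone CLI (assuming admin credentials are sourced):

  keystone catalog --service volume
  keystone endpoint-list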
Hi Weiguo,
my answers are inline.
-martin
On 30.05.2013 21:20, w sun wrote:
> I would suggest on nova compute host (particularly if you have
> separate compute nodes),
>
> (1) make sure "rbd ls -l -p " works and /etc/ceph/ceph.conf is
> readable by user nova!!
yes to both
> (2) make sure you can
I would suggest on the nova compute host (particularly if you have separate
compute nodes) the following (example commands follow the list):
(1) make sure "rbd ls -l -p " works and /etc/ceph/ceph.conf is readable by user
nova!!
(2) make sure you can start up a regular ephemeral instance on the same nova
node (i.e., nova-compute is working correctly)
(
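A sketch of those checks as shell commands; the pool name "volumes" and the
image id are assumptions, adjust them to your setup:

  rbd ls -l -p volumes                       # (1) rbd can reach the cluster
  sudo -u nova cat /etc/ceph/ceph.conf       # (1) config readable by the nova user
  nova boot --flavor 1 --image <image-id> test-ephemeral   # (2) plain ephemeral boot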
On 05/30/2013 07:37 AM, Martin Mailand wrote:
Hi Josh,
I am trying to use ceph with OpenStack (Grizzly); I have a multi-host setup.
I followed the instructions at http://ceph.com/docs/master/rbd/rbd-openstack/.
Glance is working without a problem.
With cinder I can create and delete volumes without
On 05/30/2013 02:09 AM, Stefan Priebe - Profihost AG wrote:
Hi,
under bobtail, rbd snap rollback showed its progress. Since
cuttlefish I see no progress anymore.
The rbd help only lists a no-progress option, but it seems
no progress is the default, so I need a progress option.
On Wed, May 29, 2013 at 11:20 PM, Giuseppe 'Gippa' Paterno'
wrote:
> Hi Greg,
>> Oh, not the OSD stuff, just the CephFS stuff that goes on top. Look at
>> http://www.mail-archive.com/ceph-users@lists.ceph.com/msg00029.html
>> Although if you were re-creating pools and things, I think that would
>>
Hi Josh,
I am trying to use ceph with OpenStack (Grizzly); I have a multi-host setup.
I followed the instructions at http://ceph.com/docs/master/rbd/rbd-openstack/.
Glance is working without a problem.
With cinder I can create and delete volumes without a problem.
But I cannot boot from volumes.
I do
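For reference, the kind of boot-from-volume invocation being attempted; this is
only roughly the grizzly-era syntax and all ids are placeholders:

  cinder create --image-id <image-id> --display-name boot-vol 10
  nova boot --flavor 1 --block_device_mapping vda=<volume-id>:::0 bfv-test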
Do you have your admin keyring in the /etc/ceph directory of your
radosgw host? That sounds like step 1 here:
http://ceph.com/docs/master/start/quick-rgw/#generate-a-keyring-and-key
I think I encountered an issue there myself, and did a sudo chmod 644
on the keyring.
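A sketch of that check and fix, assuming the default admin keyring name (yours
may differ):

  ls -l /etc/ceph/ceph.client.admin.keyring
  sudo chmod 644 /etc/ceph/ceph.client.admin.keyring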
On Wed, May 29, 2013 at 1:17
Dewan,
I encountered this too. I just did umount and reran the command and it
worked for me. I probably need to add a troubleshooting section for
ceph-deploy.
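For the record, the workaround was roughly the following; the device and host
names are hypothetical, and I am assuming the command that failed was the OSD
creation step:

  sudo umount /dev/sdb1
  ceph-deploy osd create myhost:sdb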
On Fri, May 24, 2013 at 4:00 PM, John Wilkins wrote:
> ceph-deploy does have an ability to push the client keyrings. I
> haven't encounte
On 05/30/2013 03:26 PM, 大椿 wrote:
Hi, Sage.
I didn't find the 0.63 update for Debian/Ubuntu in
http://ceph.com/docs/master/install/debian.
The package version is still 0.61.2.
Hi,
The packages are there already:
http://ceph.com/debian-testing/pool/main/c/ceph/
http://eu.ceph.com/debian-t
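A sketch of installing straight from that repository on an Ubuntu box; the
"precise" codename is an assumption, and the ceph release key still has to be
added via apt-key first:

  echo deb http://ceph.com/debian-testing/ precise main | sudo tee /etc/apt/sources.list.d/ceph.list
  sudo apt-get update && sudo apt-get install ceph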
Hi, Sage.
I didn't find the 0.63 update for Debian/Ubuntu in
http://ceph.com/docs/master/install/debian.
The package version is still 0.61.2.
Thanks!
-- Original --
From: "Sage Weil";
Date: Wed, May 29, 2013 12:05 PM
To: "ceph-devel";
"ceph-users";
Su
Hi,
under bobtail, rbd snap rollback showed its progress. Since
cuttlefish I see no progress anymore.
The rbd help only lists a no-progress option, but it seems
no progress is the default, so I need a progress option...
Greets,
Stefan
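For anyone comparing, the two invocations in question; the image and snapshot
names are made up:

  rbd snap rollback rbd/myimage@mysnap               # showed a progress bar under bobtail
  rbd snap rollback --no-progress rbd/myimage@mysnap # explicitly quiet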