Hi, Greg.
Sorry for my mistake in the email. The four disks are all the same size: 1TB, not
'GB'. I made a mistake in the email content. Sorry.
At 2013-06-28 22:52:17,"Gregory Farnum" wrote:
It sounds like you just built a 4GB (or 6GB?) RADOS cluster and then tried to
put 4GB of data into it. That won
I have only the 'ceph health detail' output from the previous crash.
ceph health detail
HEALTH_WARN 6 pgs peering; 9 pgs stuck unclean
pg 3.c62 is stuck unclean for 583.220063, current state active, last
acting [57,23,51]
pg 4.269 is stuck unclean for 4842.519837, current state peering, last
acting [23,57,106]
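(For reference, the full peering state of one of the stuck pgs can be dumped with the pgid taken from the listing above, e.g.:

    ceph pg 4.269 query > pg-4.269-query.txt

The redirect to a file is just a convenience for attaching the output.)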
Hi, all!
Inktank is preparing a series of Ceph Days this fall in the US and Europe,
intended for users and developers who want to learn about Ceph and meet
other members of the community. Our current working agenda contains a look
into the future of Ceph, stories from real world users, an install
> Date: Fri, 28 Jun 2013 14:10:12 -0700
> From: josh.dur...@inktank.com
> To: ws...@hotmail.com
> CC: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Openstack Multi-rbd storage backend
>
> On 06/27/2013 05:54 PM, w sun wrote:
> > Thanks Josh. That explains. So I guess right now with Grizzly
> Ver. 0.56.6
> Hmm, the osd did not die; 1 or more pgs are stuck peering on it.
Can you get a pgid from 'ceph health detail' and then do 'ceph pg
query' and attach that output?
Thanks!
sage
>
> Regards
> Dominik
>
> On Jun 28, 2013 11:28 PM, "Sage Weil" wrote:
> On Sat, 29 Jun 2013, Andrey K
Ver. 0.56.6
Hmm, the osd did not die; 1 or more pgs are stuck peering on it.
Regards
Dominik
On Jun 28, 2013 11:28 PM, "Sage Weil" wrote:
> On Sat, 29 Jun 2013, Andrey Korolyov wrote:
> > There is almost the same problem with the 0.61 cluster, at least with the same
> > symptoms. It can be reproduced quite easily
On Sat, 29 Jun 2013, Andrey Korolyov wrote:
> There is almost the same problem with the 0.61 cluster, at least with the same
> symptoms. It can be reproduced quite easily: remove an osd and then
> mark it as out, and with quite high probability one of its neighbors will
> be stuck at the end of the peering process
Today I had the peering problem not when I took osd.71 out, but during normal Ceph operation.
Regards
Dominik
2013/6/28 Andrey Korolyov :
> There is almost the same problem with the 0.61 cluster, at least with the same
> symptoms. It can be reproduced quite easily: remove an osd and then
> mark it as out, and with qui
There is almost the same problem with the 0.61 cluster, at least with the same
symptoms. It can be reproduced quite easily: remove an osd and then
mark it as out, and with quite high probability one of its neighbors will
be stuck at the end of the peering process with a couple of peering pgs whose
primary copy is on it.
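(A minimal reproduction sketch of what is described above, assuming an OSD id of 12, which is purely illustrative:

    ceph osd out 12
    # then watch for pgs that never leave the peering state:
    ceph health detail | grep peering

The grep is just a quick way to spot pgs stuck in peering after the remap.)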
On 06/27/2013 05:54 PM, w sun wrote:
Thanks Josh. That explains it. So I guess right now with Grizzly, you can
only use one rbd backend pool (assuming a different cephx key for each
pool) on a single Cinder node unless you are willing to modify
cinder-volume.conf and restart the cinder service all
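(For reference, a single rbd backend in Grizzly-era Cinder is configured with something like the following in cinder.conf; the pool name, user, and secret UUID here are placeholders:

    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_pool=volumes
    rbd_user=cinder
    rbd_secret_uuid=00000000-0000-0000-0000-000000000000

Pointing a second pool at the same cinder-volume service is exactly the part that needs the config edit and service restart mentioned above.)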
Hi,
We took osd.71 out and now the problem is on osd.57.
Curiously, op_rw on osd.57 is much higher than on the others.
See here: https://www.dropbox.com/s/o5q0xi9wbvpwyiz/op_rw_osd57.PNG
In the data directory on this osd I found:
> data/osd.57/current# du -sh omap/
> 2.3G    omap/
That much higher op_rw on one osd
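(To compare this across OSDs on the same host, something along these lines can be used; the /data/osd.* path just mirrors the layout shown above, and the osd id in the admin-socket call is illustrative:

    du -sh /data/osd.*/current/omap
    ceph --admin-daemon /var/run/ceph/ceph-osd.57.asok perf dump

The perf dump output includes the osd op_rw counter referenced above.)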
On Fri, 28 Jun 2013, Gregory Farnum wrote:
> It's the tool. We definitely want a way to find out if a config option is
> set or not, but I think patches here would be welcome. :)
> You can also look at ceph-disk's get_conf and get_conf_with_default methods
> for how I handle this.
If you do
ceph
On Fri, Jun 28, 2013 at 1:11 AM, Vadim Izvekov wrote:
> Hello!
>
>
> We have an issue with the integration of RadosGW and Keystone. Can you help us?
>
> Our ceph configuration is as follows:
>
> [global]
>
> rgw socket path = /tmp/radosgw.sock
>
> [client.radosgw.gateway]
> host = fuel-contr
It's the tool. We definitely want a way to find out if a config option is
set or not, but I think patches here would be welcome. :)
You can also look at ceph-disk's get_conf and get_conf_with_default methods
for how I handle this.
-Greg
On Friday, June 28, 2013, Wido den Hollander wrote:
> Hi,
>
It sounds like you just built a 4GB (or 6GB?) RADOS cluster and then tried
to put 4GB of data into it. That won't work; the underlying local
filesystems probably started having trouble with allocation issues as soon
as you got to 2GB free.
-Greg
On Friday, June 28, 2013, 华仔 wrote:
> Hello, I am fr
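(As a quick check of how close the cluster actually is to full, both of the following were available in that release:

    ceph -s
    rados df

ceph -s reports the overall used/available totals, and rados df breaks usage down per pool.)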
Hi, Alex. Here comes the new update:
We applied cache='writeback' to the VM. The performance improved, however not by
much; the speed is 9.9MB/s, an increase of 3.9MB/s. Any other advice for us?
Thanks a lot.
Best regards.
--
Allen
At 2013-06-28 15:54:07, "Alex Bligh" wrote:
>
>On 28 Jun 2013, at 0
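(For reference, when qemu accesses the image through librbd, client-side caching is usually set in two places; a sketch, with illustrative values:

    # ceph.conf on the hypervisor:
    [client]
        rbd cache = true
    # libvirt disk definition, matching what was already applied:
    #   <driver name='qemu' type='raw' cache='writeback'/>

The Ceph/QEMU documentation notes that if rbd cache is enabled, the disk should also use cache=writeback to avoid data-loss risk, so the two settings should agree.)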
Hi,
I was doing some Bash scripting and I'm not sure if it's me or the
ceph-conf tool.
I'm trying to retrieve the "osd data" dir for osd.0, but that fails
since I haven't declared that variable in the conf file because I'm
using the default setting.
root@data1:~# ceph-conf --name osd.0 --l
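(A bash sketch of the fallback approach Greg describes below; the default path at the end is the stock /var/lib/ceph/osd/ layout and is only an assumption about this particular setup:

    osd_data=$(ceph-conf --name osd.0 --lookup 'osd data' 2>/dev/null)
    [ -n "$osd_data" ] || osd_data=/var/lib/ceph/osd/ceph-0

ceph-conf prints nothing and exits non-zero when the option is not declared in the conf file, which is why an explicit fallback is needed.)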
Thank you for your prompt reply.
1. So far, we haven't used caching yet.
2. About the versions:
qemu: QEMU emulator version 1.4.0 (Debian 1.4.0+dfsg-1expubuntu4)
ceph: ceph version 0.61.4
The xml file of the VM is as below:
i-2-49-VM
12608
Hello!
We have an issue with the integration of RadosGW and Keystone. Can you help us?
Our ceph configuration is as follows:
[global]
rgw socket path = /tmp/radosgw.sock
[client.radosgw.gateway]
host = fuel-controller-01
user = www-data
keyring = /etc/ceph/client.radosgw.gateway.key
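(The Keystone side of radosgw is usually driven by a few extra options in the [client.radosgw.gateway] section; a sketch, where the URL, token, and roles are placeholders:

    rgw keystone url = http://<keystone-host>:35357
    rgw keystone admin token = <admin token>
    rgw keystone accepted roles = Member, admin

Only the option names are taken from the gateway documentation; all values need to match the actual Keystone deployment.)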
On 28 Jun 2013, at 08:41, 华仔 wrote:
> write speed: wget http://remote server ip/2GB.file; we get a write speed
> at an average of 6MB/s (far below what we expected).
> (we must have gotten something wrong there; we would appreciate it a lot if any
> help comes from you. we think the problem comes from
Hello, I am from China. I hope you can read my poor English below.
We are doing a basic test with ceph and cloudstack.
Experimental environment:
1. Four ceph-osds running on two nodes (centos6.2); both of them have three 1GB
physical disks (we build osds on /dev/sdb and /dev/sdc).
so we
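(One way to separate the cluster's write performance from the guest/wget path is to benchmark RADOS directly from one of the nodes; a sketch, where 'rbd' is assumed to be the pool the images live in:

    rados bench -p rbd 30 write

rados bench reports average and per-second write bandwidth, which makes it easier to tell whether the 6MB/s limit is on the ceph side or the VM side.)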