ceph-users@lists.ceph.com

2013-06-28 Thread 华仔
Hello, I am from China; I hope you can follow my English below. We are doing a basic test with ceph and cloudstack. Experimental environment: 1. four ceph-osds running on two nodes (centos6.2); both of them have three 1GB physical disks (we build osds on /dev/sdb and /dev/sdc). so we
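For reference, a four-OSD layout like this would have been declared in the mkcephfs-era ceph.conf roughly as below; this is a minimal sketch, and the hostnames node1/node2 are hypothetical, not taken from the thread:

    [osd.0]
    host = node1
    devs = /dev/sdb
    [osd.1]
    host = node1
    devs = /dev/sdc
    [osd.2]
    host = node2
    devs = /dev/sdb
    [osd.3]
    host = node2
    devs = /dev/sdc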

ceph-users@lists.ceph.com

2013-06-28 Thread Alex Bligh
On 28 Jun 2013, at 08:41, 华仔 wrote: > write speed: wget http://remote server ip/2GB.file , we get an average write > speed of 6MB/s (far below what we expected). > (we must have got something wrong there; we would appreciate any help from > you. we think the problem comes from
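Write throughput can also be measured inside the guest and directly against the cluster, which helps separate guest, rbd, and network effects; a small sketch (pool name and sizes are placeholders):

    # inside the VM, bypassing the guest page cache
    dd if=/dev/zero of=/root/testfile bs=1M count=1024 oflag=direct
    # from a client node, straight against RADOS for 30 seconds
    rados bench -p rbd 30 write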

[ceph-users] ceph + openstack integration

2013-06-28 Thread Vadim Izvekov
Hello! We have an issue with the integration of RadosGW and Keystone. Could you help us? We have the following ceph configuration: [global] rgw socket path = /tmp/radosgw.sock [client.radosgw.gateway] host = fuel-controller-01 user = www-data keyring = /etc/ceph/client.radosgw.gateway.key
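For Keystone-authenticated radosgw, the gateway section usually also carries the rgw keystone options; a hedged sketch with placeholder values (exact option availability depends on the Ceph release in use):

    [client.radosgw.gateway]
    rgw keystone url = http://{keystone-host}:35357
    rgw keystone admin token = {keystone-admin-token}
    rgw keystone accepted roles = Member, admin
    rgw s3 auth use keystone = true
    # needed only for PKI tokens; the path is an assumption
    nss db path = /var/lib/ceph/nss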

ceph-users@lists.ceph.com

2013-06-28 Thread 华仔
thank you for your prompt reply. 1. so far, we haven't used a cache mode yet. 2. about the versions: qemu: QEMU emulator version 1.4.0 (Debian 1.4.0+dfsg-1expubuntu4); ceph: ceph version 0.61.4. the xml file of the vm is as below: i-2-49-VM 12608
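In that XML, the cache mode sits on the driver element of the rbd disk; a minimal sketch of what the relevant block looks like (pool/image name and monitor host are placeholders, not values from the thread):

    <disk type='network' device='disk'>
      <!-- cache mode goes here; 'none' or 'writeback' are the usual choices for rbd -->
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='{pool}/{image}'>
        <host name='{monitor-host}' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>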

[ceph-users] ceph-conf doesn't return not declared variables?

2013-06-28 Thread Wido den Hollander
Hi, I was doing some Bash scripting and I'm not sure if it's me or the ceph-conf tool. I'm trying to retrieve the "osd data" dir for osd.0, but that fails since I haven't declared that variable in the conf file because I'm using the default setting. root@data1:~# ceph-conf --name osd.0 --l
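A small bash sketch of the situation being described: the lookup returns nothing when the option is not declared, so the script has to supply the default itself (the fallback path below assumes the standard osd data location):

    osd_data=$(ceph-conf --name osd.0 --lookup "osd data" 2>/dev/null)
    if [ -z "$osd_data" ]; then
        # option not declared in ceph.conf; fall back to the usual default
        osd_data="/var/lib/ceph/osd/ceph-0"
    fi
    echo "$osd_data"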

ceph-users@lists.ceph.com

2013-06-28 Thread 华仔
Hi, Alex. Here is the new update: we applied cache='writeback' to the vm. The performance improved, but not by much: the speed is now 9.9MB/s, an increase of 3.9MB/s. Any other advice for us? thanks a lot. Best regards. -- Allen At 2013-06-28 15:54:07, "Alex Bligh" wrote: > >On 28 Jun 2013, at 0

ceph-users@lists.ceph.com

2013-06-28 Thread Gregory Farnum
It sounds like you just built a 4GB (or 6GB?) RADOS cluster and then tried to put 4GB of data into it. That won't work; the underlying local filesystems probably started having trouble with allocation issues as soon as you got down to 2GB free. -Greg On Friday, June 28, 2013, 华仔 wrote: > Hello, I am fr
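Cluster and per-OSD free space can be checked while running such a test; the paths in the last command assume the default osd data directories:

    ceph -s                       # overall cluster status, including used/free space
    rados df                      # per-pool usage and object counts
    df -h /var/lib/ceph/osd/*     # free space on the filesystems backing each osd (assumed default paths)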

Re: [ceph-users] ceph-conf doesn't return not declared variables?

2013-06-28 Thread Gregory Farnum
It's the tool. We definitely want a way to find out if a config option is set or not, but I think patches here would be welcome. :) You can also look at ceph-disk's get_conf and get_conf_with_default methods for how I handle this. -Greg On Friday, June 28, 2013, Wido den Hollander wrote: > Hi, >

Re: [ceph-users] ceph + openstack integration

2013-06-28 Thread Yehuda Sadeh
On Fri, Jun 28, 2013 at 1:11 AM, Vadim Izvekov wrote: > Hello! > > > We have an issue with the integration of RadosGW and Keystone. Could you help > us? > > We have the following ceph configuration: > > [global] > > rgw socket path = /tmp/radosgw.sock > > [client.radosgw.gateway] > host = fuel-contr

Re: [ceph-users] ceph-conf doesn't return not declared variables?

2013-06-28 Thread Sage Weil
On Fri, 28 Jun 2013, Gregory Farnum wrote: > It's the tool. We definitely want a way to find out if a config option is > set or not, but I think patches here would be welcome. :) > You can also look at ceph-disk's get_conf and get_conf_with_default methods > for how I handle this. If you do ceph

Re: [ceph-users] two osds stuck on peering after starting an osd to recover

2013-06-28 Thread Dominik Mostowiec
Hi, We took osd.71 out and now the problem is on osd.57. Something curious: op_rw on osd.57 is much higher than on the others. See here: https://www.dropbox.com/s/o5q0xi9wbvpwyiz/op_rw_osd57.PNG In the data on this osd I found: > data/osd.57/current# du -sh omap/ > 2.3G omap/ That much higher op_rw on one osd
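Per-OSD operation counters like op_rw can also be read from the daemon's admin socket; a sketch, assuming the default socket path for osd.57:

    ceph --admin-daemon /var/run/ceph/ceph-osd.57.asok perf dump           # includes the op_rw counter
    ceph --admin-daemon /var/run/ceph/ceph-osd.57.asok dump_ops_in_flight  # operations currently in progress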

Re: [ceph-users] Openstack Multi-rbd storage backend

2013-06-28 Thread Josh Durgin
On 06/27/2013 05:54 PM, w sun wrote: Thanks Josh. That explains. So I guess right now with Grizzly, you can only use one rbd backend pool (assuming a different cephx key for each pool) on a single Cinder node unless you are willing to modify cinder-volume.conf and restart the cinder service all
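For context, the Grizzly-era rbd backend options being discussed live in the cinder-volume configuration and look roughly like this (a sketch with placeholder values; only one such set is active per service without the edit-and-restart workaround mentioned above):

    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    # libvirt secret holding the cephx key for rbd_user; the UUID is a placeholder
    rbd_secret_uuid = {libvirt-secret-uuid}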

Re: [ceph-users] two osds stuck on peering after starting an osd to recover

2013-06-28 Thread Andrey Korolyov
There is almost the same problem with a 0.61 cluster, at least with the same symptoms. It can be reproduced quite easily: remove an osd and then mark it as out, and with quite high probability one of its neighbors will be stuck at the end of the peering process with a couple of peering pgs whose primary copy is on it.
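The reproduction and the check for stuck pgs would look roughly like this (the osd id is a placeholder):

    ceph osd out {osd-id}          # mark the osd out and trigger rebalancing
    ceph -s                        # watch for pgs that stay in the peering state
    ceph pg dump_stuck inactive    # list pgs stuck inactive (e.g. peering)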

Re: [ceph-users] two osds stuck on peering after starting an osd to recover

2013-06-28 Thread Dominik Mostowiec
Today I had the peering problem not when I put osd.71 out, but during normal Ceph operation. Regards Dominik 2013/6/28 Andrey Korolyov : > There is almost the same problem with a 0.61 cluster, at least with the > same symptoms. It can be reproduced quite easily: remove an osd and then > mark it as out, and with qui

Re: [ceph-users] two osds stuck on peering after starting an osd to recover

2013-06-28 Thread Sage Weil
On Sat, 29 Jun 2013, Andrey Korolyov wrote: > There is almost the same problem with a 0.61 cluster, at least with the > same symptoms. It can be reproduced quite easily: remove an osd and then > mark it as out, and with quite high probability one of its neighbors will > be stuck at the end of the peering process

Re: [ceph-users] two osds stuck on peering after starting an osd to recover

2013-06-28 Thread Dominik Mostowiec
Ver. 0.56.6. Hmm, the osd has not died; 1 or more pgs are stuck on peering on it. Regards Dominik On Jun 28, 2013 11:28 PM, "Sage Weil" wrote: > On Sat, 29 Jun 2013, Andrey Korolyov wrote: > > There is almost the same problem with a 0.61 cluster, at least with the > > same symptoms. It can be reproduced quite easily

Re: [ceph-users] two osds stuck on peering after starting an osd to recover

2013-06-28 Thread Sage Weil
> Ver. 0.56.6. > Hmm, the osd has not died; 1 or more pgs are stuck on peering on it. Can you get a pgid from 'ceph health detail' and then do 'ceph pg query' and attach that output? Thanks! sage > > Regards > Dominik > > On Jun 28, 2013 11:28 PM, "Sage Weil" wrote: > On Sat, 29 Jun 2013, Andrey K
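Those two commands, spelled out (the pgid is whichever one 'ceph health detail' reports as stuck peering):

    ceph health detail             # lists the stuck pgs and their pgids
    ceph pg {pgid} query           # detailed peering state for one pg, e.g. ceph pg 4.269 query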

Re: [ceph-users] Openstack Multi-rbd storage backend

2013-06-28 Thread w sun
> Date: Fri, 28 Jun 2013 14:10:12 -0700 > From: josh.dur...@inktank.com > To: ws...@hotmail.com > CC: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Openstack Multi-rbd storage backend > > On 06/27/2013 05:54 PM, w sun wrote: > > Thanks Josh. That explains. So I guess right now with Grizzly

[ceph-users] Speakers needed: Ceph Days (US and UK)

2013-06-28 Thread Ross Turk
Hi, all! Inktank is preparing a series of Ceph Days this fall in the US and Europe, intended for users and developers who want to learn about Ceph and meet other members of the community.  Our current working agenda contains a look into the future of Ceph, stories from real world users, an install

Re: [ceph-users] two osds stuck on peering after starting an osd to recover

2013-06-28 Thread Dominik Mostowiec
I have only the 'ceph health detail' output from the previous crash: ceph health detail HEALTH_WARN 6 pgs peering; 9 pgs stuck unclean pg 3.c62 is stuck unclean for 583.220063, current state active, last acting [57,23,51] pg 4.269 is stuck unclean for 4842.519837, current state peering, last acting [23,57,106]

ceph-users@lists.ceph.com

2013-06-28 Thread 华仔
Hi, Greg. Sorry for my mistake in the email: the four disks are all the same size, 1TB, not 1GB; I made a mistake in the email content. Sorry. At 2013-06-28 22:52:17, "Gregory Farnum" wrote: It sounds like you just built a 4GB (or 6GB?) RADOS cluster and then tried to put 4GB of data into it. That won