Re: [ceph-users] how to configure cephfs to stripe data across OSDs?

2013-04-18 Thread George Shuklin
18.04.2013 10:49, Wolfgang Hennerbichler wrote: Ceph doesn't support data stripes, and you probably also don't need it. Ceph distributes reads of data anyways, because large objects are spread automatically to the OSDs, reads happen concurrently, this is somehow like striping, but better :) Well
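
One quick way to see this spreading in practice is to ask the cluster where an object lives; the pool name below is the default cephfs data pool and the object name is only an example:

  # list a few objects in the cephfs data pool
  rados -p data ls | head
  # show the placement group and the OSDs a given object maps to
  ceph osd map data 10000000001.00000000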

Re: [ceph-users] Unable to read file on Ceph FS

2013-04-18 Thread Li, Chen
Can you explain more? Because I found here : http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-live-migrations.html It says: "Shared storage: NOVA-INST-DIR/instances/ (eg /var/lib/nova/instances) has to be mounted by shared storage." And from here: http://www.mail-arc
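
For reference, mounting CephFS at the shared instances directory on each compute node might look roughly like this (monitor address and secret file path are placeholders):

  mount -t ceph mon1.example.com:6789:/ /var/lib/nova/instances \
      -o name=admin,secretfile=/etc/ceph/admin.secret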

[ceph-users] rbd over xfs slow performance

2013-04-18 Thread Emmanuel Lacour
Dear ceph users, I just set up a small cluster with two OSDs and 3 mons (0.56.4-1~bpo70+1). OSDs are xfs (default mkfs options, mounted with defaults,noatime) over LVM over hardware RAID. dd if=/dev/zero of=... bs=1M count=1 conv=fdatasync on each ceph-* osd mounted partition shows 120MB/s on one ser
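
For anyone reproducing this, the raw per-OSD write test is along these lines; the mount points and count here are illustrative, not the exact values used above:

  # write to each OSD's backing partition and force a flush before reporting
  dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/ddtest bs=1M count=1000 conv=fdatasync
  dd if=/dev/zero of=/var/lib/ceph/osd/ceph-1/ddtest bs=1M count=1000 conv=fdatasync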

Re: [ceph-users] Ceph Illustrations

2013-04-18 Thread Wolfgang Hennerbichler
Thanks, it's just the thing I was searching for. On 04/17/2013 05:29 PM, Patrick McGarry wrote: > Hey Wolfgang, > > There are several slide decks with associated imagery floating around > out there. I'd be happy to get you images that correspond to what you > want to focus on. A good place to s

Re: [ceph-users] rbd over xfs slow performance

2013-04-18 Thread Mark Nelson
On 04/18/2013 05:19 AM, Emmanuel Lacour wrote: Dear ceph users, I just set up a small cluster with two osds and 3 mon. (0.56.4-1~bpo70+1) OSDs are xfs (defaults mkfs options, mounted defaults,noatime) over lvm over hwraid. dd if=/dev/zero of=... bs=1M count=1 conv=fdatasync on each ceph

[ceph-users] Format 2 Image support in the RBD driver

2013-04-18 Thread Whelan, Ryan
I've not been following the list for long, so forgive me if this has been covered, but is there a plan for format 2 image support in the kernel RBD driver? I assume with Linux 3.9 in the RC phase, it's not likely to appear there? Thanks!
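
For context, format 2 images are already created with the userspace tools; a minimal example (pool and image names are arbitrary):

  rbd create --format 2 --size 10240 rbd/vm-disk-1
  rbd info rbd/vm-disk-1

The question here is whether the kernel rbd driver can map such images, which the replies below address.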

Re: [ceph-users] Format 2 Image support in the RBD driver

2013-04-18 Thread Olivier B.
If I understand the roadmap ( http://tracker.ceph.com/projects/ceph/roadmap ) correctly, it's planned for Ceph v0.62B: On Thursday, 18 April 2013 at 09:28 -0400, Whelan, Ryan wrote: > I've not been following the list for long, so forgive me if this has been > covered, but is there a plan for format 2 image su

Re: [ceph-users] Format 2 Image support in the RBD driver

2013-04-18 Thread Whelan, Ryan
Does this mean it's in linux-next? (released in 3.10?) - Original Message - From: "Olivier B." To: "Ryan Whelan" Cc: ceph-users@lists.ceph.com Sent: Thursday, April 18, 2013 9:36:22 AM Subject: Re: [ceph-users] Format 2 Image support in the RBD driver If I understand the roadmap ( h

Re: [ceph-users] rbd over xfs slow performance

2013-04-18 Thread Emmanuel Lacour
On Thu, Apr 18, 2013 at 08:25:50AM -0500, Mark Nelson wrote: > thanks for your answer! > It makes me a bit nervous that you are seeing such a discrepancy > between the drives. Were you expecting that one server would be so > much faster than the other? If a drive is starting to fail your >

[ceph-users] spontaneous pg inconsistencies in the rgw.gc pool

2013-04-18 Thread Dan van der Ster
Hi, tl;dr: something deleted the objects from the .rgw.gc and then the pgs went inconsistent. Is this normal??!! Just now we had scrub errors and resulting inconsistencies on many of the pgs belonging to our .rgw.gc pool. HEALTH_ERR 119 pgs inconsistent; 119 scrub errors pg 11.1f0 is active+clea
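
The usual way to inspect and (if the data allows it) repair such PGs is roughly the following; the PG id is taken from the health output above:

  # list the inconsistent placement groups
  ceph health detail | grep inconsistent
  # ask the primary OSD to re-scrub and repair one of them
  ceph pg repair 11.1f0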

Re: [ceph-users] rbd over xfs slow performance

2013-04-18 Thread Mark Nelson
On 04/18/2013 08:42 AM, Emmanuel Lacour wrote: On Thu, Apr 18, 2013 at 08:25:50AM -0500, Mark Nelson wrote: thanks for your answer! It makes me a bit nervous that you are seeing such a discrepancy between the drives. Were you expecting that one server would be so much faster than the other

Re: [ceph-users] spontaneous pg inconsistencies in the rgw.gc pool

2013-04-18 Thread Dan van der Ster
Replying to myself... I just noticed this: [root@ceph-radosgw01 ceph]# ls -lh /var/log/ceph/ total 27G -rw-r--r--. 1 root root 27G Apr 18 16:08 radosgw.log -rw-r--r--. 1 root root 20 Apr 5 03:13 radosgw.log-20130405.gz -rw-r--r--. 1 root root 20 Apr 6 03:14 radosgw.log-20130406.gz -rw-r--r--.

[ceph-users] Ceph configure RAM for each daemon instance ?

2013-04-18 Thread konradwro
Hello, is it possible to configure, in ceph.conf, the RAM for each daemon instance?

Re: [ceph-users] spontaneous pg inconsistencies in the rgw.gc pool

2013-04-18 Thread Dan van der Ster
Sorry for the noise.. we now have a better idea what happened here. For those that might care, basically we had one client looping while trying to list the / bucket with an incorrect key. rgw was handling this at 1kHz, so congratulations on that. I will now go and read how to either decrease the l
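
One way to quiet the gateway logging is in ceph.conf on the gateway host; the section name below is the conventional one and may differ in your setup:

  [client.radosgw.gateway]
      debug rgw = 0
      rgw enable ops log = false

followed by a restart of the radosgw service.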

Re: [ceph-users] rbd over xfs slow performance

2013-04-18 Thread Emmanuel Lacour
On Thu, Apr 18, 2013 at 09:05:12AM -0500, Mark Nelson wrote: > > So Ceph pseudo-randomly distributes data to different OSDs, which > means that you are more or less limited by the slowest OSD in your > system. IE if one node can only process X objects per second, > outstanding operations will slo

Re: [ceph-users] rbd over xfs slow performance

2013-04-18 Thread Emmanuel Lacour
On Thu, Apr 18, 2013 at 04:19:09PM +0200, Emmanuel Lacour wrote: > > 1) If you put your journals on the same devices, you are doing 2 > > writes for every incoming write since we do full data journalling. > > Assuming that's the case we are down to 25MB/s. > > > to reduce this double write over

Re: [ceph-users] rbd over xfs slow performance

2013-04-18 Thread Mark Nelson
On 04/18/2013 10:12 AM, Emmanuel Lacour wrote: On Thu, Apr 18, 2013 at 04:19:09PM +0200, Emmanuel Lacour wrote: 1) If you put your journals on the same devices, you are doing 2 writes for every incoming write since we do full data journalling. Assuming that's the case we are down to 25MB/s.

[ceph-users] Health problem .. how to fix ?

2013-04-18 Thread Stephane Boisvert
Hi, I configured a test 'cluster', played with it (moving osd folders around, i.e. the journal file), and broke something. Now I think this could happen again when we go to production, so I would like to know how I can fix it. I don't care about losing my files. Can anyone help? Here's the

Re: [ceph-users] rbd over xfs slow performance

2013-04-18 Thread Emmanuel Lacour
On Thu, Apr 18, 2013 at 10:18:29AM -0500, Mark Nelson wrote: > > SSD journals definitely help, especially when doing large writes and > targeting high throughput. > the clusters I will build will be used mainly for KVM server images ;) > If you get a chance, it still may be worth giving 0.60 a try
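
For reference, moving a journal off the data disk is a per-OSD ceph.conf entry pointing at the SSD; the path and size below are illustrative:

  [osd.0]
      osd journal = /dev/disk/by-partlabel/journal-osd0
      osd journal size = 10240

With the OSD stopped, flush the old journal (ceph-osd -i 0 --flush-journal), create the new one (ceph-osd -i 0 --mkjournal), then start the daemon again.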

Re: [ceph-users] Format 2 Image support in the RBD driver

2013-04-18 Thread Gregory Farnum
I believe Alex just merged format 2 reading into our testing branch, and is working on writes now. -Greg On Thursday, April 18, 2013, Whelan, Ryan wrote: > Does this mean its in linux-next? (released in 3.10?) > > - Original Message - > From: "Olivier B." > > To: "Ryan Whelan" > > Cc: cep

Re: [ceph-users] rbd over xfs slow performance

2013-04-18 Thread Mark Nelson
On 04/18/2013 10:29 AM, Emmanuel Lacour wrote: On Thu, Apr 18, 2013 at 10:18:29AM -0500, Mark Nelson wrote: SSD journals definitely help, especially when doing large writes and targeting high throughput. clusters I will build will be used mainly for kvm servers images ;) If you get a chanc

Re: [ceph-users] spontaneous pg inconsistencies in the rgw.gc pool

2013-04-18 Thread Gregory Farnum
What version was this on? -Greg On Thursday, April 18, 2013, Dan van der Ster wrote: > Sorry for the noise.. we now have a better idea what happened here. > > For those that might care, basically we had one client looping while > trying to list the / bucket with an incorrect key. rgw was handling

[ceph-users] has anyone successfully installed ceph with crowbar?

2013-04-18 Thread Makkelie, R - SPLXL
Hi, Has anyone successfully installed Ceph using the ceph-barclamp with Crowbar? If yes, what version are you using, how did you create the barclamp, and did you integrate it with OpenStack Folsom/Grizzly? GreetZ Ramonskie

Re: [ceph-users] spontaneous pg inconsistencies in the rgw.gc pool

2013-04-18 Thread Arne Wiebalck
This is 0.56.4 on a RHEL6 derivative. Cheers, Arne From: ceph-users-boun...@lists.ceph.com [ceph-users-boun...@lists.ceph.com] on behalf of Gregory Farnum [g...@inktank.com] Sent: 18 April 2013 17:34 To: Dan van der Ster Cc: ceph-users@lists.ceph.com Subject: R

Re: [ceph-users] has anyone successfully installed ceph with crowbar?

2013-04-18 Thread Gregory Farnum
The barclamps were written against the crowbar "Betty" release, OpenStack Essex (which is the last one supported by Crowbar), and Ceph "argonaut". JJ has updated them to use "Bobtail", but I don't think anybody's run them against newer versions of Openstack. :( You should be able to find built vers

Re: [ceph-users] has anyone successfully installed ceph with crowbar?

2013-04-18 Thread Makkelie, R - SPLXL
Well, I tried to build the barclamp from https://github.com/ceph/barclamp-ceph and package it with https://github.com/ceph/package-ceph-barclamp, but the install fails. So I also found a barclamp that installs argonaut, and it installs ceph, but when I manually try to add an image in the volumes

Re: [ceph-users] spontaneous pg inconsistencies in the rgw.gc pool

2013-04-18 Thread Yehuda Sadeh
On Thu, Apr 18, 2013 at 7:57 AM, Dan van der Ster wrote: > > Sorry for the noise.. we now have a better idea what happened here. > > For those that might care, basically we had one client looping while > trying to list the / bucket with an incorrect key. rgw was handling > this at 1kHz, so congrat

Re: [ceph-users] has anyone successfully installed ceph with crowbar?

2013-04-18 Thread John Wilkins
Keep me posted on this, and I'll update the docs when we have a resolution. On Thu, Apr 18, 2013 at 8:55 AM, Makkelie, R - SPLXL wrote: > ** > well i tried to build the barclamp from > https://github.com/ceph/barclamp-ceph > and pacakge it with https://github.com/ceph/package-ceph-barclamp > >

Re: [ceph-users] has anyone successfully installed ceph with crowbar?

2013-04-18 Thread Gregory Farnum
Oh, yeah. Bobtail isn't going to play nicely without some modifications, but I'll have to wait for JJ to speak about those. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Thu, Apr 18, 2013 at 8:55 AM, Makkelie, R - SPLXL wrote: > well i tried to build the barclamp from > h

Re: [ceph-users] No rolling updates from v0.56 to v0.60+?

2013-04-18 Thread Gregory Farnum
On Wed, Apr 17, 2013 at 7:40 AM, Guido Winkelmann wrote: > Hi, > > I just tried upgrading parts of our experimental ceph cluster from 0.56.1 to > 0.60, and it looks like the new mon-daemon from 0.60 cannot talk to those from > 0.56.1 at all. > > Long story short, we had to move some hardware aroun

Re: [ceph-users] No rolling updates from v0.56 to v0.60+?

2013-04-18 Thread Joao Eduardo Luis
On 04/18/2013 05:28 PM, Gregory Farnum wrote: On Wed, Apr 17, 2013 at 7:40 AM, Guido Winkelmann wrote: Hi, I just tried upgrading parts of our experimental ceph cluster from 0.56.1 to 0.60, and it looks like the new mon-daemon from 0.60 cannot talk to those from 0.56.1 at all. Long story shor

Re: [ceph-users] has anyone successfully installed ceph with crowbar?

2013-04-18 Thread JuanJose Galvez
We're making sure that the modified barclamps are successfully going through the Tempest tests, once they do I'll be sending a pull request with all the changes for a bobtail enabled barclamp to the repo. The main problem with using bobtail is actually with the Nova package, it currently includes

Re: [ceph-users] No rolling updates from v0.56 to v0.60+?

2013-04-18 Thread Stefan Priebe - Profihost AG
Isn't the new leveldb tuning part of cuttlefish? Stefan On 18.04.2013 at 19:40, Joao Eduardo Luis wrote: > On 04/18/2013 05:28 PM, Gregory Farnum wrote: >> On Wed, Apr 17, 2013 at 7:40 AM, Guido Winkelmann >> wrote: >>> Hi, >>> >>> I just tried upgrading parts of our experimental ceph cluste

Re: [ceph-users] Ceph configure RAM for each daemon instance ?

2013-04-18 Thread Wido den Hollander
On 04/18/2013 04:23 PM, konradwro wrote: Hello, is it possible to configure, in ceph.conf, the RAM for each daemon instance? No, the daemons will use as much memory as they need and as is available. You can put the daemons in a cgroup to limit their memory usage, but that comes with the problem that they could
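
A rough sketch of the cgroup approach (cgroup v1; the OSD PID and the limit are placeholders):

  mkdir /sys/fs/cgroup/memory/ceph-osd0
  echo 2G > /sys/fs/cgroup/memory/ceph-osd0/memory.limit_in_bytes
  echo 12345 > /sys/fs/cgroup/memory/ceph-osd0/tasks   # PID of the ceph-osd process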

Re: [ceph-users] Health problem .. how to fix ?

2013-04-18 Thread John Wilkins
Stephane, The monitoring section of operations explains what's happening, but I think I probably need to do a better job of explaining unfound objects. http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/ http://ceph.com/docs/master/rados/operations/troubleshooting-osd/#unfound-objects
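
In practice that usually boils down to commands like these (the PG id is made up, and the last command discards the unfound objects, so it is only sensible on a cluster whose data you can afford to lose):

  ceph health detail
  # list the objects a degraded/recovering pg cannot find
  ceph pg 2.5 list_missing
  # last resort: give up on the unfound objects and revert to older copies
  ceph pg 2.5 mark_unfound_lost revert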

Re: [ceph-users] Bobtail & Precise

2013-04-18 Thread John Wilkins
Bryan, It seems you got crickets with this question. Did you get any further? I'd like to add it to my upcoming CRUSH troubleshooting section. On Wed, Apr 3, 2013 at 9:27 AM, Bryan Stillwell wrote: > I have two test clusters running Bobtail (0.56.4) and Ubuntu Precise > (12.04.2). The problem

Re: [ceph-users] Bobtail & Precise

2013-04-18 Thread Gregory Farnum
Seeing this go by again, it's simple enough to provide a quick answer/hint — by setting the tunables it's of course getting a better distribution of data, but the reason they're optional to begin with is that older clients won't support them. In this case, that's the kernel client being run, so it returns

Re: [ceph-users] Bobtail & Precise

2013-04-18 Thread Bryan Stillwell
John, Thanks for your response. I haven't spent a lot of time on this issue since then, so I'm still in the same situation. I do remember seeing an error message about an unsupported feature at one point after setting the tunables to bobtail. Bryan On Thu, Apr 18, 2013 at 1:51 PM, John Wilkin

Re: [ceph-users] Bobtail & Precise

2013-04-18 Thread Bryan Stillwell
What's the fix for people running precise (12.04)? I believe I see the same issue with quantal (12.10) as well. On Thu, Apr 18, 2013 at 1:56 PM, Gregory Farnum wrote: > Seeing this go by again it's simple enough to provide a quick > answer/hint — by setting the tunables it's of course getting

Re: [ceph-users] Bobtail & Precise

2013-04-18 Thread Gregory Farnum
There's not really a fix — either update all your clients so they support the tunables (I'm not sure how new a kernel you need), or else run without the tunables. In setups where your branching factors aren't very close to your replication counts they aren't normally needed. If you want to reshape
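
Concretely, and assuming the tunables profile command is available in your release, reverting for the sake of older kernel clients is a one-liner, and switching back later is just as short:

  # go back to the legacy tunables so old kernel clients can map images again
  ceph osd crush tunables legacy
  # once every client is new enough, move to the bobtail profile
  ceph osd crush tunables bobtail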

[ceph-users] Backups

2013-04-18 Thread Craig Lewis
I'm new to Ceph, and considering using it to store a bunch of static files in the RADOS Gateway. My files are all versioned, so we never modify files. We only add new files, and delete unused files. I'm trying to figure out how to back everything up, to protect against administrative and ap

[ceph-users] RDMA

2013-04-18 Thread Gandalf Corvotempesta
Hi, will RDMA be supported in the short term? I'm planning an infrastructure and I don't know whether to start with IB QDR or 10GbE. IB is much cheaper than 10GbE and with RDMA should be about 4x faster, but I've read that the IPoIB workaround is very heavy on CPU and quite slow (15 Gbit/s more or less)

Re: [ceph-users] RDMA

2013-04-18 Thread Mark Nelson
On 04/18/2013 03:40 PM, Gandalf Corvotempesta wrote: Hi, will RDMA be supported in the shortterm? I'm planning an infrastructure and I don't know if starting with IB QDR or 10GbE. Depends on your definition of RDMA, supported, and short term. ;) We like the idea of using rsockets as it would b

Re: [ceph-users] Bobtail & Precise

2013-04-18 Thread Bryan Stillwell
Ahh, I think I have a better understanding now. I had my crush map set up like this:
  default
    basement
      rack1
        server1
          osd.0 osd.1 osd.2 osd.3 osd.4
        server2
          osd.5

Re: [ceph-users] RDMA

2013-04-18 Thread Gandalf Corvotempesta
2013/4/18 Mark Nelson : > 10GbE is fully supported and widely used with Ceph while IB is a bit more > complicated with fewer users. Having said that, IPoIB seems to work just > fine, and there is potential in the future for even better performance. > Which one is right for you probably depends on

Re: [ceph-users] RDMA

2013-04-18 Thread Mark Nelson
On 04/18/2013 04:15 PM, Gandalf Corvotempesta wrote: 2013/4/18 Mark Nelson : 10GbE is fully supported and widely used with Ceph while IB is a bit more complicated with fewer users. Having said that, IPoIB seems to work just fine, and there is potential in the future for even better performance.

Re: [ceph-users] Monitor Access Denied message to itself?

2013-04-18 Thread Gregory Farnum
Hey guys, I finally had enough time to coordinate with a few other people and figure out what's going on with the ceph-create-keys access denied messages and create a ticket: http://tracker.ceph.com/issues/4752. (I believe your monitor crash is something else, Matthew; if that hasn't been dealt wit

Re: [ceph-users] Monitor Access Denied message to itself?

2013-04-18 Thread Joao Eduardo Luis
On 04/18/2013 10:36 PM, Gregory Farnum wrote: (I believe your monitor crash is something else, Matthew; if that hasn't been dealt with yet. Unfortunately all that log has is messages, so it probably needs a bit more. Can you check it out, Joao? The stack trace below is #3495, and Matthew is alr

Re: [ceph-users] RDMA

2013-04-18 Thread Gandalf Corvotempesta
2013/4/18 Mark Nelson : > SDP is deprecated: > > http://comments.gmane.org/gmane.network.openfabrics.enterprise/5371 > > rsockets is the future I think. I don't know rsockets. Are there any plans to support it, or is it "transparent" like SDP?

Re: [ceph-users] RDMA

2013-04-18 Thread Gandalf Corvotempesta
2013/4/18 Sage Weil : > I'm no expert, but I've heard SDP is not likely to be supported/maintained > by anyone in the long-term. (Please, anyone, correct me if that is not > true!) That said, one user has tested it successfully (with kernel and > userland ceph) and it does seem to work.. Do you

Re: [ceph-users] RDMA

2013-04-18 Thread Mark Nelson
On 04/18/2013 04:46 PM, Gandalf Corvotempesta wrote: 2013/4/18 Mark Nelson : SDP is deprecated: http://comments.gmane.org/gmane.network.openfabrics.enterprise/5371 rsockets is the future I think. I don't know rsockets. Any plans about support for this or are they "transparent" like SDP? I

Re: [ceph-users] Monitor Access Denied message to itself?

2013-04-18 Thread Gregory Farnum
On Thu, Apr 18, 2013 at 2:46 PM, Joao Eduardo Luis wrote: > On 04/18/2013 10:36 PM, Gregory Farnum wrote: >> >> (I believe your monitor crash is something else, Matthew; if that >> hasn't been dealt with yet. Unfortunately all that log has is >> messages, so it probably needs a bit more. Can you c

Re: [ceph-users] RDMA

2013-04-18 Thread Sage Weil
On Thu, 18 Apr 2013, Gandalf Corvotempesta wrote: > 2013/4/18 Sage Weil : > > I'm no expert, but I've heard SDP is not likely to be supported/maintained > > by anyone in the long-term. (Please, anyone, correct me if that is not > > true!) That said, one user has tested it successfully (with kerne

Re: [ceph-users] Monitor Access Denied message to itself?

2013-04-18 Thread Joao Eduardo Luis
On 04/18/2013 10:49 PM, Gregory Farnum wrote: On Thu, Apr 18, 2013 at 2:46 PM, Joao Eduardo Luis wrote: On 04/18/2013 10:36 PM, Gregory Farnum wrote: (I believe your monitor crash is something else, Matthew; if that hasn't been dealt with yet. Unfortunately all that log has is messages, so it

Re: [ceph-users] Monitor Access Denied message to itself?

2013-04-18 Thread Matthew Roy
On 04/18/2013 06:03 PM, Joao Eduardo Luis wrote: > > There's definitely some command messages being forwarded, but AFAICT > they're being forwarded to the monitor, not by the monitor, which by > itself is a good omen towards the monitor being the leader :-) > > In any case, nothing in the trace's

Re: [ceph-users] Monitor Access Denied message to itself?

2013-04-18 Thread Gregory Farnum
There's a little bit of python called ceph-create-keys, which is invoked by the upstart scripts. You can kill the running processes, and edit them out of the scripts, without direct harm. (Their purpose is to create some standard keys which the newer deployment tools rely on to do things like creat
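
On an Ubuntu/upstart install that would be roughly the following, assuming the packaging ships an upstart job named ceph-create-keys:

  # stop the running helper
  pkill -f ceph-create-keys
  # keep upstart from starting it again on the next boot
  echo manual > /etc/init/ceph-create-keys.override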

Re: [ceph-users] RDMA

2013-04-18 Thread Gandalf Corvotempesta
Isn't a userland preloader library, like with SDP, enough? Is the kernel version needed just for librbd? On 18 Apr 2013 at 23:48, "Mark Nelson" wrote: > On 04/18/2013 04:46 PM, Gandalf Corvotempesta wrote: > >> 2013/4/18 Mark Nelson : >> >>> SDP is deprecated: >>> >>> http://comments.gmane.o
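
For the curious, an rsockets experiment with the userland shim would look something like this; the library path varies by distro, and this is untested with Ceph:

  LD_PRELOAD=/usr/lib64/rsocket/librspreload.so ceph-osd -i 0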

Re: [ceph-users] Monitor Access Denied message to itself?

2013-04-18 Thread Mike Dawson
Greg, Looks like Sage has a fix for this problem. In case it matters, I have seen a few cases that conflict with your notes in this thread and the bug report. I have seen the bug exclusively on new Ceph installs (without upgrading from bobtail), so it is not isolated to upgrades. Further,