Re: [ceph-users] Ceph RBD map debug: error -22 on auth protocol 2 init

2014-10-07 Thread Ilya Dryomov
On Tue, Oct 7, 2014 at 9:46 AM, Christopher Armstrong wrote: > Hi folks, > > I'm trying to gather additional information surrounding > http://tracker.ceph.com/issues/9355 so we can hopefully find the root of > what's preventing us from successfully mapping RBD volumes inside a Linux > container. >
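
For anyone trying to reproduce this outside the tracker, a minimal mapping attempt plus a look at the kernel log is usually enough to surface the -22 (EINVAL) from libceph; pool, image, and user names below are placeholders:

    # map an image with the kernel RBD client (names are examples)
    $ sudo rbd map rbd/myimage --id admin --keyring /etc/ceph/ceph.client.admin.keyring
    # on failure, the auth/protocol complaint shows up in the kernel log
    $ dmesg | tail -n 20 | grep -E 'libceph|rbd'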

Re: [ceph-users] Network hardware recommendations

2014-10-07 Thread Carl-Johan Schenström
On 2014-10-07 03:58, Ariel Silooy wrote: I'm sorry, but I just have to ask, what kind of 10GbE NIC do you use? If you don't mind, I/we would like to know the exact model/number. Thank you in advance. They're Intel X540-AT2's. That's a dual-port card. They weren't that much more expensive than t

Re: [ceph-users] SSD MTBF

2014-10-07 Thread Martin B Nielsen
A bit late getting back on this one. On Wed, Oct 1, 2014 at 5:05 PM, Christian Balzer wrote: > > smartctl states something like > > Wear = 092%, Hours = 12883, Datawritten = 15321.83 TB avg on those. I > > think that is ~30TB/day if I'm doing the calc right. > > > Something very much does not ad
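
For reference, the arithmetic checks out: 15321.83 TB written over 12883 power-on hours is 15321.83 / (12883 / 24) ≈ 28.5 TB/day, so ~30TB/day is in the right ballpark. The underlying counters can be pulled with smartctl (attribute names and units vary by vendor; the ones below are only typical examples):

    # dump SMART attributes for the SSD (device path is an example)
    $ sudo smartctl -A /dev/sda
    # look for wear/endurance attributes such as Media_Wearout_Indicator or
    # Total_LBAs_Written; multiply LBAs by the sector size to get bytes written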

Re: [ceph-users] SSD MTBF

2014-10-07 Thread Emmanuel Lacour
On Tue, Oct 07, 2014 at 05:24:40PM +0200, Martin B Nielsen wrote: > >I don't disagree with the above - but the table assumes you'll wear out >your SSD. Adjust the wear level and the price will change proportionally - >if you're only writing 50-100TB/year per SSD then the value will heav

Re: [ceph-users] Ceph RBD map debug: error -22 on auth protocol 2 init

2014-10-07 Thread Christopher Armstrong
Thank you Ilya! Please let me know if I can help. To give you some background, I'm one of the core maintainers of Deis, an open-source PaaS built on Docker and CoreOS. We have Ceph running quite successfully as implemented in https://github.com/deis/deis/pull/1910 based on Seán McCord's containeriz

Re: [ceph-users] mds isn't working anymore after osd's running full

2014-10-07 Thread Gregory Farnum
Sorry; I guess this fell off my radar. The issue here is not that it's waiting for an osdmap; it got the requested map and went into replay mode almost immediately. In fact the log looks good except that it seems to finish replaying the log and then simply fail to transition into active. Generate
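
A verbose MDS log covering the replay-to-active transition is typically what gets asked for next in a case like this (an assumption here, not a quote from the thread); a sketch of how to capture one, with daemon id "0" as a placeholder:

    # raise MDS logging on the running daemon, then restart it to capture replay
    $ ceph tell mds.0 injectargs '--debug_mds 20 --debug_journaler 20'
    # or set the equivalents under [mds] in ceph.conf and restart:
    #   debug mds = 20
    #   debug journaler = 20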

Re: [ceph-users] Network hardware recommendations

2014-10-07 Thread Massimiliano Cuttini
Hi Christian, When you say "10 gig infiniband", do you mean QDRx4 Infiniband (usually flogged as 40Gb/s even though it is 32Gb/s, but who's counting), which tends to be the same basic hardware as the 10Gb/s Ethernet offerings from Mellanox? A brand new 18 port switch of that caliber will only co

[ceph-users] v0.86 released (Giant release candidate)

2014-10-07 Thread Sage Weil
This is a release candidate for Giant, which will hopefully be out in another week or two (s v0.86). We did a feature freeze about a month ago and since then have been doing only stabilization and bug fixing (and a handful of low-risk enhancements). A fair bit of new functionality went into t

Re: [ceph-users] max_bucket limit -- safe to disable?

2014-10-07 Thread Daniel Schneller
Hi! I have re-run our test as follows: * 4 Rados Gateways, on 4 baremetal machines which have a total of 48 spinning rust OSDs. * Benchmark run on a virtual machine talking to HAProxy which balances the requests across the 4 Rados GWs. * Three instances of the benchmark run in parallel. Eac

Re: [ceph-users] Network hardware recommendations

2014-10-07 Thread Scott Laird
I've done this two ways in the past. Either I'll give each machine an Infiniband network link and a 1000baseT link and use the Infiniband one as the private network for Ceph, or I'll throw an Infiniband card into a PC and run something like Vyatta/VyOS on it and make it a router, so IP traffic can

Re: [ceph-users] max_bucket limit -- safe to disable?

2014-10-07 Thread Yehuda Sadeh
The logs here don't include the messenger (debug ms = 1). It's hard to tell what's going on from looking at the outliers. Also, in your previous mail you described a different benchmark: you tested writing a large number of objects into a single bucket, whereas in this test you're testing multiple buck
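
For reference, messenger logging can be turned on for the running gateway without a restart (the admin socket path below is an example and depends on the "admin socket" setting), or set persistently in ceph.conf:

    # enable messenger debugging on a live radosgw via its admin socket
    $ ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok config set debug_ms 1
    # or persistently, under the gateway's section in ceph.conf:
    #   [client.radosgw.gateway]
    #   debug ms = 1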

Re: [ceph-users] Multi node dev environment

2014-10-07 Thread Johnu George (johnugeo)
Even when I try ceph-deploy install --dev , I am seeing that it is getting installed from the official ceph repo. How can I install ceph from my github repo or my local repo on all ceph nodes? (Or any other possibility?) Can someone help me with setting this up? Johnu On 10/2/14, 1:55 PM, "Somnath Roy

Re: [ceph-users] Multi node dev environment

2014-10-07 Thread Alfredo Deza
On Tue, Oct 7, 2014 at 5:05 PM, Johnu George (johnugeo) wrote: > Even when I try ceph-deploy install --dev , I > am seeing that it is getting installed from official ceph repo. How can I > install ceph from my github repo or my local repo in all ceph nodes? (Or > any other possibility? ). Someone
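
For context on the two flags usually involved here (a sketch, not necessarily what gets suggested later in the thread, and the exact options depend on the ceph-deploy version): --dev installs a named branch from the upstream gitbuilders, while a self-built package repository is normally pointed at with --repo-url:

    # install a named upstream development branch (branch and host names are examples)
    $ ceph-deploy install --dev=wip-my-branch node1 node2 node3
    # install from your own package repo built from a fork (URLs are examples)
    $ ceph-deploy install --repo-url http://my.repo.example/debian-testing \
          --gpg-url http://my.repo.example/release.asc node1 node2 node3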

Re: [ceph-users] Multi node dev environment

2014-10-07 Thread Johnu George (johnugeo)
Thanks Alfredo. Is there any other possible way that will work for my situation? Anything would be helpful Johnu On 10/7/14, 2:25 PM, "Alfredo Deza" wrote: >On Tue, Oct 7, 2014 at 5:05 PM, Johnu George (johnugeo) > wrote: >> Even when I try ceph-deploy install --dev , I >> am seeing that it is

[ceph-users] RBD on openstack glance+cinder CoW?

2014-10-07 Thread Jonathan Proulx
Hi All, We're running Firefly on the ceph side and Icehouse on the OpenStack side & I've pulled the recommended nova branch from https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse according to http://ceph.com/docs/master/rbd/rbd-openstack/#booting-from-a-block-device: "Wh
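
For anyone following along, the copy-on-write clone path depends on Glance exposing image locations so Cinder/Nova can clone the RBD image instead of copying it; a minimal sketch of the settings involved, per the rbd-openstack doc linked above (pool and user names are examples):

    # glance-api.conf
    [DEFAULT]
    show_image_direct_url = True

    # cinder.conf, rbd backend section
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_flatten_volume_from_snapshot = false
    rbd_max_clone_depth = 5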

[ceph-users] Openstack keystone with Radosgw

2014-10-07 Thread lakshmi k s
I am trying to integrate OpenStack Keystone with Ceph Object Store using the link - http://ceph.com/docs/master/radosgw/keystone. Swift V1.0 (without keystone) works quite fine. But for some reason, Swift v2.0 keystone calls to Ceph Object Store always result in a 401 - Unauthorized message. I ha
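
For comparison, the keystone-related gateway options from the linked doc look roughly like this (endpoint, token, and role values are placeholders); a wrong or unreachable rgw keystone url, or a mismatched admin token, is a common cause of a blanket 401:

    # ceph.conf, gateway section (values are examples)
    [client.radosgw.gateway]
    rgw keystone url = http://keystone-host:35357
    rgw keystone admin token = ADMIN_TOKEN
    rgw keystone accepted roles = Member, admin
    rgw keystone token cache size = 500
    rgw keystone revocation interval = 600
    rgw s3 auth use keystone = true
    nss db path = /var/lib/ceph/nss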

[ceph-users] rbd and libceph kernel api

2014-10-07 Thread Shawn Edwards
Are there any docs on what is possible by writing/reading from the rbd driver's sysfs paths? Is it documented anywhere? I've seen at least one blog post: http://www.sebastien-han.fr/blog/2012/06/24/use-rbd-on-a-client/ about how you can attach to an rbd using the sysfs interface, but I haven't fo
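
The interface the blog post describes boils down to writing to /sys/bus/rbd/add and /sys/bus/rbd/remove; a minimal sketch (monitor address, secret, pool and image names are placeholders):

    # add: "<mon-addrs> name=<user>,secret=<key> <pool> <image> [<snap>]"
    $ echo "192.168.0.1:6789 name=admin,secret=AQBexamplekey== rbd myimage" | sudo tee /sys/bus/rbd/add
    # the device appears as /dev/rbdN; its attributes live under /sys/bus/rbd/devices/N/
    $ cat /sys/bus/rbd/devices/0/name
    # unmap by device id
    $ echo 0 | sudo tee /sys/bus/rbd/remove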

Re: [ceph-users] max_bucket limit -- safe to disable?

2014-10-07 Thread Yehuda Sadeh
This operation stalled quite a bit, seems that it was waiting for the osd: 2.547155 7f036ffc7700 1 -- 10.102.4.11:0/1009401 --> 10.102.4.14:6809/7428 -- osd_op(client.78418684.0:27514711 .bucket.meta.:default.78418684.122043 [call version.read,getxattrs,stat] 5.3b7d1197 ack+read e16034) v4 -- ?+0

Re: [ceph-users] Network hardware recommendations

2014-10-07 Thread Christian Balzer
On Tue, 07 Oct 2014 20:40:31 + Scott Laird wrote: > I've done this two ways in the past. Either I'll give each machine an > Infiniband network link and a 1000baseT link and use the Infiniband one > as the private network for Ceph, or I'll throw an Infiniband card into a > PC and run something

Re: [ceph-users] Network hardware recommendations

2014-10-07 Thread Scott Laird
IIRC, one thing to look out for is that there are two ways to do IP over Infiniband. You can either do IP over Infiniband directly (IPoIB), or encapsulate Ethernet in Infiniband (EoIB), and then do IP over the fake Ethernet network. IPoIB is more common, but I'd assume that IB<->Ethernet bridges

[ceph-users] Basic Ceph questions

2014-10-07 Thread Marcus White
Hello, Some basic Ceph questions, would appreciate your help :) Sorry about the number and detail in advance! a. Ceph RADOS is strongly consistent and different from usual object stores; does that mean all metadata, container and account etc., is also consistent and everything is updated in the path of

Re: [ceph-users] max_bucket limit -- safe to disable?

2014-10-07 Thread Daniel Schneller
Hi! > By looking at these logs it seems that there are only 8 pgs on the > .rgw pool, if this is correct then you may want to change that > considering your workload. Thanks. See our pg_num configuration below. We had already suspected that the 1600 that we had previously (48 OSDs * 100 / triple
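
For anyone doing the same math: the usual rule of thumb is (number of OSDs * 100) / replica count per busy pool, often rounded up to the next power of two, so 48 * 100 / 3 = 1600 would round to 2048. Raising it on an existing pool is a one-way operation, and pgp_num has to follow (pool name and target value below are examples):

    # pg_num can only be increased; bump pgp_num afterwards so data actually rebalances
    $ ceph osd pool set .rgw pg_num 2048
    $ ceph osd pool set .rgw pgp_num 2048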

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-07 Thread Mark Kirkwood
On 08/10/14 11:02, lakshmi k s wrote: I am trying to integrate OpenStack Keystone with Ceph Object Store using the link - http://ceph.com/docs/master/radosgw/keystone. Swift V1.0 (without keystone) works quite fine. But for some reason, Swift v2.0 ke

[ceph-users] Rados Gateway and Swift create containers/buckets that cannot be opened

2014-10-07 Thread Mark Kirkwood
I have a recent ceph (0.85-1109-g73d7be0) configured to use keystone for authentication: $ cat ceph.conf ... [client.radosgw.gateway] host = ceph4 keyring = /etc/ceph/ceph.rados.gateway.keyring rgw_socket_path = /var/run/ceph/$name.sock log_file = /var/log/ceph/radosgw.log rgw_data = /var/lib/ce

Re: [ceph-users] Rados Gateway and Swift create containers/buckets that cannot be opened

2014-10-07 Thread Mark Kirkwood
On 08/10/14 18:46, Mark Kirkwood wrote: I have a recent ceph (0.85-1109-g73d7be0) configured to use keystone for authentication: $ cat ceph.conf ... [client.radosgw.gateway] host = ceph4 keyring = /etc/ceph/ceph.rados.gateway.keyring rgw_socket_path = /var/run/ceph/$name.sock log_file = /var/log