Re: [ceph-users] Kernel rbd & cephx signatures

2014-02-10 Thread Kurt Bauer
Hi, I found two maybe related bugs in the tracker (#4287, #3657) but both are resolved, so I'm wondering if there's something I'm doing wrong. Has anybody successfully mapped rbd images with kernel rbd when cephx require signatures is set to true in the cluster? Thanks for your help, best regards

Re: [ceph-users] Ceph cluster performance degrade (radosgw) > after running some time

2014-02-10 Thread Guang Yang
Thanks all for the help. We finally identified the root cause of the issue: a lock contention happening at folder splitting. Here is a tracking ticket (thanks Inktank for the fix!): http://tracker.ceph.com/issues/7207 Thanks, Guang On Tuesday, December 31, 2013 8:22 AM, Guang Yan
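For readers hitting the same contention: the filestore directory splitting behavior referenced above is tunable in ceph.conf. A minimal sketch, assuming an Emperor-era filestore OSD; the option names are the standard filestore tunables, but the values below are illustrative only, not a recommendation:

```ini
[osd]
; Illustrative values only -- tune for your workload and test first.
; Raising these delays directory splitting, which can reduce the
; split-time lock contention described above, at the cost of
; larger directories per split.
filestore merge threshold = 40
filestore split multiple = 8
```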

Re: [ceph-users] ceph deploy - new osds - do not mount

2014-02-10 Thread Alfredo Deza
On Sat, Feb 8, 2014 at 11:37 AM, Manuel Lanazca wrote: > Hello Team, > > I am building a new cluster with ceph-deploy (emperor). I successfully > added 24 osds from a host, but when I have tried to add other OSDs from > the next host they do not mount. The new osds are created but they state >

Re: [ceph-users] FW: ceph-deploy osd prepare error... umount fails (device busy)

2014-02-10 Thread Alfredo Deza
On Sat, Feb 8, 2014 at 5:35 PM, Rosengaus, Eliezer wrote: > > > > > From: Rosengaus, Eliezer > Sent: Friday, February 07, 2014 2:15 PM > To: ceph-users-j...@lists.ceph.com > Subject: ceph-deploy osd prepare error > > > > I am following the quick-start guides on debian wheezy. When attempting > ceph

Re: [ceph-users] Kernel rbd & cephx signatures

2014-02-10 Thread Sage Weil
Hi Kurt, Your original analysis is correct: cephx signatures aren't yet implemented in the kernel client. I don't have a good indication of when this will be prioritized, unfortunately. I'm not aware of anybody who has targeted this or has even made note of the potential vulnerability. It r

Re: [ceph-users] keyring generation

2014-02-10 Thread Alfredo Deza
On Sat, Feb 8, 2014 at 7:56 AM, Kei.masumoto wrote: > > (2014/02/05 23:49), Alfredo Deza wrote: >> >> On Mon, Feb 3, 2014 at 11:28 AM, Kei.masumoto >> wrote: >>> >>> Hi Alfredo, >>> >>> Thanks for your reply! >>> >>> I think I pasted all logs from ceph.log, but anyway, I re-executed >>> "ceph-de

Re: [ceph-users] Kernel rbd & cephx signatures

2014-02-10 Thread Kurt Bauer
Hi Sage, thanks for your answer. Am I right that the communication between nodes that support cephx signatures is still signed, although the option is set to false? So only the communication between the client mapping the rbd and the relevant OSDs and MONs is not signed? Thanks, best regards,

[ceph-users] Radosgw / Chunked transfer / RHEL / Swift

2014-02-10 Thread alistair.whittle
All, My radosgw seems to be working, generally, however I have been experiencing problems when trying to connect to it from CTERA via OpenStack Swift. I get the following errors: [client 10.125.190.59] chunked Transfer-Encoding forbidden: /swift/v1/Ctera_ceph01/fileMaps/1266/bad10d636c9373

Re: [ceph-users] Kernel rbd & cephx signatures

2014-02-10 Thread Sage Weil
Correct. During the initial handshake, the two ends will decide whether to use signatures based on whether it is supported by both ends. That option allows them to continue even if it is not. You probably want the more specific options: cephx_require_signatures = false cephx_cluster_require_
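The per-direction signature options Sage alludes to can be set in ceph.conf. A minimal sketch; the option names below are the standard cephx signature settings of that era, and the comments describe their usual scope (defaults may vary by release):

```ini
[global]
; Require signatures on all cephx-authenticated traffic
cephx require signatures = false
; Finer-grained variant: daemon-to-daemon (cluster) traffic only
cephx cluster require signatures = false
; Finer-grained variant: client-to-daemon (service) traffic only
cephx service require signatures = false
```

Setting only the cluster option to true keeps daemon-to-daemon traffic signed while still allowing kernel clients, which lack signature support, to connect.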

[ceph-users] periodic strange message in log

2014-02-10 Thread zorg
hello I have already seen this issue in the forum and the bug tracker but don't really know what to do. ceph health always reports HEALTH_OK, but in my syslog: Feb 10 03:07:14 dcceph1 kernel: [1589377.227270] libceph: osd0 192.168.3.22:6809 connect authorization failure Feb 10 03:22:15 dcceph1 kernel: [1590276.664061]

Re: [ceph-users] periodic strange message in log

2014-02-10 Thread zorg
One more piece of info: osd0 is used for rbd map, and for testing one block is mapped on dcceph1; maybe it's due to this. On 10/02/2014 20:30, zorg wrote: hello I have already seen this issue in the forum and the bug tracker but don't really know what to do. ceph health always reports HEALTH_OK, but in my syslog: Feb 10 03:07:14

[ceph-users] Spam

2014-02-10 Thread Patrick McGarry
Hey ceph-user/ceph-community, I just wanted to let you know that we're under a bit of a spam attack on these two lists, so I have ratcheted up the spam filter just a tad. Please be alert to make sure that your messages are making it to the list. If you send something and it doesn't show up, please l

Re: [ceph-users] pg is stuck unclean since forever

2014-02-10 Thread Dietmar Maurer
> On my test cluster, some PGs are stuck unclean forever (pool 24, size=2). > > Directory /var/lib/ceph/osd/ceph-X/current/24.126_head/ is empty on all OSDs. > > Any idea what is wrong? And how can I recover from that state? The interesting thing is that all OSDs are up, and those PGs do not