Hi,
I found two possibly related bugs in the tracker (#4287, #3657), but both
are resolved, so I'm wondering if there's something I'm doing wrong.
Has anybody successfully mapped rbd images with kernel rbd when cephx
require signatures is set to true in the cluster?
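For illustration, this is roughly the setup in question; the pool and image
names below are only placeholders, not the actual ones:

    # ceph.conf on the cluster
    [global]
        cephx require signatures = true

    # on the client, mapping with the kernel rbd driver
    sudo rbd map rbd/test-image --id admin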
Thanks for your help,
best regards
Thanks all for the help.
We finally identified the root cause of the issue: lock contention
during folder splitting. Here is the tracking ticket (thanks Inktank for
the fix!): http://tracker.ceph.com/issues/7207
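For context, when folder splitting happens is governed by the filestore
split/merge settings; a rough sketch of where they live (the values below
are only examples, not a recommendation):

    [osd]
        filestore merge threshold = 40
        filestore split multiple = 8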
Thanks,
Guang
On Tuesday, December 31, 2013 8:22 AM, Guang Yan
On Sat, Feb 8, 2014 at 11:37 AM, Manuel Lanazca wrote:
> Hello Team,
>
> I am building a new cluster with ceph-deploy (Emperor). I successfully
> added 24 OSDs from one host, but when I tried to add more OSDs from
> the next host they do not mount. The new OSDs are created but they state
>
On Sat, Feb 8, 2014 at 5:35 PM, Rosengaus, Eliezer wrote:
>
> From: Rosengaus, Eliezer
> Sent: Friday, February 07, 2014 2:15 PM
> To: ceph-users-j...@lists.ceph.com
> Subject: ceph-deploy osd prepare error
>
> I am following the quick-start guides on Debian Wheezy. When attempting
> ceph
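The command being attempted is cut off above; judging by the subject it is
the quick-start "ceph-deploy osd prepare" step, which looks roughly like
this (host and device names here are placeholders only):

    ceph-deploy osd prepare node2:/dev/sdb
    ceph-deploy osd activate node2:/dev/sdb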
Hi Kurt,
Your original analysis is correct: cephx signatures aren't yet implemented
in the kernel client. I don't have a good indication of when this will be
prioritized, unfortunately.
I'm not aware of anybody who has targeted this or has even made note of
the potential vulnerability. It r
On Sat, Feb 8, 2014 at 7:56 AM, Kei.masumoto wrote:
>
> (2014/02/05 23:49), Alfredo Deza wrote:
>>
>> On Mon, Feb 3, 2014 at 11:28 AM, Kei.masumoto wrote:
>>>
>>> Hi Alfredo,
>>>
>>> Thanks for your reply!
>>>
>>> I think I pasted all logs from ceph.log, but anyway, I re-executed
>>> "ceph-de
Hi Sage,
thanks for your answer.
Am I right that the communication between nodes that support cephx
signatures is still signed, even though the option is set to false?
So only the communication between the client mapping the rbd and the
relevant OSDs and MONs is not signed?
Thanks,
best regards,
All,
My radosgw generally seems to be working; however, I have been experiencing
problems when trying to connect to it from CTERA via the OpenStack Swift API.
I get the following errors:
[client 10.125.190.59] chunked Transfer-Encoding forbidden:
/swift/v1/Ctera_ceph01/fileMaps/1266/bad10d636c9373
Correct. During the initial handshake, the two ends will decide whether
to use signatures based on whether both ends support them. That
option allows them to continue even if they do not. You probably want
the more specific options:
cephx_require_signatures = false
cephx_cluster_require_
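The option list above is cut off; for reference, a minimal sketch of the
signature-related settings (values here are only illustrative, check the
documentation for your release):

    # applies to all messages
    cephx require signatures = false
    # daemon-to-daemon traffic only
    cephx cluster require signatures = true
    # client-to-daemon traffic only
    cephx service require signatures = false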
hello
I have already seen this issue mentioned in the forum and in the bug
tracker, but I don't really know what to do.
ceph health always reports HEALTH_OK,
but in my syslog I see:
Feb 10 03:07:14 dcceph1 kernel: [1589377.227270] libceph: osd0
192.168.3.22:6809 connect authorization failure
Feb 10 03:22:15 dcceph1 kernel: [1590276.664061]
One more piece of information:
osd0 is used for the rbd map,
and for testing one block device is mapped on dcceph1.
Maybe it's due to this.
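In case it helps narrow it down, one way to check which cephx options
osd0 is actually running with is the admin socket (the socket path below
is the default and may differ on your install):

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep cephx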
On 10/02/2014 20:30, zorg wrote:
> hello
> I have already seen this issue mentioned in the forum and in the bug
> tracker, but I don't really know what to do.
> ceph health always reports HEALTH_OK,
> but in my syslog I see:
> Feb 10 03:07:14
Hey ceph-user/ceph-community,
I just wanted to let you know that we're under a bit of a spam attack
on these two lists, so I have ratcheted up the spam filter just a tad.
Please be alert to make sure that your messages are making it to the
list. If you send something and it doesn't show up, please l
> On my test cluster, some PGs are stuck unclean forever (pool 24, size=2).
>
> Directory /var/lib/ceph/osd/ceph-X/current/24.126_head/ is empty on all OSDs.
>
> Any idea what is wrong? And how can I recover from that state?
The interesting thing is that all OSDs are up, and those PGs do not
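To dig further, a couple of commands that show why a PG is stuck (the PG
id 24.126 is taken from the directory name above):

    ceph health detail
    ceph pg dump_stuck unclean
    ceph pg 24.126 query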