Resolved. It looks like radosgw-admin returns keys with escape characters.
In my case, the secret key was
pARGxDCQ+D3fS+s6EQjeCGWLnEhMWdbncXeB\/hQu
and all I needed to do was remove the backslash. Since I was using Python,
where \/ is not a recognized string escape, the backslash simply became
part of the string.
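For anyone else hitting this: rather than stripping the backslash by hand,
running the output through a real JSON parser decodes the escape for you. A
minimal Python sketch, assuming the usual user-info output layout (the
--uid value is a made-up example):

import json
import subprocess

# radosgw-admin emits JSON; \/ in the secret key is just an escaped /.
out = subprocess.check_output(["radosgw-admin", "user", "info", "--uid=myuser"])
info = json.loads(out.decode())
secret = info["keys"][0]["secret_key"]  # escape already decoded here

# The same thing on the literal key from above:
assert json.loads('"pARGxDCQ+D3fS+s6EQjeCGWLnEhMWdbncXeB\\/hQu"') \
    == "pARGxDCQ+D3fS+s6EQjeCGWLnEhMWdbncXeB/hQu"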
Hey,
I am having problems too - non-ceph dependencies cannot be satisfied
(newer package versions are required than exist in the distro):
# aptitude install ceph
The following NEW packages will be installed:
libboost-program-options1.55.0{a} libboost-system1.55.0{a}
libboost-thread1.55.0{a}
Th
I had the same issues. In my case I just upgraded to 14.04, reinstalled the
ceph packages that had been removed, and this worked pretty smoothly.
Everything restarted and worked as normal.
Greetz,
Ramon
-Original Message-
From: Henrik Korkuc <li...@kirneh.eu>
On Wed, Jun 24, 2015 at 10:29 PM, Stefan Priebe wrote:
>
> Am 24.06.2015 um 19:53 schrieb Ilya Dryomov:
>>
>> On Wed, Jun 24, 2015 at 8:38 PM, Stefan Priebe
>> wrote:
>>>
>>>
>>> Am 24.06.2015 um 16:55 schrieb Nick Fisk:
That kernel probably has the bug where tcp_nodelay is not enabled.
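(For context: TCP_NODELAY disables Nagle's algorithm, so small requests go
out immediately instead of being batched; the kernel client has to set the
equivalent on its own sockets. In userspace it is a one-line socket option -
a quick Python illustration:)

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: send small writes right away instead of
# coalescing them, trading more packets for lower latency.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)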
I need to map an existing RBD into a docker container. I'm running
Fedora 21 both as the host and inside the container (i.e. the kernel
matches ceph user space tools) and I get this error:
$ rbd map foo
rbd: add failed: (22) Invalid argument
$ strace rbd map foo
...
stat("/sys/bus/rbd", {st_mode=
On Thu, Jun 25, 2015 at 5:11 PM, Jan Safranek wrote:
> I need to map an existing RBD into a docker container. I'm running
> Fedora 21 both as the host and inside the container (i.e. the kernel
> matches ceph user space tools) and I get this error:
>
> $ rbd map foo
> rbd: add failed: (22) Invalid argument
On 06/25/2015 04:17 PM, Ilya Dryomov wrote:
> On Thu, Jun 25, 2015 at 5:11 PM, Jan Safranek wrote:
>> I need to map an existing RBD into a docker container. I'm running
>> Fedora 21 both as the host and inside the container (i.e. the kernel
>> matches ceph user space tools) and I get this error:
>
INKozin:
> Where can I find the rules for escape chars in keys?
http://json.org/ shows \/ as a valid escape for / (a generator may emit it,
and a conforming parser must accept it). What kind of parser are you using?
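A quick check with Python's json module shows the escaped and bare forms
decode to the same string:

>>> import json
>>> json.loads('"a\\/b"') == json.loads('"a/b"') == 'a/b'
True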
Cheers,
Alex
I set zone_reclaim_mode=7 at boot (I run qemu with NUMA memory locking on
the same nodes, so I need to keep RAM use proportional). Now I am trying to
use migratepages for all ceph daemons (with tcmalloc built with
-DTCMALLOC_SMALL_BUT_SLOW, to avoid OSD memory abuse). Here is my script
(migrate to node, ke
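[The script text is cut off above. A minimal sketch of the idea, assuming
migratepages(8) from numactl and that all ceph daemons can be found by
process name; the target NUMA node is a placeholder:]

import subprocess

TARGET_NODE = "0"  # placeholder NUMA node

# Pids of all ceph daemons (ceph-osd, ceph-mon, ceph-mds share the prefix).
pids = subprocess.check_output(["pgrep", "-f", "ceph-"]).split()

for pid in pids:
    # migratepages(8): move the process's pages from any node ("all")
    # onto TARGET_NODE.
    subprocess.call(["migratepages", pid.decode(), "all", TARGET_NODE])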
Hi,
I'm looking for pros and cons of combining MON and OSD functionality on the
same nodes. The usually recommended configuration is a dedicated, odd
number of MON nodes. What I'm thinking of is more like a single-node
deployment but consisting of more than one node: if we have 3 nodes we have
3 MONs with 3 OSDs
The biggest downside that I've found is that the log volume the mons create
eats a lot of IO. I was running mons on my OSD nodes previously, but in my
current deployment I've moved them to other hardware and noticed a
perceptible load reduction on the nodes that were formerly running mons.
QH
It would be really interesting if you could give jemalloc a try.
Originally tcmalloc was used to get around some serious memory
fragmentation issues in the OSD. You can read the original bug tracker
entry from 5 years ago here:
http://tracker.ceph.com/issues/138
It's definitely possible that
For a small deployment this might be OK - but as mentioned, mon logging
might be an issue. Consider the following:
* disk resources for mon logging (maybe dedicate a disk to logging, to avoid
disk IO contention with the OSDs - see the ceph.conf sketch below)
* CPU resources: some filesystem types used for OSDs can eat a lot of CPU
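On the logging point, moving the mon log onto its own disk is a one-line
ceph.conf change; a sketch, assuming a dedicated log disk mounted at the
made-up path /mon-logs:

[mon]
    ; /mon-logs is a hypothetical mount point for a dedicated log disk
    log file = /mon-logs/$cluster-$name.log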
IMHO this needs to be tested against different glibc versions. Old glibc had
optional "experimental" threaded extensions for malloc, disabled by default
(and with no option to enable them even in Gentoo without a hack; maybe some
distros compiled them in - I don't know). But now those malloc features are
mostly ON by default, s
Our first thought was jemalloc when we became aware of the issue, but that
one requires support in the code, which AFAIK is not present in Dumpling.
Am I right?
We did try simply preloading jemalloc when starting the OSD, and that
experiment ended with SIGSEGV within minutes; we didn't investigate it any
further.
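For anyone who wants to repeat that experiment, preloading is just an
environment variable on the daemon, e.g. driven from Python (the jemalloc
path varies by distro and is an assumption here):

import os
import subprocess

env = dict(os.environ)
# Library path is distro-dependent; adjust to where libjemalloc lives.
env["LD_PRELOAD"] = "/usr/lib/x86_64-linux-gnu/libjemalloc.so.1"

# Run one OSD in the foreground with jemalloc preloaded.
subprocess.check_call(["ceph-osd", "-i", "0", "-f"], env=env)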
Hello everybody,
I'm looking at Ceph as an alternative to my current storage solution, but
I'm wondering if it is the right choice for me. I'm hoping you guys can
help me decide.
The current setup is a FreeBSD 10.1 machine running entirely on ZFS. The
function of the machine is offsite backup f
Hi everybody:
I followed the instructions for Federated Configuration
(http://ceph.com/docs/master/radosgw/federated-config/) to build a
single-region, two-zone environment. I have configured all the settings
successfully. In the final step, when running the synchronization agent,
'radosgw-agent -c regi
I would not do this, MONs are very important and any load or stability
issues on OSD nodes would interfere with the cluster uptime. I found
it acceptable to run MONs on virtual machines with local storage. But
since MONs oversee OSD nodes, I believe combining them is a recipe for
disaster, FWIW.
On 06/26/2015 08:23 AM, Alex Gorbachev wrote:
> I would not do this, MONs are very important and any load or stability
> issues on OSD nodes would interfere with the cluster uptime. I found
> it acceptable to run MONs on virtual machines with local storage. But
> since MONs oversee OSD nodes, I believe combining them is a recipe for
> disaster, FWIW.