Hello,
I’m running a 3-node cluster with two HDDs/OSDs and one mon on each node.
Sadly, the fsyncs done by the mon processes hammer my HDDs.
I was able to eliminate this impact by moving the mon data dir to ramfs.
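Roughly, the move looks like this - just a sketch; the mon id (mon.ceph0) and the
default data dir path are assumptions, adjust to your own setup:

  # stop the monitor so its store is quiescent
  service ceph stop mon.ceph0
  # keep a copy of the store on persistent disk
  cp -a /var/lib/ceph/mon/ceph-ceph0 /root/mon-ceph0.bak
  # mount ramfs over the data dir and repopulate it from the copy
  mount -t ramfs ramfs /var/lib/ceph/mon/ceph-ceph0
  cp -a /root/mon-ceph0.bak/. /var/lib/ceph/mon/ceph-ceph0/
  service ceph start mon.ceph0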
This should work as long as at least 2 nodes are running, but I want to implement
some kind of disaster recovery
Hi,
On 23.05.2014 at 16:09, Dan Van Der Ster wrote:
> Hi,
> I think you’re rather brave (sorry, foolish) to store the mon data dir in
> ramfs. One power outage and your cluster is dead. Even with good backups of
> the data dir I wouldn't want to go through that exercise.
>
I know - I’m still
Hi,
> On 23.05.2014 at 17:31, "Wido den Hollander" wrote:
>
> I wrote a blog about this:
> http://blog.widodh.nl/2014/03/safely-backing-up-your-ceph-monitors/
So you assume restoring the old data works, or did you actually prove this?
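(For reference, the kind of cold backup I could take myself is roughly the
following; mon id and paths are only the stock defaults, and this may well
differ from what your post describes:)

  # stop one monitor so its store is consistent on disk, archive it, restart
  service ceph stop mon.ceph0
  tar czf /root/mon-ceph0-$(date +%F).tar.gz -C /var/lib/ceph/mon ceph-ceph0
  service ceph start mon.ceph0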
Fabian
Hi,
Oh, when did they switch the default scheduler to deadline? Thanks for the hint,
I moved to cfq - tests are running.
...and finished.
Result: no performance difference between the mon on ramfs and on btrfs. So the
problem has to be somewhere else, and there is no reason to think further about ramfs.
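For completeness, this is the kind of check/switch I mean (sda is just an
example device, adjust per disk):

  # the active scheduler is shown in brackets
  cat /sys/block/sda/queue/scheduler
  # switch it at runtime (as root); not persistent across reboots
  echo cfq > /sys/block/sda/queue/scheduler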
neverth
Hi,
if I want to clone a running VM's disk, would it be enough to "cp", or do I
have to "snap, protect, clone, flatten, unprotect, rm" the snapshot to get as
consistent a clone as possible?
Or: does cp use an internal snapshot while copying the blocks?
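Concretely, the two routes I am asking about would look roughly like this;
pool, image and snapshot names are only placeholders:

  # plain copy; as far as I understand, no internal snapshot is taken,
  # so a busy image may end up inconsistent
  rbd cp rbd/vm-disk rbd/vm-disk-copy

  # snapshot route (needs format 2 images); the snapshot fixes a point in
  # time, flatten detaches the clone from its parent so the snapshot can
  # be removed again
  rbd snap create rbd/vm-disk@clone-src
  rbd snap protect rbd/vm-disk@clone-src
  rbd clone rbd/vm-disk@clone-src rbd/vm-disk-clone
  rbd flatten rbd/vm-disk-clone
  rbd snap unprotect rbd/vm-disk@clone-src
  rbd snap rm rbd/vm-disk@clone-src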
Thanks,
Fabian
Hi,
if I understand the PG system correctly, it's impossible to create a
file/volume that is bigger than the smallest OSD of a PG, isn't it?
What could I do to get rid of this limitation?
Thanks,
Fabian
Hi,
On 19.01.15 at 12:47, Luis Periquito wrote:
> Each object will get mapped to a different PG. The size of an OSD will
> affect its weight and the number of PGs assigned to it, so a smaller OSD
> will get fewer PGs.
Great! Good to know, thanks a lot!
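(In case someone else wonders how to check this: the CRUSH weights and the PG
distribution can be inspected roughly like this; "ceph osd df" may not exist
on older releases.)

  # CRUSH weight roughly follows the OSD size, so a smaller OSD gets fewer PGs
  ceph osd tree
  # per-OSD usage and PG count, if your release has it
  ceph osd df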
> And BTW, with a replica of 3, a 2TB will ne
Hi,
On 19.01.15 at 13:08, Luis Periquito wrote:
> What is the current issue? Cluster near-full? Cluster too-full? Can you
> send the output of ceph -s?
  cluster 0d75b6f9-83fb-4287-aa01-59962bbff4ad
   health HEALTH_ERR 1 full osd(s); 1 near full osd(s)
   monmap e1: 3 mons at {ceph0=10.
Hello,
I’m trying to back up HDFS to ceph/radosgw/S3, but I run into various
problems. Currently I’m fighting a segfault in radosgw.
Some details about my setup:
* nginx, because apache2 isn’t returning a "Content-Length: 0" header on HEAD
  as required by hadoop (http://tracker.ceph.
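A quick manual check of that behaviour; hostname, bucket and object are only
placeholders:

  # -I sends a HEAD request and prints the response headers; with apache2 the
  # Content-Length: 0 line was missing on empty objects, which is why I
  # switched to nginx
  curl -I http://radosgw.example.com/bucket/object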