[ceph-users] Added OSD's, weighting

2015-01-03 Thread Lindsay Mathieson
I just added 4 OSD's to my 2 OSD "cluster" (2 Nodes, now have 3 OSD's per node). Given it's the weekend and not in use, I've set them all to weight 1, but it looks like it's going to take a while to rebalance ... :) Is having them all at weight 1 the fastest way to get back to health, or is it causing
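
A rough sketch of the commands involved in checking and setting those weights (osd.2 is a placeholder ID; the actual OSD numbers are not given in the thread):

    ceph osd tree                       # show each OSD's CRUSH weight and up/down state
    ceph osd crush reweight osd.2 1.0   # set the CRUSH weight of one of the new OSDs
    ceph -w                             # watch backfill/recovery progress while data rebalances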

Re: [ceph-users] rbd map hangs

2015-01-03 Thread Max Power
On 03.01.2015 at 00:36, Dyweni - Ceph-Users wrote: > Your OSDs are full. The cluster will block, until space is freed up and > both OSDs leave full state. Okay, I did not know that a "rbd map" alone is too much for a full cluster. That makes things a bit hard to work around because reducing the
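
A sketch of the usual way out of a full cluster on the Ceph releases of that era; the 0.98 value is only an illustration and the image name is hypothetical:

    ceph health detail            # lists which OSDs are flagged full
    ceph pg set_full_ratio 0.98   # temporarily raise the full threshold above the 0.95 default
    rbd rm rbd/unused-image       # hypothetical image name: free space by removing or shrinking data
    ceph pg set_full_ratio 0.95   # restore the default once space has been reclaimed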

[ceph-users] OSD weights and space usage

2015-01-03 Thread Max Power
Ceph is cool software, but from time to time I am getting gray hairs with it. And I hope that's because of a misunderstanding. This time I want to balance the load between three OSDs evenly (same usage %). Two OSDs are 2GB, one is 4GB (test environment). By the way: the pool is erasure coded (k=2,
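
A sketch of weighting the three test OSDs in proportion to their capacity, assuming IDs osd.0 and osd.1 for the 2GB disks and osd.2 for the 4GB one (CRUSH weights are relative, so only the 1:1:2 ratio matters):

    ceph osd crush reweight osd.0 0.002   # 2GB OSD
    ceph osd crush reweight osd.1 0.002   # 2GB OSD
    ceph osd crush reweight osd.2 0.004   # 4GB OSD gets twice the weight
    ceph df                               # compare raw and per-pool usage afterwards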

Re: [ceph-users] Is there a negative relationship between storage utilization and ceph performance?

2015-01-03 Thread Andrey Korolyov
Hi, you can reduce reserved space for ext4 via tune2fs and gain a little more space, up to 5%. By the way, if you are using CentOS 7, it reserves a ridiculously high disk percentage for ext4 (at least during installation). Performance should probably be compared with a smaller allocsize mount option for x
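
The tune2fs adjustment would look roughly like this; /dev/sdb1 stands in for an OSD's ext4 data partition:

    tune2fs -l /dev/sdb1 | grep -i reserved   # show the current reserved block count
    tune2fs -m 1 /dev/sdb1                    # lower root-reserved space from the 5% default to 1%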

[ceph-users] Stuck with active+remapped

2015-01-03 Thread Max Power
In my test environment I changed the reweight of an OSD. After this some PGs got stuck in the 'active+remapped' state. I can only repair it by stepping back to the old value of the reweight. Here is my ceph tree: > # id weight type name up/down reweight > -1 12 root default > -4
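
A sketch of the sequence described, with osd.3 and the weight values purely illustrative:

    ceph osd reweight 3 0.8       # the kind of reweight change that left PGs active+remapped
    ceph pg dump_stuck unclean    # list PGs that have not returned to active+clean
    ceph osd reweight 3 1.0       # step back to the old value, as described above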

[ceph-users] Avoid several RBD mapping - Auth & Namespace

2015-01-03 Thread Florent MONTHEL
Hi Team, I’ve 3 servers that have to mount 3 block images each from the sandevices pool: SERVER FALCON: sandevices/falcon_lun0 sandevices/falcon_lun1 sandevices/falcon_lun2 SERVER RAVEN: sandevices/raven_lun0 sandevices/raven_lun1 sandevices/raven_lun2 SERVER OSPREY: sandevices/osprey_lun0 sandevices
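
One way to give each server its own restricted key, sketched with an assumed user name (client.falcon) and plain pool-level cephx caps; finer per-image isolation is outside this sketch:

    ceph auth get-or-create client.falcon mon 'allow r' osd 'allow rwx pool=sandevices' \
        -o /etc/ceph/ceph.client.falcon.keyring
    rbd map sandevices/falcon_lun0 --id falcon --keyring /etc/ceph/ceph.client.falcon.keyring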

Re: [ceph-users] OSD weights and space usage

2015-01-03 Thread Gregory Farnum
On Saturday, January 3, 2015, Max Power < mailli...@ferienwohnung-altenbeken.de> wrote: > Ceph is cool software, but from time to time I am getting gray hairs > with it. And I hope that's because of a misunderstanding. This time I > want to balance the load between three OSDs evenly (same usage

Re: [ceph-users] Added OSD's, weighting

2015-01-03 Thread Gregory Farnum
You might try temporarily increasing the backfill allowance params so that the stuff can move around more quickly. Given the cluster is idle it's definitely hitting those limits. ;) -Greg On Saturday, January 3, 2015, Lindsay Mathieson wrote: > I just added 4 OSD's to my 2 OSD "cluster" (2 Nodes
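
The throttles Greg mentions can be raised at runtime; the numbers below are only examples and should be reverted once the cluster is back to HEALTH_OK:

    ceph tell osd.* injectargs '--osd-max-backfills 20 --osd-recovery-max-active 20'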

Re: [ceph-users] Added OSD's, weighting

2015-01-03 Thread Lindsay Mathieson
On Sat, 3 Jan 2015 10:40:30 AM Gregory Farnum wrote: > You might try temporarily increasing the backfill allowance params so that > the stuff can move around more quickly. Given the cluster is idle it's > definitely hitting those limits. ;) -Greg Thanks Greg, but it finished overnight anyway :) O

Re: [ceph-users] Added OSD's, weighting

2015-01-03 Thread Christian Balzer
On Sat, 3 Jan 2015 16:21:29 +1000 Lindsay Mathieson wrote: > I just added 4 OSD's to my 2 OSD "cluster" (2 Nodes, now have 3 OSD's per > node). > > Given it's the weekend and not in use, I've set them all to weight 1, but > it looks like it's going to take a while to rebalance ... :) > > Is having the