On Sat, 3 Jan 2015 16:21:29 +1000 Lindsay Mathieson wrote:
> I just added 4 OSDs to my 2 OSD "cluster" (2 Nodes, now have 3 OSDs per
> node).
>
> Given it's the weekend and not in use, I've set them all to weight 1, but
> looks like it's going to take a while to rebalance ... :)
>
> Is having the
On Sat, 3 Jan 2015 10:40:30 AM Gregory Farnum wrote:
> You might try temporarily increasing the backfill allowance params so that
> the stuff can move around more quickly. Given the cluster is idle it's
> definitely hitting those limits. ;) -Greg
Thanks Greg, but it finished overnight anyway :)
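For reference, the throttles Greg is presumably referring to are the standard
backfill/recovery options (osd_max_backfills, osd_recovery_max_active); a rough
sketch of raising them at runtime, with placeholder values, would be:

    # note your current values first so you can restore them afterwards
    ceph tell osd.* injectargs '--osd-max-backfills 10 --osd-recovery-max-active 10'

Remember to set them back once the rebalance finishes, since high backfill
settings compete with client I/O on a busy cluster.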
You might try temporarily increasing the backfill allowance params so that
the stuff can move around more quickly. Given the cluster is idle it's
definitely hitting those limits. ;)
-Greg
On Saturday, January 3, 2015, Lindsay Mathieson wrote:
> I just added 4 OSDs to my 2 OSD "cluster" (2 Nodes
On Saturday, January 3, 2015, Max Power <mailli...@ferienwohnung-altenbeken.de> wrote:
> Ceph is cool software, but from time to time it gives me gray hairs. And I
> hope that's because of a misunderstanding. This time I want to balance the
> load between three OSDs evenly (same usage
Hi Team,
I have 3 servers that each have to mount 3 block images from the sandevices pool:
SERVER FALCON :
sandevices/falcon_lun0
sandevices/falcon_lun1
sandevices/falcon_lun2
SERVER RAVEN :
sandevices/raven_lun0
sandevices/raven_lun1
sandevices/raven_lun2
SERVER OSPREY :
sandevices/osprey_lun0
sandevices
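Assuming the images already exist in the sandevices pool, the per-host mapping
would typically just be "rbd map" on each server, e.g. on FALCON:

    rbd map sandevices/falcon_lun0
    rbd map sandevices/falcon_lun1
    rbd map sandevices/falcon_lun2
    # the kernel block devices then appear as /dev/rbdN
    # (and under /dev/rbd/sandevices/ if the udev rules are installed)

and the same on RAVEN and OSPREY with their own images.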
In my test environment I changed the reweight of an OSD. After this
some PGs get stuck in the 'active+remapped' state. I can only repair it by
reverting the reweight to its old value.
Here is my ceph tree:
> # id    weight  type name       up/down reweight
> -1      12      root default
> -4
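To see exactly which PGs are affected and where they are mapped, the standard
commands are roughly:

    ceph health detail
    ceph pg dump_stuck unclean
    ceph pg <pgid> query    # <pgid> is a placeholder, taken from the output above

active+remapped after a reweight often just means CRUSH can no longer place all
chunks/replicas with the current weights, so comparing the "up" and "acting"
sets in the query output is a good place to start.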
Hi,
you can reduce the reserved space for ext4 via tune2fs and gain a little
more space, up to 5%. By the way, if you are using CentOS 7, it
reserves a ridiculously high disk percentage for ext4 (at least during
installation). Performance should probably also be compared with a smaller
allocsize mount option for x
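As a concrete sketch (with /dev/sdb1 standing in for the actual OSD partition):

    tune2fs -l /dev/sdb1 | grep -i 'reserved block'   # show the current reservation
    tune2fs -m 1 /dev/sdb1                            # lower root-reserved space to 1%

The -m value is a percentage of the filesystem; 0 is possible, but keeping a
small reserve is usually safer.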
Ceph is cool software, but from time to time it gives me gray hairs. And I
hope that's because of a misunderstanding. This time I want to balance the
load between three OSDs evenly (same usage %). Two
OSDs are 2GB, one is 4GB (test environment). By the way: the pool is
erasure coded (k=2,
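For reference, a k=2 profile and pool are normally created along these lines
(the m=1 value, names and PG count are only placeholders, since the original
message is cut off here):

    ceph osd erasure-code-profile set ec_k2_m1 k=2 m=1
    ceph osd pool create ecpool 128 128 erasure ec_k2_m1

With k=2 the data is split into two data chunks plus m coding chunks, so usable
space is roughly raw space * k/(k+m).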
On 03.01.2015 at 00:36, Dyweni - Ceph-Users wrote:
> Your OSDs are full. The cluster will block until space is freed up and
> both OSDs leave the full state.
Okay, I did not know that a "rbd map" alone is too much for a full
cluster. That makes things a bit hard to work around because reducing
the
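On the releases current at the time, the usual escape hatch was reportedly to
nudge the full threshold up just long enough to free some space, roughly:

    ceph pg set_full_ratio 0.97    # default full ratio is 0.95
    # delete or shrink something (or add OSDs), then restore:
    ceph pg set_full_ratio 0.95

That is only a stop-gap; the real fix is more capacity or less data.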
I just added 4 OSDs to my 2 OSD "cluster" (2 Nodes, now have 3 OSDs per
node).
Given it's the weekend and not in use, I've set them all to weight 1, but
looks like it's going to take a while to rebalance ... :)
Is having them all at weight 1 the fastest way to get back to health, or is
it causing
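If the aim is a gentler rebalance, a common pattern is to bring the new OSDs'
crush weights up in steps rather than all at once; a sketch with hypothetical
OSD ids and step size:

    for id in 2 3 4 5; do
        ceph osd crush reweight osd.$id 0.5
    done
    # wait for HEALTH_OK (watch ceph -s), then repeat with the next step up

Jumping straight to the final weight moves the most data in one go, which is
fine on an idle cluster but harder on one that is serving clients.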