Hello,
On Thu, 26 May 2016 15:42:03 +0700 Никитенко Виталий wrote:
> Hello!
> >>mon_osd_down_out_subtree_limit = host
> Thanks! This really helped me!!!
>
Glad to help. ^.^
> >>So again, not a full duplication of the data, but a significant amount.
> If, on the host that is left alone, one OSD also goes down at that time, will ALL
> data still be available? Or only part of it, the PGs that are marked as 'active
Hello!
>>mon_osd_down_out_subtree_limit = host
Thanks! This really helped me!!!
>>So again, not a full duplication of the data, but a significant amount.
If, on the host that is left alone, one OSD also goes down at that time, will ALL
data still be available? Or only part of it, the PGs that are marked as 'active
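For reference, a minimal sketch of how the setting mentioned above is usually applied; the choice of the [global] section and the runtime-injection step are assumptions, not taken from this thread:

# ceph.conf -- read by the monitors; do not automatically mark OSDs out
# when a whole host (or anything larger) goes down
[global]
mon_osd_down_out_subtree_limit = host

# or inject it into a running cluster (lost again on monitor restart)
ceph tell mon.* injectargs '--mon-osd-down-out-subtree-limit=host'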
Hello,
I've expanded the cache-tier in my test cluster from a single node
to 2, increased the pool size from 1 to 2, then waited until all the data
was rebalanced/duplicated and the cluster was healthy again.
Then I stopped all OSDs on one of the 2 nodes and nothing other than
degraded/undersized
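A rough sketch of that test sequence; the pool name "cache" and the systemd unit numbers are placeholders, not the actual names from this cluster:

# raise the replica count on the (hypothetical) cache-tier pool
ceph osd pool set cache size 2
ceph osd pool set cache min_size 1   # allow I/O with only one replica up
ceph -w                              # wait until all PGs are active+clean again

# then stop every OSD on one of the two nodes (systemd-managed OSDs assumed)
systemctl stop ceph-osd@3 ceph-osd@4 ceph-osd@5
ceph -s                              # PGs show undersized/degraded, client I/O continues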
Hello,
Thanks for the update and I totally agree that it should try to do 2x
replication on the single storage node.
I'll try to reproduce what you're seeing tomorrow on my test cluster, need
to move some data around first.
Christian
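Whether Ceph will ever place both replicas on the one surviving node depends on the failure domain in the CRUSH rule, not on the pool size alone. A sketch in CRUSH-map syntax (rule names and ruleset numbers are placeholders):

# replicas must land on different hosts -- with only one host left,
# the second copy can never be recreated
rule replicated_per_host {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

# replicas only need to land on different OSDs, so both copies
# can end up on the single remaining host
rule replicated_per_osd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd
        step emit
}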
On Wed, 25 May 2016 08:58:54 +0700 Никитенко Виталий wrote:
I'm sorry, that was not the right map; this map is the right one:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable straw_calc_version 1
# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
de
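For completeness, the usual round trip for inspecting and editing a CRUSH map like the one above (file names are arbitrary):

ceph osd getcrushmap -o crush.bin    # fetch the compiled map
crushtool -d crush.bin -o crush.txt  # decompile to editable text
# ... edit crush.txt ...
crushtool -c crush.txt -o crush.new  # recompile
ceph osd setcrushmap -i crush.new    # inject the modified map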
Hello,
On Tue, 24 May 2016 10:28:02 +0700 Никитенко Виталий wrote:
> Hello!
> I have a cluster of 2 nodes with 3 OSDs each. The cluster is about 80% full.
>
According to your CRUSH map that's not quite true, namely the ceph1-node2
entry.
And while that, again according to your CRUSH map, isn't in the de
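The host/OSD layout being referred to here, including anything sitting outside the default root, is easiest to confirm with:

ceph osd tree    # shows every OSD under its host and root bucket, plus up/down status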
Hello!
I have a cluster of 2 nodes with 3 OSDs each. The cluster is about 80% full.
df -H
/dev/sdc1   27G  24G  3.9G  86%  /var/lib/ceph/osd/ceph-1
/dev/sdd1   27G  20G  6.9G  75%  /var/lib/ceph/osd/ceph-2
/dev/sdb1   27G  24G  3.5G  88%  /var/lib/ceph/osd/ceph-0
When I switch off one
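For judging fullness, Ceph's own view is usually more telling than df; on releases of that era (Hammer and later) something like:

ceph df              # cluster-wide and per-pool usage
ceph osd df          # per-OSD utilization and weight
ceph health detail   # lists any near-full or full OSDs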