Unless your min_size is set to 3, you are not hitting the bug in the
tracker you linked. Most likely you are running with a min_size of 2, which
means that bug is not relevant to your cluster. If you wouldn't mind, please
upload the output of `ceph osd pool get {pool_name} all`.
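You can also check and, if needed, adjust min_size directly. A quick sketch (the pool name "ecpool" is just a placeholder):

    # show the current min_size and size for the pool
    ceph osd pool get ecpool min_size
    ceph osd pool get ecpool size
    # with k=2 m=1, setting min_size to k lets I/O continue with one host down
    ceph osd pool set ecpool min_size 2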
On Thu, Oct 19, 2017 at 5:03
Yes, I am trying it on Luminous.
Well, the bug has been open for 8 months and the fix hasn't been merged yet.
I don't know if that is what's preventing it from working. Tomorrow I will
try it again.
On 19/10/2017 at 23:00, David Turner wrote:
Running a cluster on various versions of Hammer and Jewel, I haven't had any
problems. I haven't upgraded to Luminous quite yet, but I'd be surprised
if there were that severe a regression, especially since they made so many
improvements to erasure coding.
On Thu, Oct 19, 2017 at 4:59 PM Jorge Pini
Well, I tried it a few days ago and it didn't work for me,
maybe because of this:
http://tracker.ceph.com/issues/18749
https://github.com/ceph/ceph/pull/17619
I don't know if it's actually working now.
On 19/10/2017 at 22:55, David Turner wrote:
In a 3-node cluster with EC k=2 m=1, you can turn off one of the nodes and
the cluster will still operate normally. If you lose a disk in this
state, or another server goes offline, then you lose access to your data.
But assuming that you bring the third node back up and let it finish
backfilling/recovering
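For reference, a rough sketch of how a k=2 m=1 pool like this might be set
up on Luminous (the profile name, pool name, and PG count below are just
placeholders):

    # 2+1 erasure-code profile with host as the failure domain
    ceph osd erasure-code-profile set ec21 k=2 m=1 crush-failure-domain=host
    # create an EC pool using that profile
    ceph osd pool create ecpool 128 128 erasure ec21
    # allow I/O when only the k data chunks are available (one host down)
    ceph osd pool set ecpool min_size 2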