Hi
The "ec unable to recover when below min size" thing has very recently
been fixed for octopus.
See https://tracker.ceph.com/issues/18749 and
https://github.com/ceph/ceph/pull/17619
The docs have been updated with a section on this issue:
http://docs.ceph.com/docs/master/rados/operations/erasure-
On Friday, July 5, 2019 11:50:44 AM CDT Paul Emmerich wrote:
> * There are virtually no use cases for ec pools with m=1, this is a bad
> configuration as you can't have both availability and durability
I'll have to look into this more. The cluster only has 4 hosts, so it might be
worth switching
On Friday, July 5, 2019 11:28:32 AM CDT Caspar Smit wrote:
> Kyle,
>
> Was the cluster still backfilling when you removed osd 6 or did you only
> check its utilization?
Yes, still backfilling.
>
> Running an EC pool with m=1 is a bad idea. EC pool min_size = k+1 so losing
> a single OSD results
* There are virtually no use cases for ec pools with m=1, this is a bad
configuration as you can't have both availability and durability
* Due to weird internal restrictions, ec pools below their min size can't
recover; you'll probably have to reduce min_size temporarily to recover it
(see the sketch after this list)
* Depending
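
A rough sketch of that temporary min_size workaround, assuming an EC pool
with k=3, m=1 (the pool name and values below are placeholders, not taken
from this thread):

    # check the current min_size of the EC pool
    ceph osd pool get ecpool min_size

    # temporarily lower min_size to k so the below-min-size PGs can peer and recover
    ceph osd pool set ecpool min_size 3

    # once recovery has finished, restore the usual value (k+1)
    ceph osd pool set ecpool min_size 4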
Kyle,
Was the cluster still backfilling when you removed osd 6 or did you only
check its utilization?
Running an EC pool with m=1 is a bad idea. EC pool min_size = k+1, so losing
a single OSD results in inaccessible data.
Your incomplete PGs are probably all EC pool PGs; please verify.
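Something along these lines should show it (the profile name below is a
placeholder):

    # list the incomplete PGs; the number before the dot in each PG id is the pool id
    ceph pg ls incomplete

    # map pool ids to pool names, size/min_size and the erasure-code profile
    ceph osd pool ls detail

    # show k and m for the profile used by that pool
    ceph osd erasure-code-profile get <profile-name>
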
If the ab
Hello,
I'm working with a small ceph cluster (about 10TB, 7-9 OSDs, all Bluestore on
lvm) and recently ran into a problem with 17 pgs marked as incomplete after
adding/removing OSDs.
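In case it's useful for diagnosing, the incomplete PGs can be listed and
queried with something like this (the PG id below is just a placeholder):

    # overview of unhealthy PGs and why they're stuck
    ceph health detail

    # list only the incomplete PGs with their up/acting OSD sets
    ceph pg ls incomplete

    # detailed peering/recovery state for a single PG (placeholder id)
    ceph pg 2.1f query
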
Here's the sequence of events:
1. 7 osds in the cluster, health is OK, all pgs are active+clean
2. 3 new osds on