Paul, many thanks for your reply.
Thinking about it, I can't decide whether I'd prefer to operate the storage
server without redundancy, or have it automatically force downtime,
subjecting me to the rage of my users and my boss.
But I think the typical expectation is that the system keeps serving
data for as long as it is able to do so. Since Ceph by default does
otherwise, may I suggest that this be explained in the docs? As things
stand, I needed a trial-and-error approach to figure out why Ceph was
not working in a setup that I think was hardly exotic, and in fact
resembled an ordinary RAID 6.
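
For concreteness, my setup was created roughly along these lines (the
profile and pool names are placeholders, and I am quoting the commands
from memory, so treat this as a sketch rather than an exact transcript):

    ceph osd erasure-code-profile set raid6like k=3 m=2
    ceph osd pool create ecpool 128 128 erasure raid6like
    # If I understand correctly, the default min_size for an EC pool is
    # k+1 (4 here), so I/O stops as soon as two of the five OSDs are
    # down, even though the k=3 chunks needed to serve data are still
    # available.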

Which leaves us with a mishmash of PG states. Is that normal? If not,
would I have avoided it had I created the pool with min_size=k=3 from
the start? In other words, does min_size influence the assignment of
PGs to OSDs, or is it only used to force an I/O shutdown in the event
of OSD failures?
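
To be clear, the change I am asking about would be something along the
lines of the following, on the already existing pool (pool name again a
placeholder):

    ceph osd pool set ecpool min_size 3

i.e. whether doing this after the fact is equivalent to having created
the pool with that min_size in the first place.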

Thank you very much

Maciej Puzio


On Mon, May 7, 2018 at 5:00 PM, Paul Emmerich <paul.emmer...@croit.io> wrote:
> The docs seem wrong here. min_size is available for erasure coded pools and
> works like you'd expect it to work.
> Still, it's not a good idea to reduce it to the number of data chunks.
>
>
> Paul
>
> --
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90