Good point. Thanks!
A triple failure is essentially what I faced about a month ago. So
now I want to make sure that the new CephFS setup I am deploying at the
moment will handle this kind of thing better.
On Wed, Dec 9, 2015 at 2:41 PM, John Spray <jsp...@redhat.com> wrote:
On Wed, Dec 9, 2015 at 1:25 PM, Mykola Dvornik
<mykola.dvor...@gmail.com> wrote:
Hi Jan,
Thanks for the reply. I see your point about replicas. However, my
motivation was a bit different.
Consider some given number of objects stored in the metadata pool.
If I understood Ceph's data placement approach correctly, the number
of objects per PG should decrease as the number of PGs in the pool
grows.
So my concern is that in the catastrophic event of some PG(s) being
lost, I will lose more objects if the number of PGs in the pool is
small. At the same time I don't want to have too few objects per PG,
so that things stay disk-IO bound rather than CPU bound.
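Concretely, the tradeoff I have in mind looks roughly like the sketch
below; the object count and PG counts are made-up illustrative numbers,
not figures from my cluster:

    # Back-of-the-envelope: per-PG blast radius vs. PG count.
    TOTAL_OBJECTS = 10_000_000  # objects in the pool (assumed)

    for pg_num in (64, 512, 4096):
        objects_per_pg = TOTAL_OBJECTS / pg_num
        # Losing one PG outright loses roughly this many objects, so
        # fewer (larger) PGs mean a bigger loss per failed PG...
        print(f"pg_num={pg_num:5d}: ~{objects_per_pg:,.0f} objects/PG")
    # ...while more (smaller) PGs raise per-OSD CPU/memory overhead,
    # which is the other side of the tradeoff.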
If you are especially concerned about triple failures (i.e. permanent
PG loss), I would suggest looking at something like a size=4 pool
for your metadata (maybe on SSDs).
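A minimal sketch of what that could look like, driven from Python via
the ceph CLI; the pool name cephfs_metadata, the PG counts, and the
min_size value are assumptions rather than anything specified in this
thread:

    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return its stdout."""
        return subprocess.check_output(("ceph",) + args)

    # Create the metadata pool if it does not exist yet (a pg_num/
    # pgp_num of 512 is purely illustrative; size it for your cluster).
    ceph("osd", "pool", "create", "cephfs_metadata", "512", "512")

    # Keep four copies of every metadata object...
    ceph("osd", "pool", "set", "cephfs_metadata", "size", "4")
    # ...and keep serving I/O as long as at least two copies are up.
    ceph("osd", "pool", "set", "cephfs_metadata", "min_size", "2")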
You could also look at simply segregating your size=3 metadata onto
separate spinning drives, so that these comparatively lightly loaded
OSDs can recover faster after a failure than an ordinary data drive
that's full of terabytes of data, and so have a lower probability of
a triple failure.
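As a rough illustration of the segregation idea, here is a hedged
sketch of steering the metadata pool onto dedicated OSDs via a separate
CRUSH root, in the same CLI-from-Python style as above. All the names
(the "meta" root, "meta-rule", osd.12, meta-host1) and the OSD weight
are hypothetical; adapt them to your cluster:

    import json
    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return its stdout."""
        return subprocess.check_output(("ceph",) + args)

    # A new CRUSH root holding only the dedicated metadata hosts.
    ceph("osd", "crush", "add-bucket", "meta", "root")
    # Place a dedicated OSD under it (repeat per metadata OSD; the id,
    # weight, and host name here are made up).
    ceph("osd", "crush", "set", "osd.12", "1.0",
         "root=meta", "host=meta-host1")
    # A simple replicated rule choosing distinct hosts under that root.
    ceph("osd", "crush", "rule", "create-simple", "meta-rule",
         "meta", "host")

    # Point the metadata pool at the new rule's ruleset number.
    rule = json.loads(ceph("osd", "crush", "rule", "dump", "meta-rule",
                           "--format=json"))
    ceph("osd", "pool", "set", "cephfs_metadata", "crush_ruleset",
         str(rule["ruleset"]))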
John