We can go with 8 OSD nodes. We just can't afford any cluster-unavailability
events.


Also, since we will be using CephFS to mount the storage on client nodes,
having 2 dedicated servers, each running an MDS daemon, will provide more
protection. Correct?
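
A rough sketch of what that could look like with cephadm (the filesystem
name "cephfs" and the host names mds1/mds2 are placeholders, not our
actual names):

    # run one MDS daemon on each of the two dedicated MDS hosts
    ceph orch apply mds cephfs --placement="2 mds1 mds2"
    # optional: let the standby MDS follow the active MDS's journal
    # so failover is faster
    ceph fs set cephfs allow_standby_replay true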

Thanks

On Tue, 22 Apr, 2025, 10:16 pm Anthony D'Atri, <anthony.da...@gmail.com>
wrote:

> It’s the same protection, really, just a matter of flexibility.
>
> With 4+2 EC, 7+ hosts are ideal for multiple reasons.  You would likely be
> fine with 6 hosts, so long as you have the ability to quickly repair a host
> if/when it fails.
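
[For reference, my understanding is that such a 4+2 pool, with chunks
spread one per host, would be created roughly like this; the profile name
"ec42", the pool name "cephfs_data_ec" and the PG count are placeholders:

    # k=4 data chunks + m=2 coding chunks, one chunk per host
    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
    # create a data pool using that profile
    ceph osd pool create cephfs_data_ec 128 128 erasure ec42
]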
>
>
>
>
> > On Apr 22, 2025, at 12:03 PM, gagan tiwari <
> gagan.tiw...@mathisys-india.com> wrote:
> >
> > Hi Janne,
> >                     Thanks for your advice.
> >
> > So, you mean that with K=4 M=2 EC, we need 8 OSD nodes to have better
> > protection?
> >
> > Thanks,
> > Gagan
> >
> >
> >
> > On Tue, 22 Apr, 2025, 7:22 pm Janne Johansson, <icepic...@gmail.com>
> wrote:
> >
> >>> So, I need to know what the data safety level will be with the above
> >>> set-up (i.e. 6 OSDs with 4+2 EC), and how many OSD (disk) and node
> >>> failures the above set-up can withstand.
> >>
> >> With EC N+2 you can lose one drive or host, and the cluster will go on
> >> in degraded mode until it has been able to recreate the missing data
> >> on another OSD. If you lose two drives or hosts, I believe the EC pool
> >> will go read-only, again until it has rebuilt copies elsewhere.
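
[My understanding: for an EC pool, min_size defaults to k+1, i.e. 5 here,
and once fewer than 5 chunks of a PG are available that PG stops serving
I/O until recovery. The pool name below is the same placeholder as above:

    # the threshold below which PGs stop serving I/O
    ceph osd pool get cephfs_data_ec min_size
]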
> >>
> >> Still, if you have EC 4+2 and only 6 OSD hosts, this means that if a
> >> host dies, the cluster cannot recreate the data anywhere without
> >> violating the "one copy per host" default placement, so the cluster
> >> will stay degraded until this host comes back or another one replaces
> >> it. For an N+M EC cluster, I would suggest having N+M+1 or even N+M+2
> >> hosts, so that you can do maintenance on a host or lose a host and
> >> still be able to recover without visiting the server room.
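
[To check which failure domain a pool's profile actually uses, e.g. with
the placeholder profile name from above:

    # "crush-failure-domain=host" means each of the k+m=6 chunks lands on
    # a different host, hence the suggestion of 7-8 hosts for headroom
    ceph osd erasure-code-profile get ec42
]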
> >>
> >> --
> >> May the most significant bit of your life be positive.
> >>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
