On Fri, Jan 26, 2024 at 3:35 AM Torkil Svensgaard wrote:
>
> The weirdest one:
>
> Pool rbd_ec_data stores 683TB in 4096 pgs -> warn should be 1024
> Pool rbd_internal stores 86TB in 1024 pgs -> warn should be 2048
>
> That makes no sense to me based on the amount of data stored. Is this a
> bug?
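For what it's worth, my rough understanding of how the autoscaler arrives at those suggestions (a simplified sketch of the documented behaviour, not the real pg_autoscaler code; the helper and every number below are invented): the suggestion tracks the pool's share of its own CRUSH root and the number of PG instances per PG (replica size, or k+m for EC), not the bytes stored, so pools on different roots or device classes, or with target_size_ratio set, can get numbers that look inverted when you only compare STORED.

```python
import math

# Simplified sketch (mine, not the real pg_autoscaler code) of the arithmetic,
# assuming no target_size_ratio / target_size_bytes is set on either pool.

def suggest_pg_num(pool_raw_used, root_raw_used, root_osd_count,
                   pg_instances, target_pg_per_osd=100):
    """pg_instances: replica size for a replicated pool, k+m for an EC pool."""
    share = pool_raw_used / root_raw_used                 # pool's slice of its CRUSH root
    root_pg_target = root_osd_count * target_pg_per_osd   # PG instances wanted on the root
    raw = share * root_pg_target / pg_instances
    if raw < 1:
        return 1
    lower = 2 ** math.floor(math.log2(raw))               # round to nearest power of two
    return lower if raw - lower <= 2 * lower - raw else 2 * lower

# Invented numbers: a large EC 4+2 pool on a big HDD root vs. a smaller
# replicated pool on a small flash root; the smaller pool can legitimately
# get the larger suggestion.
print(suggest_pg_num(1024e12, 8000e12, 600, pg_instances=6))  # -> 1024
print(suggest_pg_num(258e12, 400e12, 80, pg_instances=3))     # -> 2048
```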
Disclaimer: I'm fairly new to Ceph, but I've read a bunch of threads
on the min_size=1 issue, since it perplexed me when I started: one
remaining replica is generally considered fine in many other systems.
However, there really are some concerns unique to Ceph beyond just the
number of disks you can lose.
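To make that concrete, here's a tiny illustration (mine, not from any of those threads) of why min_size=1 is riskier than "one copy left" sounds: the PG keeps accepting new writes while only a single copy exists, so losing that copy before recovery finishes loses acknowledged data.

```python
# Toy illustration (mine) of the min_size gate: a PG serves I/O only while
# it has at least min_size up-to-date copies.

def pg_accepts_io(live_copies: int, min_size: int) -> bool:
    return live_copies >= min_size

# size=3 pool that has lost two OSDs:
print(pg_accepts_io(live_copies=1, min_size=2))  # False -> I/O pauses, existing data stays safe
print(pg_accepts_io(live_copies=1, min_size=1))  # True  -> new writes land on a single copy;
                                                 # lose that OSD before recovery and they're gone
```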
On Wed, Dec 6, 2023 at 9:25 AM Patrick Begou wrote:
>
> My understanding was that k and m were for EC chunks, not hosts. 🙁 Of
> course, if k and m map to hosts, the best choice would be k=2 and m=2.
A few others have already replied - as they said, if the failure domain
is set to host then CRUSH will put only one chunk per host.
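Put another way (my own sketch, not from the list): with crush-failure-domain=host each PG gets at most one chunk per host, so a profile needs at least k+m hosts just to place all chunks, and it tolerates at most m host failures.

```python
# Sketch (mine) of the constraint: with crush-failure-domain=host, CRUSH
# places at most one chunk of a PG on any host.

def ec_profile_check(hosts: int, k: int, m: int) -> str:
    if k + m > hosts:
        return (f"k={k},m={m}: needs {k + m} hosts, only {hosts} available "
                f"-> PGs cannot be fully placed")
    return f"k={k},m={m}: fits on {hosts} hosts, tolerates {m} host failure(s)"

print(ec_profile_check(hosts=5, k=4, m=2))  # the 5-node k=4,m=2 idea from the thread
print(ec_profile_check(hosts=5, k=2, m=2))  # fits, and leaves a host spare for recovery
```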
On Tue, Dec 5, 2023 at 6:35 AM Patrick Begou wrote:
>
> Ok, so I've misunderstood the meaning of failure domain. If there is no
> way to request using 2 OSDs per node with node as the failure domain, then
> with 5 nodes k=3+m=1 is not secure enough and I will have to use k=2+m=2,
> so like a RAID1 setup. A litt
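On the RAID1 comparison, the raw-space arithmetic (mine, just (k+m)/k for EC and `size` copies for replication) looks like this:

```python
# Raw space needed per byte of user data is (k+m)/k for EC; replication
# needs `size` copies. Figures below are just the formula, nothing measured.

for k, m in [(3, 1), (2, 2), (4, 2)]:
    print(f"EC k={k} m={m}: {(k + m) / k:.2f}x raw space, survives {m} failure(s)")

print("replica size=2: 2.00x raw space, survives 1 failure")
print("replica size=3: 3.00x raw space, survives 2 failures")

# So k=2,m=2 costs the same raw space as 2x replication (hence the RAID1
# comparison) but tolerates two failures instead of one.
```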
On Tue, Dec 5, 2023 at 5:16 AM Patrick Begou wrote:
>
> On my side, I'm working on building my first (small) Ceph cluster using
> EC, and I was thinking about 5 nodes and k=4 m=2. With a failure domain
> of host and several OSDs per node, in my mind this setup may run degraded
> with 3 nodes using
On Tue, Nov 28, 2023 at 6:25 PM Anthony D'Atri wrote:
> Looks like one 100GB SSD OSD per host? This is AIUI the screaming minimum
> size for an OSD. With WAL, DB, cluster maps, and other overhead there
> doesn’t end up being much space left for payload data. On larger OSDs the
> overhead is much smaller as a fraction of capacity.
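A quick back-of-the-envelope on that point (the overhead figure below is an assumption for illustration, not a measured value):

```python
# FIXED_OVERHEAD_GB is an assumed, illustrative figure for WAL + RocksDB +
# maps + misc; real numbers depend on workload and settings.
FIXED_OVERHEAD_GB = 35

for osd_size_gb in (100, 1_000, 10_000):
    usable = osd_size_gb - FIXED_OVERHEAD_GB
    pct = 100 * FIXED_OVERHEAD_GB / osd_size_gb
    print(f"{osd_size_gb:>6} GB OSD -> ~{usable} GB left for data "
          f"({pct:.1f}% lost to overhead)")
```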
On Tue, Nov 28, 2023 at 3:52 PM Anthony D'Atri wrote:
>
> Very small and/or non-uniform clusters can be corner cases for many things,
> especially if they don’t have enough PGs. What is your failure domain — host
> or OSD?
Failure domain is host, and PG number should be fairly reasonable.
I'm fairly new to Ceph and running Rook on a fairly small cluster
(half a dozen nodes, about 15 OSDs). I notice that OSD space use can
vary quite a bit - upwards of 10-20%.
In the documentation I see multiple ways of managing this, but no
guidance on what the "correct" or best way to go about this is.
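Not an answer to which knob is "correct" (balancer module, upmap, reweighting are the usual candidates), but as a starting point it helps to quantify the spread; a rough sketch of mine that you could feed the %USE column from `ceph osd df`:

```python
# Quantify the imbalance before picking a remedy; the numbers below are
# made up for a ~15 OSD cluster.
from statistics import mean, pstdev

def utilization_spread(use_pct: list[float]) -> None:
    print(f"min {min(use_pct):.1f}%  max {max(use_pct):.1f}%  "
          f"spread {max(use_pct) - min(use_pct):.1f} points  "
          f"mean {mean(use_pct):.1f}%  stdev {pstdev(use_pct):.1f}")

utilization_spread([61.2, 58.9, 72.4, 55.1, 67.8, 60.3, 70.2, 63.5,
                    59.7, 66.1, 62.0, 68.4, 57.3, 71.0, 64.6])
```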